
An In-Situ Spatial-Temporal Sequence Detector for Neuromorphic Vision Sensor Empowered by High Density Vertical NAND Storage

Zijian Zhao, Varun Darshana Parekh, Po-Kai Hsu, Yixin Qin, Yiming Song, A N M Nafiul Islam, Ningyuan Cao, Siddharth Joshi, Thomas Kämpfe, Moonyoung Jung, Kwangyou Seo, Kwangsoo Kim, Wanki Kim, Daewon Ha, Sourav Dutta, Abhronil Sengupta, Xiao Gong, Shimeng Yu, Vijaykrishnan Narayanan, Kai Ni

arXiv · Emerging Technologies (2025)

Abstract
Neuromorphic vision sensors require efficient real-time pattern recognition, yet conventional architectures struggle with energy and latency constraints. Here, we present a novel in-situ spatiotemporal sequence detector that leverages vertical NAND storage to achieve massively parallel pattern detection. By encoding each cell with two single-transistor-based multi-level cell (MLC) memory elements, such as ferroelectric field-effect transistors (FeFETs), and mapping a pixel's temporal sequence onto consecutive word lines (WLs), we enable direct temporal pattern detection within NAND strings. Each NAND string serves as a dedicated reference for a single pixel, while different blocks store patterns for distinct pixels, allowing large-scale spatial-temporal pattern recognition via simple direct bit-line (BL) sensing, a well-established operation in vertical NAND storage. We experimentally validate our approach at both the cell and array levels, demonstrating that the vertical NAND-based detector achieves more than six orders of magnitude improvement in energy efficiency and more than three orders of magnitude reduction in latency compared to conventional CPU-based methods. These findings establish vertical NAND storage as a scalable and energy-efficient solution for next-generation neuromorphic vision processing.
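As a rough illustration of the mapping described in the abstract, the Python sketch below models a NAND string as an array of stored MLC states, one word line per time step, and emulates direct bit-line sensing as an all-cells-must-conduct match. All names and parameters here (N_LEVELS, SEQ_LEN, program_string, sense_bit_line, eight patterns per block) are illustrative assumptions for a conceptual model, not the authors' implementation or measured device behavior.

```python
import numpy as np

N_LEVELS = 4   # assumed number of MLC levels per FeFET cell (illustrative)
SEQ_LEN = 16   # assumed sequence length = number of word lines per string

def program_string(reference_seq):
    """Store one reference temporal pattern along a NAND string,
    one time step per word line (WL)."""
    assert len(reference_seq) == SEQ_LEN
    return np.asarray(reference_seq)

def sense_bit_line(stored_string, input_seq):
    """Emulate direct bit-line (BL) sensing of one string.

    A physical string conducts only if every series cell conducts; in this
    toy model a cell 'conducts' when the symbol applied on its WL matches
    its stored MLC state, so the BL reads high only on a full-sequence match.
    """
    cell_conducts = stored_string == np.asarray(input_seq)
    return bool(np.all(cell_conducts))

# One block holds the reference patterns for one pixel, one pattern per string.
rng = np.random.default_rng(0)
reference_patterns = [rng.integers(0, N_LEVELS, SEQ_LEN) for _ in range(8)]
block = [program_string(p) for p in reference_patterns]

# The pixel's incoming event sequence is applied to the WLs; every string
# (i.e., every BL) in the block is sensed in parallel.
incoming = reference_patterns[3]
matches = [i for i, s in enumerate(block) if sense_bit_line(s, incoming)]
print("matched string index:", matches)   # -> [3]
```

Because all strings in a block are sensed simultaneously, the search cost is a single BL read regardless of how many reference patterns are stored, which is the source of the massive parallelism the abstract claims.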

Key point: This paper proposes a high-density in-situ spatiotemporal sequence detector based on vertical NAND storage, enabling efficient real-time pattern recognition for neuromorphic vision sensors.

Method: By encoding each cell with two single-transistor-based MLC memory elements (e.g., FeFETs) and mapping a pixel's temporal sequence onto consecutive word lines, direct temporal pattern detection is performed inside the NAND strings.

Experiments: Cell-level and array-level experimental validation (the dataset is not explicitly specified) shows that the vertical NAND-based detector improves energy efficiency by more than six orders of magnitude and reduces latency by more than three orders of magnitude compared with conventional CPU-based methods.