A Monolithic 3D IGZO-RRAM-SRAM-integrated Architecture for Robust and Efficient Compute-in-memory Enabling Equivalent-Ideal Device Metrics

Science China Information Sciences (2025)

Institute of Microelectronics of the Chinese Academy of Sciences

Abstract
Compute-in-memory (CIM) based on devices such as static random access memory (SRAM) and resistive random access memory (RRAM), as well as emerging devices such as the indium-gallium-zinc-oxide (IGZO) transistor, magnetoresistive RAM (MRAM), and ferroelectric RAM (FeRAM), has been explored for better performance and energy efficiency in neural network applications. However, CIM based on a single device type suffers from a variety of non-idealities that prevent simultaneously optimal accuracy, density, and energy efficiency. This work presents equivalent-ideal CIM (Eq-CIM), a monolithic 3D (M3D) IGZO-RRAM-SRAM integrated architecture for robust and efficient compute-in-memory that enables equivalent-ideal device metrics. To overcome the non-idealities (variation, endurance, temperature, etc.) of single-type devices, system-technology co-optimization (STCO) is performed. This work highlights a non-ideality-aware functionality breakdown that achieves robust high accuracy together with high density and efficiency, utilizing 2T0C IGZO for temporal activation storage, RRAM for high-density weight storage, and SRAM for accurate CIM. Device-to-algorithm variation transfer is applied to analyze system-level accuracy. We benchmark the Eq-CIM architecture on CIFAR-10/ImageNet, achieving 5.06× storage density and 5.05×/2.45× area/energy efficiency compared with single-type-device-based CIM, together with high robustness (<0.27…).
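The abstract's "device-to-algorithm variation transfer" can be read as propagating device-level variation into network weights and observing the effect at the system level. Below is a minimal, hypothetical Python sketch of that idea, not the paper's method: multiplicative Gaussian variation (an assumed model; the paper does not state its variation distribution or sigma values here) is injected into the weights of a toy matrix-vector workload, and the resulting MAC error is measured.

```python
# Illustrative sketch of device-to-algorithm variation transfer.
# The Gaussian model and sigma values are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

def inject_device_variation(weights: np.ndarray, sigma: float) -> np.ndarray:
    """Perturb ideal weights with multiplicative Gaussian device variation."""
    return weights * rng.normal(loc=1.0, scale=sigma, size=weights.shape)

# Toy matrix-vector workload standing in for one CIM macro.
x = rng.standard_normal((128, 256))       # activations (IGZO-buffered in Eq-CIM)
w_ideal = rng.standard_normal((256, 64))  # ideal trained weights

y_ideal = x @ w_ideal
for sigma in (0.0, 0.05, 0.10, 0.20):     # assumed variation levels
    w_dev = inject_device_variation(w_ideal, sigma)
    err = np.linalg.norm(x @ w_dev - y_ideal) / np.linalg.norm(y_ideal)
    print(f"sigma={sigma:.2f}  relative MAC error={err:.4f}")
```

In a full evaluation of this kind, the perturbed weights would be loaded into a trained CIFAR-10/ImageNet model and test accuracy re-measured at each variation level.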
Key words
compute-in-memory (CIM), monolithic 3D, RRAM, IGZO, 3D-stack, simulation

[Key points]: This paper proposes a novel equivalent-ideal compute-in-memory architecture (Eq-CIM) that monolithically integrates IGZO transistors, RRAM, and SRAM in 3D, achieving robust, high-density, and energy-efficient compute-in-memory, and overcoming the challenges posed by the non-ideal characteristics of single-type devices through system-technology co-optimization.

[Methods]: The study applies system-technology co-optimization (STCO): IGZO transistors are used for temporal activation storage, RRAM for high-density weight storage, and SRAM for accurate in-memory computation, and device-to-algorithm variation transfer is applied to analyze system-level accuracy.
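As a reading aid, the functionality breakdown above can be summarized as a mapping from device tier to role. The sketch below is an illustrative data structure only; the class, field names, and rationale strings are assumptions that paraphrase the non-idealities (variation, endurance, temperature) named in the abstract, not an API from the paper.

```python
# Hypothetical summary of the Eq-CIM non-ideality-aware functionality breakdown.
from dataclasses import dataclass

@dataclass
class TierSpec:
    device: str     # underlying memory device in the M3D stack
    role: str       # function assigned to this tier
    rationale: str  # non-ideality the assignment sidesteps (paraphrased)

EQ_CIM_BREAKDOWN = [
    TierSpec("2T0C IGZO", "temporal activation storage",
             "activations are short-lived, so limited retention is tolerable"),
    TierSpec("RRAM", "high-density weight storage",
             "weights are rewritten rarely, easing endurance concerns"),
    TierSpec("SRAM", "accurate compute-in-memory (MAC)",
             "mature SRAM keeps the arithmetic path robust and accurate"),
]

for tier in EQ_CIM_BREAKDOWN:
    print(f"{tier.device:>9s} -> {tier.role}: {tier.rationale}")
```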

[Experiments]: The Eq-CIM architecture is benchmarked on the CIFAR-10 and ImageNet datasets. Compared with compute-in-memory based on a single device type, it improves storage density by 5.06×, area efficiency by 5.05×, and energy efficiency by 2.45×, while exhibiting high robustness (error below 0.27).