A Monolithic 3D IGZO-RRAM-SRAM-integrated Architecture for Robust and Efficient Compute-in-memory Enabling Equivalent-Ideal Device Metrics
Science China Information Sciences (2025)
Institute of Microelectronics of the Chinese Academy of Sciences
Abstract
Compute-in-memory (CIM) based on devices such as static random access memory (SRAM) and resistive random access memory (RRAM), as well as emerging devices such as the indium-gallium-zinc-oxide (IGZO) transistor, magnetoresistive RAM (MRAM), and ferroelectric RAM (FeRAM), has been explored for better performance and energy efficiency in neural network applications. However, CIM based on a single device type suffers from a variety of non-ideal device metrics and cannot simultaneously achieve optimal accuracy, density, and energy efficiency. This work presents equivalent-ideal CIM (Eq-CIM), a monolithic 3D (M3D) IGZO-RRAM-SRAM integrated architecture for robust and efficient compute-in-memory enabling equivalent-ideal device metrics. To overcome the non-idealities (variation, endurance, temperature, etc.) of single-type devices, system-technology co-optimization (STCO) is performed. This work highlights a non-ideality-aware functionality breakdown for robust high accuracy together with high density and efficiency, utilizing 2T0C IGZO for temporal activation storage, RRAM for high-density weight storage, and SRAM for accurate CIM. Device-to-algorithm variation transfer is applied to analyze system-level accuracy. We benchmark the Eq-CIM architecture on CIFAR-10/ImageNet, achieving 5.06× storage density and 5.05×/2.45× area/energy efficiency compared with single-type-device-based CIM, and high robustness (<0.27…
Key words
compute-in-memory (CIM), monolithic 3D, RRAM, IGZO, 3D-stack, simulation
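
The "device-to-algorithm variation transfer" mentioned in the abstract refers to propagating device-level non-idealities into the network evaluation to obtain system-level accuracy. Below is a minimal, hypothetical sketch of this kind of analysis: relative Gaussian conductance variation is injected into quantized stored weights and the error of an in-memory matrix-vector multiply is measured. The variation model, bit width, layer size, and all function names are illustrative assumptions, not parameters or code from the paper.

```python
# Hypothetical sketch of device-to-algorithm variation transfer:
# perturb stored weight levels with device variation and measure the
# resulting matrix-vector-multiply (MVM) error. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def quantize(w, bits=4):
    """Uniformly quantize weights to signed integer levels (illustrative)."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    return np.round(w / scale), scale

def apply_device_variation(w_int, sigma):
    """Perturb each stored level with relative Gaussian conductance noise."""
    return w_int * (1.0 + rng.normal(0.0, sigma, size=w_int.shape))

# Nominal layer: 256 inputs -> 64 outputs, random weights and activations.
w = rng.standard_normal((64, 256)) * 0.1
x = rng.standard_normal(256)

w_int, scale = quantize(w)
y_ideal = (w_int * scale) @ x

for sigma in (0.0, 0.05, 0.10, 0.20):
    w_noisy = apply_device_variation(w_int, sigma) * scale
    y_noisy = w_noisy @ x
    rel_err = np.linalg.norm(y_noisy - y_ideal) / np.linalg.norm(y_ideal)
    print(f"sigma={sigma:.2f}  relative MVM error={rel_err:.4f}")
```

In a full variation-aware evaluation, the same perturbation would be applied to every CIM layer of the network across many Monte Carlo samples to estimate the distribution of accuracy loss.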