An RRAM-Based 40.6 TOPS/W Energy-Efficient AI Inference Accelerator with Quad Neuromorphic-Processor-Unit for Highly Contrast Recognition

2024 International VLSI Symposium on Technology, Systems and Applications (VLSI-TSA), 2024

Keywords
Inference Acceleration, AI Inference, Neural Network, Quantization Error, MNIST Dataset, Image Recognition, Floating-Point Operations, Edge Devices, Inference Stage, Read Operation, Memory Array, Sign Bit