A 0.67-to-5.4 TSOPs/W Spiking Neural Network Accelerator with 128/256 Reconfigurable Neurons and Asynchronous Fully Connected Synapses

IEEE Journal of Solid-State Circuits (2024), SCI Q1

Shanghai Jiao Tong Univ

Cited: 1 | Views: 26
Abstract
Spiking neural networks (SNNs) are garnering increasing attention due to their potential to explore the complexities of the human brain and utilize its capabilities. The broad spectrum of applications presents challenges in designing SNN-based neuromorphic systems. First, SNNs use complex models, e.g., Izhikevich (IZ), for brain simulations and simpler models, e.g., Leaky Integrate-and-Fire (LIF), for efficient machine learning, making it challenging to realize neuron circuits that support diverse applications. Second, densely connected networks with uneven spike distributions lead to Network-on-Chip (NoC) congestion and delays, complicating throughput/area optimization. An SNN accelerator featuring 128/256 reconfigurable neurons and asynchronous fully connected synapses has been developed to address these challenges. The reconfigurable neuron circuit can switch between the LIF neuron model and the IZ neuron model. The proposed chip achieves a peak power efficiency of 5.37 TSOPs/W and a throughput of 25.6 MSOPs/s. The near-threshold operation of neurons, in conjunction with the asynchronous fully connected synapse, reduces energy by 9.42x, down to 9.27 pJ/pixel in image feature extraction.
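The two neuron models the chip reconfigures between can be summarized by their discrete-time update rules. The sketch below is illustrative only: the time step, thresholds, and model constants are textbook assumptions (the standard Izhikevich parameters for regular spiking), not the paper's circuit-level values.

```python
# Hedged sketch of the two neuron models the reconfigurable circuit supports.
# Parameter values are textbook assumptions, not the chip's constants.

def lif_step(v, i_in, v_rest=0.0, v_th=1.0, tau=20.0, dt=1.0):
    """Leaky Integrate-and-Fire: leak toward rest, integrate input, spike at threshold."""
    v = v + (dt / tau) * (-(v - v_rest) + i_in)
    spike = v >= v_th
    if spike:
        v = v_rest  # reset membrane after spiking
    return v, spike

def izhikevich_step(v, u, i_in, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Izhikevich: quadratic membrane dynamics plus a slow recovery variable u."""
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_in)
    u = u + dt * a * (b * v - u)
    spike = v >= 30.0
    if spike:
        v, u = c, u + d  # reset membrane, increment recovery
    return v, u, spike
```

The LIF model needs one state variable and a linear leak, which is why it is the cheaper choice for machine-learning workloads; the IZ model adds a second state variable and a quadratic term to reproduce richer biological firing patterns, motivating a neuron circuit that can switch between the two.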
Key words
Neurons, Synapses, Integrated circuit modeling, Circuits, Brain modeling, Micromechanical devices, Computational modeling, Asynchronous fully connected synapse, Izhikevich (IZ), Leaky Integrate-and-Fire (LIF), neuromorphic circuits, reconfigurable neuron, spiking neural networks (SNNs)

Key points: This paper presents a 0.67-to-5.4 TSOPs/W spiking neural network accelerator with 128/256 reconfigurable neurons and asynchronous fully connected synapses, addressing the design challenges of SNN-based neuromorphic systems and achieving high energy efficiency across diverse applications.

Methods: The accelerator uses a reconfigurable neuron circuit that switches between the LIF and IZ neuron models, solving the problem of implementing neuron circuits that support different applications.

Experiments: In an image feature extraction task, near-threshold operation reduces energy consumption by 9.42x compared with the conventional approach, reaching 9.27 pJ/pixel.