Cambricon-LLM: A Chiplet-Based Hybrid Architecture for On-Device Inference of 70B LLM
Annual IEEE/ACM International Symposium on Microarchitecture (2024)
Keywords
In-Flash Computing, Large Language Model Accelerator, Robotic Accelerator