SHMT: an SRAM and HBM Hybrid Computing-in-Memory Architecture with Optimized KV Cache for Multimodal Transformer

Xiangqu Fu, Jinshan Yue, Muhammad Faizan, Zhi Li, Qiang Huo, Feng Zhang

IEEE Transactions on Circuits and Systems I: Regular Papers (2025)

Keywords

Multimodal transformer (MMT), computing-in-memory (CIM), computing-near-memory (CNM), large language model (LLM), KV Cache