SHMT: an SRAM and HBM Hybrid Computing-in-Memory Architecture with Optimized KV Cache for Multimodal Transformer
IEEE Transactions on Circuits and Systems I: Regular Papers (2025)
Keywords
Multimodal transformer (MMT), computing-in-memory (CIM), computing-near-memory (CNM), large language model (LLM), KV cache