EdgeLLM: Fast On-device LLM Inference with Speculative Decoding
IEEE Transactions on Mobile Computing (2025)
Keywords
Mobile Computing, Large Language Models, Speculative Decoding