SpeedLLM: An FPGA Co-Design of Large Language Model Inference Accelerator


[Submitted on 7 May 2025]


Abstract: This paper introduces SpeedLLM, a neural network accelerator designed on the Xilinx Alveo U280 platform and optimized for the TinyLlama framework to enhance edge computing performance. Key innovations include data stream parallelism, a memory reuse strategy, and Llama2 operator fusion, which collectively reduce latency and energy consumption. SpeedLLM's data pipeline architecture optimizes the read-compute-write cycle, while the memory reuse strategy minimizes FPGA resource demands. Operator fusion boosts computational density and throughput. Results show that SpeedLLM outperforms traditional TinyLlama implementations, achieving up to 4.8× faster performance and 1.18× lower energy consumption, making it a stronger option for LLM inference on edge devices.
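The abstract's "read-compute-write cycle" optimization is the standard dataflow pattern in FPGA high-level synthesis: the memory read, the compute kernel, and the write-back run as concurrent stages connected by streams, rather than as three sequential passes. The sketch below shows this pattern in Vitis HLS-style C++; it is not the paper's code, and the stage names, the placeholder compute, and the stream widths are assumptions. It requires the Xilinx Vitis HLS headers (hls_stream.h) to build.

```cpp
// Hypothetical sketch of a read-compute-write dataflow pipeline, in the
// style the abstract describes. Requires Vitis HLS (hls_stream.h); the
// pragmas are ignored by a plain C++ compiler.
#include <hls_stream.h>

// Stage 1: stream data in from off-chip memory.
void read_stage(const float *in, hls::stream<float> &s, int n) {
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
        s.write(in[i]);
    }
}

// Stage 2: per-element compute; a trivial op stands in for a real kernel.
void compute_stage(hls::stream<float> &in, hls::stream<float> &out, int n) {
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
        out.write(in.read() * 2.0f);
    }
}

// Stage 3: stream results back out to memory.
void write_stage(hls::stream<float> &s, float *out, int n) {
    for (int i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
        out[i] = s.read();
    }
}

// Top level: DATAFLOW lets the three stages overlap instead of running
// read -> compute -> write as separate full passes.
void top(const float *in, float *out, int n) {
#pragma HLS DATAFLOW
    hls::stream<float> s1("s1"), s2("s2");
    read_stage(in, s1, n);
    compute_stage(s1, s2, n);
    write_stage(s2, out, n);
}
```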
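The Llama2 operator fusion the abstract credits for higher computational density can be illustrated at the algorithmic level. A minimal, self-contained sketch: fusing RMSNorm with the matrix-vector multiply that follows it in a Llama2 block, so the normalized activations are never materialized in a separate buffer (on an FPGA, that saves an intermediate store plus a full read-write pass). The function names, toy dimension, and epsilon are illustrative assumptions, not taken from the SpeedLLM paper.

```cpp
// Hypothetical illustration of operator fusion: RMSNorm + matvec.
#include <cmath>
#include <cstdio>

constexpr int DIM = 8;  // toy size; a real Llama2 hidden dim is e.g. 4096

// Unfused baseline: normalize into a temporary, then multiply.
void rmsnorm_then_matvec(const float x[DIM], const float g[DIM],
                         const float W[DIM][DIM], float y[DIM]) {
    float tmp[DIM];
    float ss = 0.0f;
    for (int i = 0; i < DIM; ++i) ss += x[i] * x[i];
    const float scale = 1.0f / std::sqrt(ss / DIM + 1e-5f);
    for (int i = 0; i < DIM; ++i) tmp[i] = g[i] * x[i] * scale;  // extra buffer
    for (int r = 0; r < DIM; ++r) {
        float acc = 0.0f;
        for (int c = 0; c < DIM; ++c) acc += W[r][c] * tmp[c];
        y[r] = acc;
    }
}

// Fused version: the normalization is folded into the dot products, so the
// intermediate vector is never written out.
void rmsnorm_matvec_fused(const float x[DIM], const float g[DIM],
                          const float W[DIM][DIM], float y[DIM]) {
    float ss = 0.0f;
    for (int i = 0; i < DIM; ++i) ss += x[i] * x[i];
    const float scale = 1.0f / std::sqrt(ss / DIM + 1e-5f);
    for (int r = 0; r < DIM; ++r) {
        float acc = 0.0f;
        for (int c = 0; c < DIM; ++c) acc += W[r][c] * (g[c] * x[c] * scale);
        y[r] = acc;
    }
}

int main() {
    float x[DIM], g[DIM], W[DIM][DIM], y1[DIM], y2[DIM];
    for (int i = 0; i < DIM; ++i) {
        x[i] = 0.1f * (i + 1);
        g[i] = 1.0f;
        for (int j = 0; j < DIM; ++j) W[i][j] = (i == j) ? 1.0f : 0.0f;
    }
    rmsnorm_then_matvec(x, g, W, y1);
    rmsnorm_matvec_fused(x, g, W, y2);
    for (int i = 0; i < DIM; ++i)
        std::printf("%f %f\n", y1[i], y2[i]);  // the two columns should match
    return 0;
}
```

Both versions compute the same result; the fused one simply trades a second pass over memory for a multiply folded into the inner loop, which is the kind of rewrite that raises compute density on an FPGA datapath.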

Submission history

From: Peipei Wang
[v1] Wed, 7 May 2025 05:39:07 UTC (2,519 KB)
