Pooling CPU Memory For LLM Inference With Lower Latency And Higher Throughput (UC Berkeley)
A new technical paper titled “Pie: Pooling CPU Memory for LLM Inference” was published by researchers at UC Berkeley.
Abstract
“The rapid growth of LLMs has revolutionized natural language processing and AI analysis, but their increasing size and memory demands present significant challenges. A common solution is to spill over to CPU memory; however, traditional GPU-CPU memory swapping often results in higher latency and lower throughput.
This paper introduces Pie, an LLM inference framework that addresses these challenges with performance-transparent swapping and adaptive expansion. By leveraging predictable memory access patterns and the high bandwidth of modern hardware like the NVIDIA GH200 Grace Hopper Superchip, Pie enables concurrent data swapping without affecting foreground computation, expanding effective memory without added latency. Adaptive expansion dynamically adjusts CPU memory allocation based on real-time information, optimizing memory usage and performance under varying conditions.
Pie maintains low computation latency, high throughput, and high elasticity. Our experimental evaluation demonstrates that Pie achieves optimal swapping policy during cache warmup and effectively balances increased memory capacity with negligible impact on computation. With its extended capacity, Pie outperforms vLLM by up to 1.9X in throughput and 2X in latency. Additionally, Pie can reduce GPU memory usage by up to 1.67X while maintaining the same performance. Compared to FlexGen, an offline profiling-based swapping solution, Pie achieves magnitudes lower latency and 9.4X higher throughput.”
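To make the core idea of performance-transparent swapping more concrete, the following is a minimal Python/PyTorch sketch, not Pie's actual implementation, of overlapping a GPU-to-CPU transfer with foreground computation by using a dedicated CUDA stream and pinned host memory. The tensor shapes and the names block, host_pool, and copy_stream are illustrative assumptions.

# Sketch: overlap a device-to-host "swap" with foreground GPU compute.
# Assumes a CUDA-capable GPU and PyTorch; shapes/names are hypothetical.
import torch

assert torch.cuda.is_available()
device = torch.device("cuda")

copy_stream = torch.cuda.Stream()  # background stream used only for swapping
block = torch.randn(256, 128, 64, device=device)  # stand-in for a KV-cache block
# Pinned (page-locked) CPU buffer so the copy can run asynchronously via DMA.
host_pool = torch.empty(block.shape, dtype=block.dtype, pin_memory=True)

# Foreground work on the default stream.
x = torch.randn(4096, 4096, device=device)
y = torch.randn(4096, 4096, device=device)

# Make sure the background stream sees the fully produced block, then
# issue the device-to-host copy without blocking the default stream.
copy_stream.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(copy_stream):
    host_pool.copy_(block, non_blocking=True)
    block.record_stream(copy_stream)  # keep the allocator from reusing block early

z = x @ y  # default-stream compute proceeds concurrently with the copy

# Before freeing or overwriting the GPU block, wait for the offload to finish.
torch.cuda.current_stream().wait_stream(copy_stream)
torch.cuda.synchronize()
print("offloaded block checksum:", host_pool.sum().item())

The sketch only shows the overlap mechanism; Pie additionally relies on the predictable access patterns of LLM inference and on high-bandwidth links such as NVLink-C2C on the GH200 to decide what to swap and when, and its adaptive expansion adjusts how much CPU memory is used at runtime.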
Find the technical paper here. November 2024.
Xu, Yi, Ziming Mao, Xiangxi Mo, Shu Liu, and Ion Stoica. “Pie: Pooling CPU Memory for LLM Inference.” arXiv preprint arXiv:2411.09317 (2024).