Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Publications

MELL: Memory-Efficient Large Language Model Serving via Multi-GPU KV Cache Management

Published in INFOCOM 2025, 2024

Serving large language models (LLMs) for massive numbers of users is challenged by the significant memory footprint of the transient state, known as the key-value (KV) cache, which scales with sequence length and the number of requests. Instead of renting or buying more expensive GPUs, the load imbalance of the KV cache across GPUs, coupled with recent advances in inter-GPU communication, provides an opportunity to serve more requests via request migration. However, high migration overhead and unpredictable request patterns make this challenging. This paper therefore proposes MELL, a memory-efficient LLM serving system based on multi-GPU KV cache management. It reduces the number of GPUs needed in the system by accounting for the dynamic KV cache load and the cost of request migration. Specifically, we first develop an adaptive request migration mechanism to balance computational and communication overheads and adapt to diverse resource conditions. We then design an online algorithm tailored to a multi-LLM-request, multi-GPU scheduling problem with migration enabled, aiming to minimize the number of required GPUs while limiting the number of migrations. Finally, we implement a prototype of MELL and demonstrate that it reduces the number of GPUs by up to 31% and increases GPU utilization by up to 43% compared to existing LLM serving systems.

Recommended citation: Qianli Liu, Zicong Hong, Peng Li, Fahao Chen, and Song Guo. (2025). "MELL: Memory-Efficient Large Language Model Serving via Multi-GPU KV Cache Management." INFOCOM 2025.
Download Paper

DIRECTOR: Accelerating Distributed MoE Serving via Online Proactive Expert Placement

Published in INFOCOM 2026, 2026

This paper accelerates distributed Mixture-of-Experts (MoE) serving with a proactive online expert placement strategy. It improves end-to-end latency and throughput under dynamic request patterns by balancing communication and compute overheads across GPU servers.

Recommended citation: Qianli Liu, Kaibin Guo, Zicong Hong, Peng Li, Fahao Chen, Haodong Wang, Jian Lin, and Song Guo. (2026). "DIRECTOR: Accelerating Distributed MoE Serving via Online Proactive Expert Placement." INFOCOM 2026.
Download Paper

PPAI: Enabling Personalized LLM Agent Interoperability for Collaborative Edge Intelligence

Published in INFOCOM 2026, 2026

PPAI studies personalized LLM agents in dynamic P2P edge networks and proposes a scalable query-agent pairing mechanism that adapts to heterogeneous local resources. It improves response quality and load balance while preserving low-latency interaction.

Recommended citation: Zile Wang, Qianli Liu, Kaibin Guo, Haodong Wang, Jian Lin, Zicong Hong, and Song Guo. (2026). "PPAI: Enabling Personalized LLM Agent Interoperability for Collaborative Edge Intelligence." INFOCOM 2026.
Download Paper

KVDrive: A Holistic Multi-Tier KV Cache Management System for Long-Context LLM Inference

Published in SIGMOD 2026, 2026

KVDrive proposes a holistic multi-tier memory management system (GPU memory, host DRAM, and SSD) for long-context LLM inference. It coordinates KV-cache admission, tiering, and scheduling to improve throughput while reducing service degradation under high memory pressure.

Recommended citation: Jian Lin, Jiazhi Mi, Zicong Hong, Haodong Wang, Qianli Liu, Haoyue Zhang, Peng Li, and Song Guo. (2026). "KVDrive: A Holistic Multi-Tier KV Cache Management System for Long-Context LLM Inference." SIGMOD 2026.
Download Paper

Teaching