Covert Prompt Transmission for Secure Large Language Model Services
Abstract: This paper investigates covert prompt transmission for secure and efficient large language model (LLM) services over wireless networks. We formulate a latency-minimization problem under fidelity and detectability constraints to ensure confidential and covert communication, jointly optimizing the transmit power and the prompt compression ratio. To solve this problem, we first propose a prompt compression and encryption (PCAE) framework, which performs surprisal-guided compression followed by lightweight permutation-based encryption. Specifically, PCAE employs a locally deployed small language model (SLM) to estimate token-level surprisal scores, selectively retaining semantically critical tokens while discarding redundant ones, which significantly reduces both computational overhead and transmission duration. To further enhance covert wireless transmission, we then develop a group-based proximal policy optimization (GPPO) method that samples multiple candidate actions for each state, selects the optimal one within each group, and incorporates a Kullback-Leibler (KL) divergence penalty to improve policy stability and exploration. Simulation results show that PCAE achieves LLM response fidelity comparable to baseline methods while reducing preprocessing latency by over five orders of magnitude, enabling real-time edge deployment. We further validate the effectiveness of PCAE across diverse LLM backbones, including DeepSeek-32B, Qwen-32B, and their smaller variants. Moreover, GPPO reduces covert transmission latency by up to 38.6% compared with existing reinforcement-learning strategies, and further analysis shows that increased transmit power yields additional latency benefits.
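The abstract's PCAE pipeline can be illustrated with a minimal sketch: a small causal language model scores each token's surprisal (negative log-likelihood given its prefix), the highest-surprisal tokens are retained, and the retained token IDs are shuffled with a shared key. This is not the paper's exact pipeline; `gpt2` merely stands in for the locally deployed SLM, the `keep_ratio` threshold is an assumed compression control, and the keyed shuffle is an illustrative stand-in for the paper's lightweight permutation-based encryption.

```python
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical stand-in for the paper's locally deployed SLM.
tok = AutoTokenizer.from_pretrained("gpt2")
slm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def compress_prompt(prompt: str, keep_ratio: float = 0.5) -> list[int]:
    """Keep the highest-surprisal tokens; drop the most predictable ones."""
    ids = tok(prompt, return_tensors="pt").input_ids          # (1, T)
    with torch.no_grad():
        logits = slm(ids).logits                              # (1, T, V)
    # Surprisal of token t = -log p(token_t | tokens_<t); the first token
    # has no prefix, so give it +inf and always retain it.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    nll = -log_probs.gather(-1, ids[:, 1:].unsqueeze(-1)).squeeze(-1)
    surprisal = torch.cat([torch.full((1, 1), float("inf")), nll], dim=1)[0]
    k = max(1, int(keep_ratio * surprisal.numel()))
    keep = torch.topk(surprisal, k).indices.sort().values     # keep word order
    return ids[0, keep].tolist()

def permute_encrypt(token_ids: list[int], key: int) -> list[int]:
    """Illustrative permutation 'encryption': keyed shuffle of positions."""
    rng = random.Random(key)
    perm = list(range(len(token_ids)))
    rng.shuffle(perm)
    return [token_ids[i] for i in perm]

def permute_decrypt(cipher: list[int], key: int) -> list[int]:
    """Invert the keyed shuffle at the receiver using the shared key."""
    rng = random.Random(key)
    perm = list(range(len(cipher)))
    rng.shuffle(perm)
    out = [0] * len(cipher)
    for dst, src in enumerate(perm):
        out[src] = cipher[dst]
    return out
```

For example, `permute_decrypt(permute_encrypt(compress_prompt(p, 0.4), key=7), key=7)` recovers the compressed token sequence at the receiver; only preprocessing runs locally, which is what keeps the reported latency low.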
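The GPPO idea described above can likewise be sketched in a few lines: sample a group of candidate actions for the same state, reinforce the best one in the group, and add a KL penalty against a frozen reference policy. This is a sketch under assumptions, not the paper's exact algorithm: `policy` and `ref_policy` are assumed to be modules mapping a state to action logits, and `reward_fn` is a hypothetical reward for the power/compression action (e.g., negative transmission latency).

```python
import torch

def gppo_step(policy, ref_policy, optimizer, state, reward_fn,
              group_size: int = 8, beta: float = 0.01) -> float:
    """One GPPO-style update: sample a group of candidate actions for one
    state, select the best-rewarded action in the group, and regularize
    with a KL penalty toward a frozen reference policy."""
    dist = torch.distributions.Categorical(logits=policy(state))
    actions = dist.sample((group_size,))                   # candidate group
    rewards = torch.tensor([reward_fn(int(a)) for a in actions])
    best = int(rewards.argmax())                           # optimal in group
    with torch.no_grad():
        ref_dist = torch.distributions.Categorical(logits=ref_policy(state))
    kl = torch.distributions.kl_divergence(dist, ref_dist)
    # Reinforce the best-in-group action; the KL term stabilizes the policy
    # while still allowing exploration across the sampled group.
    loss = -dist.log_prob(actions[best]) + beta * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```

Sampling a group per state (rather than one action) is what distinguishes this scheme from vanilla PPO: the within-group comparison supplies a baseline for free, and the KL penalty keeps updates close to the reference policy.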