Title: CSPO: chain-structured prompt optimisation for large language models
Authors: Jinshui Wang; Sining Lin; Xingsi Xue; Shuguang Chen; Zhengyi Tang
Addresses: School of Computer Science and Mathematics, Fujian University of Technology, Fuzhou 350118, China
Abstract: Large language models (LLMs) show promise for improving content distribution in mobile communication networks, but their performance depends heavily on input prompts. Manually crafting effective prompts for complex tasks is time-consuming and often suboptimal, highlighting the need for automated optimisation. This paper proposes a chain-structured prompt optimisation (CSPO) method to automatically optimise prompts for LLMs. Inspired by neural network training, CSPO decomposes a task into a series of ordered instruction steps, forming an instruction chain, and then refines this chain through an iterative process analogous to forward and backward propagation. CSPO automatically constructs an initial instruction chain, executes it to generate output, analyses errors, formulates improvement strategies, and updates the chain accordingly. Evaluations on instruction induction, mathematical reasoning, and counterfactual tasks demonstrate that CSPO generally outperforms baseline methods. This study contributes to the field of prompt engineering, offering a novel approach to automatic prompt optimisation for LLMs.
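The optimisation loop summarised in the abstract (construct an initial instruction chain, execute it, analyse errors, and revise) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `call_llm` is a placeholder for any LLM client, and all function and parameter names here are assumptions.

```python
def cspo(call_llm, task_description, train_examples, num_iterations=5):
    """Hypothetical sketch of a chain-structured prompt optimisation loop.

    call_llm        -- placeholder callable: prompt string -> response string
    task_description-- natural-language description of the target task
    train_examples  -- list of (input, expected_output) pairs
    """
    # Build an initial ordered instruction chain from the task description.
    chain = call_llm(
        f"Decompose this task into ordered instruction steps:\n{task_description}"
    )
    for _ in range(num_iterations):
        # "Forward" step: execute the chain on the training inputs.
        outputs = [call_llm(f"{chain}\nInput: {x}") for x, _ in train_examples]
        # Analyse errors against the expected outputs.
        errors = [(x, y, o) for (x, y), o in zip(train_examples, outputs) if o != y]
        if not errors:
            break  # chain already solves all training examples
        # "Backward" step: formulate an improvement strategy, then update the chain.
        feedback = call_llm(
            f"These cases failed:\n{errors}\nSuggest fixes to the instructions."
        )
        chain = call_llm(
            f"Revise the instruction chain:\n{chain}\nUsing feedback:\n{feedback}"
        )
    return chain
```

Passing `call_llm` as a parameter keeps the sketch independent of any particular LLM API; in practice it would wrap a chat-completion call.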
Keywords: large language models; automatic prompt optimisation; natural language processing; mobile communication networks; content distribution.
DOI: 10.1504/IJAHUC.2025.145202
International Journal of Ad Hoc and Ubiquitous Computing, 2025 Vol.48 No.4, pp.233 - 243
Received: 14 Oct 2024
Accepted: 26 Nov 2024
Published online: 25 Mar 2025