
AgentSquare: Modular LLM Agent Search

Updated 19 January 2026
  • AgentSquare is an automatic LLM agent search framework that decomposes agent design into four core modules: Planning, Reasoning, Tool Use, and Memory.
  • It leverages LLM-driven module evolution, recombination, and in-context performance prediction to optimize architectures for diverse tasks.
  • Experiments across web, embodied, tool use, and game domains show significant performance gains over human-engineered baselines with interpretable designs.

AgentSquare is an automatic LLM agent search framework that formalizes and operationalizes the Modularized LLM Agent Search (MoLAS) problem. It defines a systematic and extensible design space for LLM-based agents, decomposing them into four core, swappable modules—Planning, Reasoning, Tool Use, and Memory—with uniform IO interfaces. AgentSquare leverages LLM-driven module evolution, module recombination, and in-context performance prediction to search efficiently for agent architectures that optimize task-specific evaluation functions. Experiments across web, embodied, tool, and game application domains demonstrate that AgentSquare outperforms human-engineered baselines and yields interpretable design patterns for agentic systems (Shang et al., 2024).

1. Modularized LLM Agent Search: Formalization and Motivation

The MoLAS problem is defined over a fixed, standardized module pool containing four module types: Planning ($\mathcal{P}$), Reasoning ($\mathcal{R}$), Tool Use ($\mathcal{T}$), and Memory ($\mathcal{M}$). Each agent is a tuple $A = (P, R, T, M)$, where $P \in \mathcal{P}$, $R \in \mathcal{R}$, $T \in \mathcal{T}$, and $M \in \mathcal{M}$. Given a task description $d$ and a task-level evaluation function $\text{Eval}_d(\cdot)$, the optimization objective is:

$\underset{P \in \mathcal{P},\, R \in \mathcal{R},\, T \in \mathcal{T},\, M \in \mathcal{M}}{\arg\max}\ \text{Eval}_d(P, R, T, M)$

(Modularized LLM Agent Search Objective; Eq. 1 in (Shang et al., 2024))
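Read literally, Eq. 1 is a discrete argmax over the Cartesian product of the four module pools. A minimal brute-force sketch makes the search space concrete; the pool entries and the scoring function below are hypothetical stand-ins, not the paper's actual modules or evaluator:

```python
from itertools import product

# Hypothetical module pools; in MoLAS each entry is a real module implementation.
planning_pool = ["TD", "IO", "DEPS"]
reasoning_pool = ["CoT", "ToT", "SF-ToT"]
tooluse_pool = ["none", "toolbench"]
memory_pool = ["none", "generative_agents"]

def eval_d(p, r, t, m):
    """Stand-in for the task-level evaluation function Eval_d (a rollout score)."""
    scores = {"TD": 0.3, "IO": 0.2, "DEPS": 0.1,
              "CoT": 0.1, "ToT": 0.2, "SF-ToT": 0.3,
              "none": 0.0, "toolbench": 0.1, "generative_agents": 0.1}
    return scores[p] + scores[r] + scores[t] + scores[m]

# Exhaustive argmax over P x R x T x M -- tractable only for tiny pools.
# AgentSquare replaces this enumeration with LLM-guided search.
best = max(product(planning_pool, reasoning_pool, tooluse_pool, memory_pool),
           key=lambda agent: eval_d(*agent))
```

Even with a handful of modules per pool the product grows multiplicatively, which is why the paper resorts to guided search rather than enumeration.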

The rationale for this modular abstraction is threefold:

  • Reusability: Existing agent designs can be decomposed into these modules (Chain-of-Thought ↔ Reasoning, WebGPT’s browser advisor ↔ Tool Use, Voyager’s skill memory ↔ Memory).
  • Extensibility: The design space expands as new modules are published and added to any of the four pools.
  • Searchability: The uniform IO interface (both for code and LLM prompting) enables automatic swapping, facilitating AutoML-style architecture search instead of manual, task-specific engineering.

2. Modular Design Space: Module Definitions and Interfaces

Each of the four fundamental modules has a well-specified IO contract:

| Module | Input(s) → Output(s) | Functionality |
| --- | --- | --- |
| Planning | $P(d, f) \rightarrow \{s_1, \ldots, s_n\}$ | Decomposes task $d$ (plus optional feedback $f$) into sub-tasks $s_i$ |
| Reasoning | $R(s_i, f_i) \rightarrow r_i$ | Solves or reasons about sub-task $s_i$, e.g., via CoT or ToT |
| Tool Use | $T(p_{ij}, \tau) \rightarrow t_{ij}$ | Selects tool $t_{ij}$ from tool pool $\tau$ for sub-task $p_{ij}$ |
| Memory | $M_{\text{write}}(o, \text{mem}) \rightarrow \text{mem}'$ | Writes observation/action $o$ to memory |
| Memory | $M_{\text{retrieve}}(o, \text{mem}) \rightarrow m$ | Retrieves relevant memory content $m$ |

All modules operate over textual input, with optional feedback or context, and output type-specific responses (sub-tasks, solutions, tool choices, or memory states).
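The uniform IO contract above can be sketched as a set of structural interfaces; the class and parameter names here are illustrative, not taken from the AgentSquare codebase:

```python
from typing import Protocol

class Planning(Protocol):
    def __call__(self, task: str, feedback: str = "") -> list[str]:
        """P(d, f) -> [s_1, ..., s_n]: decompose task d into sub-tasks."""

class Reasoning(Protocol):
    def __call__(self, subtask: str, feedback: str = "") -> str:
        """R(s_i, f_i) -> r_i: solve or reason about sub-task s_i."""

class ToolUse(Protocol):
    def __call__(self, subtask: str, tool_pool: dict[str, str]) -> str:
        """T(p_ij, tau) -> t_ij: select a tool from pool tau for the sub-task."""

class Memory(Protocol):
    def write(self, observation: str, mem: list[str]) -> list[str]:
        """M_write(o, mem) -> mem': append observation/action o to memory."""
    def retrieve(self, observation: str, mem: list[str]) -> str:
        """M_retrieve(o, mem) -> m: fetch memory content relevant to o."""

class Agent:
    """An agent is a tuple A = (P, R, T, M) of freely swappable modules."""
    def __init__(self, P: Planning, R: Reasoning, T: ToolUse, M: Memory):
        self.P, self.R, self.T, self.M = P, R, T, M
```

Because every module of a given type shares one signature, any conforming implementation can be dropped into the `Agent` tuple without touching the rest of the system, which is exactly what makes automatic recombination possible.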

3. AgentSquare Search Framework: Evolution, Recombination, and Surrogate Prediction

The AgentSquare framework employs an iterative, population-based search guided by two LLM-driven processes, module evolution ($\pi_\xi$) and module recombination ($\pi_\theta$), plus a surrogate performance predictor ($\pi_p$) to accelerate selection.

3.1 High-Level Search Algorithm

  1. Initialization: Start with a randomly sampled seed agent $A_0$.
  2. Module Evolution ($\pi_\xi$): Prompt $\pi_\xi$ to mutate, recombine, or extend module code for any of $(P, R, T, M)$, yielding candidate agents $\{A_e^1, \ldots, A_e^N\}$; each is evaluated and recorded in the experience pool $\mathbb{E}$.
    • $A_e = \pi_\xi((P_0', R_0', T_0', M_0'), d, N, \mathcal{P}, \mathcal{R}, \mathcal{T}, \mathcal{M}, \mathbb{E})$ (Eq. 3)
  3. Module Recombination ($\pi_\theta$): Swap published modules in and out of $\mathcal{P}, \mathcal{R}, \mathcal{T}, \mathcal{M}$ based on argmax selection over $\mathbb{E}$, generating new candidates $\{A_r^1, \ldots, A_r^N\}$.
    • $A_r = \pi_\theta((P_0, R_0, T_0, M_0), d, N, \mathcal{P}, \mathcal{R}, \mathcal{T}, \mathcal{M}, \mathbb{E})$ (Eq. 2)
  4. Performance Predictor ($\pi_p$): For each candidate agent $A'$, predict $v' \approx \text{Eval}_d(A')$ using in-context learning over a small set of past (agent, performance) pairs:
    • $v' = \pi_p(A', d, \mathcal{P}, \mathcal{R}, \mathcal{T}, \mathcal{M}, \mathbb{E})$ (Eq. 4)
  5. Selection and Repeat: Retain the best candidates; repeat until convergence or the maximum number of episodes ($K$) is reached.

Algorithm 1 in (Shang et al., 2024) details the episode-based alternation, population size per episode (NN), and pool/experience management.
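The episode loop can be sketched in a few dozen lines. Everything below is a simplified stand-in: the pools, the evaluator, and the three operators ($\pi_\xi$, $\pi_\theta$, $\pi_p$) are toy functions, whereas in AgentSquare the latter three are LLM calls:

```python
import random

# Hypothetical module pools; real pools hold code modules, not labels.
POOLS = {
    "P": ["TD", "IO"], "R": ["CoT", "SF-ToT"],
    "T": ["none", "toolbench"], "M": ["none", "generative_agents"],
}

def eval_d(agent):
    """Stand-in for the expensive real task evaluation Eval_d."""
    return sum(len(v) for v in agent.values()) / 40.0

def pi_xi(agent, n):
    """Module evolution: perturb one module per candidate (an LLM in the paper)."""
    out = []
    for _ in range(n):
        cand = dict(agent)
        k = random.choice(list(POOLS))
        cand[k] = random.choice(POOLS[k])
        out.append(cand)
    return out

def pi_theta(agent, n):
    """Module recombination: re-draw modules from the pools."""
    return [{k: random.choice(POOLS[k]) for k in POOLS} for _ in range(n)]

def pi_p(agent, experience):
    """Surrogate predictor: reuse a recorded score if seen, else evaluate.
    The paper uses in-context LLM prediction here instead."""
    matches = [v for a, v in experience if a == agent]
    return matches[0] if matches else eval_d(agent)

def search(episodes=5, n=4, seed=0):
    random.seed(seed)
    best = {k: POOLS[k][0] for k in POOLS}          # initial agent A_0
    experience = [(best, eval_d(best))]             # experience pool E
    for _ in range(episodes):
        candidates = pi_xi(best, n) + pi_theta(best, n)
        # Screen cheaply with the predictor, then truly evaluate the top pick.
        top = max(candidates, key=lambda a: pi_p(a, experience))
        score = eval_d(top)
        experience.append((top, score))
        if score > max(v for _, v in experience[:-1]):
            best = top
    return best, max(v for _, v in experience)
```

The key economy is visible in the loop: many candidates are scored by the cheap predictor per episode, but only one full evaluation is paid for.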

3.2 LLM Roles

  • $\pi_\theta$ (proposer): Selects and replaces modules from the pool, informed by real-world performance history.
  • $\pi_\xi$ (programmer): Mutates and generates module code, exploring beyond previously published designs.
  • $\pi_p$ (predictor): Quickly estimates task performance to avoid expensive full-agent rollouts (roughly 0.025% of the cost of real evaluation in ALFWorld).

Empirically, the surrogate predictor correlates strongly with real task reward (Pearson $\rho \approx 0.9$ or higher across benchmarks).
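A correlation check of this kind is straightforward to reproduce; the (predicted, actual) score pairs below are hypothetical, used only to illustrate the computation:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between surrogate predictions and real rewards."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (predicted, actual) score pairs for a set of searched agents.
predicted = [0.42, 0.55, 0.61, 0.70, 0.78]
actual    = [0.40, 0.58, 0.60, 0.73, 0.80]
rho = pearson(predicted, actual)
```

A high $\rho$ is what licenses using $\pi_p$ to screen candidates: rank order under the predictor then closely tracks rank order under the true evaluator.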

4. Empirical Evaluation: Benchmarks and Comparative Results

Comprehensive experiments span six benchmarks across four domains:

  • Web: WebShop (e-commerce)
  • Embodied: ALFWorld (navigation), ScienceWorld (simulated science tasks)
  • Tool Use: TravelPlanner (external data search/planning), M3ToolEval (multi-turn tool selection)
  • Game: Classical planning via PDDL tasks

Metrics include success rate, progress rate, task score, and micro-pass rate, as appropriate per environment.

Baseline methods:

  • 12 prominent human-crafted agents (including Chain-of-Thought, Self-refine, ToT, Step-back, Voyager, HuggingGPT, Generative Agents, DEPS, OPENAGI, DiLu).
  • Module-level search: random and Bayesian optimization over $(P, R, T, M)$ tuples.
  • Prompt-level search: OPRO (iterative prompt search).

Main results under GPT-4o (see Table 2 in (Shang et al., 2024)):

  • An average performance gain of 17.2% over the best-known human-designed baselines.
  • Individual task improvements include +26.1% on ALFWorld, +30.6% on M3Tool, +20.5% on ScienceWorld, +14.1% on WebShop.
  • The search trajectory (Figure 1) is smooth and monotonic, unlike plateauing trends in random, Bayesian, or module-only search.
  • Per-iteration cost and efficiency: the search is inexpensive; e.g., the full ALFWorld search costs roughly \$25 in total API spend.

The learned modules are reusable, amortizing search cost across future tasks and reducing overhead for subsequent deployments.

5. Module-Level Insights and Design Interpretability

AgentSquare produces explicit, human-interpretable "module recipes" identifying which innovations drive performance.

  • ALFWorld: Planning leverages a learned "TD" (tree-decomposition) module; reasoning employs "SF-ToT" (self-feedback plus Tree-of-Thought, a synthesis of Self-refine [Madaan et al.] and ToT [Yao et al.]); memory uses Generative Agents' episodic schema. This triad surpasses Self-refine alone by over 25%.
  • WebShop: Employs "IO" (iterative optimization) planning and "HTSS" (hierarchical taxonomy search strategy) reasoning.
  • Other tasks: Similar interpretable combinations arise for ScienceWorld, M3Tool, TravelPlanner, and PDDL, as detailed in Table A.5 and Figures A.12–A.17.

A notable observation is that for open-world or compositional tasks, planning and reasoning modules are typically the performance bottlenecks, while sophisticated memory modules are critical primarily for long-horizon, embodied scenarios. These modular discoveries chart the landscape for future designer-driven or hybrid search refinement.

6. Cost, Scalability, and Reusability Considerations

AgentSquare's hybrid real-plus-surrogate evaluation loop amortizes cost by:

  • Using $\pi_p$ for cheap, high-throughput candidate screening.
  • Persisting high-performing modules in shared pools for future tasks (a "catalogue effect").

Most tasks converge within 10–20 iterations, and code-level innovations discovered during search are immediately transferable by virtue of the standardized module interface. This one-time search expense contrasts with the repeated per-task engineering of prior work.

7. Key Equations and Formalization Summary

The framework's main equations and algorithmic operations:

  • MoLAS objective (Eq. 1):

    $\underset{P \in \mathcal{P},\, R \in \mathcal{R},\, T \in \mathcal{T},\, M \in \mathcal{M}}{\arg\max}\ \text{Eval}_d(P, R, T, M)$

  • Module recombination proposer (Eq. 2):

    $A_r = \pi_\theta((P_0, R_0, T_0, M_0), d, N, \mathcal{P}, \mathcal{R}, \mathcal{T}, \mathcal{M}, \mathbb{E})$

  • Module evolution programmer (Eq. 3):

    $A_e = \pi_\xi((P'_0, R'_0, T'_0, M'_0), d, N, \mathcal{P}, \mathcal{R}, \mathcal{T}, \mathcal{M}, \mathbb{E})$

  • Performance predictor (Eq. 4):

    $v' = \pi_p(A', d, \mathcal{P}, \mathcal{R}, \mathcal{T}, \mathcal{M}, \mathbb{E})$

    Algorithm 1 describes the detailed alternation, pool management, and experience updating.


    In summary, AgentSquare operationalizes LLM agent design as a discrete, standardized, modular search problem, leveraging LLMs both as generative operators (module programming and recombination) and as performance surrogates, resulting in empirically superior and interpretable agent architectures for diverse reasoning and interaction environments (Shang et al., 2024).
