
The "Smart Controller" Making AI Agents Efficient and Adaptive

In the rapidly evolving world of artificial intelligence, “agentic” systems—AI models that can use tools, follow multi-step workflows, and verify their own work—are the new frontier. However, most of these systems suffer from a “one-size-fits-all” problem. Whether you ask a complex agent to solve a high-level calculus problem or simply ask it for the current time, it often uses the same expensive, multi-stage reasoning process.

A team of researchers from Arizona State University is aiming to change that. In a new paper titled “Learning to Configure Agentic AI Systems,” the authors introduce ARC (Agentic Resource & Configuration learner), a framework that allows AI to dynamically “right-size” its own architecture for every individual question it receives.

The Problem with Static AI

Currently, developers build AI agents using fixed templates. They might design a “Reasoning Agent” that always performs three steps of internal thought before answering. This is great for difficult tasks but a waste of computational power and money for easy ones. Furthermore, overloading an AI with too many tools or too much prior context can actually hurt it: models tend to overlook information buried in the middle of long prompts, a phenomenon known as the “lost-in-the-middle” effect.
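To see what that rigidity costs, here is a minimal sketch of such a fixed template, assuming a placeholder call_llm helper (both the helper and the template are hypothetical, not from the paper). The trivial query pays for exactly as many model calls as the hard one.

```python
# A fixed "Reasoning Agent" template: every query, easy or hard,
# pays for the same three reasoning calls plus a final answer call.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned string here."""
    return f"<response to: {prompt[:40]}...>"

def static_reasoning_agent(query: str) -> str:
    thoughts = []
    for step in range(3):  # hard-coded: three steps of internal thought
        thoughts.append(call_llm(f"Step {step + 1}: reason about {query!r}"))
    return call_llm(f"Given {thoughts}, answer: {query}")

# Four LLM calls each, even though the first query needs none of them:
print(static_reasoning_agent("What time is it?"))
print(static_reasoning_agent("Solve log2(x) + log2(x - 7) = 3"))
```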

How ARC Works: The Project Manager for AI

ARC acts like an experienced project manager. When a query comes in, ARC doesn’t just pass it to a Large Language Model (LLM). Instead, it uses a hierarchical policy to make two critical sets of decisions, sketched in code after the list:

  1. The Structure Policy: This decides what to use. It chooses from various workflows (like “Direct” for easy tasks or “Evaluator-Optimizer” for hard ones), selects specific tools (like a calculator or web search), and sets a “token budget” to limit or expand how much the AI can “think.”
  2. The Prompt Policy: This decides how to talk to the agents. It selects specific instruction fragments—such as “verify intermediate steps” or “decompose the problem”—to tailor the AI’s behavior to the specific task.
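
To make the two levels concrete, here is a minimal, rule-based sketch of the hierarchy. Everything in it is hypothetical: ARC’s real policies are learned rather than hand-written, and names like StructureConfig and structure_policy are ours, not the paper’s.

```python
from dataclasses import dataclass, field

# Hypothetical data structures mirroring ARC's two decision levels.
# The real policies are learned; this toy version hard-codes a heuristic
# purely to show *what* each level decides.

@dataclass
class StructureConfig:      # level 1: "what to use"
    workflow: str           # e.g. "Direct" or "Reason+Verify+Ans"
    tools: list[str]        # e.g. ["calculator"], ["web_search"]
    token_budget: str       # e.g. "Low" or "High"

@dataclass
class PromptConfig:         # level 2: "how to talk to the agents"
    instructions: list[str] = field(default_factory=list)

def structure_policy(query: str) -> StructureConfig:
    # Stand-in for the learned structure policy: a crude difficulty guess.
    looks_hard = any(tok in query.lower() for tok in ("log", "prove", "solve"))
    if looks_hard:
        return StructureConfig("Reason+Verify+Ans", ["calculator"], "High")
    return StructureConfig("Direct", ["calculator"], "Low")

def prompt_policy(structure: StructureConfig) -> PromptConfig:
    # Stand-in for the learned prompt policy. It is conditioned on the
    # structure choice, which is what makes the policy hierarchical.
    fragments = []
    if structure.workflow != "Direct":
        fragments += ["decompose the problem", "verify intermediate steps"]
    return PromptConfig(fragments)

def configure(query: str) -> tuple[StructureConfig, PromptConfig]:
    structure = structure_policy(query)
    return structure, prompt_policy(structure)

print(configure("What is 15% of 200?"))
print(configure("Find the value of x where log2(x) + log2(x-7) = 3"))
```

The point of the hierarchy is visible in prompt_policy: which instruction fragments make sense depends on which workflow was chosen first, so the two decisions cannot be made independently.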

Concrete Examples: Intuition in Action

To understand the power of ARC, consider two different user inputs:

  • Query A: “What is 15% of 200?” A traditional agent might spin up a complex multi-agent “voting” workflow, spending several cents of compute on a problem that should cost a fraction of a penny. ARC, seeing the simplicity, would select a “Direct” workflow, assign a calculator tool, and set a “Low” token budget. It’s fast, accurate, and cheap.

  • Query B: “Find the value of x where $\log_2(x) + \log_2(x-7) = 3$.” This is a multi-step algebraic problem prone to “hallucination.” ARC recognizes the complexity and selects a “Reason+Verify+Ans” workflow. It allocates a “High” token budget and specifically includes a prompt instruction to “verify intermediate steps” to ensure the math stays on track (the worked algebra below shows why that check matters).
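For intuition on why that verification instruction earns its keep, here is the worked algebra for Query B. The derivation below is ours, not the paper’s, but it shows the trap: the equation produces an extraneous root that an unverified chain of reasoning could easily report as an answer.

```latex
\begin{align*}
\log_2(x) + \log_2(x-7) &= 3 \\
\log_2\bigl(x(x-7)\bigr) &= 3 \\
x(x-7) &= 2^3 = 8 \\
x^2 - 7x - 8 &= 0 \\
(x-8)(x+1) &= 0 \quad\Rightarrow\quad x = 8 \ \text{or}\ x = -1.
\end{align*}
```

Both logarithms require $x > 7$, so $x = -1$ is extraneous and the answer is $x = 8$ (check: $\log_2 8 + \log_2 1 = 3 + 0 = 3$). Rejecting that spurious root is exactly the kind of intermediate check the “verify intermediate steps” fragment is meant to enforce.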

Significant Gains in Efficiency and Accuracy

The researchers tested ARC across several rigorous benchmarks, including math reasoning (GSM8K) and tool-based question answering (HotpotQA). The results were striking: ARC achieved up to 25% higher task accuracy than standard “one-size-fits-all” designs.

Crucially, because ARC learns to be frugal with simple queries, it significantly reduced runtime costs and token usage. The system also showed a remarkable ability to “scale up.” A policy trained on a small, cheap model (like Qwen 7B) could be successfully applied to much larger, more powerful models (like Qwen 72B) without needing to be retrained from scratch.

The Future of Autonomous AI

The significance of ARC lies in its move away from rigid, hand-tuned heuristics. By treating “agent configuration” as a problem that the AI can learn to solve through experience, the researchers have paved the way for AI systems that are not only smarter but significantly more practical for real-world deployment, where speed and budget are always factors.