Five Tool-API Design Patterns to Stop LLM Agents from Looping and Failing Silently

<h2>The $50,000 Wake-Up Call</h2><p>In July 2025, a developer's Claude Code instance entered a recursion loop and consumed 1.67 billion tokens in just five hours. Estimates of the resulting API charges ranged from $16,000 to $50,000 before anyone noticed the problem. The agent didn't crash or throw an error. It simply kept calling tools, getting confused, and calling more tools—silently accumulating costs. Traditional software crashes; LLM agents spend.</p><figure style="margin:20px 0"><img src="https://media2.dev.to/dynamic/image/width=1200,height=627,fit=cover,gravity=auto,format=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5f4smld9sxt909l6fyao.png" alt="Five Tool-API Design Patterns to Stop LLM Agents from Looping and Failing Silently" style="width:100%;height:auto;border-radius:8px" loading="lazy"><figcaption style="font-size:12px;color:#666;margin-top:5px">Source: dev.to</figcaption></figure><p>This is the kind of failure that most teams discover the hard way. You design a clean tool interface, the agent works perfectly in your test environment, you ship to production, and three weeks later an edge case triggers a loop. The following five patterns—drawn from production systems handling thousands of tool calls per day—prevent exactly these issues. None rely on better prompts. They rely on better tool design.</p><h2 id="understanding-loops">Understanding Why Agents Loop</h2><p>An LLM agent takes a user request, reasons about which tool to call, calls it, gets a result, decides if the goal is achieved, and either responds or calls another tool. This loop should terminate when the goal is achieved or the model decides no further action is needed. But it doesn't terminate when:</p><ul><li>The tool result is ambiguous. The model can't tell if the call succeeded, so it tries again with slightly different parameters.</li><li>The tool fails silently. 
The model receives a non-error response that doesn't contain the data it needed, interpreting it as a signal to retry.</li><li>The tool returns conflicting information. Two consecutive calls yield different results, so the model loses confidence and tries to 'verify' by calling more tools.</li><li>The model misreads its own previous output. With long context windows, the agent sees a previous tool result, forgets it already processed it, and treats it as new information.</li></ul><p>Every one of these is preventable through tool design. The model isn't the problem; the interface is.</p><h2 id="pattern-1">Pattern 1: Make Every Tool Result Self-Describing</h2><p>The most common cause of agent loops is tool results that the model cannot interpret without making assumptions. Consider an under-specified result:</p><pre><code>{ "results": [ {"id": "h_1234", "name": "Hotel Granbell", "price": 128}, {"id": "h_5678", "name": "Shibuya Stream", "price": 142} ] }</code></pre><p>The model must guess what this means. Are these all the matches? Is the search complete? What was searched for? When confused, it will call the tool again to 'verify.' A self-describing result clarifies everything:</p><pre><code>{ "status": "success", "search_id": "srch_abc123", "query_summary": { "destination": "Shibuya, Tokyo", "check_in": "2026-07-12", "check_out": "2026-07-15", "guests": 1, "max_price": 150 }, "results": [ {"id": "h_1234", "name": "Hotel Granbell", "price": 128, "currency": "USD"}, {"id": "h_5678", "name": "Shibuya Stream", "price": 142, "currency": "USD"} ], "total_matches": 2, "has_more": false }</code></pre><p>Now the model knows the search parameters, the total count, and whether there are additional pages. It has no reason to retry.</p><h2 id="pattern-2">Pattern 2: Include Explicit Status and Next Steps</h2><p>Even with self-describing data, an agent can get stuck if it doesn't know what to do next. 
Every tool response should include a <strong>status</strong> field and an <strong>available_actions</strong> array. For example:</p><pre><code>{ "status": "requires_confirmation", "message": "Booking found but price has changed from $128 to $145.", "available_actions": ["confirm_booking", "cancel_booking", "search_again"] }</code></pre><p>This tells the agent explicitly which tools it may call next, eliminating guesswork and preventing loops where the model tries to call the same tool again with slight variations.</p><h2 id="pattern-3">Pattern 3: Design Tools with Idempotent Operations</h2><p>Idempotency ensures that calling a tool multiple times with the same parameters produces the same result. For write operations like transferring funds or updating records, this is critical. Use <strong>idempotency keys</strong> or <strong>request IDs</strong>. If an agent retries a transfer, the second call should either return the original success response or a clear 'already processed' message. Without idempotency, retries can duplicate actions, leading to data corruption and further confusion.</p><h2 id="pattern-4">Pattern 4: Implement Bounded Retry Budgets at the Tool Level</h2><p>Some failures are unavoidable: network timeouts, transient errors. Instead of letting the agent retry indefinitely, each tool should enforce a <strong>retry budget</strong>. Return a <strong>retry_remaining</strong> count in the response. 
When the budget is exhausted, the tool returns a hard failure and the agent must escalate to a human or try an alternative path. This caps costs and prevents silent spirals.</p><h2 id="pattern-5">Pattern 5: Provide Semantic Versioning for Tool Schemas</h2><p>Tool schemas change. When an agent built for one version encounters a different schema, it may misinterpret fields and loop. Use semantic versioning in the <strong>tool_metadata</strong>. If the agent's schema is outdated, the tool can respond with a <strong>schema_mismatch</strong> status and a link to the updated schema. This allows the agent to adapt or fail gracefully rather than repeatedly calling with wrong parameters.</p><h2>Conclusion</h2><p>Agent loops and silent failures are not inevitable. They are symptoms of poor tool <a href="#pattern-1">API design</a>. By making results self-describing, including explicit next steps, ensuring idempotency, enforcing retry budgets, and versioning schemas, you can build LLM-powered systems that are robust, predictable, and cost-effective. The patterns above have prevented millions of wasted tokens in production. Apply them to your own tools and save your budget—and your sanity.</p>
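<p>To make Pattern 3 concrete, here is a minimal Python sketch of an idempotency-key guard. The function name, the response fields, and the in-memory store are illustrative assumptions; a production system would persist keys in a database with an expiry:</p>

```python
# Illustrative idempotency guard; names and fields are hypothetical.
# A real service would back this store with a database, not a dict.
_processed: dict[str, dict] = {}

def transfer_funds(idempotency_key: str, amount: float, to_account: str) -> dict:
    # Seen this key before? Return the original outcome instead of
    # executing the transfer a second time.
    if idempotency_key in _processed:
        replay = dict(_processed[idempotency_key])
        replay["status"] = "already_processed"
        return replay
    # First call: perform the side effect and record the result.
    response = {
        "status": "success",
        "transfer_id": f"tr_{idempotency_key}",
        "amount": amount,
        "to_account": to_account,
    }
    _processed[idempotency_key] = response
    return response
```

<p>An agent that retries the same call now receives an explicit 'already_processed' signal instead of silently duplicating the transfer.</p>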
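<p>Pattern 4's retry budget can be enforced in a thin wrapper around each tool. This is a simplified sketch; the wrapper name, the per-request counter, and the response fields are assumptions for illustration:</p>

```python
# Illustrative retry-budget wrapper; names and fields are hypothetical.
MAX_RETRIES = 3
_attempts: dict[str, int] = {}

def call_with_budget(request_id: str, tool_fn) -> dict:
    used = _attempts.get(request_id, 0)
    if used >= MAX_RETRIES:
        # Budget exhausted: a hard failure the agent cannot retry past.
        return {
            "status": "failed_permanently",
            "retry_remaining": 0,
            "message": "Retry budget exhausted; escalate or try another path.",
        }
    _attempts[request_id] = used + 1
    try:
        result = tool_fn()
        result["retry_remaining"] = MAX_RETRIES - _attempts[request_id]
        return result
    except TimeoutError:
        # Transient failure: report how much budget is left.
        return {
            "status": "transient_error",
            "retry_remaining": MAX_RETRIES - _attempts[request_id],
            "message": "Timed out; retry only if budget remains.",
        }
```

<p>Because the count is tracked per request rather than per call, the cap holds even when the model rephrases its parameters between attempts.</p>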
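<p>For Pattern 5, a tool can compare schema versions before doing any work. A minimal sketch follows; the field names are assumptions and the schema URL is a placeholder, not a real endpoint:</p>

```python
# Illustrative schema-version gate; field names and URL are placeholders.
TOOL_SCHEMA_VERSION = "2.1.0"

def check_schema(client_version):
    """Return a schema_mismatch response on a major-version difference, else None."""
    tool_major = TOOL_SCHEMA_VERSION.split(".")[0]
    client_major = client_version.split(".")[0]
    if client_major != tool_major:
        return {
            "status": "schema_mismatch",
            "expected_version": TOOL_SCHEMA_VERSION,
            "received_version": client_version,
            "schema_url": "https://example.com/tools/search_hotels/schema",
            "message": "Client schema is incompatible; fetch the current schema before retrying.",
        }
    return None  # Compatible; proceed with the normal tool logic.
```

<p>Gating only on the major version follows semantic-versioning convention: minor and patch bumps stay backward compatible, so the agent is interrupted only when a call would genuinely misinterpret fields.</p>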