
LLMs Are the Brains, Tools Are the Plumbing – Keeping AI Infrastructure Out of the Way

A minimalist, radically transparent approach to LLM-tool integration: let models do the thinking while infrastructure stays out of the way. Why "smart agents, not smart tools" wins—plus practical design principles.

Herman Sjøberg
AI Integration Expert
August 11, 2025 · 7 min read
LLM agents, MCP, Model Context Protocol, AI infrastructure, API integration, LangChain, OpenAI function calling

title: "LLMs Are the Brains, Tools Are the Plumbing – Keeping AI Infrastructure Out of the Way" description: "A minimalist, radically transparent approach to LLM-tool integration: let models do the thinking while infrastructure stays out of the way. Why 'smart agents, not smart tools' wins—plus practical design principles to build AI systems that scale." date: 2025-08-23 author: "MCPify Team" tags: ["LLM agents", "MCP", "Model Context Protocol", "AI infrastructure", "API integration", "LangChain", "OpenAI function calling"] canonical_url: https://mcpify.org/blog/llms-brains-tools-plumbing

Most AI infrastructure today does more harm than good.

We have incredibly capable large language models (LLMs) like GPT-5 and Claude, yet the scaffolding around them often limits their potential—layers of orchestration, opaque gateways, and "helpful" transformations that overfit to happy paths and hide critical context. The result? Slower development, brittle behavior, and agents that can't flex their full reasoning power.

This article lays out MCPify's core philosophy: LLMs are the brains; tools are the plumbing. Keep the integration layer minimal and radically transparent so the model can think with complete information.


TL;DR

  • Smart agents, not smart tools. Put the intelligence in the LLM; keep tools simple, single-purpose, and transparent.
  • Radical transparency beats hidden magic. Return raw, schema-rich data with metadata; avoid silent transformations.
  • Separation of concerns. The agent decides what to do; tools execute how to do it.
  • MCPify = gateway-as-plumbing. One multi-tenant gateway exposing clear, self-documenting tools an LLM can use directly.

When AI infrastructure gets in the way

Developers often wedge LLMs behind orchestration layers that pre-interpret inputs, filter outputs, and inject "business logic." That feels comforting—but it obscures context and reduces the model's ability to reason. If the toolchain quietly discards fields or summarizes results too early, the agent can't verify assumptions, cross-check facts, or adapt when requirements shift.

Guiding principle: give models the fullest truthful picture possible and let them decide what matters.

This is exactly why protocol-first approaches like the Model Context Protocol (MCP) are compelling: standard, explicit, metadata-rich tool interfaces that make capabilities discoverable and predictable to agents.


LLMs are the brains — let models do the thinking

Modern LLMs excel at deciding which tools to use and when—provided we expose tools clearly. Instead of hand-coding brittle if/then logic, let the model select tools via structured outputs (e.g., JSON function calls).

  • OpenAI formalized this pattern with Function calling—the model chooses to call a function by emitting a JSON payload (a minimal sketch follows this list).
  • Claude Desktop supports MCP servers, allowing Claude to discover and call tools exposed by a local or remote MCP server. See: Using MCP Servers in Claude Desktop.
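
To make this concrete, here's a minimal sketch using the OpenAI Python SDK. The model name and the get_order tool are illustrative assumptions, not prescriptions:

```python
# Minimal sketch of OpenAI function calling (openai>=1.0 Python SDK).
# The model name and the get_order tool are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_order",
        "description": "Fetch one order and return the raw JSON, all fields intact.",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any tool-capable model works here
    messages=[{"role": "user", "content": "What's the status of order 42?"}],
    tools=tools,  # the model decides whether and how to call the tool
)

# If the model chose a tool, it emits a structured JSON payload, not prose:
# the brain picks the tool; the tool just executes.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```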

The big idea echoes Andrej Karpathy's Software 2.0: push decision logic into learned systems where appropriate, and stop over-specifying control flow the model can infer.


Tools are the plumbing — keep integration minimal

If the agent is the brain, tools are pipes and valves. They should:

  1. Do one thing well. Fetch data, execute an action, or transform a payload—nothing more (see the sketch after this list).
  2. Expose full shapes. Return raw JSON with schemas, types, timestamps, units, costs, and rate-limit info.
  3. Avoid opinionated filtering. Don't guess what the agent "probably needs." Provide evidence; let the agent decide.
  4. Be self-documenting. Include exhaustive descriptions, sample calls, response examples, and pagination semantics.
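
Here's what that can look like end to end: a minimal sketch of a single-purpose, self-documenting tool built with the MCP Python SDK's FastMCP helper. The weather endpoint and payload fields are assumptions for illustration:

```python
# Sketch of a single-purpose MCP tool (pip install mcp).
# The upstream endpoint is hypothetical; swap in your real provider.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-gateway")

@mcp.tool()
def get_current_weather(city: str) -> dict:
    """Fetch current weather for a city and return the provider's raw JSON.

    Returns every field the upstream API provides (timestamps, units,
    station metadata). No filtering, no summarizing: the agent decides
    what matters.
    """
    resp = httpx.get("https://api.example.com/weather", params={"city": city})
    resp.raise_for_status()
    return resp.json()  # raw payload, schema intact

if __name__ == "__main__":
    mcp.run()  # serve over stdio so an MCP client can discover the tool
```

Note that FastMCP derives the input schema from the type hints and the tool description from the docstring, so documentation and implementation can't drift apart.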

This mindset is compatible with popular agent frameworks. For example, LangChain conceptualizes tools as callable utilities the model can invoke, with outputs fed back into the model's context.
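
A LangChain tool can stay a dumb pipe in exactly the same way. The sketch below assumes a hypothetical CRM lookup; the @tool decorator turns the docstring into the tool's description:

```python
# Sketch of a LangChain tool (pip install langchain-core).
# The CRM lookup is hypothetical, not a real integration.
from langchain_core.tools import tool

@tool
def lookup_customer(customer_id: str) -> dict:
    """Return the full, untransformed CRM record for a customer."""
    # A real implementation would call your CRM API and return its raw JSON.
    return {"customer_id": customer_id, "status": "active", "ltv_usd": 1280}

# Tools are Runnables, so the agent (or you) can invoke them directly:
print(lookup_customer.invoke({"customer_id": "c_123"}))
```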

Historically, ChatGPT Plugins pioneered the "LLM calls external services" pattern. Even though plugins were later retired in favor of GPTs and Actions, the pattern endures: models treat tool outputs as additional context to reason over.


Traditional API gateways vs. an AI-first gateway

Traditional gateways optimize for developer-to-service use cases: auth, routing, and transformations that present a polished slice of data. That's useful for microservices—but counterproductive for agents, which need complete context and schema transparency.

An AI-first gateway (MCPify) flips the priorities:

  • Rich tool descriptions (inputs/outputs, examples, costs, rate limits, latency stats; illustrated after this list).
  • Schema + metadata transparency (not just fields, but their relationships and constraints).
  • Explicit pagination and chunking controls.
  • Cache and state visibility (what's cached, freshness, invalidation tools).
  • Batching and parallelism for performance.
  • Unified multi-tenant gateway so one control plane serves 100+ services.
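
As a hedged illustration, a descriptor for one such tool might look like the sketch below. The name, description, and inputSchema fields follow the MCP tool shape; everything under x-metadata is an MCPify-style assumption, not part of the base spec:

```python
# Illustrative tool descriptor an AI-first gateway might expose.
# "name", "description", and "inputSchema" follow the MCP tool shape;
# the "x-metadata" block is an assumption for this sketch.
tool_descriptor = {
    "name": "search_orders",
    "description": "Full-text search over orders. Returns raw order JSON.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "cursor": {"type": "string", "description": "Opaque page cursor"},
            "page_size": {"type": "integer", "maximum": 200},
        },
        "required": ["query"],
    },
    "x-metadata": {  # lets the agent reason about trade-offs explicitly
        "avg_latency_ms": 120,
        "cost_per_call_usd": 0.0004,
        "rate_limit": "60/min",
        "cache_ttl_s": 300,
        "example_call": {"query": "status:unshipped", "page_size": 50},
    },
}
```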



Radical transparency: why your LLM needs raw data

LLMs reason better when they can see the evidence:

  • Raw payloads support verification (the agent can re-summarize, re-rank, or cross-check).
  • Metadata (timestamps, units, provenance, costs) prevents misinterpretation.
  • Schemas reduce hallucinations by clarifying field meanings and relationships.

In other words, don't pass back a pre-digested sentence if you can deliver the full JSON + a short, faithful summary. The agent can skim the summary and dive into the raw fields if something looks off.
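
One way to honor this is a response envelope that carries both, sketched below; the envelope shape and field names are assumptions, not a fixed schema:

```python
# Sketch of a response envelope: raw data plus a short, faithful summary.
# The shape and field names are illustrative assumptions.
from datetime import datetime, timezone

def tool_response(raw: dict, summary: str, source: str) -> dict:
    return {
        "summary": summary,  # quick skim for the agent; never replaces raw data
        "data": raw,         # complete payload, schema intact
        "metadata": {
            "source": source,  # provenance, so the agent can judge trust
            "retrieved_at": datetime.now(timezone.utc).isoformat(),
            "units": {"total": "USD"},  # explicit units prevent misreads
        },
    }

# The agent skims the summary, then verifies against data["total"] if needed.
print(tool_response({"order_id": "42", "total": 19.99},
                    "Order 42 totals $19.99.", "orders-api"))
```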


Design checklist: "smart agent, dumb tools"

Use this quick audit to keep infrastructure out of the way:

  • Clarity over cleverness. Tool descriptions read like a user manual for the agent.
  • Full-fidelity I/O. Inputs are validated; outputs are typed, complete, and documented.
  • No hidden transforms. Summaries never replace raw data—include both.
  • Explicit pagination. Page size, cursor, and iteration tools are exposed.
  • Cost & rate transparency. The agent can trade accuracy vs. spend with eyes open.
  • State & cache introspection. The agent can read, write, and invalidate state on demand.
  • Batch & parallel tools. Reduce round trips where possible.
  • One gateway, many services. Reuse integrations across teams and projects.

If you're migrating from a traditional API gateway, start by turning off transformations that strip fields or reformat payloads. Then add metadata and examples so models hit near-100% first-call success.
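
A hedged before/after sketch of that first step; the handler names and payload fields are hypothetical:

```python
# Hypothetical before/after for one gateway handler during migration.
# Names and fields are illustrative assumptions.

def handler_before(upstream: dict) -> str:
    # Traditional gateway: strips fields and pre-digests the result.
    return f"Customer {upstream['name']} is {upstream['status']}."

def handler_after(upstream: dict) -> dict:
    # AI-first gateway: pass the payload through untouched, then add
    # the metadata and examples that raise first-call success.
    return {
        "data": upstream,  # every field, untransformed
        "metadata": {
            "schema_version": "v3",
            "pagination": {"cursor_param": "cursor", "max_page_size": 200},
            "example_call": {"customer_id": "c_123"},
        },
    }
```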


Architecture at a glance

Picture an LLM brain with transparent tools as spokes: a central box labeled Agent (LLM), surrounded by small boxes (CRM API, Payments API, Search, DB Query, Cache, Batch, State). The flow is always Agent decides ➜ Tool executes ➜ Raw result + metadata ➜ Agent reasons. Minimal logic in the hub; zero logic in the spokes.


Conclusion

Powerful systems emerge when we trust the model to think and we make our infrastructure honest and simple. Keep tools transparent, single-purpose, and self-documenting; give the LLM full context; and let it choose how to use that context.

If this philosophy resonates, see how MCPify operationalizes it.



Who This Article Is For

Architects and engineers designing LLM-tool integration layers

About the Author

Herman Sjøberg

AI Integration Expert

Herman helps businesses generate value through AI adoption. With expertise in cloud architecture (Azure Solutions Architect Expert), DevOps, and machine learning, he's passionate about making AI integration accessible to everyone through MCPify.

Connect on LinkedIn