
Best Practices for Making Your API LLM-Friendly: Lessons from MCPify

Practical guidance to help your REST/GraphQL endpoints work reliably with GPT-5, Claude, and other agentic LLMs—plus a checklist you can apply today.

Herman Sjøberg
AI Integration Expert
August 21, 2025 · 7 min read
API Design, LLM, GPT-5, Claude, Best Practices, OpenAPI, GraphQL

Key Takeaways

  • Write concise summaries and descriptive docs for all endpoints
  • Use consistent naming conventions across your API
  • Make response schemas unambiguous with proper typing
  • Use enums and constraints to guide LLM choices
  • Include examples in your OpenAPI/GraphQL spec
  • Design for idempotency and clear error messages


Modern AI agents call APIs the way developers do: they read descriptions, look at examples, and then choose tools to invoke. If those descriptions are vague or inconsistent, the agent hesitates, retries, or fails. MCPify (a gateway that turns any API into an MCP—Model Context Protocol—service) can auto‑generate great tool metadata, but the underlying API design still determines the agent’s success rate. This post distills field‑tested practices you can use—whether you publish an OpenAPI/GraphQL spec directly or let MCPify expose it for you.


Why “LLM‑friendly” API design matters

LLMs are pattern recognizers. When your API is predictable, documented, and typed, first‑call success rises dramatically, token spend drops (fewer retries), and your users get snappier answers. Conversely, overloaded endpoints, unclear parameters, and ad‑hoc error bodies force the model to guess.

The good news: you don’t need special AI endpoints. You need clear contracts—and to expose those contracts (schemas, examples, limits) to the model via MCP or OpenAPI.


10 best practices (with mini‑examples)

1) Write concise summaries and descriptive docs

Keep the summary short and imperative (“Create a task”, “List users”), then add a description that clarifies semantics, side effects, and edge cases.

paths:
  /tasks:
    get:
      summary: List tasks
      description: |
        Returns tasks filtered by status and assignee. Results are paginated.
      parameters:
        - in: query
          name: status
          description: Filter by task status
          schema: { type: string, enum: [open, in_progress, done] }

Why it helps: Agents choose between tools by reading these lines. Plain language + constraints removes guesswork.


2) Name things consistently (resources, params, casing)

Pick a casing style (snake_case or camelCase) and stick to it across all endpoints and responses. Prefer explicit names (user_id, start_date) over ambiguous ones (id, start). Keep resource naming parallel (/users, /users/{id}; /orders, /orders/{id}) and avoid one‑off verbs.
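A minimal sketch of what parallel, consistently named resources might look like in an OpenAPI spec (snake_case chosen here for illustration; the endpoints and parameter names are examples, not a prescribed layout):

```yaml
paths:
  /users:
    get: { summary: List users }
  /users/{user_id}:
    get: { summary: Get a user }
  /orders:
    get:
      summary: List orders
      parameters:
        - in: query
          name: start_date        # explicit, not "start"
          schema: { type: string, format: date }
        - in: query
          name: owner_id          # explicit, not "id"
          schema: { type: string }
  /orders/{order_id}:
    get: { summary: Get an order }
```

Note the symmetry: every collection pairs with a `/{resource}_id` item path, and every filter parameter spells out what it identifies.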

Why it helps: LLMs generalize patterns. Consistency lets them apply a correct call pattern everywhere.


3) Make response schemas unambiguous

Return predictable JSON with stable keys, minimal nesting, and typed values. Use objects to group related data and arrays for lists. Avoid polymorphic types unless you also expose a clear discriminator.

{
  "id": "T-9321",
  "title": "Ship v1.3 release notes",
  "status": "in_progress",
  "assignee": { "id": "u_18", "name": "Alex Chen" },
  "due": "2025-09-01T17:00:00Z"
}

Why it helps: The model doesn’t have to “parse by vibe.” Clear structure → accurate extraction and follow‑up calls.


4) Type everything; use enums, formats, and examples

Leverage your spec to constrain inputs and outputs—and show examples.

components:
  schemas:
    Task:
      type: object
      required: [id, title, status]
      properties:
        id: { type: string, pattern: "^T-[0-9]+$" }
        title: { type: string, minLength: 1 }
        status: { type: string, enum: [open, in_progress, done] }
        due: { type: string, format: date-time, example: "2025-09-01T17:00:00Z" }

Why it helps: Enums and formats drastically reduce invalid calls. Examples act like “micro‑tests” the model can imitate.


5) Expose pagination and filtering explicitly

Prefer cursor-based pagination or page/limit with safe defaults (e.g., limit=20). Add common filters (date ranges, status, owner) and—when feasible—a fields selector for partial responses.

parameters:
  - in: query
    name: limit
    schema: { type: integer, default: 20, minimum: 1, maximum: 100 }
  - in: query
    name: page
    schema: { type: integer, default: 1, minimum: 1 }
  - in: query
    name: fields
    description: Comma-separated list of fields to include
    schema: { type: string, example: "id,title,status" }

Why it helps: Keeps payloads small (fewer tokens) and makes multi‑page iteration obvious to the agent.


6) Make errors actionable and consistent

Return structured error bodies with machine‑readable codes and human tips.

{
  "error": {
    "code": "LIMIT_TOO_HIGH",
    "message": "limit must be <= 100",
    "hint": "Decrease limit or omit to use default (20)"
  }
}

Why it helps: The model can recover autonomously (adjust params, retry) instead of stalling.


7) Document auth, scopes, costs, and limits

Specify security schemes (OAuth2, API key), required scopes, and any rate‑limit headers. If cost or latency varies by endpoint, document it; tooling like MCPify can surface this to help the model choose cheaper or faster calls.
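One way this could look in an OpenAPI spec—the scheme names (`bearerAuth`, `apiKey`) and the `X-API-Key` header are illustrative choices, not requirements:

```yaml
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT
      description: |
        OAuth2 access token. Scope "tasks:write" required for mutations.
        Rate limit: 100 requests/min; remaining quota is returned in the
        X-RateLimit-Remaining response header.
    apiKey:
      type: apiKey
      in: header
      name: X-API-Key
security:
  - bearerAuth: []
```

Putting scopes and rate-limit behavior in the `description` keeps them in the one place an agent (or MCPify) is guaranteed to read.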

Why it helps: Agents plan calls and avoid 401/429 loops.


8) Avoid ambiguous or overloaded endpoints

Prefer one clear purpose per endpoint. Don’t multiplex behavior via a generic /process with a dozen modes. If a multi‑step flow is common, provide a purpose‑built shortcut (e.g., /users/search?name=Jane%20Doe).
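As a sketch, the contrast between a multiplexed endpoint and purpose-built ones might look like this (paths and summaries are illustrative):

```yaml
# Avoid: one generic endpoint whose behavior depends on a "mode" field
#   POST /process   { "mode": "create_task" | "assign" | "close" | ... }

# Prefer: one clear purpose per endpoint
paths:
  /tasks:
    post: { summary: Create a task }
  /tasks/{task_id}/assignee:
    put: { summary: Assign a task to a user }
  /users/search:
    get:
      summary: Search users by name
      parameters:
        - in: query
          name: name
          schema: { type: string, example: "Jane Doe" }
```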

Why it helps: Tool choice becomes obvious; fewer hops and fewer retries.


9) Write tool descriptions for agents (not humans only)

If you’re exposing endpoints through MCPify, treat each tool description like a tiny prompt:

  • Start with “Use to …” (“Use to create a new task assigned to a user”).
  • List required inputs and typical ranges/defaults.
  • Include a one‑line example of valid input/output.
  • Note side effects (“Creates a record; idempotent by client_token”).
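Putting those four points together, a tool definition in the MCP style might read like this (field names follow the MCP tool schema; the specific tool and example values are hypothetical):

```json
{
  "name": "create_task",
  "description": "Use to create a new task assigned to a user. Requires title (non-empty) and assignee_id. Optional due (ISO 8601 date-time). Creates a record; idempotent by client_token. Example input: {\"title\": \"Ship notes\", \"assignee_id\": \"u_18\"}.",
  "inputSchema": {
    "type": "object",
    "required": ["title", "assignee_id"],
    "properties": {
      "title": { "type": "string", "minLength": 1 },
      "assignee_id": { "type": "string" },
      "due": { "type": "string", "format": "date-time" },
      "client_token": { "type": "string" }
    }
  }
}
```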

Why it helps: LLMs skim. High‑signal, imperative descriptions yield near‑first‑call success.


10) Version and deprecate predictably

Add a visible version (/v1) and mark deprecations early (OpenAPI deprecated: true plus a replacement pointer). Keep response shapes backward‑compatible where possible.
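A deprecation marked in the spec might look like this sketch (the replacement endpoint and date are illustrative):

```yaml
paths:
  /v1/tasks/search:
    get:
      deprecated: true
      summary: Search tasks (deprecated)
      description: |
        Deprecated since 2025-06-01. Use GET /v1/tasks with the
        status and assignee filters instead.
```

The `deprecated: true` flag plus an explicit pointer in the description gives an agent both the warning and the migration path in one read.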

Why it helps: Agents can keep working as you evolve the API; MCPify can advertise the safest tool variant first.


A quick checklist you can apply today

  • Every endpoint has a one‑line summary and a precise description.
  • Parameters are consistently named and typed; ambiguous names removed.
  • Schemas declare types, enums, formats, and examples.
  • Pagination and common filters are explicit with sane defaults.
  • Errors use a uniform JSON shape with codes and hints.
  • Auth (scheme, scopes) and limits (rate, size) are documented.
  • No overloaded endpoints; frequent multi‑step flows have shortcut endpoints.
  • Versioning strategy is published; deprecations are marked in spec.
  • Spec validated (OpenAPI/GraphQL) and published where agents can fetch it.
  • Expose the spec via MCPify so tools are self‑documenting for agents.

Turn your API into an MCP tool (in minutes)

MCPify takes your OpenAPI/GraphQL description (or a small JSON config) and exposes self‑documenting tools with schemas, examples, rate limits, and optional cost/latency hints—so agents pick the right call the first time. Point your AI client (GPT‑5, Claude Desktop, agent frameworks) at the MCP endpoint and you’re live.


Who This Article Is For

API designers and backend engineers who want their APIs to work seamlessly with AI agents

About the Author

Herman Sjøberg

AI Integration Expert

Herman excels at assisting businesses in generating value through AI adoption. With expertise in cloud architecture (Azure Solutions Architect Expert), DevOps, and machine learning, he's passionate about making AI integration accessible to everyone through MCPify.
