# Philosophy: UNIX Tools for LLMs
We build tools for intelligent LLMs, not intelligent tools.
## Core Principle
LLMs are the intelligence layer. MCP is the execution layer. We provide predictable, composable, transparent tools that give LLMs complete information and control.
## The Separation of Concerns

### LLM (Intelligence Layer)

- Reasoning and decision-making
- Understanding context
- Planning and strategy
- Interpreting user intent

### MCP (Execution Layer)

- Predictable tool execution
- Transparent data access
- Complete control surfaces
- Deterministic behavior
## The UNIX Philosophy

Think of MCPify like the `ls`, `find`, and `grep` commands in UNIX: each does one thing, does it predictably, and composes cleanly with the others. None of them tries to guess what you meant.
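As a sketch of what that composability looks like (the client interface and tool names here are hypothetical, not the real MCP SDK), an LLM can chain small single-purpose tools the way a shell user chains `find | grep`:

```typescript
// Hypothetical client interface and tool names - a sketch of composition,
// not the real MCP SDK. Each tool does one small, predictable thing, so
// the LLM can chain them like UNIX commands in a pipeline.
interface McpClient {
  callTool(name: string, args: Record<string, unknown>): Promise<unknown>;
}

async function findEmploymentData(client: McpClient): Promise<unknown> {
  // Like `grep`: search with matching rules chosen explicitly by the caller
  const matches = (await client.callTool("search_tables", {
    query: "employment",
    match_type: "substring",
    fields: ["title"],
    limit: 10,
  })) as { id: string; title: string }[];

  // Like `cat`: fetch the raw table for the first match - no interpretation
  return client.callTool("get_table", { id: matches[0].id });
}
```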
## What We DON'T Build
- ❌ "Smart" endpoints that try to guess what the LLM wants
- ❌ Semantic interpretation layers ("you probably meant...")
- ❌ Auto-correction or fuzzy matching without being explicit
- ❌ Hidden transformations or magic behavior
- ❌ Any "AI-powered" features that make decisions for the LLM
- ❌ Usage-based recommendations
- ❌ Session state or learning from patterns
```jsonc
// BAD: Smart tool that tries to guess intent
{
  "name": "get_relevant_data",
  "description": "Returns data that might be relevant to your query"
  // What's relevant? The tool shouldn't decide!
}
```
## What We DO Build
- ✅ Transparent, predictable tools that do exactly what they say
- ✅ Rich metadata so LLMs can make informed decisions
- ✅ Deterministic behavior - same input ALWAYS gives same output
- ✅ Explicit capabilities - clearly state what we can and cannot do
- ✅ Raw access - give LLMs direct access to APIs, not interpretations
- ✅ Complete information - all options, parameters, and constraints documented
- ✅ Explicit control - LLM decides caching, pagination, filtering
```jsonc
// GOOD: Predictable tool with explicit control
{
  "name": "search_data",
  "arguments": {
    "query": "user input",
    "match_type": "exact",             // LLM controls matching
    "fields": ["name", "description"], // LLM specifies fields
    "limit": 10                        // LLM controls results
  }
}
```
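Taking this one step further: MCP tools advertise their parameters as JSON Schema, so the full control surface can be made enumerable up front. Here is a sketch of how a `search_data` tool might be declared (the schema details are illustrative, not taken from a real server):

```typescript
// Sketch of a full declaration for the hypothetical search_data tool.
// Every parameter is enumerated and described, so the LLM sees the
// complete control surface and nothing is decided behind its back.
const searchDataTool = {
  name: "search_data",
  description:
    "Search records in the specified fields. Deterministic: identical " +
    "arguments always return identical results in the same order.",
  inputSchema: {
    type: "object",
    properties: {
      query: {
        type: "string",
        description: "Literal search string. No normalization or fuzzy fallback.",
      },
      match_type: {
        type: "string",
        enum: ["exact", "prefix", "substring"],
        description: "How the query is matched, chosen by the caller.",
      },
      fields: {
        type: "array",
        items: { type: "string" },
        description: "Fields to search. Unlisted fields are never searched.",
      },
      limit: {
        type: "integer",
        minimum: 1,
        maximum: 100,
        description: "Maximum number of results to return.",
      },
    },
    required: ["query", "match_type", "fields", "limit"],
  },
};
```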
## Implementation Guidelines

### 1. Discovery Should Be Descriptive, Not Prescriptive

- ❌ BAD: "You should use endpoint X for unemployment data"
- ✅ GOOD: "Endpoint /table/07459: Returns employment_status by region, age, sex, year. Dimensions: [complete list]. Constraints: [exact limits]"
### 2. Dynamic Discovery Over Hardcoded Lists
Never hardcode service lists - discover them from the source of truth at runtime.
```typescript
// BAD: Hardcoded service list
const SERVICES = ["hubspot", "stripe", "slack"]; // Will be outdated

// GOOD: Dynamic discovery from source of truth
const services = await discoverFromLoggingSinks(); // Always current
```
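A fuller sketch of the GOOD pattern follows. The registry URL and response shape are assumptions made for illustration; the point is only that the list is fetched from the source of truth on every run, never baked into the code:

```typescript
// Sketch of runtime discovery. The registry endpoint and response shape
// are hypothetical; only the pattern matters: fetch, don't hardcode.
async function discoverServices(registryUrl: string): Promise<string[]> {
  const res = await fetch(registryUrl);
  if (!res.ok) {
    // Informative, not "helpful": report exactly what failed
    throw new Error(`Service discovery failed: ${res.status} ${res.statusText}`);
  }
  const body = (await res.json()) as { services: { name: string }[] };
  return body.services.map((s) => s.name);
}
```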
### 3. Errors Should Be Informative, Not Helpful

- ❌ BAD: "I think you meant table 07459, trying that instead"
- ✅ GOOD: "Table 07458 not found. Available tables: 07459, 07457, 07460"
### 4. Every Behavior Must Be Deterministic

- Same input → Same output (always)
- No time-based behavior without explicit parameters
- No random selection without explicit seeds (see the sketch after this list)
- No hidden state between calls
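One way to honor this in practice is to push every source of nondeterminism into an explicit parameter: instead of reading the clock, accept an `as_of` timestamp; instead of hidden RNG state, accept a seed. A sketch, with hypothetical names and mulberry32 as an arbitrary deterministic PRNG:

```typescript
// Sketch: sampling that is deterministic because the caller supplies the
// seed. Same seed + same records => same sample, on every call, forever.
function sampleRecords(records: string[], seed: number, count: number): string[] {
  // mulberry32: a small deterministic PRNG seeded entirely by the caller
  let s = seed >>> 0;
  const rand = (): number => {
    s = (s + 0x6d2b79f5) >>> 0;
    let t = Math.imul(s ^ (s >>> 15), s | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
  // Fisher-Yates shuffle of a copy, then take the first `count` items
  const copy = [...records];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy.slice(0, count);
}
```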
## The Mental Model

### We are the Instrument Panel

- We show accurate readings
- We provide all available controls
- We don't decide where to fly
- We don't suggest routes
- We give complete, accurate information

### The LLM is the Pilot

- It decides what information it needs
- It chooses which controls to use
- It interprets the readings
- It makes the decisions
- It understands the context
## Why This Philosophy Matters

- **Predictability** - LLMs can reason about predictable tools
- **Composability** - Simple tools can be combined in complex ways
- **Debuggability** - Clear behavior makes issues obvious
- **Reliability** - No magic means no surprises
- **Trust** - LLMs can trust tools that don't try to be smart
## Key Question for Every Feature

Instead of asking "How can we help the LLM?", ask "How can we give the LLM complete information and control?"

> "The LLMs using our tools are smart enough - they don't need our help making decisions; they need reliable tools that give them complete information and do exactly what they claim to do."