
MCP Servers: What They Are and 5 Costly Don'ts


What Is MCP?

What you’ll learn: What the Model Context Protocol is, the 5 costly mistakes to avoid, and a production playbook with Rails examples for building reliable MCP servers.

MCP is the standard layer that lets AI agents discover and safely use your capabilities (tools, resources, and prompts) over a simple protocol. Think of it as a clean contract between reasoning and execution.

  • Agents learn “what they can do” from tool definitions and prompts
  • They fetch small, structured data via resources instead of lugging fat payloads
  • The protocol stays thin so you can scale the heavy work elsewhere

It’s the bridge that keeps intelligence and infrastructure decoupled.

💡

During Apollo 13, clear interfaces and ruthless constraints saved lives. In MCP, constraints like token budgets and latency force good taste, too: pass references, not blobs; design small, reliable tools; and keep surfaces tight.

MCP in one graphic-free nutshell

“Expose capability, not chaos.”

  • Tools: action endpoints with strict schemas and plain-English descriptions.
  • Resources: small, fetchable artifacts or metadata, plus links to big stuff.
  • Prompts: versioned instruction sets that guide agents in how to use tools.
  • Transports: STDIO for local, HTTP/JSON-RPC for remote; stderr for logs.

A tiny layer, big leverage.


The Building Blocks and Architectural Principles

Start simple, stay deliberate. Good MCP servers feel like Unix utilities: do one thing well, compose the rest.

  • Single responsibility: one domain per server (email, files, billing).
  • Stateless by default: horizontal scale, externalize state and jobs.
  • Human vs AI interfaces: human UIs can show slugs; agents should get UUIDs.

Keep surfaces sharp and boring; your ops team will thank you.

Tools: the agent’s power sockets

Brief, specific, safe. Avoid “do-everything” functions.

  • Prefer fine-grained verbs: create_asset, transform_image, get_invoice.
  • Describe when to use and when not to use a tool.
  • Return structured, semantic results, not cryptic codes.

Let the description carry intent so the agent calls tools responsibly.
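
To make that concrete, here is a minimal sketch in the same FastMCP-style pseudo-DSL used in the production example further down; the billing.get_invoice name, the Invoice model, and the error! helper are illustrative assumptions, not any specific library's API.

# Pseudo: a fine-grained, well-described tool with a structured result.
# The description tells the agent when to use it -- and when not to.
mcp.tool "billing.get_invoice",
         description: "Fetch one invoice by UUID. Not for searching or listing." do |invoice_id:|
  invoice = Invoice.find_by!(id: invoice_id)
  # Semantic, structured result instead of a cryptic status code
  { invoice_id: invoice.id, status: invoice.status, total_cents: invoice.total_cents }
rescue ActiveRecord::RecordNotFound
  error!("invoice_not_found", details: "no invoice with id #{invoice_id}")
end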

Resources and prompts: context without clutter

Resources give agents durable references; prompts give them reusable guidance.

  • Resources: return metadata or small content; link to large files with URLs.
  • Prompts: version, test, and roll back like code; keep them minimal and crisp.

This pairing reduces token waste and surprises.
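
A brief sketch of that pairing in the same pseudo-DSL; the asset://{uuid} template matches the example later in the post, while Asset, the field names, and the mcp.prompt call are assumptions for illustration.

# Pseudo: a resource returns small metadata plus a link to the heavy bytes.
mcp.resource "asset://{uuid}" do |uuid:|
  asset = Asset.find_by!(id: uuid)
  {
    uri:          "asset://#{asset.id}",
    content_type: asset.content_type,
    byte_size:    asset.byte_size,
    public_url:   asset.public_url    # the big file stays out-of-band, behind a URL
  }
end

# Pseudo: a prompt is versioned guidance, shipped and rolled back like code.
mcp.prompt "image_ops_guide", version: "v3" do
  "Prefer transform_image for single edits; chain ops in one call when possible."
end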

Transports and logging: tiny details, big wins

STDIO is great locally; HTTP scales out. Just don’t corrupt the wire.

  • Never write logs to stdout on STDIO; use stderr or files.
  • Time out defensively; fail fast and explain errors clearly.
  • Prefer idempotent behavior where possible.

Small hygiene beats heroic debugging.
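
In Ruby, that hygiene can be a one-liner: point the logger at stderr (or a file) so stdout stays reserved for protocol frames. A minimal sketch; the formatter shown is just one reasonable choice.

# STDIO transport: stdout carries JSON-RPC only; logs go to stderr.
require "logger"
require "time"

LOGGER = Logger.new($stderr)   # or Logger.new("/var/log/mcp.log")
LOGGER.formatter = proc do |severity, time, _progname, msg|
  "#{time.utc.iso8601} #{severity} #{msg}\n"
end

LOGGER.info("mcp server ready")   # visible in logs, never on the wire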

# Rails: prefer UUIDs for agent-facing identifiers
class EnableUuid < ActiveRecord::Migration[7.2]
  def change
    enable_extension 'pgcrypto'
    create_table :assets, id: :uuid do |t|
      t.string :public_url, null: false
      t.jsonb  :metadata,   null: false, default: {}
      t.timestamps
    end
  end
end

The Critical “Don’ts” (and the better patterns)

Opinionated guardrails save weeks of rework. Use these as non-negotiables.

Anti-pattern | Why it fails | Do this instead
Passing file blobs through MCP | Bloats JSON-RPC, torches tokens, couples transport to storage | Pass public URLs; fetch server-side; stream or stage out-of-band
Naive polling for reusable flows | Latency, wasted calls, brittle error paths | Keep common ops synchronous; reserve tasks/async for truly long jobs
Sequential IDs for agents | Easy enumeration, leaks scale, invites mistakes | Use UUIDv4 for agent-facing IDs; keep pretty IDs only in human UIs
Kitchen-sink servers | Entangled blast radius, scaling pain, perms hell | One server per domain; compose at the client/orchestrator
Shipping without auth/validation/logging | Silent failures, security gaps, zero audit | AuthN+AuthZ, schema validation, structured logs, metrics, alerts

Trade convenience now for compounding reliability later.

Quick scenarios from production

  • Image ops: don’t send a 5MB base64; send https://.../image.jpg and a transform spec.
  • CRM lookups: return a UUID; mirror a human-friendly slug separately.
  • Batch jobs: accept params, enqueue, emit a task handle; notify on completion.

Simple interfaces age well under load.

{
  "tool": "image.transform",
  "params": {
    "source_url": "https://cdn.example.com/img/550e84.jpg",
    "ops": [{"resize": {"w": 1200, "h": 800}}, {"format": "webp"}]
  }
}
# STDIO server hygiene
# Good: keep stdout for protocol frames only; send logs to stderr or a file
app 2>> /var/log/mcp.log   # stderr (logs) goes to a file; stdout stays protocol-only

From Prototype to Production: Rails + Cloud Image Ops (Example)

Earlier this year, I rebuilt “Cloudinary‑like” transforms behind an MCP server in Rails. The rules above made it stable and cheap.

  1. Model the domain.
    • Tools: upload_via_url, transform_image, get_variant.
    • Resources: asset://{uuid}, variant://{uuid}.
  2. Keep files out of MCP.
    • Accept public_url; validate content type/size; store canonical URL.
    • Fetch/stream via backend job when needed.
  3. Sync where fast.
    • transform_image returns a variant UUID and a CDN URL in one hop.
    • Only large chains become tasks.
  4. Tasks for long work.
    • Return {task_id, status:"working"}; push progress events; TTL results.
    • Idempotent replays by content digest (sketched after the tool example below).
  5. Ops and safety nets.
    • Rate limits per tool; circuit breakers on external calls; SLOs and alerts.
    • Structured logs: request_id, tool, duration, outcome.

Minimal surface, maximum throughput.

# Pseudo: FastMCP-style tool
mcp.tool "image.transform" do |source_url:, ops:|
  asset = Asset.ingest!(source_url:)
  variant = Variant.generate!(asset:, ops:)
  { variant_id: variant.id, cdn_url: variant.public_url, cost_ms: variant.cost_ms }
rescue Image::Unsupported => e
  error!("unsupported_image", details: e.message)
end
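
Expanding on step 4, here is a hedged sketch of a task-style tool in the same pseudo-DSL; Task, TransformJob, and the digest-based idempotency key are illustrative, not part of any particular framework.

# Pseudo: tasks for truly long-running chains (step 4 above).
mcp.tool "image.transform_chain" do |source_url:, ops:|
  # Idempotent replays: identical inputs map to the same task via a content digest
  digest = Digest::SHA256.hexdigest("#{source_url}:#{ops.to_json}")

  task = Task.find_or_create_by!(idempotency_key: digest) { |t| t.status = "working" }
  TransformJob.perform_later(task.id, source_url, ops) if task.previously_new_record?

  { task_id: task.id, status: task.status }   # progress arrives via events; results expire (TTL)
end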

Concern | Prototype (local) | Production (remote)
Transport | STDIO | HTTP/JSON-RPC + SSE
Identity | UUIDs in DB | UUIDs + per-tenant scoping
Storage | Disk cache | Object store + CDN
Jobs | Inline | Queue + workers + retries
Security | Dev secrets | Vault/KMS, OAuth/API keys, mTLS option
Observability | Console logs | Structured logs, metrics, tracing, alerts

Change transports without changing your contract.


Security, Reliability, and the Final Checklist

Security isn’t a feature; it’s the substrate your tools run on.

  • AuthN: OAuth/API keys per client; rotate and scope; deny by default.
  • AuthZ: per‑tool and per‑resource permissions; least privilege.
  • Validation: JSON Schema on inputs; size/length limits; allowlists.

Bake this in before the demo becomes production.
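
A minimal validation sketch in plain Ruby, with no schema gem assumed; ALLOWED_OPS, the limits, and the error! helper are illustrative names.

# Validate before doing any work: allowlists plus size/length limits.
ALLOWED_OPS   = %w[resize format crop].freeze
MAX_OPS       = 10
MAX_URL_BYTES = 2_048

def validate_transform_params!(source_url:, ops:)
  error!("invalid_url")  unless source_url.is_a?(String) &&
                                source_url.bytesize <= MAX_URL_BYTES &&
                                source_url.start_with?("https://")
  error!("too_many_ops") unless ops.is_a?(Array) && ops.size <= MAX_OPS

  ops.each do |op|
    name = op.keys.first.to_s
    error!("unknown_op", details: name) unless ALLOWED_OPS.include?(name)
  end
end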

  • Observability: logs with correlation IDs, metrics per tool, traces for slow paths.
  • Failure modes: timeouts, retries with jitter, dead‑letter queues, backpressure.
  • Data hygiene: PII redaction, encrypted at rest/in transit, deterministic deletes.

Robustness compounds just like tech debt does.
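
One way to get the correlation IDs and per-tool timing above is to wrap every tool call in a small logging helper. A sketch, emitting JSON lines to stderr; the helper name and field choices are assumptions, though they mirror the request_id, tool, duration, and outcome fields from the production example.

# Wrap each tool call: one structured JSON log line per call,
# written to stderr so the protocol stream stays clean.
require "json"
require "securerandom"

def with_tool_logging(tool_name)
  request_id = SecureRandom.uuid
  started_at = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  outcome    = "ok"
  yield
rescue => e
  outcome = "error:#{e.class}"
  raise
ensure
  duration_ms = ((Process.clock_gettime(Process::CLOCK_MONOTONIC) - started_at) * 1000).round
  $stderr.puts({ request_id: request_id, tool: tool_name,
                 duration_ms: duration_ms, outcome: outcome }.to_json)
end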

Quick checklist you can paste in your PR:

  • One domain per MCP server; name tools as verbs.
  • No file blobs; only public URLs and small metadata.
  • Keep common flows synchronous; reserve tasks for true long‑runners.
  • UUIDs for agent‑visible IDs; pretty IDs only for humans.
  • AuthN, AuthZ, validation, rate limits, structured logs from day one.
  • Clear errors with recovery hints; idempotency where it matters.
  • SLOs, alerts, and load tests before launch.

Ship small, safe, and sharp; your agents will feel faster and your ops calmer.

💡

TL;DR: Design for constraints, not heroics. Make the protocol light, the tools specific, the IDs opaque, and the security boring. That’s how MCP survives contact with production.
