Fading Coder

One Final Commit for the Last Sprint

Model Context Protocol Core Concepts: Architecture, Communication, and Capabilities

Host, Client, and Server Architecture

MCP organizes interaction between large language models (LLMs) and external systems using a triadic architecture:

  • Host: The user-facing AI application (e.g., Cursor IDE, Claude Desktop). It orchestrates sessions, manages UI, and hosts one or more clients.
  • Client: Resides inside the host; each maintains a dedicated stateful link to exactly one server. Multiple clients can coexist, each acting as a conduit to its respective server.
  • Server: A lightweight program exposing specific capabilities—data access or executable functions—via MCP. Each server specializes in a domain, analogous to a kitchen producing a certain cuisine.

This separation lets the host focus on user experience and model coordination, while servers remain single-purpose and easily maintainable.
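This mapping is visible in host configuration. As a sketch, a host such as Claude Desktop spawns one server per config entry and attaches one client to each; the paths below are hypothetical, and the two packages are examples from the reference server collection:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/proj"]
    },
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "/proj"]
    }
  }
}
```

Each entry stays single-purpose: the filesystem server knows nothing about Git, and vice versa, while the host coordinates both on the user's behalf.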

Communication Layer

MCP messaging rests on two pillars: a message format based on JSON-RPC 2.0 and a transport mechanism for delivery.

Message Format: JSON-RPC 2.0

All exchanges follow JSON-RPC 2.0, chosen for simplicity and language neutrality.

Three message kinds drive protocol interactions:

  1. Request — Invokes an operation; includes id, method, optional params.

    {
      "jsonrpc": "2.0",
      "id": 101,
      "method": "data/list_items",
      "params": {}
    }
    
  2. Response — Matches request id. Contains either result on success or error with code and message on failure.

    Success example:

    {
      "jsonrpc": "2.0",
      "id": 101,
      "result": {
        "items": [ { "ref": "file:///app/config.yaml", "label": "config.yaml" } ]
      }
    }
    
  3. Notification — One-way event; no id, no reply expected.

    {
      "jsonrpc": "2.0",
      "method": "notifications/tools/updated"
    }
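For completeness, the failure branch of a Response (item 2 above) carries an error object instead of result, using standard JSON-RPC error codes (-32601 is "method not found"):

```json
{
  "jsonrpc": "2.0",
  "id": 101,
  "error": {
    "code": -32601,
    "message": "Method not found"
  }
}
```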
    

Transport Mechanisms

  • Stdio — For local servers launched as child processes; communication occurs over standard input/output streams. Suited for file system utilities, Git integration, etc.
  • HTTP + SSE — For remote servers. Clients send requests via HTTP POST; servers push responses/notifications through persistent SSE connections. Fits SaaS APIs and multi-user scenarios.

| Feature         | Stdio Transport          | HTTP+SSE Transport          |
| --------------- | ------------------------ | --------------------------- |
| Environment     | Local machine            | Network-accessible          |
| Deployment      | Spawned by host          | Independent web service     |
| Authentication  | Env vars, manual config  | OAuth, API keys             |
| Ideal Use Cases | Dev tools, file access   | Shared services, SaaS links |
| Example         | Git history reader       | Jira API connector          |

Connection Lifecycle

  1. Initialization — Client sends initialize with supported version and capabilities; server replies with its own version/capabilities; client confirms via initialized notification.
  2. Exchange — After handshake, request/response/notification flow proceeds for operations like listing tools or reading data.
  3. Termination — No explicit shutdown message; connection ends via underlying transport closure (stdio stream end or HTTP connection drop).
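On the wire, step 1 reduces to a request, its response, and a closing notification, in that order. This is a sketch; the protocol version string and client/server names are illustrative:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "initialize",
  "params": { "protocolVersion": "2024-11-05",
              "capabilities": {},
              "clientInfo": { "name": "example-client", "version": "1.0.0" } } }

{ "jsonrpc": "2.0", "id": 1,
  "result": { "protocolVersion": "2024-11-05",
              "capabilities": { "resources": {}, "tools": {} },
              "serverInfo": { "name": "doc-provider", "version": "1.0.0" } } }

{ "jsonrpc": "2.0", "method": "notifications/initialized" }
```

The capability objects exchanged here determine which of the primitives below either side may use for the rest of the session.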

Core Functional Primitives

MCP enables LLMs to reason and act with five foundational concepts:

Resources (Read-Only Data)

Expose static or dynamic datasets via URIs. Examples: source files, logs, DB rows, API payloads.

  • Discovery — resources/list returns available items; templates support parameterized URIs.
  • Access — resources/read fetches content (text or base64 binary).
  • Updates — Notifications signal list/content changes; subscription model available for frequent updates.
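
Put together, reading a single document is one request/response pair. The URI and contents here are hypothetical:

```json
{ "jsonrpc": "2.0", "id": 7, "method": "resources/read",
  "params": { "uri": "file:///proj/docs/start.md" } }

{ "jsonrpc": "2.0", "id": 7,
  "result": { "contents": [
    { "uri": "file:///proj/docs/start.md",
      "mimeType": "text/markdown",
      "text": "# Getting Started\n..." } ] } }
```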

Example server (TypeScript) exposing a markdown file:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import fs from 'fs/promises';

const srv = new McpServer({
  name: 'doc-provider',
  version: '1.0.0',
  capabilities: { resources: {} }
});

const docRef = 'file:///proj/docs/start.md';

// registerResource both advertises the URI via resources/list and
// serves resources/read requests through the callback.
srv.registerResource(
  'start-doc', // internal registration name
  docRef,
  { title: 'start.md', mimeType: 'text/markdown' },
  async uri => {
    const txt = await fs.readFile('/proj/docs/start.md', 'utf-8');
    return { contents: [{ uri: uri.href, text: txt }] };
  }
);

const transport = new StdioServerTransport();
await srv.connect(transport);
// stdout carries the JSON-RPC stream, so log to stderr instead
console.error('Doc provider running on stdio');

Tools (Executable Actions)

Allow LLMs to perform side-effecting operations such as invoking APIs, modifying files, or executing commands.

  • Definition — Each tool has a name, description, and inputSchema (JSON Schema) guiding LLM usage.
  • Discovery — tools/list reveals definitions.
  • Invocation — tools/call executes with validated arguments; results or structured errors returned.
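
An invocation of the square-root tool defined in the example below looks like this on the wire (message ids are arbitrary):

```json
{ "jsonrpc": "2.0", "id": 5, "method": "tools/call",
  "params": { "name": "sqrt_compute", "arguments": { "val": 16 } } }

{ "jsonrpc": "2.0", "id": 5,
  "result": { "content": [ { "type": "text", "text": "4" } ] } }
```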

Example: square root calculator (TypeScript):

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

const srv = new McpServer({
  name: 'calc-tool',
  version: '1.0.0',
  capabilities: { tools: {} }
});

srv.registerTool(
  'sqrt_compute',
  {
    title: 'Square Root',
    description: 'Computes sqrt of a number.',
    inputSchema: { val: z.number().describe('Input value') }
  },
  async ({ val }) => {
    if (val < 0) {
      return { isError: true, content: [{ type: 'text', text: 'Negative input invalid' }] };
    }
    return { content: [{ type: 'text', text: String(Math.sqrt(val)) }] };
  }
);

const transport = new StdioServerTransport();
await srv.connect(transport);
// keep stdout free for the JSON-RPC stream
console.error('Calc tool ready');

Error handling distinguishes protocol faults from tool-level failures, aiding agent resilience.
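The distinction is visible in the messages themselves: a protocol fault is a JSON-RPC error, while a tool-level failure is a successful response whose result carries isError, so the LLM can read the failure text and retry. A sketch of both:

```json
{ "jsonrpc": "2.0", "id": 9,
  "error": { "code": -32602, "message": "Invalid params" } }

{ "jsonrpc": "2.0", "id": 10,
  "result": { "isError": true,
              "content": [ { "type": "text", "text": "Negative input invalid" } ] } }
```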

Prompts (Reusable Workflow Templates)

Parameterized prompt templates streamline recurring tasks.

  • Structure — Name, description, argument list.
  • Discovery — prompts/list.
  • Execution — prompts/get generates structured messages for LLM consumption.

Translation prompt example:

srv.registerPrompt(
  'translator',
  {
    title: 'Text Translator',
    description: 'Translates text into a target language.',
    argsSchema: {
      src: z.string().describe('Source text'),
      tgtLang: z.string().describe('Target language')
    }
  },
  ({ src, tgtLang }) => ({
    messages: [
      {
        role: 'user',
        content: {
          type: 'text',
          text: `Translate to ${tgtLang}:\n\n---\n${src}\n---`
        }
      }
    ]
  })
);

Supports multimodal content and embedded resource references.
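
For the translator above, a prompts/get exchange might look as follows (argument values are illustrative; prompt arguments are passed as strings):

```json
{ "jsonrpc": "2.0", "id": 8, "method": "prompts/get",
  "params": { "name": "translator",
              "arguments": { "src": "Hello, world", "tgtLang": "French" } } }

{ "jsonrpc": "2.0", "id": 8,
  "result": { "messages": [
    { "role": "user",
      "content": { "type": "text",
                   "text": "Translate to French:\n\n---\nHello, world\n---" } } ] } }
```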

Sampling (Server-Initiated LLM Calls)

Servers request LLM inference from the client, enabling advanced reasoning in tools.

Flow:

  1. Server sends sampling/createMessage with prompt and model preferences.
  2. Host presents the request to the user, who can review and edit the prompt before approving it.
  3. Response sent back for server-side continuation.

Human-in-the-loop ensures privacy and control.
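
A hedged sketch of the request in step 1; the prompt text, preference values, and token limit are made up:

```json
{
  "jsonrpc": "2.0",
  "id": 12,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      { "role": "user",
        "content": { "type": "text", "text": "Summarize the attached log excerpt." } }
    ],
    "modelPreferences": { "intelligencePriority": 0.8 },
    "maxTokens": 200
  }
}
```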

Roots (Operational Boundaries)

Clients suggest working scopes (typically file:// URIs) to confine server activity.

  • Declaration — Host advertises roots capability.
  • Query — roots/list fetches current scope.
  • Change Notification — notifications/roots/list_changed signals updates.

Well-behaved servers respect these hints for safety and relevance.
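Note the reversed direction: here the server is the requester and the client answers. The root URI below is hypothetical:

```json
{ "jsonrpc": "2.0", "id": 3, "method": "roots/list" }

{ "jsonrpc": "2.0", "id": 3,
  "result": { "roots": [ { "uri": "file:///proj", "name": "Project" } ] } }
```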

| Primitive | Purpose                 | Analogy         | Originator |
| --------- | ----------------------- | --------------- | ---------- |
| Resources | Read-only context/data  | Library card    | Client/LLM |
| Tools     | State-changing actions  | Power tools     | Client/LLM |
| Prompts   | Parameterized workflows | Pre-filled form | User       |
| Sampling  | Deferred LLM inference  | Ask an expert   | Server     |
| Roots     | Define workspace limits | Fenced yard     | Client     |

Security Principles

Connecting LLMs to external systems introduces risks: data exfiltration, malicious execution, unauthorized changes.

Core tenets:

  • User Consent — Host must surface clear approval prompts for sensitive operations.
  • Reviewability — Especially for sampling, users should inspect/edit prompts and results.
  • Sandboxing & Least Privilege — Limit server permissions; validate all inputs; confine filesystem access via roots.
  • Defense Against Malicious Servers — Use trusted sources, signatures, and runtime restrictions.
  • Prevent Path Traversal — Sanitize and constrain URIs/paths within allowed roots.
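
A minimal sketch of the last point, assuming Node.js and a single hypothetical root directory; real servers would apply this to every URI received from resources/read or tool arguments:

```typescript
import * as path from 'path';

// Returns true only if `requested` still lies inside `rootDir`
// after resolving any '..' segments.
function isWithinRoot(rootDir: string, requested: string): boolean {
  const root = path.resolve(rootDir);
  const target = path.resolve(root, requested);
  // path.relative starts with '..' (or is absolute, on Windows
  // across drives) exactly when the target escapes the root.
  const rel = path.relative(root, target);
  return rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel));
}

console.log(isWithinRoot('/proj', 'docs/start.md'));  // true
console.log(isWithinRoot('/proj', '../etc/passwd'));  // false
```

Resolving before comparing is the important part: a naive prefix check on the raw string would accept `/proj/../etc/passwd`.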

Server developers should document behavior transparently, enforce rate limiting, and handle binaries cautiously.

Ongoing collaboration among protocol designers, host implementers, server authors, and users is essential to maintain a secure MCP ecosystem.
