What is MCP server implementation?
An MCP (Model Context Protocol) server is a lightweight service that exposes tools, resources, and prompts to AI agents through a standardized protocol. It translates natural-language requests from large language models into structured API calls against your existing data infrastructure, databases, and services.
MCP follows a client-server architecture built on JSON-RPC 2.0. The host application (like Claude Desktop or an IDE) runs an MCP client that connects to one or more MCP servers, each providing specific capabilities.
- Architecture pattern: Client-server model where AI hosts connect to tool servers via JSON-RPC over stdio or HTTP with Server-Sent Events
- Core primitives: Tools (executable functions), resources (read-only data), and prompts (reusable templates) form the three building blocks
- Transport options: Local servers use stdio pipes for speed; remote servers use HTTP with SSE for network access
- Security model: Every input must be validated, every tool scoped to minimum permissions, and every action logged
- Ecosystem reach: The official MCP SDKs for TypeScript and Python together see 97 million-plus monthly downloads across the npm and PyPI ecosystems
Below, we’ll explore: MCP architecture and protocol design, building your first server, tool and resource registration, security hardening, production deployment, and how Atlan implements MCP for data governance.
Understanding MCP architecture and protocol design
The Model Context Protocol defines how AI agents discover and interact with external tools. Before writing code, you need to understand the protocol’s communication model, message flow, and capability negotiation system.
1. The client-server communication model
MCP uses a layered architecture with three distinct roles:
- Host: The application users interact with, such as Claude Desktop, Cursor, or a custom AI assistant
- Client: Lives inside the host and manages connections to one or more servers
- Server: Exposes a specific set of capabilities (tools, resources, prompts) to the client
Communication flows through JSON-RPC 2.0 messages. The client sends requests; the server responds. Both sides can also send notifications, which are one-way messages that don’t expect a response.
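A sketch of these three message shapes in TypeScript; the `search_assets` tool name and its arguments are hypothetical examples, not part of the protocol:

```typescript
// JSON-RPC 2.0 message shapes as used by MCP.
// A request carries an id; the response echoes that id; a notification has no id.

type JsonRpcRequest = { jsonrpc: "2.0"; id: number; method: string; params?: object };
type JsonRpcResponse = { jsonrpc: "2.0"; id: number; result?: object; error?: object };
type JsonRpcNotification = { jsonrpc: "2.0"; method: string; params?: object };

// Client asks the server to run a (hypothetical) tool.
const request: JsonRpcRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: { name: "search_assets", arguments: { query: "orders table" } },
};

// Server replies, echoing the request id.
const response: JsonRpcResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: { content: [{ type: "text", text: "Found 3 assets" }] },
};

// One-way notification: no id, so no response is expected.
const notification: JsonRpcNotification = {
  jsonrpc: "2.0",
  method: "notifications/tools/list_changed",
};
```

The id is how clients correlate responses when several requests are in flight over the same connection.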
2. Transport layer options
Two transport mechanisms exist for different deployment scenarios. Stdio transport pipes messages through standard input and output streams, which is ideal for local servers running on the same machine. HTTP with Server-Sent Events (SSE) handles remote deployments where servers run on separate infrastructure.
Stdio offers lower latency and simpler setup. HTTP with SSE provides network accessibility and works behind load balancers. Most teams start with stdio for development and move to HTTP for production.
3. Capability negotiation and lifecycle
Every MCP connection begins with an initialization handshake. The client sends an initialize request declaring its supported protocol version and capabilities. The server responds with its own capabilities, including which primitives it supports.
The client then sends an initialized notification to confirm the connection is ready. This three-step handshake (initialize, response, initialized) ensures both sides agree on protocol version and available features before any tool calls begin.
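The three-step handshake, sketched as the JSON-RPC messages exchanged; the protocol version string and the client/server names are illustrative placeholders:

```typescript
// Step 1: client declares its protocol version and capabilities.
const initialize = {
  jsonrpc: "2.0",
  id: 0,
  method: "initialize",
  params: {
    protocolVersion: "2025-03-26", // illustrative date-based version string
    capabilities: {},
    clientInfo: { name: "example-client", version: "1.0.0" },
  },
};

// Step 2: server responds with the primitives it supports.
const initializeResult = {
  jsonrpc: "2.0",
  id: 0,
  result: {
    protocolVersion: "2025-03-26",
    capabilities: { tools: {}, resources: {} }, // this server declares no prompts
    serverInfo: { name: "example-server", version: "1.0.0" },
  },
};

// Step 3: one-way confirmation from the client; no id, no response expected.
const initialized = { jsonrpc: "2.0", method: "notifications/initialized" };
```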
Not every server supports every primitive. A server might expose only tools, or only resources. Clients must respect what the server declares.
Modern data catalogs that implement MCP follow this same negotiation pattern to expose their specific feature set.
Building your first MCP server step by step
This section walks through creating a working MCP server from scratch. We’ll use the TypeScript SDK, but the same concepts apply to the Python SDK. Both SDKs follow identical protocol semantics with language-appropriate patterns.
1. Set up your project and install dependencies
Start by initializing a new Node.js project and installing the MCP TypeScript SDK. You need @modelcontextprotocol/sdk as your primary dependency. Create a src/index.ts file as your entry point.
Configure your tsconfig.json with "module": "node16" and "moduleResolution": "node16" to handle the SDK’s ESM exports correctly. This avoids the most common setup errors teams encounter. Also add zod for runtime schema validation of tool inputs.
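A minimal tsconfig.json matching those settings might look like the following; the "target", "outDir", and "strict" values are reasonable defaults we’ve added, not requirements:

```json
{
  "compilerOptions": {
    "module": "node16",
    "moduleResolution": "node16",
    "target": "es2022",
    "outDir": "dist",
    "strict": true
  },
  "include": ["src"]
}
```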
2. Initialize the server and declare capabilities
Create a Server instance with your server’s name and version. Call server.setRequestHandler for each capability you want to support. At minimum, register handlers for ListToolsRequestSchema and CallToolRequestSchema.
The ListToolsRequestSchema handler returns your tool definitions, including names, descriptions, and JSON Schema input definitions. The CallToolRequestSchema handler executes the actual tool logic when an AI agent invokes it. Active metadata platforms like Atlan register tools for asset search, lineage tracking, and governance operations through this same pattern.
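The sketch below shows the logic those two handlers implement, written as plain functions so it stands alone without the SDK installed; the `ping` tool, its schema, and the handler names are hypothetical:

```typescript
// What a tools/list handler returns: tool definitions with JSON Schema inputs.
const tools = [
  {
    name: "ping",
    description: "Returns 'pong' along with the supplied message.",
    inputSchema: {
      type: "object",
      properties: { message: { type: "string", description: "Text to echo back" } },
      required: ["message"],
    },
  },
];

function handleListTools() {
  return { tools };
}

// What a tools/call handler does: dispatch on tool name and return
// a structured result, flagging failures with isError instead of throwing.
function handleCallTool(name: string, args: { message?: string }) {
  if (name !== "ping") {
    return { isError: true, content: [{ type: "text", text: `Unknown tool: ${name}` }] };
  }
  return { isError: false, content: [{ type: "text", text: `pong: ${args.message ?? ""}` }] };
}
```

In a real server these bodies sit inside `server.setRequestHandler(ListToolsRequestSchema, …)` and `server.setRequestHandler(CallToolRequestSchema, …)` registrations.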
3. Connect transport and start listening
For local development, use StdioServerTransport to connect your server to stdio. Instantiate the transport and call server.connect(transport). Your server is now listening for MCP client connections.
Test your server by running it with Claude Desktop or the MCP Inspector tool. The Inspector provides a web-based UI where you can send requests, view responses, and debug tool invocations without needing a full AI host application.
For Claude Desktop integration, add your server to the claude_desktop_config.json file under mcpServers. Specify the command to run your server (e.g., npx ts-node src/index.ts) and any required environment variables. Restart Claude Desktop to pick up the configuration changes.
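A minimal claude_desktop_config.json entry might look like this; the server name and the environment variable are placeholders for your own values:

```json
{
  "mcpServers": {
    "my-first-server": {
      "command": "npx",
      "args": ["ts-node", "src/index.ts"],
      "env": {
        "API_KEY": "set-your-key-here"
      }
    }
  }
}
```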
Registering tools, resources, and prompts
The three MCP primitives serve different purposes and follow different patterns. Getting tool definitions right is critical because LLMs rely on descriptions and schemas to decide when and how to call your tools.
1. Defining tools with clear schemas
Each tool needs a unique name, a human-readable description, and a JSON Schema defining its input parameters. The description is what the LLM reads to decide whether to call your tool, so write it for an AI audience. Be specific about what the tool does, what inputs it expects, and what it returns.
Input schemas should use strict types and mark required fields explicitly. Include description properties on individual parameters to help the LLM fill them correctly.
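For example, a hypothetical `search_assets` tool definition following these guidelines:

```typescript
// Hypothetical tool definition: strict types, an explicit required list,
// per-parameter descriptions, and an enum acting as an allowlist.
const searchAssetsTool = {
  name: "search_assets",
  description:
    "Search the data catalog for assets by keyword. Returns up to `limit` " +
    "matching assets with their names, types, and owners.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Keywords to search for, e.g. 'orders table'" },
      assetType: {
        type: "string",
        enum: ["table", "column", "dashboard"], // allowlist, not free text
        description: "Restrict results to one asset type",
      },
      limit: { type: "number", description: "Maximum results to return (default 10)" },
    },
    required: ["query"],          // only query is mandatory
    additionalProperties: false,  // reject unexpected fields outright
  },
};
```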
Data governance teams building MCP tools often define schemas for operations like searching assets, running data quality checks, and updating metadata. Well-structured schemas reduce hallucinated inputs and improve tool call accuracy.
2. Exposing resources for read-only access
Resources represent data the AI can read but not modify. Each resource has a URI (using a custom scheme like myapp://), a name, and a MIME type. Resources can be static (file-like, with a fixed URI) or dynamic (using URI templates with placeholders).
Register a ListResourcesRequestSchema handler to advertise available resources and a ReadResourceRequestSchema handler to serve the content. Resources are ideal for exposing configuration files, database schemas, documentation, or business glossary terms that the AI might need as context.
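A plain-TypeScript sketch of that pattern, with hypothetical `myapp://` resources and inlined content standing in for real data sources:

```typescript
// Resources the server advertises via its list handler.
const resources = [
  { uri: "myapp://glossary/revenue", name: "Revenue definition", mimeType: "text/plain" },
  { uri: "myapp://schema/orders", name: "Orders table schema", mimeType: "application/json" },
];

function handleListResources() {
  return { resources };
}

// The read handler serves content for a known URI and rejects the rest.
function handleReadResource(uri: string) {
  if (uri === "myapp://glossary/revenue") {
    return {
      contents: [{ uri, mimeType: "text/plain", text: "Recognized revenue, net of refunds." }],
    };
  }
  throw new Error(`Unknown resource: ${uri}`);
}
```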
3. Creating reusable prompt templates
Prompts are pre-defined message templates that help users invoke common workflows. Unlike tools (called by the LLM) and resources (read by the LLM), prompts are selected by the user from a menu. Each prompt can accept arguments and return a structured list of messages.
Use prompts for operations that benefit from standardized formatting, like generating reports, running analysis workflows, or creating documentation from data. Prompts are optional; many servers operate with just tools and resources.
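A sketch of a hypothetical prompt that builds a data quality report request from a table-name argument:

```typescript
// Hypothetical prompt template: accepts one argument and returns the
// message list a client would hand to the model.
function qualityReportPrompt(tableName: string) {
  return {
    description: "Generate a data quality report for a table",
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text:
            `Summarize data quality issues for the table "${tableName}". ` +
            `Cover null rates, freshness, and schema changes.`,
        },
      },
    ],
  };
}
```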
4. Error handling and response formatting
Every tool handler should return structured responses with content arrays containing typed blocks (text, image, or resource). Return isError: true in the response when a tool call fails, rather than throwing exceptions.
Provide clear error messages that help the LLM understand what went wrong. Instead of “Error 500,” return “Database connection failed: the users table is not accessible with current credentials.” Detailed errors help the AI agent recover and try alternative approaches.
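One way to standardize this is a small wrapper that converts thrown exceptions into `isError` responses; the helper name is ours, not part of the SDK:

```typescript
// Run tool logic and always return a structured response, so failures
// reach the LLM as descriptive isError content rather than exceptions.
function safeToolCall(fn: () => string) {
  try {
    return { isError: false, content: [{ type: "text", text: fn() }] };
  } catch (err) {
    const detail = err instanceof Error ? err.message : String(err);
    return { isError: true, content: [{ type: "text", text: `Tool failed: ${detail}` }] };
  }
}

// Usage: the thrown message names the failing resource, giving the
// agent enough detail to adjust its next attempt.
const failure = safeToolCall(() => {
  throw new Error("Database connection failed: the users table is not accessible with current credentials");
});
```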
Security hardening for MCP servers
Security is the most critical and most frequently overlooked aspect of MCP server implementation. A 2025 audit by Invariant Labs found that 43% of early MCP servers contained command injection vulnerabilities. Your server runs with your permissions, so a compromised tool can access everything you can.
1. Input validation and sanitization
Never pass user-supplied input directly to shell commands, database queries, or file system operations. Validate every parameter against its JSON Schema before processing.
Use allowlists for file paths, database names, and command arguments. Reject unexpected input rather than trying to sanitize it. A common pattern is to define an enum of allowed values and reject anything not on the list.
Build parameterized queries instead of string concatenation for any database interaction. For file operations, resolve paths and verify they fall within an allowed directory. The OWASP injection prevention cheat sheet covers patterns that apply directly to MCP tool implementations.
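A sketch of the path-containment check using Node’s `path` module; the function name is illustrative:

```typescript
import * as path from "node:path";

// Resolve a requested path and verify it stays inside an allowed root
// directory. Rejecting anything that escapes the root blocks ../ traversal.
function resolveSafe(root: string, requested: string): string {
  const resolved = path.resolve(root, requested);
  const normalizedRoot = path.resolve(root) + path.sep;
  if (!resolved.startsWith(normalizedRoot)) {
    throw new Error(`Path escapes allowed directory: ${requested}`);
  }
  return resolved;
}
```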
2. Principle of least privilege
Grant each tool the minimum permissions needed for its function. A search tool should have read-only database access. A metadata update tool should only modify specific fields, not entire records.
Use separate database credentials for different tool categories. This isolation means a compromised tool cannot escalate its access beyond its intended scope.
Run your MCP server process under a restricted user account. Container deployments should use non-root users with read-only file systems where possible. Platforms like Atlan apply role-based access policies that automatically scope MCP tool operations to what the calling user is authorized to do.
3. Logging, auditing, and rate limiting
Log every tool invocation with the tool name, input parameters, calling client identity, timestamp, and result status. This audit trail is essential for debugging and compliance. Use structured logging formats (JSON) so logs are machine-parseable.
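A sketch of a structured audit record builder; the field names are illustrative and should match your logging pipeline:

```typescript
// Build one machine-parseable JSON log line per tool invocation.
function auditRecord(
  tool: string,
  clientId: string,
  params: object,
  status: "ok" | "error",
): string {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    tool,
    clientId,
    params, // consider redacting sensitive parameter values before logging
    status,
  });
}
```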
Implement rate limiting per client and per tool. Without rate limiting, a misbehaving agent can exhaust API quotas, overload databases, or generate excessive costs. Start with conservative limits and adjust based on observed usage patterns.
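A minimal fixed-window limiter keyed by client and tool, as a sketch; the limit and window size are placeholders to tune against observed usage:

```typescript
// Fixed-window rate limiter: allows `limit` calls per key per window.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(private limit: number, private windowMs: number) {}

  allow(clientId: string, tool: string, now: number = Date.now()): boolean {
    const key = `${clientId}:${tool}`;
    const entry = this.counts.get(key);
    // No entry yet, or the previous window expired: start a fresh window.
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(key, { windowStart: now, count: 1 });
      return true;
    }
    if (entry.count >= this.limit) return false; // over budget for this window
    entry.count += 1;
    return true;
  }
}
```

A sliding-window or token-bucket variant smooths bursts at window boundaries; the fixed window is the simplest starting point.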
Deploying MCP servers to production
Moving from a working local server to a reliable production deployment requires attention to process management, monitoring, configuration, and scaling. Most production failures come from missing operational basics, not protocol bugs.
1. Process management and health checks
Use a process manager (PM2 for Node.js, systemd for Linux, or container orchestration) to ensure your server restarts automatically after crashes. Implement a health check endpoint that verifies database connectivity, API availability, and memory usage.
Handle graceful shutdown by listening for SIGTERM and SIGINT signals. Close active connections, flush logs, and release resources before exiting. Docker-based deployments should include HEALTHCHECK instructions and reasonable stop grace periods.
2. Configuration and secrets management
Never hardcode credentials, API keys, or connection strings in your server code. Use environment variables for all configuration. In production, use a secrets manager (AWS Secrets Manager, HashiCorp Vault, or Kubernetes secrets) to inject credentials at runtime.
Separate configuration by environment. Development servers might connect to local databases; staging to test instances; production to live systems.
Use a configuration schema to validate all required settings at startup, failing fast if anything is missing. Libraries like zod or joi let you define expected environment variables with types and defaults, surfacing misconfiguration before the server processes any requests.
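A dependency-free sketch of fail-fast validation (the zod or joi approach mentioned above is more ergonomic); the variable names are illustrative:

```typescript
// Validate required settings at startup and fail fast with a message
// naming every missing variable, before any requests are served.
function loadConfig(env: Record<string, string | undefined>) {
  const required = ["DATABASE_URL", "API_KEY"] as const;
  const missing = required.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(", ")}`);
  }
  return {
    databaseUrl: env.DATABASE_URL!,
    apiKey: env.API_KEY!,
    port: Number(env.PORT ?? 3000), // optional, with a default
  };
}
```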
3. Monitoring and observability
Track three categories of metrics: system health (CPU, memory, connection count), protocol metrics (requests per second, error rate, latency per tool), and business metrics (which tools are called most, average response time, error patterns).
Set up alerting on error rate spikes and latency degradation. Use distributed tracing if your tools make downstream API calls to trace requests end to end. Production MCP servers should export metrics in Prometheus format or push to your existing observability stack.
How Atlan implements MCP for data governance
Building and maintaining individual MCP servers for each data tool creates fragmentation. Data teams often manage dozens of systems, and building custom MCP integrations for each one is not sustainable. This is where platform-level MCP implementations become valuable.
Atlan’s open-source Agent Toolkit implements MCP to provide a unified interface across the entire data ecosystem. Instead of building separate MCP servers for your data catalog, lineage system, glossary, and quality tools, Atlan exposes them all through a single MCP server.
The Atlan MCP server supports asset search using natural-language queries, column-level lineage exploration, metadata updates and annotations, business glossary management, data quality rule execution, and DSL-based advanced queries. AI agents running in Claude, Cursor, VS Code, or custom applications connect to one Atlan MCP endpoint and gain access to the full catalog.
Deployment options include Docker containers and the uv package manager. Getting started takes minutes, not weeks, because the server handles protocol compliance, capability negotiation, and connection management out of the box.
The server inherits Atlan’s existing access controls, so every MCP tool call respects the same role-based policies your team already configured. This eliminates the security gap that occurs when teams build ad-hoc MCP servers with overly broad permissions. Teams using Atlan report faster AI agent development because they skip the custom integration work entirely and connect directly to a governed, production-ready MCP endpoint.
Book a demo to see how Atlan’s MCP server connects your AI agents to governed data and metadata across your entire stack.
Conclusion
Implementing an MCP server gives your AI agents structured, secure access to the tools and data they need. Start with a clear architecture, register tools with precise schemas, and harden every input path against injection.
Build operational maturity with health checks, monitoring, and proper configuration management before scaling to production workloads. Whether you build custom servers or adopt platform-level implementations like Atlan’s Agent Toolkit, the principles of input validation, least privilege, and observability apply universally.
Book a demo to explore how Atlan’s MCP server can accelerate your AI agent infrastructure.
FAQs about MCP server implementation
1. What programming languages can I use to build an MCP server?
The official MCP SDKs support TypeScript and Python, which cover the majority of data and AI tooling ecosystems. Community implementations also exist for Go, Rust, Java, and C#. Choose the language that matches your team’s existing stack and the systems your server will integrate with.
2. How is MCP different from a REST API?
REST APIs are designed for application-to-application communication with fixed endpoints. MCP is designed specifically for AI agent communication, with built-in capability negotiation, typed tool schemas, and structured prompts. MCP servers describe their capabilities so LLMs can reason about which tools to call and how to use them.
3. Can one AI agent connect to multiple MCP servers?
Yes. An MCP client can maintain concurrent connections to multiple servers, each exposing different tools and resources. This is a core design principle of the protocol. An agent might connect to one server for database queries, another for file access, and a third for API integrations, all in the same session.
4. What are the biggest security risks with MCP servers?
The primary risks include command injection through unsanitized inputs, excessive permissions granted to tool functions, credential exposure in server configurations, and lack of rate limiting. A 2025 audit found that 43% of early MCP servers contained command injection vulnerabilities that could allow arbitrary command execution on the host system.
5. Do I need MCP if I already use function calling?
Function calling is model-specific and tightly coupled to a single provider’s API. MCP provides a universal standard that works across different AI models and clients.
If you only use one model and have few tools, function calling may suffice. MCP becomes valuable when you need interoperability across models, want to share tool definitions across teams, or plan to scale your AI agent infrastructure.