MCP Security Guide: 10 Things Every Developer Should Know
A technical guide to securing Model Context Protocol implementations. Covers authentication, data handling, GDPR compliance, token security, and SOC 2 considerations for MCP server operators.
The Model Context Protocol is moving from experimental to production fast. MCP server downloads grew from 100,000 to over 8 million in just five months (MCP Manager, 2025). AI assistants connected to live APIs, databases, and third-party services are no longer proofs of concept -- they're in production at companies of all sizes.
The security model for MCP is still being worked out. Anthropic's specification defines the protocol, but it deliberately leaves security implementation to the server operator. That's the right architectural choice. It's also why developers building MCP servers need to think carefully about security from the start, not retrofit it later.
This guide covers 10 security considerations that matter most for production MCP deployments, with particular attention to credential handling, data privacy, and GDPR compliance.
Defence in Depth

Five security layers between an MCP client and your upstream API credentials:

- Tool schema validation: constrained inputs, type checking, pattern matching
- Rate limiting: per-session, per-license, per-upstream endpoint
- Input sanitisation: SQL parameterisation, path traversal prevention, injection blocking
- Authentication layer: bearer tokens, OAuth 2.0, session management
- Server-side credentials: encrypted at rest, never transmitted to clients

Upstream APIs (Google, Meta, Amazon, YouTube): credentials never leave the server.
1. Never Pass Credentials Through the Client
The most common MCP security mistake is putting upstream API credentials in a configuration that lives on the user's machine.
If your Claude Desktop config looks like this:
{
  "mcpServers": {
    "my-tool": {
      "command": "node",
      "args": ["server.js"],
      "env": {
        "GOOGLE_API_KEY": "AIza...",
        "META_APP_SECRET": "abc123..."
      }
    }
  }
}

You've given every process on that machine access to your upstream API credentials. If the machine is shared, compromised, or the config file is accidentally included in a repo, those credentials are exposed.
According to GitGuardian's 2024 State of Secrets Sprawl report, over 12.8 million new secrets were detected in public GitHub commits in 2023 alone (GitGuardian, 2024). API keys in config files are one of the most common vectors.
The correct architecture: credentials live on your server, not the client. The client authenticates to your MCP server using a token (a license key, a session token, an OAuth access token). Your server holds the upstream API keys and proxies requests on the client's behalf.
This is the "server-side proxy" pattern. It's the only pattern that keeps upstream credentials safe in a multi-tenant or end-user-facing deployment. It's also exactly how Ooty's architecture works.
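A minimal sketch of the server-side proxy pattern follows. All names here (the session set, service map, and URL) are illustrative, not Ooty's or any upstream provider's actual API:

```typescript
// Illustrative server-side proxy sketch: the client presents only a session
// token; upstream API keys live in server memory and are attached just before
// the upstream call. Nothing here is a real endpoint or real credential.
const UPSTREAM_KEYS: Record<string, string> = {
  google: "server-side-secret", // in production: loaded from the server's env or a secret store
};

const VALID_SESSIONS = new Set(["sess_abc123"]); // in production: a database lookup

interface ProxyRequest {
  sessionToken: string;
  service: string;
  path: string;
}

function buildUpstreamCall(req: ProxyRequest): { url: string; headers: Record<string, string> } {
  // Reject before any credential is touched.
  if (!VALID_SESSIONS.has(req.sessionToken)) throw new Error("unauthorized");
  const key = UPSTREAM_KEYS[req.service];
  if (!key) throw new Error(`unknown service: ${req.service}`);
  // The upstream credential is attached server-side and never echoed to the client.
  return {
    url: `https://upstream.example.com${req.path}`,
    headers: { Authorization: `Bearer ${key}` },
  };
}
```

The important property is structural: there is no code path that returns an upstream key to the client, only requests made on its behalf.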
2. Implement Proper Authentication on Every Endpoint
MCP servers that accept HTTP connections need authentication on every route. "I'll add auth later" is how security holes ship to production.
The two primary options:
Bearer token authentication: The client includes Authorization: Bearer {token} in every request. Your server validates the token before processing any tool calls. This is appropriate for license-key or API-key based authentication.
OAuth 2.0: For user-delegated access (the user authorises your MCP server to act on their behalf), OAuth is the correct standard. The Streamable HTTP transport supports OAuth token passing natively. This is appropriate when users are authorising access to their own accounts (Google, Meta, etc.).
For development-only environments, no-auth is fine. For anything that touches real data or real users, implement auth before you write the first tool handler.
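A framework-agnostic sketch of bearer-token checking, assuming a token store you validate against (helper names are hypothetical):

```typescript
// Extract the token from an Authorization header; null means "reject".
function extractBearerToken(authHeader: string | undefined): string | null {
  if (!authHeader || !authHeader.startsWith("Bearer ")) return null;
  const token = authHeader.slice("Bearer ".length).trim();
  return token.length > 0 ? token : null;
}

// Gate every route on this check before dispatching any tool call.
function authorise(authHeader: string | undefined, validTokens: Set<string>): boolean {
  const token = extractBearerToken(authHeader);
  return token !== null && validTokens.has(token);
}
```

Whatever HTTP framework you use, the check belongs in middleware applied to every route, not repeated (and eventually forgotten) per handler.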
3. Token Scope: Don't Ask for More Than You Need
When implementing OAuth flows for connecting upstream services, request minimum necessary scopes.
Bad practice:
Google OAuth scopes requested by an analytics tool:

- https://www.googleapis.com/auth/analytics.readonly
- https://mail.google.com/
- https://www.googleapis.com/auth/calendar
Requesting Gmail and Calendar access for an analytics tool is unnecessary and alarming to users. OAuth consent screens showing excessive permissions reduce conversion and create legitimate security concerns.
Principle of least privilege: request exactly the scopes your tool needs. If you're building an MCP server for Google Ads data, you need the Google Ads readonly scope. You don't need Search Console, Analytics, Gmail, or anything else.
This matters for GDPR compliance too. GDPR's data minimisation principle (Article 5(1)(c)) requires that personal data is "adequate, relevant and limited to what is necessary." Requesting unnecessary OAuth scopes and potentially accessing unnecessary personal data creates GDPR exposure.
MCP Threat Model

The six most common security risks in MCP deployments and how to address them:

- Credential exposure: API keys in client-side config files. Mitigation: server-side proxy pattern.
- Token theft: session tokens accessible to other processes. Mitigation: short-lived tokens, machine binding, revocation.
- Injection attacks: malicious tool parameters (SQL, path traversal). Mitigation: parameterised queries, input validation, schema constraints.
- Runaway agent costs: an agent stuck in a loop exhausts API quotas. Mitigation: three-tier rate limiting (session, license, upstream).
- Supply chain compromise: vulnerable transitive npm/pip dependencies. Mitigation: lock files, CI auditing, SCA tools.
- GDPR violations: over-logging, excessive data retention, missing DPAs. Mitigation: log metadata only, set retention limits, right to erasure.
4. Session Token Design and Storage
If your MCP server issues session tokens (shorter-lived tokens derived from a license key or OAuth credential), the token design matters.
Token design checklist:
- Use cryptographically random, unguessable token values (not incremental IDs)
- Set appropriate expiry (24 hours is a reasonable default for session tokens)
- Include a machine or client fingerprint binding if you want to prevent token sharing
- Store tokens server-side in a database or Redis, not stateless JWT if you need revocation
- Implement token refresh before expiry (don't require users to reauthenticate mid-session)
On JWT: JWTs are stateless -- they can't be revoked without a blocklist. For MCP session tokens that need to be revocable (license cancellation, security incident), store sessions server-side and validate against the database. Ooty uses AES-256-GCM encrypted tokens with server-side validation specifically for this reason.
Client-side storage: Tokens stored at ~/.ooty/auth.json or equivalent are accessible to any process running as that user. This is acceptable for trusted development environments. For enterprise deployments, consider secure keychain storage or OS-level secret management.
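The checklist above can be sketched as follows. The in-memory Map stands in for a database or Redis, and the token format is illustrative (Ooty's actual AES-256-GCM scheme is not reproduced here):

```typescript
import { randomBytes } from "node:crypto";

// Sketch: random token values, explicit expiry, and server-side storage so
// revocation is possible (unlike a bare JWT).
interface Session { token: string; expiresAt: number; }

const SESSION_TTL_MS = 24 * 60 * 60 * 1000; // 24-hour default from the checklist
const sessions = new Map<string, Session>();

function issueSession(now = Date.now()): Session {
  const token = randomBytes(32).toString("base64url"); // unguessable, not sequential
  const session: Session = { token, expiresAt: now + SESSION_TTL_MS };
  sessions.set(token, session);
  return session;
}

function validateSession(token: string, now = Date.now()): boolean {
  const s = sessions.get(token);
  if (!s) return false;
  if (now > s.expiresAt) { sessions.delete(token); return false; } // expired
  return true;
}

function revokeSession(token: string): void {
  sessions.delete(token); // immediate revocation: licence cancelled, incident response
}
```

Because validation hits server-side state, revocation takes effect on the next request; a stateless JWT would remain valid until expiry.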
5. Input Validation and Injection Prevention
MCP tool parameters are user-controlled inputs. Treat them as untrusted data.
The attack surface depends on what your tools do with the parameters:
- SQL queries: Parameterise everything. Never interpolate user-supplied values into query strings.
- Shell commands: Don't construct shell commands from tool parameters. If you must call external processes, use argument arrays, not string interpolation.
- API calls: Validate parameter types, ranges, and formats before passing to upstream APIs. Unexpected parameter values can cause upstream API errors that leak information about your backend.
- File paths: If your MCP server reads files based on tool parameters, validate and sanitise paths. Path traversal (../../etc/passwd) is trivially constructed if you're using string concatenation.
The MCP specification doesn't validate tool parameters -- that's your responsibility. A well-defined tool schema with strict type validation catches most accidental misuse. Explicit sanitisation handles adversarial inputs.
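For the file-path case above, a containment check is short enough to show in full. This is a sketch of the general technique, not a complete sandbox:

```typescript
import { resolve, sep } from "node:path";

// Resolve the requested path against an allowed root, then confirm the result
// stays inside that root. Checking the raw input string for "../" is not
// enough; resolve first, compare after.
function safeResolve(root: string, requested: string): string | null {
  const base = resolve(root);
  const resolved = resolve(base, requested);
  return resolved === base || resolved.startsWith(base + sep) ? resolved : null;
}
```

A `null` return means reject the tool call; never fall back to the raw parameter. (Symlinks inside the root need separate handling, e.g. `fs.realpath`, in a full implementation.)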
6. Rate Limiting at the MCP Layer
Your MCP server sits between Claude (or any MCP client) and your upstream APIs. If you don't rate limit at the MCP layer, a runaway agent can exhaust your upstream API quotas in minutes.
This is both a security concern and a cost concern. Many upstream APIs (Google Ads, Meta, Amazon) have both rate limits and cost-per-call billing. Unrestricted tool calls from an agent that gets stuck in a loop can generate significant unexpected costs.
Implement rate limiting at three levels:
- Per-session: Limit tool calls per session per minute (e.g., 60 calls/minute)
- Per-license-key: Limit total daily calls per license (appropriate for SaaS billing tiers)
- Per-upstream-endpoint: Respect upstream API rate limits with queuing or backoff
Token bucket or sliding window algorithms work well. Redis is the standard backend for distributed rate limiting.
Return the right error: When rate limited, return HTTP 429 with a Retry-After header. MCP clients that implement proper error handling will back off. Clients that don't will at least get a clear error rather than a cryptic failure.
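The token bucket mentioned above fits in a few lines. This in-memory sketch covers the per-session tier; a multi-instance deployment would back the state with Redis:

```typescript
// Minimal token bucket: a fixed burst capacity that refills continuously.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(private capacity: number, private refillPerSecond: number, now = Date.now()) {
    this.tokens = capacity; // start full so a new session can burst
    this.lastRefill = now;
  }

  // Returns true if the call may proceed; false means respond with HTTP 429.
  tryConsume(now = Date.now()): boolean {
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

A 60-calls/minute session limit is a bucket with capacity 60 refilling at one token per second; the capacity sets burst tolerance, the refill rate sets the sustained ceiling.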
7. Data Residency and GDPR Compliance
If your MCP server handles data about EU residents (which includes most marketing data -- analytics, ad performance, social media metrics), GDPR applies.
Key considerations:
Data minimisation in logging: If you log tool requests for debugging, be careful about what you log. Analytics data, search queries, and ad performance metrics may constitute personal data if they're linkable to individuals. Log request metadata (timing, error codes, tool names) rather than request payloads.
Data processing agreements: If you proxy requests to upstream APIs on behalf of users, you're a data processor for those users' data. Your terms of service and privacy policy need to reflect this, and you may need Data Processing Agreements with your customers depending on the nature of the data.
Right to erasure: If you store any user data server-side (session tokens, cached API responses, usage logs), you need a mechanism to delete this data when requested. Build the deletion path before you need it.
Cross-border transfer: If your servers are in the US and you're processing EU user data, you need a valid legal basis for the transfer (Standard Contractual Clauses, adequacy decision, etc.). The EU-US Data Privacy Framework covers many US-based services, but you need to be enrolled.
Retention limits: Don't store upstream API responses longer than necessary. If you're caching Google Ads data for performance, set appropriate cache TTLs (hours, not months) and implement automated cleanup.
8. Audit Logging Without Leaking Data
Good security posture requires knowing what happened when something goes wrong. MCP server logs should capture enough to reconstruct events without storing sensitive data.
Log what matters:
- Tool name called
- License key or session ID (truncated or hashed, not full value)
- Response time
- Error codes and types
- Rate limit events
Don't log:
- Full request payloads (may contain personal data)
- Upstream API responses (same concern)
- Complete token values (truncate to first/last 4 chars)
- OAuth access tokens (never log these)
Structured logging helps: use JSON logs with consistent fields. This makes it practical to search for specific license keys, error patterns, or tool call patterns when investigating incidents.
Log retention: Define explicit retention periods. Three months of access logs is typically sufficient for incident investigation. Longer retention increases your GDPR data minimisation exposure.
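The rules above reduce to a small helper pair. Field names here are illustrative, not a prescribed log schema:

```typescript
// Keep the first/last 4 chars of an identifier for correlation; mask the rest.
function truncateSecret(value: string): string {
  if (value.length <= 8) return "****";
  return `${value.slice(0, 4)}...${value.slice(-4)}`;
}

// One structured JSON line per tool call: metadata only.
function toolCallLogLine(tool: string, sessionToken: string, durationMs: number, errorCode?: string): string {
  return JSON.stringify({
    ts: new Date().toISOString(),
    tool,                                  // tool name: safe to log
    session: truncateSecret(sessionToken), // never the full token
    duration_ms: durationMs,
    error: errorCode ?? null,
    // Deliberately absent: request payloads, upstream responses, OAuth tokens.
  });
}
```

Keeping the masking in one function makes it auditable: grep for raw token variables reaching the logger and the review is done.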
9. Dependency Security
MCP servers are typically built on npm, pip, or similar ecosystems. These ecosystems have a supply chain problem -- transitive dependencies can introduce vulnerabilities that aren't visible in your direct dependency list.
Minimum viable dependency security:
- Run npm audit or pip-audit as part of your CI/CD pipeline
- Keep dependencies updated, especially security-relevant ones (HTTP libraries, auth libraries, cryptography)
- Use lock files (package-lock.json, poetry.lock) to ensure reproducible installs
- Consider using a software composition analysis (SCA) tool for production services
Sonatype's 2024 State of the Software Supply Chain report found that open-source supply chain attacks increased 200% year-over-year (Sonatype, 2024). This matters more for MCP than typical web services because of the trust model.
The specific MCP risk: MCP servers that run as local processes (stdio transport) execute on the user's machine with the user's permissions. If a compromised dependency executes arbitrary code, it runs with full user-level access to the local system. This is a meaningful threat for MCP servers distributed as npm packages.
For server-side MCP deployments (Streamable HTTP transport), the risk profile is similar to any Node.js/Python web service. Standard web security practices apply.
10. The Tool Schema as a Security Boundary
The MCP tool schema -- the JSON Schema definition of what parameters each tool accepts -- is a security control, not just documentation.
A tool with a poorly defined schema is an invitation to misuse:
// Dangerous: any string accepted
{
  "name": "search_data",
  "parameters": {
    "query": { "type": "string" }
  }
}

// Better: constrained, validated inputs
{
  "name": "search_data",
  "parameters": {
    "query": {
      "type": "string",
      "maxLength": 200,
      "pattern": "^[a-zA-Z0-9 _-]+$"
    },
    "date_range": {
      "type": "string",
      "enum": ["7d", "30d", "90d", "6m", "12m"]
    },
    "limit": {
      "type": "integer",
      "minimum": 1,
      "maximum": 100,
      "default": 10
    }
  }
}

The schema does three things: it tells the AI client what's expected (reducing hallucinated parameters), it provides a first line of validation before your code runs, and it limits the attack surface for adversarial inputs.
Validate against the schema in your server code, not just in the schema declaration. Don't trust that the client has enforced schema constraints -- validate them yourself.
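A sketch of that server-side re-validation, mirroring the constraints of the "better" schema (in practice a JSON Schema validator library would generate these checks):

```typescript
// Re-check every constraint the schema declares; never assume the client did.
const ALLOWED_RANGES = ["7d", "30d", "90d", "6m", "12m"];

function validateSearchParams(params: Record<string, unknown>): string[] {
  const errors: string[] = [];
  const { query, date_range, limit = 10 } = params; // default mirrors the schema

  if (typeof query !== "string" || query.length > 200 || !/^[a-zA-Z0-9 _-]+$/.test(query)) {
    errors.push("query: must match schema pattern and length limit");
  }
  if (date_range !== undefined && !ALLOWED_RANGES.includes(date_range as string)) {
    errors.push("date_range: not one of the allowed enum values");
  }
  if (typeof limit !== "number" || !Number.isInteger(limit) || limit < 1 || limit > 100) {
    errors.push("limit: must be an integer between 1 and 100");
  }
  return errors; // empty array means the call may proceed
}
```

Note that the query pattern rejects quotes, semicolons, and slashes outright, which removes whole classes of injection input before your handler runs.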
A Note on SOC 2
If you're building MCP infrastructure for enterprise customers, SOC 2 Type II is the most relevant compliance framework. SOC 2 evaluates security controls across five trust service criteria: Security, Availability, Processing Integrity, Confidentiality, and Privacy.
The security controls described in this guide -- access controls, audit logging, encryption, rate limiting, vulnerability management -- are directly relevant to SOC 2 Security criteria. If SOC 2 is a goal, implement these controls systematically and document them. The audit is primarily about demonstrating that controls are in place and operating consistently over time.
The 10-Point Security Checklist

The baseline for responsible MCP server operation:

1. Server-side credentials: no upstream API keys on client machines
2. Auth on every endpoint: bearer token or OAuth before shipping
3. Minimum OAuth scopes: request only what the tool needs
4. Proper token design: random, expiring, revocable, stored securely
5. Input validation: all tool parameters treated as untrusted
6. Three-tier rate limiting: session, license, and upstream API
7. GDPR-compliant handling: minimise storage, know your legal basis
8. Safe audit logging: log metadata not payloads, set retention
9. Dependency hygiene: audit regularly, lock files, keep updated
10. Schema as security control: constrain inputs, validate server-side
Summary
The MCP ecosystem is moving fast. Over 5,800 MCP servers are now available with 300+ client applications (MCP Manager, 2025), and remote MCP servers increased nearly 4x since May 2025. Security practices need to keep pace.
These 10 points represent the baseline -- the floor, not the ceiling -- for responsible MCP server operation. If you're building MCP servers that handle real user data, start here and build up.
For a concrete example of these principles in practice, see how Ooty's architecture implements each of these security layers.
Written by
Finn Hartley
Product Lead at Ooty. Writes about MCP architecture, security, and developer tooling.