The Security Risks of Model Context Protocol (MCP)

At Code Drifters, we help organizations bring AI into highly regulated industries — responsibly and securely. One protocol getting a lot of attention right now is the Model Context Protocol (MCP). It’s designed to help large language models (LLMs) connect to external tools and APIs, maintain context, and act more like intelligent agents than simple chatbots.
It’s a promising technology. But as with anything new, it comes with important security risks — especially if it’s rolled out without careful planning. In this article, we’ll walk through some of the big security issues we’ve seen when teams put MCP into practice in industries like healthcare, government, and finance.
Why Use MCP Instead of Existing Alternatives?
Some teams may ask, why not stick with existing standards like OpenAPI? After all, OpenAPI already works well for regular APIs.
OpenAPI is great for clear, structured APIs. It’s built around human developers manually creating and documenting endpoints. Whereas MCP was built specifically for AI agents – using JSON-RPC 2.0 under the hood. MCP lets AI:
- Dynamically discover tools and understand how to use them.
- Use plain-language descriptions for tools, making it easier for AI to know what they do.
- Handle context and memory so agents can make more informed decisions.
- Focus more on the task itself, not just the technical details of API calls.
This dynamic approach is great for complex workflows, but it also opens up new risks. Tools might change unexpectedly, or data could leak between different tasks. This means MCP needs new types of safeguards.
Security Risks We’re Seeing (and How to Fix Them)
A recent security audit from Equixly Security Labs found some serious vulnerabilities in real-world MCP deployments:
- 43% allowed command injection attacks.
- 30% had SSRF vulnerabilities.
- 22% were vulnerable to unauthorized file access.
- Session IDs were openly exposed in URLs.
- None had proper authentication or message signing built in.
Here’s a clear breakdown of these issues, plus our recommendations to fix them:
Prompt Injection & Tool Poisoning
Risk: Agents might follow harmful instructions hidden in tool descriptions, which users never see directly.
Mitigation: Always sanitize tool metadata. Be strict about what your agents trust, and validate all tool inputs.
Trojan Horse Updates
Risk: Trusted tools could quietly become malicious after being installed, similar to supply-chain attacks.
Mitigation: Pin your tool versions and require digital signatures. Alert users clearly if tools change after installation.
Command Injection
Risk: Unsafe tools can accidentally run malicious code sent by attackers.
Mitigation: Completely avoid shell commands. Validate inputs and regularly scan your code for risky patterns.
Cross-Server Tool Shadowing
Risk: A malicious server could pretend to be a trusted tool, intercepting or modifying data.
Mitigation: Clearly isolate your servers. Ensure tool identities are strongly tied to trusted sources.
Credential & Context Leakage
Risk: Sensitive tokens and user data might leak between sessions or get exposed to the wrong tools.
Mitigation: Always encrypt sensitive context. Clearly isolate memory between sessions and tenants and never share credentials as tool inputs.
Session IDs Exposed in URLs
Risk: Putting session identifiers in URLs exposes them to logs and browser histories.
Mitigation: Use secure cookies or headers for session identifiers, and scrub logs carefully to remove sensitive data and context.
Excessive Tool Permissions
Risk: Giving tools too much access (files, network, environment) puts your entire system at risk.
Mitigation: Limit permissions strictly, sandbox your tools, and use clear access policies.
Risky Dependencies
Risk: Outdated or insecure libraries introduce hidden vulnerabilities.
Mitigation: Regularly scan dependencies, pin versions, and require manual approval for updates.
Why This Matters So Much for Regulated Industries
If you’re working in healthcare, government, finance, or other heavily regulated fields, you don’t have the luxury of overlooking these risks. Regulations like HIPAA, FedRAMP, GDPR, and PCI DSS require:
- Clear audit trails
- Strong access controls
- Proven security measures
- Data encryption at every step
Right now, MCP doesn’t handle these requirements out of the box. It can work, but you need careful planning and solid architecture to use it safely.
Our Security Recommendations
- Double-check all user input
- Stay away from shell commands
- Regularly clean your tool metadata
- Use trusted, pinned dependencies
- Require clear tool versioning
- Let users see exactly what tools do
- Monitor tool behaviors closely
- Clearly alert users to tool changes
- Treat all MCP servers as external risks
- Log and monitor carefully
- Use least-access permissions for tools
- Regularly audit your entire stack
What Could a Secure MCP Look Like?
We think MCP needs a secure version — let’s call it MCPS (Model Context Protocol Secure), inspired by HTTPS but for AI agents. This secure protocol would include:
- Digital signature requirements for tool validation
- Specifications for immutable and secure logs
- Fully encrypted session contexts
- Clearly declared tool capabilities, scopes and permissions
- Mutual authentication using proven methods like mTLS
- Specifications for active security monitoring and real-time alerts
Creating this secure version would need cooperation between security experts, developers, and industry regulators. Until then, we are investigating emerging security tools, such as ScanMCP.com, which can help spot vulnerabilities.
Final Thoughts
We love AI at Code Drifters — but we also know it can cause real problems if not done right. MCP is powerful and exciting, but powerful tools need strong safety measures. Our priority is building secure AI systems that genuinely help people. If you’re looking at MCP and want some help navigating the risks, drop us a line at hello@drifters.io.
Note: This article expands on some great insights shared by Elena Cross in her article “The ‘S’ in MCP Stands for Security”.