After going to the LisbonAI Conference, one talk really stood out. It summed up something I’d been thinking about for a while: how quickly the way we search for and use information online is changing.
I couldn’t help noticing a clear trend: more and more people are turning to ChatGPT instead of Google for information, but the web was not built for this kind of behaviour.
Every site you visit is designed for human eyes: colours, layouts, buttons and animations. So when an AI agent tries to gather information from a website, it is like trying to learn a recipe from a picture of a restaurant menu. It is possible, but you will probably miss something important.
How agents see the web differently
When you open a webpage, you see a visually appealing interface with clear sections and clickable buttons. An AI agent, on the other hand, sees code with layers of nested tags, styling instructions and scripts. The AI agent has to dig through visual design elements that help humans but mean nothing to a machine.
Colour schemes, hover effects and page layouts are all built for human brains. An AI, on the other hand, does not care whether your button is blue or red; it simply needs to know what the button does and how to use it.
With this in mind, there is a challenge driving a new movement: building an agent-accessible internet.
The basic idea is simple. If agents struggle with human-friendly websites, then the solution is to give them structured data they can actually use. This is where the Model Context Protocol (MCP) comes in.
The Model Context Protocol (MCP)
The MCP is designed to standardise how AI agents connect to data and tools. Anthropic calls it the “USB-C for AI”, a universal way for agents to plug into different services.
Here is how it works in practice: instead of making an agent scrape your website or read through API documentation, you build an MCP server. This server exposes your tools and data in a format that agents can understand immediately. Companies such as Mintlify have already done this for their documentation, allowing agents to access their content without navigating HTML.
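To make that concrete, here is a minimal sketch of an MCP server written with the FastMCP helper from the official Python SDK. The tool name and the hard-coded content are purely illustrative; a real server would sit in front of your actual documentation or data store.

```python
# Minimal MCP server sketch using the official Python SDK's FastMCP helper.
# The tool and its hard-coded "documentation" are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-server")

@mcp.tool()
def search_docs(query: str) -> str:
    """Search the documentation and return matching passages as plain text."""
    articles = {
        "getting started": "Install the package, set your API key and call run().",
        "authentication": "Pass a bearer token in the Authorization header.",
    }
    matches = [text for title, text in articles.items() if query.lower() in title]
    return "\n".join(matches) or "No matching documentation found."

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```

An agent connected to this server sees a named tool with a typed signature and a description, rather than a rendered web page.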
Something interesting is happening across the industry: companies are beginning to add support for MCP servers, and some are even pivoting to become agent-first.
Instead of creating visualisations for analysts to interpret, teams are now building MCP servers that agents can query directly. FuseBase is a good example: once just a dashboard and note-taking platform, it has evolved to let AI agents access and act on its data and workflows. This marks a meaningful shift in how we approach data access and interaction.
But we’re not fully there yet. As one LisbonAI speaker pointed out, there are currently more MCP builders than consumers, a moment reminiscent of the early days of the web. The key difference, however, is that the barrier to entry is significantly lower.
It is now possible to build an MCP server without deep technical knowledge. If you type “MCP” into YouTube, you will find an abundance of tutorials claiming you do not need to write any code to create your own MCP server.
This lower barrier is both exciting and risky: it encourages rapid adoption, but it also leads to a rapid accumulation of poor implementations.
MCP design challenges
While MCP is positioned as a solution for seamlessly connecting AI agents to tools, its design raises some concerns.
MCP servers load their entire tool catalogue upfront, bloating the context window with function definitions the model may never use. For example, if you ask an agent to count open issues across your GitHub repositories (something easily done with the GitHub API), the model doesn't just receive the relevant tools – it receives metadata from potentially hundreds of them. Valuable context space gets consumed before any actual work happens.
Every tool call requires resending the full context, even the unused tool definitions. A simple multi-step workflow can burn through thousands of tokens just to handle context. For applications requiring multiple tool interactions, costs escalate rapidly while performance degrades as context limits approach.
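A rough, back-of-the-envelope sketch of that cost (all the numbers here are invented purely for illustration; real figures depend on the server, the model and the conversation):

```python
# Back-of-the-envelope illustration of the context cost described above.
# All numbers are made up; real figures vary by server, model and conversation.

TOOL_DEF_TOKENS = 120        # rough size of one tool's name, description and JSON schema
NUM_TOOLS = 100              # a large MCP server can expose dozens to hundreds of tools
TOKENS_PER_TURN_OTHER = 800  # prompt, conversation history, tool results, etc.

def tokens_for_workflow(steps: int) -> int:
    """Total tokens consumed when every turn resends the full tool catalogue."""
    per_turn = NUM_TOOLS * TOOL_DEF_TOKENS + TOKENS_PER_TURN_OTHER
    return steps * per_turn

for steps in (1, 3, 5):
    print(f"{steps} tool call(s): ~{tokens_for_workflow(steps):,} tokens")
# 1 tool call(s): ~12,800 tokens
# 3 tool call(s): ~38,400 tokens
# 5 tool call(s): ~64,000 tokens
```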
Even Anthropic seems to acknowledge this issue. They recently published a post on using code execution with MCP, which can dramatically reduce token usage – from 150,000 tokens to 2,000. But if you need a workaround that reduces token usage by nearly 99%, perhaps the original approach wasn't optimal.
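The rough idea behind that approach, as I understand it, is that the model writes a short script which calls the tools, so bulky tool definitions and intermediate results never pass through its context. The sketch below is a hedged illustration of that pattern; list_issues() stands in for a generated wrapper around an MCP tool call, and the repository names and data are made up.

```python
# Sketch of the "code execution with MCP" idea: the model writes a short script
# like this one, which runs in a sandbox; only the printed summary returns to the
# conversation, so tool definitions and bulky results stay out of the context.

def list_issues(repo: str, state: str) -> list[dict]:
    # Placeholder: in the real setup this would invoke the GitHub MCP tool.
    fake_data = {
        "org/frontend": [{"id": 1}, {"id": 2}],
        "org/backend": [{"id": 3}],
        "org/docs": [],
    }
    return fake_data[repo] if state == "open" else []

open_count = 0
for repo in ("org/frontend", "org/backend", "org/docs"):
    open_count += len(list_issues(repo=repo, state="open"))

print(f"Open issues across repositories: {open_count}")
```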
But efficiency problems are not the only ones. As the number of MCPs grows, so do potential security risks.
What about security?
Here is where my scepticism begins. There is a great deal of hype and excitement about this agent-friendly future, but we are repeating the same security mistakes we supposedly learnt from decades ago. Recent security research from Equixly demonstrates this well.
They analysed popular MCP server implementations and found that 43% contained command injection vulnerabilities. Another 22% had path traversal issues that allowed unauthorised file access, and 30% were vulnerable to Server-Side Request Forgery attacks. These are not theoretical concerns or edge cases. They are real security problems in production code.
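To show what that class of bug typically looks like, here is a sketch of a command injection flaw of the kind such research describes, together with the straightforward fix. The ping_host tool is hypothetical and not taken from any specific implementation.

```python
# Sketch of a command injection flaw of the kind found in MCP tool handlers,
# plus the fix. The ping_host tool itself is hypothetical.
import subprocess

def ping_host_unsafe(host: str) -> str:
    # VULNERABLE: the string goes to a shell, so input such as
    # "example.com; cat /etc/passwd" runs a second command.
    return subprocess.run(f"ping -c 1 {host}", shell=True,
                          capture_output=True, text=True).stdout

def ping_host_safe(host: str) -> str:
    # Safer: validate the input and pass arguments as a list, never via a shell.
    if not host or not all(c.isalnum() or c in ".-" for c in host):
        raise ValueError("invalid hostname")
    return subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True).stdout
```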
What is even more concerning is how developers responded when notified. Only 30% acknowledged the issues and released fixes. Nearly half (45%) said the security risks were “theoretical” or “acceptable”. A quarter did not respond at all, which paints a worrying picture.
The MCP design itself also raises concerns. MCP servers often use session IDs in URLs, exposing sensitive identifiers in logs. The protocol does not provide clear rules for authentication, so implementations vary widely. There is no requirement to verify messages, which leaves the door open to message tampering.
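The session-identifier issue is easy to picture: anything that travels in a URL tends to end up in access logs, proxies and analytics, whereas headers usually do not. The endpoint and token below are made up.

```python
# Illustration of the session-identifier concern; the endpoint and token are made up.
import requests

# Risky pattern: the session identifier sits in the query string and gets logged everywhere.
requests.get("https://example.com/mcp/messages?sessionId=abc123")

# Safer pattern: credentials travel in a header, which most logging omits by default.
requests.get(
    "https://example.com/mcp/messages",
    headers={"Authorization": "Bearer abc123"},
)
```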
It is important to remember that anyone can call an MCP server, not just the AI it was built for. When an AI uses a tool, it usually explains what it plans to do. A hacker does not. If someone compromises one MCP server, they could potentially access everything that agent can reach, including databases, APIs and internal tools.
Our responsibility
Agents are tools, and when an MCP server gets compromised, the impact falls on humans – leaked data, breached systems, violated privacy. When developers dismiss security risks as "acceptable" or "theoretical," they're making decisions that affect real users.
The shift towards agent-accessible systems is happening. Companies are adopting this technology, and usage is growing. This is probably inevitable at this point.
Which means we need to apply what we already know about security. The types of vulnerabilities showing up in MCP servers – command injection, path traversal, SSRF – are problems we've known about for decades. Just because the technology is new doesn't justify ignoring established security practices.
Here's what proper implementation looks like:
- Security as a baseline. When 43% of implementations have command injection problems, that's a priority issue. Input validation, authentication, and rate limiting are basic requirements for systems handling user data.
- Clear security standards. The protocol should specify authentication requirements clearly. Right now, inconsistent implementations create vulnerabilities.
- Permission-based access. For sensitive operations, agents should request permission rather than acting autonomously.
- Audit trails. Systems need to log what agents do, what data they access, and what actions they take (a minimal sketch of this and the previous point follows below).
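As a minimal sketch of the last two points, assuming your tool handlers are plain Python functions (the audited decorator and require_approval hook are hypothetical, not part of any MCP SDK):

```python
# Sketch of permission checks and an audit trail wrapped around a tool handler.
# The audited() decorator and require_approval() hook are hypothetical.
import functools
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("agent.audit")

def require_approval(tool_name: str, arguments: dict) -> bool:
    # Placeholder: in practice this would prompt a human or consult a policy engine.
    return False

def audited(tool_name: str, needs_approval: bool = False):
    """Log every invocation and, for sensitive tools, require sign-off first."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(**kwargs):
            if needs_approval and not require_approval(tool_name, kwargs):
                raise PermissionError(f"{tool_name} was not approved")
            audit_log.info(json.dumps({
                "time": datetime.now(timezone.utc).isoformat(),
                "tool": tool_name,
                "arguments": kwargs,
            }))
            return func(**kwargs)
        return wrapper
    return decorator

@audited("delete_record", needs_approval=True)
def delete_record(record_id: str) -> None:
    ...  # the sensitive operation itself
```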
The agent-accessible internet is being built now, and we have decades of web security experience to draw from. The question is whether we'll actually apply it.
—
If you’re exploring the potential of AI models or need support across AI engineering, MLOps, or data science, get in touch.

