Is MCP really just a glow-up for HTTP?


This article was also published on LinkedIn

MCP is arguably the most hyped topic in AI at the moment. It’s being positioned as the key that will unlock every aspect of business data in every company for AI agents. And the hype isn’t without merit: If even a modest percentage of that vision pans out, it will mean a transformation on par with technologies like the Internet, mobile phones, or databases.

But what, really, is the MCP protocol? And how does it relate to entrenched technologies like HTTP that have been shipping our data to clients for decades?

Let’s start with the basics. MCP stands for “Model Context Protocol”. The word “protocol” here is important, because MCP isn’t an implementation or a solution: It’s a specification for how data gets shipped around the Internet. If that sounds a bit familiar, it’s because we’ve had several flavors of HTTP doing just that for the last few decades. (In fact, if you took MCP back in time five or ten years and showed it to an engineer, they’d likely understand it perfectly well as an improvement over HTTP; they just wouldn’t guess that there was any connection to AI.)
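To make “protocol, not implementation” concrete, here’s a sketch of what an MCP exchange looks like on the wire. The JSON-RPC 2.0 framing and the initialize handshake come from the MCP specification; the client and server names and the exact capability payloads below are illustrative placeholders, not taken from any real implementation:

```typescript
// MCP messages are JSON-RPC 2.0 envelopes: exactly the kind of payload
// HTTP has been carrying for years. Shapes follow the MCP spec;
// names and values here are illustrative.

// Client -> server: open a session and negotiate capabilities.
const initializeRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "initialize",
  params: {
    protocolVersion: "2024-11-05", // a published revision of the spec
    capabilities: {},              // what this client supports
    clientInfo: { name: "example-agent", version: "0.1.0" }, // hypothetical
  },
};

// Server -> client: advertise what it offers in return.
const initializeResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    protocolVersion: "2024-11-05",
    capabilities: { tools: {}, resources: {} }, // "I expose tools and resources"
    serverInfo: { name: "example-data-source", version: "0.1.0" }, // hypothetical
  },
};
```

Nothing in that handshake is exotic; it could just as easily describe a database driver or an API gateway, which is rather the point.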

MCP was introduced by Anthropic with a specific goal in mind: To connect data sources, especially enterprise ones, with AI agents that needed access to proprietary, real-time data, not just the general-purpose knowledge baked into a foundational LLM. Three key requirements shaped Anthropic’s design of MCP:

  • Discovery — understanding what resources and tools are available to the agent, with the intent of enabling businesses to expose any of their operational and analytical data and applications to AI agents over time.
  • Access — enabling AI agents to actually request those resources or use those tools.
  • Reusability across agents — ensuring that the above things happen without having to write a custom interface between every agent and every data source. Sometimes this requirement gets described as avoiding the “M × N” problem or some other cute moniker, but the idea is the same: Write the wrapper once per source, not once per agent per source. (The sketch below shows what the first two requirements look like on the wire.)
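
Here’s that two-step flow in TypeScript. The tools/list and tools/call methods are the specification’s own; the query_orders tool, its schema, and the customer ID are hypothetical stand-ins for whatever a real server would expose:

```typescript
// Discovery: ask the server what tools it exposes.
const discoverRequest = { jsonrpc: "2.0", id: 2, method: "tools/list" };

// A server might answer with a self-describing catalog entry.
// The tool name and schema here are hypothetical.
const discoverResponse = {
  jsonrpc: "2.0",
  id: 2,
  result: {
    tools: [
      {
        name: "query_orders",
        description: "Look up recent orders for a customer",
        inputSchema: {
          // JSON Schema describing the tool's arguments
          type: "object",
          properties: { customerId: { type: "string" } },
          required: ["customerId"],
        },
      },
    ],
  },
};

// Access: the agent invokes the tool it just discovered.
const callRequest = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: { name: "query_orders", arguments: { customerId: "C-1042" } },
};
```

Note how the description and schema make the catalog self-describing: an agent that has never seen this server before can still work out what’s on offer and how to call it, and the same wrapper serves every agent that speaks MCP.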

So what's wrong with HTTP?

In a perfect world, neither Anthropic nor any other AI company would have needed to invent MCP because our existing protocols for discovering, accessing, and reusing data and applications over the Internet would have already had all the capabilities they needed. In fact, after several decades of Internet-based, well, everything, you might be forgiven for thinking that was indeed already the case. So, what was wrong?

As protocols go, HTTP is sort of the poster child for the Innovator’s Dilemma — it’s a victim of its own popularity. Its sheer ubiquity and reach make it challenging to agree upon and deploy improvements, even when those improvements would be broadly beneficial. And HTTP carries a lot of legacy; after all, it was originally designed only to transmit static HTML pages to browser clients. Today, HTTP is the backbone for everything from streaming to database queries to APIs to file transfer to…well, just about anything you can imagine that involves computers, phones, data, or applications. That’s a tall order for a protocol, and changing that protocol without breaking any of those billions of clients and trillions of in-flight messages is taller still. As a result, HTTP, for all its many benefits, has some gaps.

One of the reasons these gaps persist is that they’re well served in other, more restricted, ways. For instance, every database and data lake comes with some flavor of catalog that helps users discover what tables and datasets are available, governs who can use them, and helps authorized users access the underlying data. Ditto API gateways, ditto object stores in the cloud, and so on and so forth. It’s not that these capabilities don’t exist; it’s that they’ve grown up in ways that are domain-specific and/or proprietary in nature…and not part of the HTTP protocol itself.

The Semantic Web Cometh

Almost since the world wide web-slash-Internet was introduced, people have been talking about how to make it more self-documenting, navigable, and automatable. For years, the lament could be heard, “Why can’t the Internet have real semantics?”, and this yearning even had a name: the so-called “Semantic Web” was to be a newer, better incarnation of the Internet where things would be easier for both humans and machines to find and use.

So, why didn’t we get this vaunted Semantic Web? Well, there are the Innovator’s Dilemma reasons mentioned above, but there’s an even more basic problem: Absent any kind of intelligence, semantics is pretty darn hard to achieve. Pick any label — like “engine” — and you quickly realize that it means one thing to an auto mechanic, another thing to a macroeconomist, and yet another thing to a database software engineer.

Disambiguating a term like “engine” across every possible topic — and while translating into and out of every possible human language — was an impossible challenge…at least, before the advent of LLMs. Suddenly, with LLMs we have a way to understand context, and then actually do something interesting with that context, without the need for a human brain to mediate and orchestrate the entire affair manually.

Icing on the cake or jet fuel for the Internet?

MCP is actually a fairly thin veneer on top of HTTP; it isn’t even meaningful without it, as it depends critically on HTTP verbs, data representation, parameter/result formatting, synchronous and asynchronous data transfer techniques, and more. Simply put, MCP is sort of the missing catalog that HTTP should have had all along. It’s interesting to note that MCP itself has almost nothing AI-specific anywhere in its specification, apart from a distinguished resource type for prompt strings. That aside, it’s basically a more powerful and informative version of HTTP…and that’s a good thing, because it means MCP has broader applicability than just AI agents. It can help solve connectivity and data integration challenges both inside and outside a company’s four walls — including AI agents, of course, but not limited to them.
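
To see just how thin the veneer is, consider the spec’s HTTP-based transport, in which every MCP message rides inside an ordinary HTTP POST. The sketch below is illustrative rather than definitive: the endpoint URL is hypothetical, the headers follow the Streamable HTTP transport as described in the spec (check the current revision for exact details), and it assumes a plain JSON reply, whereas real servers may instead stream responses as server-sent events:

```typescript
// A minimal sketch of sending an MCP message over HTTP.
// The endpoint is hypothetical; error handling and streaming are omitted.
async function sendMcpMessage(message: object): Promise<unknown> {
  const response = await fetch("https://example.com/mcp", {
    method: "POST", // a plain, ordinary HTTP verb
    headers: {
      "Content-Type": "application/json",
      // Servers may answer with a single JSON body or an event stream.
      "Accept": "application/json, text/event-stream",
    },
    body: JSON.stringify(message),
  });
  return response.json(); // assumes a simple JSON reply
}

// Usage: the same discovery request from earlier, now on the wire.
// await sendMcpMessage({ jsonrpc: "2.0", id: 2, method: "tools/list" });
```

Strip away the JSON-RPC envelope and this is indistinguishable from calling any other HTTP API, which is exactly why the “missing catalog for HTTP” framing fits.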

To do this effectively, MCP needs great implementations. Like HTTP, it’s “just” a protocol — a specification for how data is represented and when and where it gets shipped. To succeed, we need implementations of MCP that expose SaaS applications, enterprise data, and more. And these implementations have some heavy lifting to do: Integrating multi-modal data from inside and outside the enterprise, while remaining compliant and auditable and tracking end-to-end forensics around which data is used for which purposes, is no small feat. This is why Vendia has been working not just on MCP but on the deeper and broader problems of data integration and collaboration. AI at its heart is a data infrastructure problem, and liberating enterprise data requires both a successful protocol and a powerful, secure implementation. Together, these two make it possible to create a truly semantic web and expose the entire range of corporate data to the AI agents of today…and to those that will be developed tomorrow.

To learn more about Vendia’s fully managed MCP server and its data integration capabilities, visit www.vendia.com.
