This article was also published on LinkedIn
MCP puts sprinkles on HTTP’s ice cream, and that tasty sundae isn’t limited to just AI agents.
REST APIs running on top of HTTP (well, technically HTTPS) are the mainstay of modern applications. Cloud backends, mobile apps, SaaS, … the approach is ubiquitous. So it would stand to reason that administrative activities related to these APIs would be well supported – e.g., discovering APIs, learning what parameters they require and what types they return, getting semantic information about what they do or how to use them. But actually, none of that is handled by HTTP, so humans resort to documentation, copy-paste, tribal knowledge, etc. to figure out which APIs to call, how to call them, and what to do with the results.
Inside a company’s four walls, things can be better – API Management solutions, for example, can offer some of these additional services, but usually only to employees or applications within the same company. Out in the wild, the additional capabilities of an API management solution typically don’t apply.
This was never an ideal situation, but it’s the sort of thing companies have long paid developers to “just deal with”…and that worked, up until about a year ago when the world realized the potential of letting AI agents connect to enterprise applications and their APIs. Suddenly there was an urgent need to make APIs available not just in the literal sense but with enough associated discovery, type safety, and semantics that an AI agent would be able to do what a developer does manually (and without having to write a lot of custom code to make it happen). And thus MCP was born.
MCP, or Model Context Protocol, was originally developed by Anthropic as a solution to enable its AI clients to gain selective access to enterprise data and resources, such as APIs. Anthropic’s open sourcing of MCP has led to it rapidly becoming a de facto standard, with other AI heavyweights such as Google, OpenAI, Amazon, and Microsoft pledging product support for the protocol as well.
Strictly speaking, MCP doesn’t replace APIs – MCP is just a protocol, and it still has to eventually call an API (or run a SQL query or access a resource) to get something useful done. It would be more accurate to say that MCP is replacing API Gateways and management solutions, by generalizing, standardizing, and open sourcing what has historically been a proprietary capability of those solutions. MCP didn’t set out to do that explicitly; it was a side effect of its goal of making APIs callable by AI agents, which needed that additional information and capability to find, understand, and use enterprise APIs successfully.
This isn’t the first time APIs have gotten a “glow up”. Google’s open source gRPC framework (and its Protocol Buffers compiler) is a good example of API-related technology that makes it easier to define the types of an API’s arguments and results in a way that different software systems (and humans, and now AI agents) can manipulate safely and accurately, rather than relying on documentation alone. MCP encompasses some of the type safety that gRPC tools provide, but also goes further in offering catalog functionality that’s typically been the province of API gateway and management solutions. It also offers the ability for the API owner to deliver hints to the AI agent, an important advance in conveying semantics.
API Hints – the Missing Semantic Ingredient
Suppose you’re an AI agent (or even just a regular human), and you found the following API in a catalog:
getProductWeightInLbs(productId:String) → Integer
You would probably assume that this API takes one required argument, the product identifier, and returns an integer representing the weight of the product in pounds. So far, so good. But how about something a little less obvious?
getPdInfo(productId:String, productName:String) → {
  Id:String,
  Name:String,
  Weight:String,
  Height:String,
  Length:String,
  SKU:String,
  SEQ:String
}
Now things are a lot less clear. “Pd” here probably refers to product info, especially given the names of the arguments and the kinds of response fields that are present. But are the name and id both required? If not, should they be sent as NULL values or empty strings or some other marker (sentinel) value? Is the returned weight in pounds or kilograms? Are height and length in inches or centimeters? What exactly do SKU and SEQ mean, and how are they formatted? And so on.
It’s possible that retrieving some sample data might make things clearer, but how to do that also isn’t obvious without more information.
For a human being, this is usually where you’d break down and read the manual – for example, if this is a business partner’s external (public) API, they probably have some documentation either online or previously shared that explains how to use it, what the fields mean, etc.
AI agents are getting increasingly smart, so if this is a really well-known API, like Stripe’s, its documentation may already be part of the LLM’s training set. But if not, the agent is going to need some help. And that’s where hinting comes in.
Hints are like API documentation connected directly to the API in question. Rather than hoping the LLM magically “knows” about this API, it can offer more information directly to the AI agent. Think of it as the kind of information an experienced developer would share with a newbie just learning to call this API:
- “You have to provide at least one of the two parameters.”
- “A missing parameter should be represented as an empty string, not NULL.”
- “In the result, Weight is in pounds and length and height are in inches.”
- “In the result, SKU indicates which larger assembly this item is part of and has a format that looks like ‘SKUxxxxx’.”
- “In the result, SEQ is an internal value that can be ignored.”
Like a human developer, the AI agent now has the information it needs to use this API effectively, constructing valid calls and interpreting the results in a useful way. A query like “How much does a wrench weigh?” can be translated into a call such as getPdInfo("", "wrench"), and the Weight field of the result used to construct a human language answer such as, “A wrench from this supplier weighs about half a pound.”
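To make this concrete, here is a minimal sketch (Python, standard library only) of how hints like these might be attached to a tool definition of the kind an MCP server advertises in response to a tools/list request. The getPdInfo tool, its schema, and the hint text are the hypothetical example from this article, not a real API.

```python
import json

# A hypothetical MCP-style tool definition for the getPdInfo example above.
# The "description" fields carry the hints an experienced developer would
# otherwise pass along by word of mouth.
get_pd_info_tool = {
    "name": "getPdInfo",
    "description": (
        "Look up product information. Provide at least one of productId or "
        "productName; represent a missing parameter as an empty string, not null. "
        "In the result, Weight is in pounds, Height and Length are in inches, "
        "SKU identifies the larger assembly ('SKUxxxxx' format), and SEQ is an "
        "internal value that can be ignored."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "productId": {"type": "string", "description": "Product identifier; '' if unknown"},
            "productName": {"type": "string", "description": "Product name; '' if unknown"},
        },
        # Both parameters are always present, but either may be an empty string.
        "required": ["productId", "productName"],
    },
}

# Serialize the definition as it would appear inside a tools/list response.
print(json.dumps(get_pd_info_tool, indent=2))
```

An agent that reads this single JSON object gets the call signature, the type information, and the semantics in one place – no separate documentation lookup required.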
API hints can be constructed in various ways – manually programmed by a human, derived from API documentation, or compiled from a model used to generate the API itself (a feature of some API gateways and management solutions and of data integration platforms such as Vendia).
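The “compiled from a model” approach can be sketched as follows: given data-model fields annotated with units and notes (a hypothetical annotation format, not any particular product’s), hint text is generated mechanically rather than written by hand.

```python
# Hypothetical field annotations for the getPdInfo result, as they might
# appear in a data model used to generate the API.
FIELDS = {
    "Weight": {"unit": "pounds"},
    "Height": {"unit": "inches"},
    "Length": {"unit": "inches"},
    "SKU":    {"note": "identifies the larger assembly; format 'SKUxxxxx'"},
    "SEQ":    {"note": "internal value; can be ignored"},
}

def compile_hints(fields: dict) -> list:
    """Turn per-field annotations into the hint sentences shown earlier."""
    hints = []
    for name, meta in fields.items():
        if "unit" in meta:
            hints.append(f"In the result, {name} is in {meta['unit']}.")
        if "note" in meta:
            hints.append(f"In the result, {name} {meta['note']}.")
    return hints

for hint in compile_hints(FIELDS):
    print(hint)
```

The advantage of compiling hints this way is that they stay in sync with the model: when a field’s unit changes, the hint changes with it.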
MCP as a “Universal API Platform”
If MCP makes it easier to discover, catalog, safely use, and semantically understand APIs for AI agents, wouldn’t it offer similar advantages to humans and potentially other, non-AI, platforms? In fact, it would – nothing restricts MCP’s enhanced API capabilities to any particular sort of client. While it does other things as well, one way to view MCP is as a sort of “open source, general purpose API Management” platform that makes it easier to find, use, and comprehend APIs, especially those that weren’t implemented by your own company.
MCP even has the potential to do things that many existing API Management solutions don’t always support well, such as providing real-time notifications when an API is added, deprecated, removed, or modified. (Notifications are an emerging element of MCP; this capability exists in MCP’s support for resources, and it’s a reasonable supposition that it will become equally routine for tools, MCP’s term for APIs.)
One could argue that features like this would always have been great to add to the raw HTTP protocol – and not in the proprietary, “internal-only” way that API Management and gateway solutions typically work. But prior to the explosion in AI agents, and especially the interest in hooking them up to real-time business operations systems, the benefits weren’t generally considered worth the effort. AI has created a sense of urgency and common interest that’s fundamentally altered how this API information, or “metadata”, is handled and the value it provides.
How to take advantage of this technology
MCP’s support for APIs, and especially what it means for intelligent AI agents being able to help humans with complex business operations, is incredibly exciting. At the same time, it raises questions about security, privacy, interoperability, and – at a more prosaic level – how to get started.
The easiest solution of all is to take advantage of existing MCP implementations. Large SaaS companies, such as Stripe, will undoubtedly offer high quality implementations of their APIs through the “lens” of MCP, and will handle hinting and other details themselves. All a consumer has to do is point their AI agent to the Stripe MCP server to gain the advantages, equivalent to the historical process of pointing their developers at Stripe’s public APIs and documentation pages…but with a lot less work and cost.
It’s a different story if you’re an enterprise with internal APIs that you want to surface to an AI agent, whether for internal or external use. Now the implementation challenges are yours to address, and API hinting is just one element of a larger problem that might involve integrating multiple, disparate systems, master data management (MDM) and other data format conversions, authentication and authorization controls, access to different types of information (“multiple modalities”) that could span files and databases, and so forth. For assistance with challenges like this, including handling APIs, solutions such as Vendia’s fully managed MCP server and data integration platform can help. It includes a data model compiler that can turn data schemas into APIs, including hints, that are MCP/AI ready out of the box, simplifying the process of connecting agents to backend enterprise systems and services.
Conclusion
APIs are one of the most common ways in which information is exchanged and actions are taken in a modern enterprise environment. MCP enables APIs to be used by a variety of AI agents, allowing them to both read operational data and take actions in real time in response to user requests. As a result, MCP is rapidly becoming a sort of “enterprise API management platform”, despite not having set out to accomplish that task per se. As more APIs get hooked up to it, capabilities such as API catalogs, discovery, type safety, and semantic information become more and more important. While this can feel like a challenging proposition for an enterprise facing AI-related security, governance, safety, and performance concerns, solutions exist that can help simplify the tasks of safely integrating, redacting, and exposing enterprise data and systems to agents and other client applications.