Demystifying the Model Context Protocol (MCP)
APIs Aren’t Dead, They Evolve
APIs have been evolving for decades, adapting to new paradigms in computing. From SOAP to REST, then GraphQL, event-driven APIs, and now MCP, we’ve seen a natural progression towards more dynamic, flexible interactions. But at its core, MCP is still a programming interface: it defines a structured way for systems to communicate, much like JSON-RPC 2.0, on which it is based.
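To make that concrete, here’s what a bare JSON-RPC 2.0 exchange looks like. The envelope (jsonrpc, id, method, result) is exactly what MCP messages are built on; the method name and payloads below are made up purely for illustration.

```python
import json

# A JSON-RPC 2.0 request: the envelope that MCP messages are built on.
# The method name and params are hypothetical, for illustration only.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "reservations/get",            # hypothetical method
    "params": {"customer_id": "C-42"},       # hypothetical arguments
}

# The matching response carries the same id and either a result or an error.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"reservations": [{"date": "2025-01-10", "table": 4}]},
}

print(json.dumps(request, indent=2))
print(json.dumps(response, indent=2))
```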
I encourage those who believe that MCP is an API killer to do a bit of homework and research the history of programming interfaces. In case it helps, here’s some research I did a few years back whilst writing Enterprise API Management.
Understanding MCP Without the Hype
OK, MCP is not just another API in the traditional sense, but it still operates within API boundaries. Below I break it down based on my current understanding:
- MCP servers typically wrap existing APIs but can also include additional logic, caching, or data transformations before responding. Note: MCP is opinionated; however, there are a few SDKs that can be used to implement servers and clients (see the server sketch right after this list and the message-level example after the illustration).
- MCP clients interact with a single MCP server, but an MCP server can handle multiple clients.
- The host (e.g., a chatbot) interacts with one or multiple MCP clients, which provide capabilities such as tools (e.g., ‘get reservations’), knowledge resources, or predefined prompts.
- MCP clients fetch capabilities from MCP servers and register them with the host before they can be used.
- When a user makes a request, the host sends it to the LLM, along with the capabilities previously provided by the MCP client.
- The LLM determines how to handle the request—whether to respond directly, use a tool, or request additional information.
- If a tool is needed, the LLM instructs the host to execute it (e.g., ‘get reservations’).
- The host interacts with the relevant MCP client to call the tool.
- Before executing a tool, the MCP client presents a request for approval to the user to ensure human oversight.
- Once approved, the MCP client sends the request to the MCP server, which executes the function (often by calling an API).
- The MCP server returns the response to the MCP client, which forwards it to the host.
- The host may either:
- Send the tool response directly to the user (if no further processing is needed).
- Pass the tool response back to the LLM for additional reasoning or formatting.
- If sent back to the LLM, it constructs a final response.
- The final response is then sent to the user.
- The MCP model ensures a human-in-the-loop approach, where tool invocation may require approval before execution.
- Model-controlled tools allow the LLM to autonomously determine which tools to invoke based on context and user prompts.
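To give a feel for the server side, here is a minimal sketch using the official MCP Python SDK’s FastMCP helper. It exposes a single ‘get reservations’ tool that simply wraps a REST call; the endpoint, parameters, and response shape are hypothetical, so treat this as an illustration rather than a reference implementation.

```python
# Minimal MCP server sketch (requires: pip install mcp httpx).
# The reservations API URL and response shape below are hypothetical.
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("reservations")  # server name advertised to clients


@mcp.tool()
def get_reservations(customer_id: str) -> list[dict]:
    """Return the reservations for a given customer."""
    # The server simply wraps an existing REST API (hypothetical endpoint);
    # it could also add caching or transformations before returning.
    resp = httpx.get(
        "https://api.example.com/reservations",
        params={"customer_id": customer_id},
    )
    resp.raise_for_status()
    return resp.json()["reservations"]


if __name__ == "__main__":
    # Runs over stdio by default, i.e. a local server launched by the host.
    mcp.run()
```

A host such as Claude Desktop would launch a script like this locally and talk to it over stdio, which ties in with the remote-server limitation discussed further below.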
Here’s an illustration of the above:
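And at the message level, the flow above boils down to a handful of JSON-RPC 2.0 calls. The method names (tools/list, tools/call) come from the MCP specification; the tool name, arguments, and result content are invented for illustration.

```python
import json

# 1. The MCP client discovers the server's capabilities (tools/list)
#    and registers them with the host.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# 2. After the LLM decides a tool is needed and the user approves,
#    the client asks the server to execute it (tools/call).
#    Tool name and arguments are hypothetical.
call_tool_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_reservations",
        "arguments": {"customer_id": "C-42"},
    },
}

# 3. The server's result flows back client -> host -> (optionally) LLM -> user.
call_tool_result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [{"type": "text", "text": "2 reservations found for C-42"}]
    },
}

for msg in (list_tools_request, call_tool_request, call_tool_result):
    print(json.dumps(msg, indent=2))
```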
Current Limitations
At the time of writing this article, there are two fundamental limitations:
- Authentication and authorization: the current version of MCP doesn’t support standards like OAuth 2.0, so key capabilities like identity propagation (also crucial for auditability) are not there yet. This is a major limitation.
- Remote MCP servers: one of the key goals of MCP is to enable reusability of MCP servers and their functions. However, without the ability to securely access MCP servers over the internet, this remains more of a vision than a reality.
Both limitations are on the protocol roadmap, but their absence shows it’s still early days for MCP.
Final Thoughts
MCP isn’t replacing APIs; it is an evolution in how AI systems interact with services. It abstracts execution and enables intent-driven workflows, but it still functions as an API in a broader sense. Rather than eliminating APIs, it builds on them, adapting to AI-first architectures without discarding foundational API principles.
But there’s one crucial point: standardisation matters. Unless other popular GenAI platforms (OpenAI, HuggingFace, Microsoft Copilot, Google AI Studio, Oracle Gen AI, and so on) also add support for MCP, the risk is a proliferation of competing standards, as illustrated in the image. This has happened many times before in tech history, so time will tell.
For MCP to succeed, it needs broad adoption and industry-wide consistency.