A simple explainer on Anthropic's Model Context Protocol — why LLMs struggle to call APIs, how MCP bridges the gap, and what it means for the future of AI tooling.

MCP, or the Model Context Protocol, was launched by Anthropic in November 2024 as an attempt to make LLMs call APIs reliably.
Many folks in the industry, technical and non-technical alike, do not realize that LLMs are pretty bad at calling APIs — at least without some serious coaxing via prompts or bespoke code.
APIs are the foundation for building and connecting software, so it's surprising that LLMs can't use such standard technology out of the box.
The current implementation of MCP is typically deployed as an "MCP Server" that runs locally on the same machine where you're accessing your models.
So, if you utilize a program like Claude Desktop or Cursor on your Mac, the MCP server will run behind the scenes, locally, on your Mac.
This server provides the LLM with basic information such as:
- which tools (functions) are available,
- what parameters each tool accepts, and
- how to invoke a tool and interpret its result.
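In Claude Desktop, for example, local servers are registered in a `claude_desktop_config.json` file under an `mcpServers` key. A minimal sketch of what that registration looks like (the server name and package below are hypothetical placeholders, not a real published server):

```json
{
  "mcpServers": {
    "weather": {
      "command": "npx",
      "args": ["-y", "@example/weather-mcp-server"]
    }
  }
}
```

Once registered, the app launches the server in the background and the model can discover and call whatever tools it exposes.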
If you've written any significant amount of code in the last decade, you'll quickly notice that this setup, a local server whose job is to call remote servers (an API for your APIs), is really weird.
This section is not intended to dunk on the fine engineers at Anthropic who created MCP. They built it to solve a real problem, and LLMs impose real constraints on any solution.
LLMs are unbelievably good at following patterns in language, be it human or a programming language.
LLMs aren't great at being asked a very human question and then being expected to switch to behaving like a computer for a brief moment mid-sentence, and then finish the conversation as a human. They can sort of do this, but it's very unreliable.
In order to solve this problem, an LLM needs outside help. In most software, outside help usually comes in the form of calling an external API. But most LLMs don't have the infrastructure needed to call APIs.
To solve this, you need an API to call APIs.
To give LLMs an API to call APIs, the Anthropic team came up with a clever solution: an MCP server that plugs in wherever your models are used and translates this human-to-machine-back-to-human transition, without requiring any fundamental changes to the model itself.
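Conceptually, that translation layer is small: the host watches the model's output, and when it sees a structured tool call instead of prose, it executes the call and hands the result back. A minimal sketch in Python, where the tool registry, tool name, and JSON message shape are all illustrative stand-ins rather than the real MCP wire format:

```python
import json

# Hypothetical tool registry, standing in for the tools an MCP server exposes.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

def handle_model_output(raw: str) -> str:
    """Route one model response: prose passes through unchanged,
    while a structured tool call is executed and its result returned."""
    try:
        message = json.loads(raw)
    except json.JSONDecodeError:
        return raw  # ordinary prose, not a tool call
    if not isinstance(message, dict) or "tool" not in message:
        return raw  # valid JSON, but not shaped like a tool call
    tool = TOOLS[message["tool"]]
    return tool(*message["arguments"])

# A model that emits clean JSON gets its call executed:
print(handle_model_output('{"tool": "get_weather", "arguments": ["Paris"]}'))
# ...while ordinary prose passes straight through:
print(handle_model_output("The weather looks nice today."))
```

The hard part, and the reason MCP exists, is the first branch: getting the model to emit that clean, machine-readable call reliably instead of free-form text.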
A future in which any LLM can reliably call external APIs without much friction will be an enormous unlock for every industry. While not perfect, and a little weird, MCP is currently the best first step in that direction.
Supergood was built for this inevitable future: demand from LLMs for access to a product's APIs will exceed the supply of APIs available for use.
Supergood generates and actively maintains APIs for products that do not have APIs. The company utilizes a combination of human-in-the-loop code generation paired with observability platforms to quickly create new integrations and free up engineering teams from ongoing maintenance.
If this sounds useful or interesting to you, we'd love to chat.