MCP: A Stealthy Earthquake in AI

Something quietly seismic recently happened in the AI world - an event that didn’t make CNBC headlines the way tariffs and trade do. Five years from now, it may prove, in hindsight, to have been the most impactful tech shift of this period.

Last year, Anthropic introduced the Model Context Protocol (MCP) - a kind of common language that lets AI models talk to tools, data sources, and each other, with shared rules for how they carry and remember information. Like REST did for web services, MCP seeks to create a standard way to access multiple models. One would expect this type of innovation from the open-source side of the LLM world, yet on March 26, 2025, OpenAI and Microsoft announced support for MCP - a significant step toward interoperability in AI systems. This is a market-moving moment, one that accelerates the applied layer of AI, which can use MCP to call multiple foundational models.

At GRIT, one of the sectors we invest in is commerce. To reimagine workflows across organizations like manufacturers or retailers, a new generation of AI companies will need to give those organizations broad visibility and the ability to act on it. MCP adoption enables the creation of services that may call one model for inventory, another for sales prediction, and another for warehouse management - all feeding a single innovative platform whose combined value meets the demands of an ideal customer profile with a complex, multi-layered problem to solve.
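A rough sketch of that fan-out pattern, in Python. This is illustrative only - it is not the real MCP SDK, and every function and field name here (the three model stand-ins, the SKU fields) is a hypothetical placeholder for whatever specialized services a platform would actually call:

```python
# Hypothetical sketch: one commerce platform fanning a question out to
# three specialized model services and merging the answers into a single
# view. None of these names come from an actual MCP implementation.

def inventory_model(sku: str) -> dict:
    """Stand-in for a model/tool call that reports stock levels."""
    return {"sku": sku, "on_hand": 120}

def sales_model(sku: str) -> dict:
    """Stand-in for a sales-forecasting model."""
    return {"sku": sku, "predicted_weekly_sales": 45}

def warehouse_model(sku: str) -> dict:
    """Stand-in for a warehouse-management model."""
    return {"sku": sku, "restock_lead_days": 7}

def unified_view(sku: str) -> dict:
    """Combine the three calls into one answer the platform can act on."""
    inv = inventory_model(sku)
    forecast = sales_model(sku)
    warehouse = warehouse_model(sku)
    weeks_of_cover = inv["on_hand"] / forecast["predicted_weekly_sales"]
    return {
        "sku": sku,
        "weeks_of_cover": round(weeks_of_cover, 1),
        # Reorder if remaining cover (in days) is within the restock lead time.
        "reorder_now": weeks_of_cover * 7 <= warehouse["restock_lead_days"],
    }

print(unified_view("SKU-42"))
```

The value is in the combination: no single model answers the customer’s question, but the merged view does.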

So, what is the big deal about MCP? Isn’t it just API calls like REST? And of course, open standards accelerate things, right? Well… it is not just that. Here is the thing: computers don’t integrate context as well as humans do. The breakthrough isn’t just in communication - it’s in context. Unlike traditional systems that treat each request as isolated, MCP lets AI models carry memory between calls. That makes them more human-like in how they reason.

Think about how humans blend short-term frustration with long-term loyalty. A good metaphor is the New York Yankees. A fan can say “The Yankees suck” and still decide to buy tickets for 10 games this year. Humans take transactional, short-term information, combine it with long-term information (the storied franchise, the salaries, the will to win, the manager), and arrive at a place of meaning, which informs their reasoning to make the purchase. REST has worked well as a protocol for APIs calling servers because it is “stateless”: every call carries only the information requested or posted, clearing the way for execution. In AI, what MCP communicates allows the transfer of “state.” A far-too-simple way to understand it is pausing a movie on Netflix and returning to resume it when you already know what came before the pause. This kind of “stateful” interaction - where the model remembers what happened before - is what enables more advanced reasoning, known in the field as “chain-of-thought.” Powerfully, this allows the context of one model to influence the reasoning of another.
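The stateless/stateful contrast can be sketched in a few lines of Python. This is a toy illustration of the concept, not the actual MCP wire format or SDK - `stateless_call` and `StatefulSession` are invented names standing in for the two interaction styles:

```python
# Hypothetical contrast: a stateless call forgets everything between
# requests (REST-style), while a stateful session carries prior context
# into each new call (the property MCP standardizes for AI models).

def stateless_call(message: str) -> str:
    """Every request stands alone: no memory of prior calls."""
    return f"reply to: {message}"

class StatefulSession:
    """Carries prior messages forward so each call sees what came before."""

    def __init__(self) -> None:
        self.context: list[str] = []

    def call(self, message: str) -> str:
        # Earlier messages are available when handling the new one.
        prior = len(self.context)
        self.context.append(message)
        return f"reply to: {message} (with {prior} prior messages in context)"

print(stateless_call("The Yankees suck"))        # no memory of anything
session = StatefulSession()
session.call("The Yankees suck")
print(session.call("Buy tickets for 10 games"))  # remembers the first call
```

In the stateful case, the gripe and the ticket purchase live in one context, so the second reply can be reasoned about in light of the first - the Yankees-fan behavior described above.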

For applied-layer AI companies building vertical tooling or sticky, next-generation AI workflows, this is a rocket-fuel moment. It will bring better solutions with a greater ability to infer and reason, it will bring more leverage and cost advantage to the use of foundational models, and it will allow innovation across related but distinct sets of training data.

MCP becoming an industry standard is a quiet revolution. We’ll be watching for innovations not just on the server side - but on the client side too. At GRIT, our team’s background includes building mobile SDKs. This time in AI’s history reminds us that transformative value often emerges where others aren’t yet looking.

"
"
No items found.