Jul 15, 2025

Introduction
The artificial intelligence landscape is evolving at breakneck speed, with new paradigms emerging seemingly every week. Among these developments, one technology stands out as potentially transformative: the Model Context Protocol (MCP). As Godfrey Nolan, President of RIIS, demonstrated at a recent OpenAI Applications Explorers Meetup, MCP represents what could be the most significant advancement in AI integration since the introduction of function calling.
Many consider MCP the “USB of AI” – a standardized way to connect AI systems to external services, databases, and tools. Unlike the limited custom actions available only to ChatGPT Pro users, MCP servers enable anyone to create powerful connections between AI agents and virtually any external system imaginable.
As usual, you can follow along with this overview or the video from the meetup. Let’s dive in!
Understanding the Model Context Protocol
The Model Context Protocol, developed by Anthropic, has become the standard for connecting AI models with external resources. Originally created for Claude, MCP now has widespread adoption across OpenAI, GitHub Copilot, and various IDE integrations.
The Problem MCP Solves

Before MCP, connecting AI systems to external resources required implementing custom function definitions using complex JSON schemas. Each integration demanded careful crafting of tool descriptions, parameter definitions, and response handling. This approach, while functional, created several challenges:
Complexity: Each external connection required extensive custom code
Inconsistency: Different implementations led to varying approaches and quality
Maintenance burden: Updates to external APIs necessitated manual changes to function definitions
Limited reusability: Custom functions couldn’t easily be shared across projects
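To make the contrast concrete, here is a rough sketch of the pre-MCP approach: a hand-written tool definition in OpenAI's function-calling JSON format. The weather tool here is hypothetical; the point is that every external service needed a schema like this, written and maintained by hand.

```python
# Pre-MCP: a hand-maintained tool schema for a single, hypothetical weather API
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "Name of the city"}
            },
            "required": ["city"],
        },
    },
}
# Each new API means another schema like this, plus custom code to dispatch
# the call and massage the response back into the conversation.
```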

MCP addresses these challenges by providing a standardized middleware layer that sits between your AI application and external services. This protocol-based approach offers several key advantages:
Multi-agent Collaboration: Agents work together on complex tasks
Tool Integration: Seamless external tool invocation
Persistent Memory: Stateful reasoning across interactions
Traditional AI integration might look like this: AI Model → Custom Function → External API. With MCP, the architecture becomes: AI Model → MCP Client → MCP Server → External API/Service. This seemingly simple change creates profound improvements in how AI systems interact with external resources.
In short, MCP connects the outside world of data and APIs to our LLMs in a language they can understand, in a way that maintains data integrity. It goes beyond simply pasting text into the prompt and reduces the labor of connecting to outside services, much as Zapier does for REST API calls between services.
The Architecture of MCP Systems
Understanding MCP requires grasping its fundamental architecture, which consists of two primary components working in concert: servers and clients.
MCP Servers: The Bridge to External Systems

MCP servers are the workhorses of the protocol, responsible for connecting to external systems and exposing their capabilities in a standardized format. These servers can be implemented in two primary modes:
Standard Input/Output (stdio) Mode
This approach leverages standard Unix-style input/output streams for communication. It’s particularly useful for local development and testing scenarios where the MCP server runs as a subprocess alongside your main application.
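Here is a minimal sketch of a stdio connection using the OpenAI Agents SDK (the directory name is illustrative):

```python
import asyncio

from agents.mcp import MCPServerStdio

async def main():
    samples_dir = "sample_files"  # any local directory you want the agent to see

    # Launch the filesystem MCP server as a subprocess over stdio
    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
        }
    ) as server:
        tools = await server.list_tools()
        print([tool.name for tool in tools])

asyncio.run(main())
```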
You can see in the example above that stdio is accomplished with a relatively simple async function. The `npx` command acts as a package runner (think npm if you’ve done JavaScript development): it fetches `@modelcontextprotocol/server-filesystem`, which is the code we are going to invoke. The last argument points to `samples_dir`, the directory whose files our agent will be asking questions about.
The line `tools = await server.list_tools()` lists the available tools in the filesystem MCP server, which will be passed to our LLM to guide its work and increase its capabilities.
HTTP Mode (SSE)
For production deployments and remote integrations, MCP servers can operate over HTTP, allowing them to run on separate machines, cloud services, or as part of distributed architectures.
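In code, the SSE variant looks something like this (the localhost URL is a placeholder for wherever your server is running):

```python
from agents.mcp import MCPServerSse

async def connect():
    # Connect to a remote MCP server over HTTP/SSE
    async with MCPServerSse(
        params={"url": "http://localhost:8000/sse"},  # placeholder URL
    ) as server:
        tools = await server.list_tools()
        print([tool.name for tool in tools])
```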
As expected, this is a little more straightforward. The class this time is `MCPServerSse`, and the only required parameter is the URL you are pointing to. This option also works with the very helpful trace feature, which allows for intuitive debugging: the trace essentially lets you find where the logic broke down if you are getting weird or errant responses.
MCP Clients: The AI Interface
MCP clients integrate directly with your AI application framework, translating the model’s tool requests into MCP protocol calls. When using the OpenAI Agents SDK, for example, the client seamlessly bridges the gap between the agent’s decision-making process and the available MCP tools.
This architecture enables powerful multi-agent scenarios where different agents can share access to the same MCP servers, creating collaborative workflows that were previously difficult to implement. You can also stack multiple MCP Servers together.
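For instance, a single agent can be handed several servers at once. This hypothetical sketch assumes `filesystem_server` and `git_server` are already-connected MCP server instances:

```python
from agents import Agent

# One agent, multiple MCP servers stacked together (hypothetical variables)
agent = Agent(
    name="Assistant",
    instructions="Use the filesystem and Git tools to answer questions.",
    mcp_servers=[filesystem_server, git_server],
)
```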
Hello World!
Alright, we’ve spent a good amount of time covering the structure and advantages of MCP, so now we can finally get to what we’ll be calling our Hello World examples. Open up your command line and clone this repo:
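(That repo is the OpenAI Agents SDK repository, linked again under Additional Resources; all the example paths below come from it.)

```bash
git clone https://github.com/openai/openai-agents-python.git
cd openai-agents-python
```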
Take a look at the examples in the repo (filesystem, Git, and SSE). Then we can move on to the next step: installing uv (it’s just another package manager, like pip or npm).
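One common way to install it is the standalone installer (`pip install uv` also works):

```bash
# Official installer for macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh
```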
The Filesystem example
Then you can start playing around with some of the examples. Let’s start with the filesystem:
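Assuming the repo layout referenced later in this post, the command is presumably:

```bash
uv run python examples/mcp/filesystem_example/main.py
```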
The filesystem example demonstrates how to use the official MCP filesystem server to give your agent the ability to read, write, and interact with files on your local system. The filesystem server exposes tools like `list_directory()`, `read_file()`, and `write_file()` that the agent can use dynamically during conversations. You may remember `samples_dir` from the code snippet earlier; it comes into play here, telling the server which directory to expose.
Once the MCP server is configured, you integrate it with an OpenAI agent and run the complete workflow:
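Here is a condensed sketch of that workflow (the directory name and prompt are illustrative):

```python
import asyncio
import os

from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def main():
    # Directory of sample files the server will expose (illustrative path)
    samples_dir = os.path.join(os.path.dirname(__file__), "sample_files")

    async with MCPServerStdio(
        params={
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
        }
    ) as server:
        agent = Agent(
            name="Assistant",
            instructions="Use the filesystem tools to answer questions about the files.",
            mcp_servers=[server],
        )
        result = await Runner.run(agent, "Read the files and list what you find.")
        print(result.final_output)

asyncio.run(main())
```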
The framework automatically calls `list_tools()` on the MCP server each time the agent runs, making the LLM aware of all available filesystem tools. The agent then dynamically chooses which tools to use based on the user's request. The `async with` context manager ensures proper connection lifecycle management, automatically connecting to the MCP server when entering the context and cleaning up resources when exiting.
The exact output will depend on what's actually in the `samples_dir` directory, but the agent should systematically discover and read through the available text files while explaining its actions to the user.
The Git example
The next example demonstrates how to use the `mcp-server-git` package with OpenAI's Agents SDK to create an AI assistant that can analyze Git repositories. The agent can dynamically discover and use Git tools such as `get_contributors()`, `get_commit_history()`, and `analyze_changes()` to answer complex questions about any Git repository and provide some simple analysis. We are going to ask who the top contributor is on one of our repositories to see if it can figure that out.
The core setup uses `MCPServerStdio` to connect to the `mcp-server-git` package via `uvx` (a Python package runner). Here's how the connection and agent are configured:
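A sketch of that configuration (the instructions string is paraphrased):

```python
from agents import Agent, Runner
from agents.mcp import MCPServerStdio

async def run(repo_path: str):
    # uvx downloads and runs mcp-server-git as a subprocess over stdio
    async with MCPServerStdio(
        params={"command": "uvx", "args": ["mcp-server-git"]}
    ) as server:
        agent = Agent(
            name="Assistant",
            instructions=(
                f"Answer questions about the git repository at {repo_path}. "
                "Use the git tools to inspect its history and contributors."
            ),
            mcp_servers=[server],
        )
        result = await Runner.run(agent, "Who is the most frequent contributor?")
        print(result.final_output)
```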
The `run()` function creates an agent specifically instructed to analyze Git repositories, with the repository path dynamically injected into its instructions. The agent receives access to Git analysis tools through the MCP server connection, allowing it to perform sophisticated repository analysis tasks.
The main function orchestrates the entire workflow, including user input, dependency checking, and execution tracing:
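Roughly, assuming the `run()` function above:

```python
import asyncio
import shutil

from agents import trace

async def main():
    # Ask the user which repository to analyze
    directory_path = input("Please enter the path to the git repository: ")

    # Dependency check: the Git MCP server is launched via uvx
    if not shutil.which("uvx"):
        raise RuntimeError("uvx is not installed. Install it with `pip install uv`.")

    # Wrap the run in a trace so it shows up in the OpenAI traces dashboard
    with trace(workflow_name="MCP Git Example"):
        await run(directory_path)

if __name__ == "__main__":
    asyncio.run(main())
```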
When you run this from the command line, you should be prompted for the repo location. Hand that over, and you should get answers telling you who the most frequent contributor is, along with a summary of the last change.
MCP SSE Example
This example demonstrates how to build and connect to an MCP server using the Server-Sent Events (SSE) transport, which enables communication with remote MCP servers over HTTP. You may recall that we previously referred to this as HTTP mode.
The following can be found in the `server.py` file, which is responsible for building our server.
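A stripped-down sketch of such a server, using the FastMCP helper from the official `mcp` Python package (the specific tools here are illustrative):

```python
# server.py – a minimal SSE MCP server (tools are illustrative)
from datetime import datetime

from mcp.server.fastmcp import FastMCP

mcp_server = FastMCP("Demo Server")

@mcp_server.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

@mcp_server.tool()
def get_current_time() -> str:
    """Return the current time as an ISO-8601 string."""
    return datetime.now().isoformat()

@mcp_server.tool()
def greet(name: str) -> str:
    """Return a custom greeting."""
    return f"Hello, {name}! Welcome to MCP."

if __name__ == "__main__":
    # Serve over HTTP using the SSE transport (defaults to port 8000)
    mcp_server.run(transport="sse")
```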
Each `@mcp_server.tool()` decorator registers a piece of custom functionality you can build in. The possibilities are limited only by the MCP servers you can find in the wild and your own imagination.
The client-side implementation is broken down into three main components: server connection setup, agent creation and tool testing, and the main execution function.
Server Connection and Agent Setup
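A sketch of that setup, assuming the `server.py` above is running locally (the URL is a placeholder):

```python
from agents import Agent, trace
from agents.mcp import MCPServerSse

async def run_sse_example():
    # Connect to the remote MCP server over SSE
    async with MCPServerSse(
        name="SSE Python Server",
        params={"url": "http://localhost:8000/sse"},  # placeholder URL
        cache_tools_list=True,  # cache tools locally instead of re-querying each run
    ) as server:
        with trace(workflow_name="SSE Example"):
            agent = Agent(
                name="Assistant",
                instructions="Use the tools to answer the user's questions.",
                mcp_servers=[server],
            )
            await test_tools(agent)  # defined in the next block
```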
This first block establishes the SSE connection using `MCPServerSse` with the server URL and the appropriate headers for Server-Sent Events. The `cache_tools_list=True` parameter optimizes performance by caching the available tools locally rather than querying the server on each agent run. The agent is created within both the server context manager and the tracing context to ensure proper resource management.
Tool Testing and Execution
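And the testing block, with prompts matching the illustrative tools above:

```python
from agents import Runner

async def test_tools(agent):
    # Each natural-language prompt should route to one of the remote tools
    result = await Runner.run(agent, "Add the numbers 7 and 22.")
    print(result.final_output)

    result = await Runner.run(agent, "What time is it right now?")
    print(result.final_output)

    result = await Runner.run(agent, "Greet me by name. I'm Godfrey.")
    print(result.final_output)
```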
This second block demonstrates how to interact with the remote MCP tools through natural language prompts. Each `Runner.run()` call lets the agent automatically discover and execute the appropriate remote tool based on the user input, showcasing the seamless integration between the agent and the remote SSE server.
Then finally, the `main()` function wraps the SSE example execution with exception handling, as one would expect.
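Something along these lines:

```python
import asyncio

def main():
    try:
        asyncio.run(run_sse_example())
    except KeyboardInterrupt:
        print("Interrupted by user.")
    except Exception as exc:
        print(f"SSE example failed: {exc}")

if __name__ == "__main__":
    main()
```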
Now call main via `uv run python examples/mcp/sse_example/main.py` in the command line. The current examples can tell you the current time, do calculations, and produce a custom greeting, but with the flexibility of MCP you can add things like a dedicated weather agent, or perhaps even build a full-fledged travel agent from multiple servers!
Conclusion
You now understand MCP as the "USB of AI" - a standardized protocol that connects AI agents to external systems without complex custom coding. You've seen how MCP servers expose tools through stdio or HTTP modes, while clients integrate with AI frameworks to enable seamless tool access. Through the filesystem, Git, and SSE examples, you've learned to build agents that can read files, analyze repositories, and connect to remote services. Most importantly, you've discovered how MCP transforms AI development from fragmented custom integrations into a unified, reusable ecosystem that enables powerful multi-agent collaboration.
In our next article, we'll dive into advanced MCP patterns that take your AI agents from simple tool users to sophisticated orchestrators by creating a custom server that can act as your travel agent. Check back soon!
Additional Resources
https://github.com/openai/openai-agents-python