Jul 15, 2025

Introduction to Model Context Protocol Servers for Agentic AI

Tutorial: MCP (Model Context Protocol) is the "USB of AI" - learn to connect AI systems to external services with standardized integration examples.

Introduction

The artificial intelligence landscape is evolving at breakneck speed, with new paradigms emerging seemingly every week. Among these developments, one technology stands out as potentially transformative: the Model Context Protocol (MCP). As Godfrey Nolan, President of RIIS, demonstrated at a recent OpenAI Applications Explorers Meetup, MCP represents what could be the most significant advancement in AI integration since the introduction of function calling.

MCP represents what many consider the “USB of AI” – a standardized way to connect AI systems to external services, databases, and tools. Unlike the limited custom actions available only to ChatGPT Pro users, MCP servers enable anyone to create powerful connections between AI agents and virtually any external system imaginable.

As usual, you can follow along with this overview or the video from the meetup. Let’s dive in!

Understanding the Model Context Protocol

The Model Context Protocol, developed by Anthropic, has become the standard for connecting AI models with external resources. Originally created for Claude, MCP now has widespread adoption across OpenAI, GitHub Copilot, and various IDE integrations.

The Problem MCP Solves

Before MCP, connecting AI systems to external resources required implementing custom function definitions using complex JSON schemas. Each integration demanded careful crafting of tool descriptions, parameter definitions, and response handling. This approach, while functional, created several challenges:

  • Complexity: Each external connection required extensive custom code

  • Inconsistency: Different implementations led to varying approaches and quality

  • Maintenance burden: Updates to external APIs necessitated manual changes to function definitions

  • Limited reusability: Custom functions couldn’t easily be shared across projects
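
To make that concrete, here is roughly what a single hand-written function definition looked like in the pre-MCP, function-calling approach. The get_weather tool below is a made-up illustration, not code from the meetup:

# One hand-crafted tool definition in the function-calling style.
# Every external capability needed its own schema like this, kept in
# sync by hand whenever the underlying API changed.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "Name of the city"}
            },
            "required": ["city"],
        },
    },
}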

MCP addresses these challenges by providing a standardized middleware layer that sits between your AI application and external services. This protocol-based approach offers several key advantages:

  • Multi-agent Collaboration: Agents work together on complex tasks

  • Tool Integration: Seamless external tool invocation

  • Persistent Memory: Stateful reasoning across interactions

Traditional AI integration might look like this: AI Model → Custom Function → External API. With MCP, the architecture becomes: AI Model → MCP Client → MCP Server → External API/Service. This seemingly simple change creates profound improvements in how AI systems interact with external resources.

In short, MCP connects the outside world of data and APIs to our LLMs in a language they understand, in a way that maintains data integrity. It goes well beyond pasting text into the prompt, and it cuts down the labor of wiring up outside services, much like Zapier does for REST API calls between services.

The Architecture of MCP Systems

Understanding MCP requires grasping its fundamental architecture, which consists of two primary components working in concert: the server and the client.

MCP Servers: The Bridge to External Systems

MCP servers are the workhorses of the protocol, responsible for connecting to external systems and exposing their capabilities in a standardized format. These servers can be implemented in two primary modes:

Standard Input/Output (stdio) Mode

This approach leverages standard Unix-style input/output streams for communication. It’s particularly useful for local development and testing scenarios where the MCP server runs as a subprocess alongside your main application.

async with MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir]
    }
) as server:
    tools = await server.list_tools()

As you can see in the example above, stdio mode takes just a relatively simple async context manager. The npx command acts as a package runner (think npm if you've done JavaScript development): it fetches @modelcontextprotocol/server-filesystem, which is the server code we are going to invoke. The final argument, samples_dir, is the directory the server will expose; in this example we'll have our agent ask a bunch of questions about the files located there.

The final line, `tools = await server.list_tools()`, lists the available tools in the filesystem MCP server, and that list is passed to our LLM to guide its work and extend its capabilities.
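
If you want to see exactly what came back, a quick loop over that list prints each tool's name and description (this assumes you are still inside the async with block above; the attribute names follow the MCP Tool type):

    # Inspect what the filesystem server exposes
    for tool in tools:
        print(f"{tool.name}: {tool.description}")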

HTTP Mode (SSE)

For production deployments and remote integrations, MCP servers can operate over HTTP, allowing them to run on separate machines, cloud services, or as part of distributed architectures.

async def main():
    async with MCPServerSse(
        name="SSE Python Server",
        params={
            "url": "<http://localhost:8000/sse>",
        },
    ) as server:
        trace_id = gen_trace_id()
        with trace(workflow_name="SSE Example", trace_id=trace_id):
            print(f"View trace: <https://platform.openai.com/traces/trace?trace_id={trace_id}\\n>")
            await run(server)

This is arguably even more straightforward. The class this time is MCPServerSse, and the only required parameter is the url you are pointing to. This example also shows the very helpful trace feature, which makes debugging far more intuitive: if you are getting strange or errant responses, the trace lets you see where in the chain of calls things broke down.

MCP Clients: The AI Interface

MCP clients integrate directly with your AI application framework, translating the model’s tool requests into MCP protocol calls. When using the OpenAI Agents SDK, for example, the client seamlessly bridges the gap between the agent’s decision-making process and the available MCP tools.

agent = Agent(
    name="Assistant",
    instructions="Use the tools to achieve the task",
    mcp_servers=[mcp_server_1, mcp_server_2]
)

This architecture enables powerful multi-agent scenarios where different agents can share access to the same MCP servers, creating collaborative workflows that were previously difficult to implement. You can also stack multiple MCP Servers together.
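
As a rough sketch of that idea, two agents can simply be handed the same server objects (the agent names and instructions here are made up for illustration):

planner = Agent(
    name="Planner",
    instructions="Gather the information needed for the task using the available tools.",
    mcp_servers=[mcp_server_1, mcp_server_2],
)

reviewer = Agent(
    name="Reviewer",
    instructions="Check the Planner's output against the same sources.",
    mcp_servers=[mcp_server_1, mcp_server_2],
)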

Hello World!

Alright, we’ve spent a good amount of time covering the structure and advantages of MCP, so now we can finally get to what we’ll be calling our Hello World examples. Open up your command line and clone this repo:

git clone https://github.com/openai/openai-agents-python.git

Take a look at the examples in the repo (filesystem, Git, and SSE). Then we can move on to the next step, installing uv (it's just another package manager, like pip or npm):

curl -LsSf https://astral.sh/uv/install.sh | sh

The Filesystem example

Then you can start playing around with some of the examples. Let’s start with the filesystem:

The filesystem example demonstrates how to use the official MCP filesystem server to give your agent the ability to read, write, and interact with files on your local system. The filesystem server exposes tools like list_directory(), read_file(), and write_file() that the agent can use dynamically during conversations. You may remember samples_dir from the earlier code snippet; it comes into play here, telling the server which directory to expose.

import asyncio
import os
from agents import Agent, Runner, gen_trace_id, trace
from agents.mcp import MCPServerStdio

# Define the directory path for the MCP server to access
samples_dir = os.path.join(os.path.dirname(__file__), "sample_files")

# Create the MCP server instance
server = MCPServerStdio(
    name="Filesystem Server",
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
    },
)

Once the MCP server is configured, you integrate it with an OpenAI agent and run the complete workflow:

async def main():
    # Use the MCP server within an async context manager
    async with server:
        # Generate a trace ID for debugging and monitoring
        trace_id = gen_trace_id()
        print(f"View trace: <https://platform.openai.com/traces/{trace_id}>")
        
        # Create the agent with the MCP server attached
        with trace(workflow_name="MCP Filesystem Example", trace_id=trace_id):
            agent = Agent(
                name="File Assistant",
                instructions="You can read and write files using the available tools. "
                           "Help the user with file operations in the sample_files directory.",
                mcp_servers=[server]
            )
            
            # Run the agent with user input
            result = await Runner.run(
                agent, 
                input="List the files in the directory and read the contents of any text files you find"
            )
            
            print(result.final_output)

# Execute the async main function
if __name__ == "__main__":
    asyncio.run(main())

The framework automatically calls list_tools() on the MCP server each time the agent runs, making the LLM aware of all available filesystem tools. The agent then dynamically chooses which tools to use based on the user's request. The async with context manager ensures proper connection lifecycle management, automatically connecting to the MCP server when entering the context and cleaning up resources when exiting.
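
One optional tweak, which also appears in the Git example below: if the server's tool set doesn't change between runs, you can skip that repeated lookup by passing cache_tools_list=True when constructing the server. A minimal variation on the snippet above:

server = MCPServerStdio(
    name="Filesystem Server",
    cache_tools_list=True,  # Reuse the tool list instead of re-querying on every run
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", samples_dir],
    },
)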

You should see an output similar to this:

View trace: https://platform.openai.com/traces/abc123def456

I'll help you explore the files in the sample_files directory. Let me start by listing what's available.

[The agent uses the list_directory tool]

I found several files in the directory:
- sample_data.txt
- config.json
- notes.md
- README.txt

Now let me read the contents of these text files for you:

**sample_data.txt:**
This is a sample data file containing some example information for testing the MCP filesystem server functionality.

**notes.md:**
# Sample Notes
- This is a markdown file
- Used for testing file reading capabilities
- Contains structured text data

**README.txt:**

The exact file contents would depend on what's actually in the samples_dir directory, but the agent would systematically discover and read through available text files while explaining its actions to the user.

The Git example

The next example demonstrates how to use the mcp-server-git package with OpenAI's Agents SDK to create an AI assistant that can analyze Git repositories. The agent can dynamically discover and use Git tools such as get_contributors(), get_commit_history(), and analyze_changes() to answer complex questions about any Git repository and provide some simple analysis. We are going to ask it who the top contributor is in one of our repositories to see if it can figure that out.

The core setup uses MCPServerStdio to connect to the mcp-server-git package via uvx (a Python package runner). Here's how the connection and agent are configured:

import asyncio
import shutil
from agents import Agent, Runner, trace
from agents.mcp import MCPServer, MCPServerStdio

async def run(mcp_server: MCPServer, directory_path: str):
    # Create an agent with Git repository analysis capabilities
    agent = Agent(
        name="Assistant",
        instructions=f"Answer questions about the git repository at {directory_path}, use that for repo_path",
        mcp_servers=[mcp_server],
    )

    # Example: Analyze repository contributors
    message = "Who's the most frequent contributor?"
    print("\\n" + "-" * 40)
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

    # Example: Examine recent changes
    message = "Summarize the last change in the repository."
    print("\\n" + "-" * 40)
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

The run() function creates an agent specifically instructed to analyze Git repositories, with the repository path dynamically injected into its instructions. The agent receives access to Git analysis tools through the MCP server connection, allowing it to perform sophisticated repository analysis tasks.

The main function orchestrates the entire workflow, including user input, dependency checking, and execution tracing:

async def main():
    # Interactive repository path input
    directory_path = input("Please enter the path to the git repository: ")

    # Create MCP server connection with caching enabled
    async with MCPServerStdio(
        cache_tools_list=True,  # Cache tools for better performance
        params={"command": "uvx", "args": ["mcp-server-git"]},
    ) as server:
        # Enable tracing for debugging and monitoring
        with trace(workflow_name="MCP Git Example"):
            await run(server, directory_path)

if __name__ == "__main__":
    # Ensure uvx is available before proceeding
    if not shutil.which("uvx"):
        raise RuntimeError("uvx is not installed. Please install it with `pip install uvx`.")

    asyncio.run(main())

When you run this in the command line, you should be prompted for the repo location. Hand that over, and you should get answers to both prompts: who the most frequent contributor is and a summary of the last change.

MCP SSE Example

This example demonstrates how to build and connect to an MCP server using the Server-Sent Events (SSE) transport, which enables communication with remote MCP servers over HTTP. You may recall that we previously referred to this as HTTP mode.

The following can be found in the server.py file; it is responsible for building our server.

from datetime import datetime

from mcp.server.fastmcp import FastMCP

# Create the MCP server instance; FastMCP handles the SSE plumbing for us
mcp = FastMCP("demo-sse-server")

# Define MCP tools that will be exposed to agents
@mcp.tool()
def get_current_time() -> str:
    """Get the current time in a readable format."""
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")

@mcp.tool()
def calculate_sum(a: float, b: float) -> float:
    """Calculate the sum of two numbers."""
    return a + b

@mcp.tool()
def greet_user(name: str) -> str:
    """Generate a personalized greeting."""
    return f"Hello, {name}! Welcome to the MCP SSE server."

if __name__ == "__main__":
    # Serve the tools over SSE; by default this listens on port 8000
    # and exposes the SSE endpoint at /sse
    print("Starting SSE server at http://localhost:8000/sse")
    mcp.run(transport="sse")

Each @mcp.tool() represents a custom piece of functionality you can build in; the possibilities are limited only by the MCP servers you can find in the wild and your own imagination. Adding another tool is just another decorated function, as sketched below.
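
The extra tool here is hypothetical, and the weather report is a hard-coded stub rather than a real API call:

@mcp.tool()
def get_weather(city: str) -> str:
    """Return a canned weather report for a city (stub for illustration)."""
    return f"The weather in {city} is sunny and 72°F."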

The client-side implementation is broken down into three main components: server connection setup, agent creation and tool testing, and the main execution function.

Server Connection and Agent Setup

import asyncio
from agents import Agent, Runner, trace
from agents.mcp import MCPServerSse

async def run_sse_example():
    """Demonstrate connecting to an SSE MCP server."""

    # Create connection to the SSE MCP server
    async with MCPServerSse(
        name="Demo SSE Server",
        params={
            "url": "<http://localhost:8000/sse>",
            "headers": {
                "User-Agent": "OpenAI-Agents-SDK",
                "Accept": "text/event-stream"
            }
        },
        cache_tools_list=True,  # Cache tools for better performance
    ) as sse_server:

        # Enable tracing for debugging
        with trace(workflow_name="SSE MCP Example"):
            # Create agent with access to SSE server tools
            agent = Agent(
                name="SSE Assistant",
                instructions="You have access to remote tools via SSE. Use them to help the user.",
                mcp_servers=[sse_server]
            )

This first block establishes the SSE connection using MCPServerSse with the server URL and appropriate headers for Server-Sent Events. The cache_tools_list=True parameter optimizes performance by caching the available tools locally rather than querying the server on each agent run. The agent is created within both the server context manager and tracing context to ensure proper resource management.

Tool Testing and Execution

            # Test the remote tools
            print("Testing SSE MCP server tools...")

            # Example 1: Get current time
            result = await Runner.run(
                agent,
                input="What's the current time?"
            )
            print(f"Time result: {result.final_output}")

            # Example 2: Mathematical calculation
            result = await Runner.run(
                agent,
                input="Calculate the sum of 25 and 17"
            )
            print(f"Math result: {result.final_output}")

            # Example 3: Personalized greeting
            result = await Runner.run(
                agent,
                input="Greet me with my name 'Alice'"
            )
            print(f"Greeting result: {result.final_output}")

This second block demonstrates how to interact with the remote MCP tools through natural language prompts. Each Runner.run() call allows the agent to automatically discover and execute the appropriate remote tool based on the user input, showcasing the seamless integration between the agent and remote SSE server.

Then finally, the main() function wraps the SSE example execution with exception handling, as one would expect; a minimal sketch follows.
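
This sketch assumes the run_sse_example() coroutine defined above; the error handling is illustrative rather than the exact code from the repo:

async def main():
    try:
        await run_sse_example()
    except Exception as e:
        # Surface connection or tool errors instead of letting them pass silently
        print(f"SSE example failed: {e}")

if __name__ == "__main__":
    asyncio.run(main())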

Now, with server.py running, call main via uv run python examples/mcp/sse_example/main.py in the command line. The current examples just give you things like the current time, a calculation, and a custom greeting, but with the flexibility of MCP you can add things like a dedicated weather agent, or perhaps even build a full-fledged travel agent out of multiple servers!

Conclusion

You now understand MCP as the "USB of AI" - a standardized protocol that connects AI agents to external systems without complex custom coding. You've seen how MCP servers expose tools through stdio or HTTP modes, while clients integrate with AI frameworks to enable seamless tool access. Through the filesystem, Git, and SSE examples, you've learned to build agents that can read files, analyze repositories, and connect to remote services. Most importantly, you've discovered how MCP transforms AI development from fragmented custom integrations into a unified, reusable ecosystem that enables powerful multi-agent collaboration.

In our next article, we'll dive into advanced MCP patterns that take your AI agents from simple tool users to sophisticated orchestrators by creating a custom server that can act as your travel agent. Check back soon!

Additional Resources

https://github.com/openai/openai-agents-python

https://astral.sh

If you had fun with this tutorial, be sure to join the OpenAI Application Explorers Meetup Group to learn more about awesome apps you can build with AI.