```
├── .cursor/
│   └── rules/
│       └── core-mcp-objects.mdc
├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug.yml
│   │   ├── config.yml
│   │   └── enhancement.yml
│   ├── release.yml
│   └── workflows/
│       ├── publish.yml
│       ├── run-static.yml
│       └── run-tests.yml
├── .gitignore
├── .pre-commit-config.yaml
├── .python-version
├── LICENSE
├── README.md
├── Windows_Notes.md
├── docs/
│   ├── assets/
│   │   └── demo-inspector.png
│   ├── clients/
│   │   ├── client.mdx
│   │   └── transports.mdx
│   ├── docs.json
│   ├── getting-started/
│   │   ├── installation.mdx
│   │   ├── quickstart.mdx
│   │   └── welcome.mdx
│   ├── patterns/
│   │   ├── composition.mdx
│   │   ├── contrib.mdx
│   │   ├── decorating-methods.mdx
│   │   ├── fastapi.mdx
│   │   ├── openapi.mdx
│   │   ├── proxy.mdx
│   │   └── testing.mdx
│   ├── servers/
│   │   ├── context.mdx
│   │   ├── fastmcp.mdx
│   │   ├── prompts.mdx
│   │   ├── resources.mdx
│   │   └── tools.mdx
│   ├── snippets/
│   │   └── version-badge.mdx
│   └── style.css
└── examples/
    ├── complex_inputs.py
    ├── desktop.py
    ├── echo.py
    ├── memory.py
    ├── mount_example.py
    ├── readme-quickstart.py
    ├── sampling.py
    ├── screenshot.py
    ├── simple_echo.py
    └── smart_home/
        ├── README.md
        ├── pyproject.toml
        ├── src/
        │   └── smart_home/
        │       ├── __init__.py
        │       ├── __main__.py
        │       ├── hub.py
        │       ├── lights/
        │       │   ├── __init__.py
        │       │   ├── hue_utils.py
        │       │   └── server.py
        │       ├── py.typed
        │       └── settings.py
        └── uv.lock
```
## /.cursor/rules/core-mcp-objects.mdc
```mdc path="/.cursor/rules/core-mcp-objects.mdc"
---
description:
globs:
alwaysApply: true
---
There are four major MCP object types:
- Tools (src/tools/)
- Resources (src/resources/)
- Resource Templates (src/resources/)
- Prompts (src/prompts/)
While these have slightly different semantics and implementations, changes that affect interactions with any one of them (such as adding tags or importing) will generally need to be adopted, applied, and tested on all the others. Be sure to look not only at the object definition but also at the related `Manager` (e.g. `ToolManager`, `ResourceManager`, and `PromptManager`). Also note that while resources and resource templates are different objects, they are both handled by the `ResourceManager`.
```
## /.github/ISSUE_TEMPLATE/bug.yml
```yml path="/.github/ISSUE_TEMPLATE/bug.yml"
name: 🐛 Bug Report
description: Report a bug or unexpected behavior in FastMCP
labels: [bug, pending]
body:
  - type: markdown
    attributes:
      value: Thank you for contributing to FastMCP! 🙏
  - type: textarea
    id: description
    attributes:
      label: Description
      description: |
        Please explain what you're experiencing and what you would expect to happen instead.
        Provide as much detail as possible to help us understand and solve your problem quickly.
    validations:
      required: true
  - type: textarea
    id: example
    attributes:
      label: Example Code
      description: >
        If applicable, please provide a self-contained,
        [minimal, reproducible example](https://stackoverflow.com/help/minimal-reproducible-example)
        demonstrating the bug.
      placeholder: |
        from fastmcp import FastMCP
        ...
      render: Python
  - type: textarea
    id: version
    attributes:
      label: Version Information
      description: |
        Please provide information about your FastMCP version, MCP version, Python version, and OS.
        To get this information, run the following command in your terminal and paste the output below:
        \`\`\`bash
        fastmcp version
        \`\`\`
        If there is other information that would be helpful, please include it as well.
      render: Text
    validations:
      required: true
  - type: textarea
    id: additional_context
    attributes:
      label: Additional Context
      description: |
        Add any other context about the problem here. This could include:
        - The full error message and traceback (if applicable)
        - Information about your environment (e.g., virtual environment, installed packages)
        - Steps to reproduce the issue
        - Any recent changes in your code or setup that might be relevant
```
## /.github/ISSUE_TEMPLATE/config.yml
```yml path="/.github/ISSUE_TEMPLATE/config.yml"
blank_issues_enabled: false
contact_links:
  - name: FastMCP Documentation
    url: https://gofastmcp.com
    about: Please review the documentation before opening an issue.
  - name: MCP Python SDK
    url: https://github.com/modelcontextprotocol/python-sdk/issues
    about: Issues related to the low-level MCP Python SDK, including the FastMCP 1.0 module that is included in the `mcp` package, should be filed on the official MCP repository.
```
## /.github/ISSUE_TEMPLATE/enhancement.yml
```yml path="/.github/ISSUE_TEMPLATE/enhancement.yml"
name: 💡 Enhancement Request
description: Suggest an idea or improvement for FastMCP
labels: [enhancement, pending]
body:
  - type: markdown
    attributes:
      value: Thank you for contributing to FastMCP! We value your ideas for improving the framework. 💡
  - type: textarea
    id: description
    attributes:
      label: Enhancement Description
      description: |
        Please describe the enhancement you'd like to see in FastMCP.
        - What problem would this solve?
        - How would this improve your workflow or experience with FastMCP?
        - Are there any alternative solutions you've considered?
    validations:
      required: true
  - type: textarea
    id: use_case
    attributes:
      label: Use Case
      description: |
        Describe a specific use case or scenario where this enhancement would be beneficial.
        If possible, provide an example of how you envision using this feature.
  - type: textarea
    id: example
    attributes:
      label: Proposed Implementation
      description: >
        If you have ideas about how this enhancement could be implemented,
        please share them here. Code snippets, pseudocode, or general approaches are welcome.
      render: Python
```
## /.github/release.yml
```yml path="/.github/release.yml"
changelog:
  exclude:
    labels:
      - ignore in release notes
  categories:
    - title: New Features 🎉
      labels:
        - feature
        - enhancement
      exclude:
        labels:
          - breaking change
    - title: Fixes 🐞
      labels:
        - bug
      exclude:
        labels:
          - breaking change
    - title: Breaking Changes 🛫
      labels:
        - breaking change
    - title: Docs 📚
      labels:
        - documentation
    - title: Other Changes 🦾
      labels:
        - "*"
```
## /.github/workflows/publish.yml
```yml path="/.github/workflows/publish.yml"
name: Publish FastMCP to PyPI

on:
  release:
    types: [published]
  workflow_dispatch:

jobs:
  pypi-publish:
    name: Upload to PyPI
    runs-on: ubuntu-latest
    permissions:
      id-token: write # For PyPI's trusted publishing
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: "Install uv"
        uses: astral-sh/setup-uv@v3
      - name: Build
        run: uv build
      - name: Publish to PyPI
        run: uv publish -v dist/*
```
## /.github/workflows/run-static.yml
```yml path="/.github/workflows/run-static.yml"
name: Run static analysis

env:
  # enable colored output
  # https://github.com/pytest-dev/pytest/issues/7443
  PY_COLORS: 1

on:
  push:
    branches: ["main"]
    paths:
      - "src/**"
      - "tests/**"
      - "uv.lock"
      - "pyproject.toml"
      - ".github/workflows/**"
  pull_request:
    paths:
      - "src/**"
      - "tests/**"
      - "uv.lock"
      - "pyproject.toml"
      - ".github/workflows/**"
  workflow_dispatch:

permissions:
  contents: read

jobs:
  static_analysis:
    timeout-minutes: 2
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
        with:
          enable-cache: true
          cache-dependency-glob: "uv.lock"
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install dependencies
        run: uv sync --dev
      - name: Run pre-commit
        uses: pre-commit/action@v3.0.1
```
## /.github/workflows/run-tests.yml
```yml path="/.github/workflows/run-tests.yml"
name: Run tests

env:
  # enable colored output
  PY_COLORS: 1

on:
  push:
    branches: ["main"]
    paths:
      - "src/**"
      - "tests/**"
      - "uv.lock"
      - "pyproject.toml"
      - ".github/workflows/**"
  pull_request:
    paths:
      - "src/**"
      - "tests/**"
      - "uv.lock"
      - "pyproject.toml"
      - ".github/workflows/**"
  workflow_dispatch:

permissions:
  contents: read

jobs:
  run_tests:
    name: "Run tests: Python ${{ matrix.python-version }} on ${{ matrix.os }}"
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
        python-version: ["3.10"]
      fail-fast: false
    timeout-minutes: 5
    steps:
      - uses: actions/checkout@v4
      - name: Install uv
        uses: astral-sh/setup-uv@v5
        with:
          enable-cache: true
          cache-dependency-glob: "uv.lock"
      - name: Set up Python ${{ matrix.python-version }}
        run: uv python install ${{ matrix.python-version }}
      - name: Install FastMCP
        run: uv sync --dev
      - name: Fix pyreadline on Windows
        if: matrix.os == 'windows-latest'
        run: |
          uv pip uninstall -y pyreadline
          uv pip install pyreadline3
      - name: Run tests
        run: uv run pytest -vv
```
## /.gitignore
```gitignore path="/.gitignore"
# Python-generated files
__pycache__/
*.py[cod]
*$py.class
build/
dist/
wheels/
*.egg-info/
*.egg
MANIFEST
.pytest_cache/
.coverage
htmlcov/
.tox/
nosetests.xml
coverage.xml
*.cover
# Virtual environments
.venv
venv/
env/
ENV/
.env
# System files
.DS_Store
# Version file
src/fastmcp/_version.py
# Editors and IDEs
.cursorrules
.vscode/
.idea/
*.swp
*.swo
*~
.project
.pydevproject
.settings/
# Jupyter Notebook
.ipynb_checkpoints
# Type checking
.mypy_cache/
.dmypy.json
dmypy.json
.pyre/
.pytype/
# Local development
.python-version
.envrc
.direnv/
# Logs and databases
*.log
*.sqlite
*.db
*.ddb
```
## /.pre-commit-config.yaml
```yaml path="/.pre-commit-config.yaml"
fail_fast: true
repos:
  - repo: https://github.com/abravalheri/validate-pyproject
    rev: v0.23
    hooks:
      - id: validate-pyproject
  - repo: https://github.com/pre-commit/mirrors-prettier
    rev: v3.1.0
    hooks:
      - id: prettier
        types_or: [yaml, json5]
  - repo: https://github.com/astral-sh/ruff-pre-commit
    # Ruff version.
    rev: v0.11.4
    hooks:
      # Run the linter.
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]
      # Run the formatter.
      - id: ruff-format
  - repo: https://github.com/northisup/pyright-pretty
    rev: v0.1.0
    hooks:
      - id: pyright-pretty
        files: ^src/|^tests/
```
## /.python-version
```python-version path="/.python-version"
3.12
```
## /LICENSE
``` path="/LICENSE"
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
## /README.md
# FastMCP v2 🚀
The fast, Pythonic way to build MCP servers and clients.
[Docs](https://gofastmcp.com) · [PyPI](https://pypi.org/project/fastmcp) · [Tests](https://github.com/jlowin/fastmcp/actions/workflows/run-tests.yml) · [License](https://github.com/jlowin/fastmcp/blob/main/LICENSE)
The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) is a new, standardized way to provide context and tools to your LLMs, and FastMCP makes building MCP servers and clients simple and intuitive. Create tools, expose resources, define prompts, and connect components with clean, Pythonic code.
```python
# server.py
from fastmcp import FastMCP

mcp = FastMCP("Demo 🚀")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

if __name__ == "__main__":
    mcp.run()
```
Run the server locally:
```bash
fastmcp run server.py
```
FastMCP handles the complex protocol details and server management, letting you focus on building great tools and applications. It's designed to feel natural to Python developers.
## Table of Contents
- [What is MCP?](#what-is-mcp)
- [Why FastMCP?](#why-fastmcp)
- [Key Features](#key-features)
- [Servers](#servers)
- [Clients](#clients)
- [What's New in v2?](#whats-new-in-v2)
- [Documentation](#documentation)
- [Installation](#installation)
- [Quickstart](#quickstart)
- [Core Concepts](#core-concepts)
- [The `FastMCP` Server](#the-fastmcp-server)
- [Tools](#tools)
- [Resources](#resources)
- [Prompts](#prompts)
- [Context](#context)
- [Images](#images)
- [MCP Clients](#mcp-clients)
- [Client Methods](#client-methods)
- [Transport Options](#transport-options)
- [LLM Sampling](#llm-sampling)
- [Roots Access](#roots-access)
- [Advanced Features](#advanced-features)
- [Proxy Servers](#proxy-servers)
- [Composing MCP Servers](#composing-mcp-servers)
- [OpenAPI \& FastAPI Generation](#openapi--fastapi-generation)
- [Handling `stderr`](#handling-stderr)
- [Running Your Server](#running-your-server)
- [Development Mode (Recommended for Building \& Testing)](#development-mode-recommended-for-building--testing)
- [Claude Desktop Integration (For Regular Use)](#claude-desktop-integration-for-regular-use)
- [Direct Execution (For Advanced Use Cases)](#direct-execution-for-advanced-use-cases)
- [Server Object Names](#server-object-names)
- [Examples](#examples)
- [Contributing](#contributing)
- [Prerequisites](#prerequisites)
- [Setup](#setup)
- [Testing](#testing)
- [Formatting \& Linting](#formatting--linting)
- [Pull Requests](#pull-requests)
## What is MCP?
The [Model Context Protocol (MCP)](https://modelcontextprotocol.io) lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. Think of it like a web API, but specifically designed for LLM interactions. MCP servers can:
- Expose data through **Resources** (think GET endpoints; load info into context)
- Provide functionality through **Tools** (think POST/PUT endpoints; execute actions)
- Define interaction patterns through **Prompts** (reusable templates)
- And more!
FastMCP provides a high-level, Pythonic interface for building and interacting with these servers.
## Why FastMCP?
The MCP protocol is powerful, but implementing it involves a lot of boilerplate: server setup, protocol handlers, content types, error management. FastMCP handles all the complex protocol details and server management, so you can focus on building great tools. It's designed to be high-level and Pythonic; in most cases, decorating a function is all you need.
FastMCP aims to be:
🚀 **Fast:** High-level interface means less code and faster development
🍀 **Simple:** Build MCP servers with minimal boilerplate
🐍 **Pythonic:** Feels natural to Python developers
🔍 **Complete:** FastMCP aims to provide a full implementation of the core MCP specification for both servers and clients
## Key Features
### Servers
- **Create** servers with minimal boilerplate using intuitive decorators
- **Proxy** existing servers to modify configuration or transport
- **Compose** servers into complex applications
- **Generate** servers from OpenAPI specs or FastAPI objects
### Clients
- **Interact** with MCP servers programmatically
- **Connect** to any MCP server using any transport
- **Test** your servers without manual intervention
- **Innovate** with core MCP capabilities like LLM sampling
## What's New in v2?
FastMCP 1.0 made it so easy to build MCP servers that it's now part of the [official Model Context Protocol Python SDK](https://github.com/modelcontextprotocol/python-sdk)! For basic use cases, you can use the upstream version by importing `mcp.server.fastmcp.FastMCP` (or installing `fastmcp==1.0`).
Based on how the MCP ecosystem is evolving, FastMCP 2.0 builds on that foundation to introduce a variety of new features (and more experimental ideas). It adds advanced features like proxying and composing MCP servers, as well as automatically generating them from OpenAPI specs or FastAPI objects. FastMCP 2.0 also introduces new client-side functionality like LLM sampling.
## Documentation
📚 FastMCP's documentation is available at [gofastmcp.com](https://gofastmcp.com).
---
### Installation
We strongly recommend installing FastMCP with [uv](https://docs.astral.sh/uv/), as it is required for deploying servers via the CLI:
```bash
uv pip install fastmcp
```
Note: on macOS, uv may need to be installed with Homebrew (`brew install uv`) in order to make it available to the Claude Desktop app.
For development, install with:
```bash
# Clone the repo first
git clone https://github.com/jlowin/fastmcp.git
cd fastmcp
# Install with dev dependencies
uv sync
```
### Quickstart
Let's create a simple MCP server that exposes a calculator tool and some data:
```python
# server.py
from fastmcp import FastMCP

# Create an MCP server
mcp = FastMCP("Demo")

# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting"""
    return f"Hello, {name}!"
```
You can install this server in [Claude Desktop](https://claude.ai/download) and interact with it right away by running:
```bash
fastmcp install server.py
```
![FastMCP demo in the MCP Inspector](docs/assets/demo-inspector.png)
## Core Concepts
These are the building blocks for creating MCP servers, using the familiar decorator-based approach.
### The `FastMCP` Server
The central object representing your MCP application. It handles connections, protocol details, and routing.
```python
from fastmcp import FastMCP
# Create a named server
mcp = FastMCP("My App")
# Specify dependencies needed when deployed via `fastmcp install`
mcp = FastMCP("My App", dependencies=["pandas", "numpy"])
```
### Tools
Tools allow LLMs to perform actions by executing your Python functions. They are ideal for tasks that involve computation, external API calls, or side effects.
Decorate synchronous or asynchronous functions with `@mcp.tool()`. FastMCP automatically generates the necessary MCP schema based on type hints and docstrings. Pydantic models can be used for complex inputs.
```python
import httpx
from pydantic import BaseModel

class UserInfo(BaseModel):
    user_id: int
    notify: bool = False

@mcp.tool()
async def send_notification(user: UserInfo, message: str) -> dict:
    """Sends a notification to a user if requested."""
    if user.notify:
        # Simulate sending notification
        print(f"Notifying user {user.user_id}: {message}")
        return {"status": "sent", "user_id": user.user_id}
    return {"status": "skipped", "user_id": user.user_id}

@mcp.tool()
def get_stock_price(ticker: str) -> float:
    """Gets the current price for a stock ticker."""
    # Replace with actual API call
    prices = {"AAPL": 180.50, "GOOG": 140.20}
    return prices.get(ticker.upper(), 0.0)
```
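FastMCP builds each tool's MCP input schema from the function signature, so the type hints above are exactly what clients see. As a rough, self-contained illustration of the idea (a simplified sketch, not FastMCP's actual generator, which is built on Pydantic and handles far more cases):

```python
import inspect
from typing import get_type_hints

# Simplified annotation -> JSON Schema type mapping (illustrative only).
_TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def sketch_schema(func) -> dict:
    """Build a minimal JSON-Schema-like dict from a function's type hints."""
    hints = get_type_hints(func)
    hints.pop("return", None)
    sig = inspect.signature(func)
    properties = {
        name: {"type": _TYPE_MAP.get(hint, "object")} for name, hint in hints.items()
    }
    # Parameters without defaults are required.
    required = [
        name for name in properties
        if sig.parameters[name].default is inspect.Parameter.empty
    ]
    return {"type": "object", "properties": properties, "required": required}

def get_stock_price(ticker: str) -> float:
    """Gets the current price for a stock ticker."""
    return 0.0

print(sketch_schema(get_stock_price))
# {'type': 'object', 'properties': {'ticker': {'type': 'string'}}, 'required': ['ticker']}
```

The real generator also captures docstrings, Pydantic models, and nested types; the sketch only shows why accurate type hints matter.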
### Resources
Resources expose data to LLMs. They should primarily provide information without significant computation or side effects (like GET requests).
Decorate functions with `@mcp.resource("your://uri")`. Use curly braces `{}` in the URI to define dynamic resources (templates) where parts of the URI become function parameters.
```python
# Static resource returning simple text
@mcp.resource("config://app-version")
def get_app_version() -> str:
    """Returns the application version."""
    return "v2.1.0"

# Dynamic resource template expecting a 'user_id' from the URI
@mcp.resource("db://users/{user_id}/email")
async def get_user_email(user_id: str) -> str:
    """Retrieves the email address for a given user ID."""
    # Replace with actual database lookup
    emails = {"123": "alice@example.com", "456": "bob@example.com"}
    return emails.get(user_id, "not_found@example.com")

# Resource returning JSON data
@mcp.resource("data://product-categories")
def get_categories() -> list[str]:
    """Returns a list of available product categories."""
    return ["Electronics", "Books", "Home Goods"]
```
### Prompts
Prompts define reusable templates or interaction patterns for the LLM. They help guide the LLM on how to use your server's capabilities effectively.
Decorate functions with `@mcp.prompt()`. The function should return the desired prompt content, which can be a simple string, a `Message` object (like `UserMessage` or `AssistantMessage`), or a list of these.
```python
from fastmcp.prompts.base import Message, UserMessage, AssistantMessage

@mcp.prompt()
def ask_review(code_snippet: str) -> str:
    """Generates a standard code review request."""
    return f"Please review the following code snippet for potential bugs and style issues:\n```python\n{code_snippet}\n```"

@mcp.prompt()
def debug_session_start(error_message: str) -> list[Message]:
    """Initiates a debugging help session."""
    return [
        UserMessage(f"I encountered an error:\n{error_message}"),
        AssistantMessage("Okay, I can help with that. Can you provide the full traceback and tell me what you were trying to do?"),
    ]
```
### Context
Gain access to MCP server capabilities *within* your tool or resource functions by adding a parameter type-hinted with `fastmcp.Context`.
```python
from fastmcp import Context, FastMCP

mcp = FastMCP("Context Demo")

@mcp.resource("system://status")
async def get_system_status(ctx: Context) -> dict:
    """Checks system status and logs information."""
    await ctx.info("Checking system status...")
    # Perform checks
    await ctx.report_progress(1, 1)  # Report completion
    return {"status": "OK", "load": 0.5, "client": ctx.client_id}

@mcp.tool()
async def process_large_file(file_uri: str, ctx: Context) -> str:
    """Processes a large file, reporting progress and reading resources."""
    await ctx.info(f"Starting processing for {file_uri}")

    # Read the resource using the context
    file_content_resource = await ctx.read_resource(file_uri)
    file_content = file_content_resource[0].content  # Assuming single text content
    lines = file_content.splitlines()
    total_lines = len(lines)

    for i, line in enumerate(lines):
        # Process line...
        if (i + 1) % 100 == 0:  # Report progress every 100 lines
            await ctx.report_progress(i + 1, total_lines)

    await ctx.info(f"Finished processing {file_uri}")
    return f"Processed {total_lines} lines."
```
The `Context` object provides:
* Logging: `ctx.debug()`, `ctx.info()`, `ctx.warning()`, `ctx.error()`
* Progress Reporting: `ctx.report_progress(current, total)`
* Resource Access: `await ctx.read_resource(uri)`
* Request Info: `ctx.request_id`, `ctx.client_id`
* Sampling (Advanced): `await ctx.sample(...)` to ask the connected LLM client for completions.
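To see just the call pattern in isolation, here is a runnable sketch using a stand-in context object (`FakeContext` is a hypothetical stand-in for illustration, not the real `fastmcp.Context`):

```python
import asyncio

class FakeContext:
    """Stand-in mimicking the logging/progress call shapes listed above."""

    async def info(self, message: str) -> None:
        print(f"[info] {message}")

    async def report_progress(self, current: int, total: int) -> None:
        print(f"[progress] {current}/{total}")

async def count_words(text: str, ctx: FakeContext) -> int:
    """A tool-like coroutine that logs and reports progress via the context."""
    await ctx.info("Counting words...")
    words = text.split()
    await ctx.report_progress(len(words), len(words))
    return len(words)

print(asyncio.run(count_words("hello mcp world", FakeContext())))  # 3
```

In a real server you never construct the context yourself; FastMCP injects it when a client invokes the tool or resource.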
### Images
Easily handle image outputs using the `fastmcp.Image` helper class.
The code below requires the `pillow` library to be installed.
```python
from fastmcp import FastMCP, Image
from io import BytesIO

try:
    from PIL import Image as PILImage
except ImportError:
    raise ImportError("Please install the `pillow` library to run this example.")

mcp = FastMCP("My App")

@mcp.tool()
def create_thumbnail(image_path: str) -> Image:
    """Create a thumbnail from an image"""
    img = PILImage.open(image_path)
    img.thumbnail((100, 100))
    buffer = BytesIO()
    img.save(buffer, format="PNG")
    return Image(data=buffer.getvalue(), format="png")
```
Return the `Image` helper class from your tool to send an image to the client. The `Image` helper class handles the conversion to/from the base64-encoded format required by the MCP protocol. It works with either a path to an image file, or a bytes object.
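The wire-level conversion is conceptually just base64 text plus a MIME type. A stand-in sketch of that encoding step (the function and field names here are illustrative, not FastMCP's API):

```python
import base64

def to_image_content(data: bytes, fmt: str = "png") -> dict:
    """Encode raw image bytes as base64 text with a MIME type,
    roughly how image content travels over the protocol."""
    return {
        "type": "image",
        "data": base64.b64encode(data).decode("ascii"),
        "mimeType": f"image/{fmt}",
    }

content = to_image_content(b"\x89PNG\r\n\x1a\n", "png")
print(content["mimeType"])  # image/png
```

The `Image` helper spares you from writing this by hand and from decoding the payload on the receiving side.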
### MCP Clients
The `Client` class lets you interact with any MCP server (not just FastMCP ones) from Python code:
```python
from fastmcp import Client
async with Client("path/to/server") as client:
# Call a tool
result = await client.call_tool("weather", {"location": "San Francisco"})
print(result)
# Read a resource
res = await client.read_resource("db://users/123/profile")
print(res)
```
You can connect to servers using any supported transport protocol (Stdio, SSE, FastMCP, etc.). If you don't specify a transport, the `Client` class automatically attempts to detect an appropriate one from your connection string or server object.
#### Client Methods
The `Client` class exposes several methods for interacting with MCP servers.
```python
async with Client("path/to/server") as client:
# List available tools
tools = await client.list_tools()
# List available resources
resources = await client.list_resources()
# Call a tool with arguments
result = await client.call_tool("generate_report", {"user_id": 123})
# Read a resource
user_data = await client.read_resource("db://users/123/profile")
# Get a prompt
greeting = await client.get_prompt("welcome", {"name": "Alice"})
# Send progress updates
await client.progress("task-123", 50, 100) # 50% complete
# Basic connectivity testing
await client.ping()
```
These methods correspond directly to MCP protocol operations, making it easy to interact with any MCP-compatible server (not just FastMCP ones).
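As a quick reference, the client methods map onto MCP requests roughly as follows (the request names follow the MCP specification; the table itself is illustrative):

```python
# Illustrative lookup table: Client method name -> MCP protocol request.
CLIENT_METHOD_TO_MCP_REQUEST = {
    "list_tools": "tools/list",
    "call_tool": "tools/call",
    "list_resources": "resources/list",
    "read_resource": "resources/read",
    "list_prompts": "prompts/list",
    "get_prompt": "prompts/get",
    "ping": "ping",
}
```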
#### Transport Options
FastMCP supports various transport protocols for connecting to MCP servers:
```python
from fastmcp import Client
from fastmcp.client.transports import (
SSETransport,
PythonStdioTransport,
FastMCPTransport
)
# Connect to a server over SSE (common for web-based MCP servers)
async with Client(SSETransport("http://localhost:8000/mcp")) as client:
    # Use client here...
    ...

# Connect to a Python script using stdio (useful for local tools)
async with Client(PythonStdioTransport("path/to/script.py")) as client:
    # Use client here...
    ...

# Connect directly to a FastMCP server object in the same process
from your_app import mcp_server

async with Client(FastMCPTransport(mcp_server)) as client:
    # Use client here...
    ...
```
Common transport options include:
- `SSETransport`: Connect to a server via Server-Sent Events (HTTP)
- `PythonStdioTransport`: Run a Python script and communicate via stdio
- `FastMCPTransport`: Connect directly to a FastMCP server object
- `WSTransport`: Connect via WebSockets
In addition, if you pass a connection string or `FastMCP` server object to the `Client` constructor, it will try to automatically detect the appropriate transport.
#### LLM Sampling
Sampling is an MCP feature that allows a server to request a completion from the client LLM, enabling sophisticated use cases while maintaining security and privacy on the server.
```python
import marvin # Or any other LLM client
from fastmcp import Client, Context, FastMCP
from fastmcp.client.sampling import RequestContext, SamplingMessage, SamplingParams
# -- SERVER SIDE --
# Create a server that requests LLM completions from the client

mcp = FastMCP("Sampling Example")

@mcp.tool()
async def generate_poem(topic: str, context: Context) -> str:
    """Generate a short poem about the given topic."""
    # The server requests a completion from the client LLM
    response = await context.sample(
        f"Write a short poem about {topic}",
        system_prompt="You are a talented poet who writes concise, evocative verses."
    )
    return response.text

@mcp.tool()
async def summarize_document(document_uri: str, context: Context) -> str:
    """Summarize a document using client-side LLM capabilities."""
    # First read the document as a resource
    doc_resource = await context.read_resource(document_uri)
    doc_content = doc_resource[0].content  # Assuming single text content

    # Then ask the client LLM to summarize it
    response = await context.sample(
        f"Summarize the following document:\n\n{doc_content}",
        system_prompt="You are an expert summarizer. Create a concise summary."
    )
    return response.text

# -- CLIENT SIDE --
# Create a client that handles the sampling requests

async def sampling_handler(
    messages: list[SamplingMessage],
    params: SamplingParams,
    ctx: RequestContext,
) -> str:
    """Handle sampling requests from the server using your preferred LLM."""
    # Extract the messages and system prompt
    prompt = [m.content.text for m in messages if m.content.type == "text"]
    system_instruction = params.systemPrompt or "You are a helpful assistant."

    # Use your preferred LLM client to generate completions
    return await marvin.say_async(
        message=prompt,
        instructions=system_instruction,
    )

# Connect them together
async with Client(mcp, sampling_handler=sampling_handler) as client:
    result = await client.call_tool("generate_poem", {"topic": "autumn leaves"})
    print(result[0].text)
```
This pattern is powerful because:
1. The server can delegate text generation to the client LLM
2. The server remains focused on business logic and data handling
3. The client maintains control over which LLM is used and how requests are handled
4. No sensitive data needs to be sent to external APIs
#### Roots Access
FastMCP exposes the MCP roots functionality, allowing clients to specify which file system roots they can access. This creates a secure boundary for tools that need to work with files. Note that the server must account for client roots explicitly.
```python
from fastmcp import Client
from fastmcp.client.roots import RootsList

# Specify file roots that the client can access
roots: RootsList = ["file:///path/to/allowed/directory"]

async with Client(mcp_server, roots=roots) as client:
    # Now tools in the MCP server can access files in the specified roots
    await client.call_tool("process_file", {"filename": "data.csv"})
```
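Inside a tool, honoring the boundary amounts to a path-containment check against the client-granted roots. A minimal sketch (`is_within_roots` is a hypothetical helper, not part of FastMCP):

```python
from pathlib import Path
from urllib.parse import urlparse

def is_within_roots(filename: str, roots: list[str]) -> bool:
    """Illustrative server-side check: resolve the target path and
    require it to live under one of the client's file:// roots."""
    target = Path(filename).resolve()
    for root in roots:
        parsed = urlparse(root)
        if parsed.scheme != "file":
            continue
        root_path = Path(parsed.path).resolve()
        if target == root_path or root_path in target.parents:
            return True
    return False

roots = ["file:///tmp/allowed"]
assert is_within_roots("/tmp/allowed/data.csv", roots)
assert not is_within_roots("/etc/passwd", roots)
```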
## Advanced Features
Building on the core concepts, FastMCP v2 introduces powerful features for more complex scenarios:
### Proxy Servers
Create a FastMCP server that acts as an intermediary, proxying requests to another MCP endpoint (which could be a server or another client connection).
**Use Cases:**
* **Transport Conversion:** Expose a server running on Stdio (like many local tools) over SSE or WebSockets, making it accessible to web clients or Claude Desktop.
* **Adding Functionality:** Wrap an existing server to add authentication, request logging, or modified tool behavior.
* **Aggregating Servers:** Combine multiple backend MCP servers behind a single proxy interface (though `mount` might be simpler for this).
```python
import asyncio
from fastmcp import FastMCP, Client
from fastmcp.client.transports import PythonStdioTransport
# Create a client that connects to the original server
proxy_client = Client(
    transport=PythonStdioTransport('path/to/original_stdio_server.py'),
)

# Create a proxy server that connects to the client and exposes its capabilities
proxy = FastMCP.from_client(proxy_client, name="Stdio-to-SSE Proxy")

if __name__ == "__main__":
    proxy.run(transport='sse')
```
`FastMCP.from_client` is a class method that connects to the target, discovers its capabilities, and dynamically builds the proxy server instance.
### Composing MCP Servers
Structure larger MCP applications by creating modular FastMCP servers and "mounting" them onto a parent server. This automatically handles prefixing for tool names and resource URIs, preventing conflicts.
```python
from fastmcp import FastMCP
# --- Weather MCP ---
weather_mcp = FastMCP("Weather Service")

@weather_mcp.tool()
def get_forecast(city: str):
    return f"Sunny in {city}"

@weather_mcp.resource("data://temp/{city}")
def get_temp(city: str):
    return 25.0

# --- News MCP ---
news_mcp = FastMCP("News Service")

@news_mcp.tool()
def fetch_headlines():
    return ["Big news!", "Other news"]

@news_mcp.resource("data://latest_story")
def get_story():
    return "A story happened."

# --- Composite MCP ---
mcp = FastMCP("Composite")

# Mount sub-apps with prefixes
mcp.mount("weather", weather_mcp)  # Tools prefixed "weather/", resources prefixed "weather+"
mcp.mount("news", news_mcp)  # Tools prefixed "news/", resources prefixed "news+"

@mcp.tool()
def ping():
    return "Composite OK"

if __name__ == "__main__":
    mcp.run()
```
This promotes code organization and reusability for complex MCP systems.
### OpenAPI & FastAPI Generation
Leverage your existing web APIs by automatically generating FastMCP servers from them.
By default, the following rules are applied:
- `GET` requests -> MCP resources
- `GET` requests with path parameters -> MCP resource templates
- All other HTTP methods -> MCP tools
You can override these rules to customize or even ignore certain endpoints.
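As a rough sketch, the default mapping can be expressed as a simple dispatch on HTTP method and path shape (illustrative only; the real implementation inspects the full OpenAPI operation and is configurable):

```python
def classify_route(method: str, path: str) -> str:
    """Sketch of the default FastMCP OpenAPI mapping rules described above.
    Hypothetical helper, shown for clarity."""
    method = method.upper()
    has_path_params = "{" in path and "}" in path
    if method == "GET" and has_path_params:
        return "resource_template"
    if method == "GET":
        return "resource"
    return "tool"

assert classify_route("GET", "/status") == "resource"
assert classify_route("GET", "/users/{id}") == "resource_template"
assert classify_route("POST", "/items") == "tool"
```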
**From FastAPI:**
```python
from fastapi import FastAPI
from fastmcp import FastMCP
# Your existing FastAPI application
fastapi_app = FastAPI(title="My Existing API")

@fastapi_app.get("/status")
def get_status():
    return {"status": "running"}

@fastapi_app.post("/items")
def create_item(name: str, price: float):
    return {"id": 1, "name": name, "price": price}

# Generate an MCP server directly from the FastAPI app
mcp_server = FastMCP.from_fastapi(fastapi_app)

if __name__ == "__main__":
    mcp_server.run()
```
**From an OpenAPI Specification:**
```python
import httpx
import json
from fastmcp import FastMCP
# Load the OpenAPI spec (dict)
# with open("my_api_spec.json", "r") as f:
#     openapi_spec = json.load(f)
openapi_spec = { ... }  # Your spec dict

# Create an HTTP client to make requests to the actual API endpoint
http_client = httpx.AsyncClient(base_url="https://api.yourservice.com")

# Generate the MCP server
mcp_server = FastMCP.from_openapi(openapi_spec, client=http_client)

if __name__ == "__main__":
    mcp_server.run()
```
### Handling `stderr`
The MCP spec allows a server to write anything it wants to `stderr` and doesn't specify the format. FastMCP forwards the server's `stderr` to the client's `stderr`.
## Running Your Server
Choose the method that best suits your needs:
### Development Mode (Recommended for Building & Testing)
Use `fastmcp dev` for an interactive testing environment with the MCP Inspector.
```bash
fastmcp dev your_server_file.py
# With temporary dependencies
fastmcp dev your_server_file.py --with pandas --with numpy
# With local package in editable mode
fastmcp dev your_server_file.py --with-editable .
```
### Claude Desktop Integration (For Regular Use)
Use `fastmcp install` to set up your server for persistent use within the Claude Desktop app. It handles creating an isolated environment using `uv`.
```bash
fastmcp install your_server_file.py
# With a custom name in Claude
fastmcp install your_server_file.py --name "My Analysis Tool"
# With extra packages and environment variables
fastmcp install server.py --with requests -v API_KEY=123 -f .env
```
### Direct Execution (For Advanced Use Cases)
Run your server script directly for custom deployments or integrations outside of Claude. You manage the environment and dependencies yourself.
Add to your `your_server_file.py`:
```python
if __name__ == "__main__":
    mcp.run()  # Assuming 'mcp' is your FastMCP instance
```
Run with:
```bash
python your_server_file.py
# or
uv run python your_server_file.py
```
### Server Object Names
If your `FastMCP` instance is not named `mcp`, `server`, or `app`, specify it using `file:object` syntax for the `dev` and `install` commands:
```bash
fastmcp dev my_module.py:my_mcp_instance
fastmcp install api.py:api_app
```
## Examples
Explore the `examples/` directory for code samples demonstrating various features:
* `simple_echo.py`: Basic tool, resource, and prompt.
* `complex_inputs.py`: Using Pydantic models for tool inputs.
* `mount_example.py`: Mounting multiple FastMCP servers.
* `sampling.py`: Using LLM completions within your MCP server.
* `screenshot.py`: Tool returning an Image object.
* `text_me.py`: Tool interacting with an external API.
* `memory.py`: More complex example with database interaction.
## Contributing
Contributions make the open-source community vibrant! We welcome improvements and features.
#### Prerequisites
* Python 3.10+
* [uv](https://docs.astral.sh/uv/)
#### Setup
1. Clone: `git clone https://github.com/jlowin/fastmcp.git && cd fastmcp`
2. Install Env & Dependencies: `uv venv && uv sync` (Activate the `.venv` after creation)
#### Testing
Run the test suite:
```bash
uv run pytest -vv
```
#### Formatting & Linting
We use `ruff` via `pre-commit`.
1. Install hooks: `pre-commit install`
2. Run checks: `pre-commit run --all-files`
#### Pull Requests
1. Fork the repository.
2. Create a feature branch.
3. Make changes, commit, and push to your fork.
4. Open a pull request against the `main` branch of `jlowin/fastmcp`.
Please open an issue or discussion for questions or suggestions!
## /Windows_Notes.md
# Getting your development environment set up properly
To get your environment up and running properly, you'll need a slightly different, Windows-specific set of commands:
```bash
uv venv
.venv\Scripts\activate
uv pip install -e ".[dev]"
```
This will install the package in editable mode, and install the development dependencies.
# Fixing `AttributeError: module 'collections' has no attribute 'Callable'`
- open `.venv\Lib\site-packages\pyreadline\py3k_compat.py`
- change `return isinstance(x, collections.Callable)` to
```
from collections.abc import Callable
return isinstance(x, Callable)
```
# Helpful notes
For developing FastMCP
## Install local development version of FastMCP into a local FastMCP project server
- ensure
- change directories to your FastMCP Server location so you can install it in your .venv
- run `.venv\Scripts\activate` to activate your virtual environment
- Then run a series of commands to uninstall the old version and install the new
```bash
# First uninstall
uv pip uninstall fastmcp
# Clean any build artifacts in your fastmcp directory
cd C:\path\to\fastmcp
del /s /q *.egg-info
# Then reinstall in your weather project
cd C:\path\to\new\fastmcp_server
uv pip install --no-cache-dir -e C:\path\to\fastmcp
# Check that it installed properly and has the correct git hash
pip show fastmcp
```
## Running the FastMCP server with Inspector
MCP comes with a node.js application called Inspector that can be used to inspect the FastMCP server. To run the inspector, you'll need to install node.js and npm. Then you can run the following commands:
```bash
fastmcp dev server.py
```
This will launch a web app on http://localhost:5173/ that you can use to inspect the FastMCP server.
## If you start development before creating a fork - your get-out-of-jail-free card
- Add your fork as a new remote to your local repository `git remote add fork git@github.com:YOUR-USERNAME/REPOSITORY-NAME.git`
- This will add your repo, short named 'fork', as a remote to your local repository
- Verify that it was added correctly by running `git remote -v`
- Commit your changes
- Push your changes to your fork `git push fork <branch-name>`
- Create your pull request on GitHub
## /docs/assets/demo-inspector.png
Binary file available at https://raw.githubusercontent.com/jlowin/fastmcp/refs/heads/main/docs/assets/demo-inspector.png
## /docs/clients/client.mdx
---
title: Client Overview
sidebarTitle: Overview
description: Learn how to use the FastMCP Client to interact with MCP servers.
icon: user-robot
---
import { VersionBadge } from '/snippets/version-badge.mdx'
The `fastmcp.Client` provides a high-level, asynchronous interface for interacting with any Model Context Protocol (MCP) server, whether it's built with FastMCP or another implementation. It simplifies communication by handling protocol details and connection management.
## FastMCP Client
The FastMCP Client architecture separates the protocol logic (`Client`) from the connection mechanism (`Transport`).
- **`Client`**: Handles sending MCP requests (like `tools/call`, `resources/read`), receiving responses, and managing callbacks.
- **`Transport`**: Responsible for establishing and maintaining the connection to the server (e.g., via WebSockets, SSE, Stdio, or in-memory).
### Transports
Clients must be initialized with a `transport`. You can either provide an already instantiated transport object, or provide a transport source and let FastMCP attempt to infer the correct transport to use.
The following inference rules are used to determine the appropriate `ClientTransport` based on the input type:
1. **`ClientTransport` Instance**: If you provide an already instantiated transport object, it's used directly.
2. **`FastMCP` Instance**: Creates a `FastMCPTransport` for efficient in-memory communication (ideal for testing).
3. **`Path` or `str` pointing to an existing file**:
* If it ends with `.py`: Creates a `PythonStdioTransport` to run the script using `python`.
* If it ends with `.js`: Creates a `NodeStdioTransport` to run the script using `node`.
4. **`AnyUrl` or `str` pointing to a URL**:
* If it starts with `http://` or `https://`: Creates an `SSETransport`.
* If it starts with `ws://` or `wss://`: Creates a `WSTransport`.
5. **Other**: Raises a `ValueError` if the type cannot be inferred.
```python
import asyncio
from fastmcp import Client, FastMCP
# Example transports (more details in Transports page)
server_instance = FastMCP(name="TestServer") # In-memory server
sse_url = "http://localhost:8000/sse" # SSE server URL
ws_url = "ws://localhost:9000" # WebSocket server URL
server_script = "my_mcp_server.py" # Path to a Python server file
# Client automatically infers the transport type
client_in_memory = Client(server_instance)
client_sse = Client(sse_url)
client_ws = Client(ws_url)
client_stdio = Client(server_script)
print(client_in_memory.transport)
print(client_sse.transport)
print(client_ws.transport)
print(client_stdio.transport)
# Expected Output (exact representations may vary based on environment):
# a FastMCPTransport, an SSETransport, a WSTransport,
# and a PythonStdioTransport, respectively
```
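The inference rules above boil down to simple dispatch on the input. A simplified sketch (returns labels instead of transport objects, and omits the file-existence check the real client performs):

```python
from pathlib import Path

def infer_transport_kind(source: str) -> str:
    """Illustrative sketch of the transport inference rules listed above;
    the real Client returns ClientTransport instances, not strings."""
    if source.startswith(("http://", "https://")):
        return "sse"
    if source.startswith(("ws://", "wss://")):
        return "websocket"
    suffix = Path(source).suffix
    if suffix == ".py":
        return "python_stdio"
    if suffix == ".js":
        return "node_stdio"
    raise ValueError(f"Could not infer transport for: {source!r}")

assert infer_transport_kind("http://localhost:8000/sse") == "sse"
assert infer_transport_kind("my_mcp_server.py") == "python_stdio"
```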
For more control over connection details (like headers for SSE, environment variables for Stdio), you can instantiate the specific `ClientTransport` class yourself and pass it to the `Client`. See the [Transports](/clients/transports) page for details.
## Client Usage
### Connection Lifecycle
The client operates asynchronously and must be used within an `async with` block. This context manager handles establishing the connection, initializing the MCP session, and cleaning up resources upon exit.
```python
import asyncio
from fastmcp import Client
client = Client("my_mcp_server.py") # Assumes my_mcp_server.py exists
async def main():
# Connection is established here
async with client:
print(f"Client connected: {client.is_connected()}")
# Make MCP calls within the context
tools = await client.list_tools()
print(f"Available tools: {tools}")
if any(tool.name == "greet" for tool in tools):
result = await client.call_tool("greet", {"name": "World"})
print(f"Greet result: {result}")
# Connection is closed automatically here
print(f"Client connected: {client.is_connected()}")
if __name__ == "__main__":
asyncio.run(main())
```
You can make multiple calls to the server within the same `async with` block using the established session.
### Client Methods
The `Client` provides methods corresponding to standard MCP requests:
#### Tool Operations
* **`list_tools()`**: Retrieves a list of tools available on the server.
```python
tools = await client.list_tools()
# tools -> list[mcp.types.Tool]
```
* **`call_tool(name: str, arguments: dict[str, Any] | None = None)`**: Executes a tool on the server.
```python
result = await client.call_tool("add", {"a": 5, "b": 3})
# result -> list[mcp.types.TextContent | mcp.types.ImageContent | ...]
print(result[0].text) # Assuming TextContent, e.g., '8'
```
* Arguments are passed as a dictionary. FastMCP servers automatically handle JSON string parsing for complex types if needed.
* Returns a list of content objects (usually `TextContent` or `ImageContent`).
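The JSON-string convenience mentioned above can be pictured as a small coercion step before validation (illustrative sketch, not FastMCP's actual logic):

```python
import json

def coerce_argument(value, expects_object: bool):
    """If a parameter expects a structured type but the client sent a
    JSON string, parse it; otherwise pass the value through unchanged.
    Hypothetical helper, shown for clarity."""
    if expects_object and isinstance(value, str):
        try:
            return json.loads(value)
        except json.JSONDecodeError:
            return value
    return value

assert coerce_argument('{"a": 5, "b": 3}', expects_object=True) == {"a": 5, "b": 3}
assert coerce_argument({"a": 5}, expects_object=True) == {"a": 5}
```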
#### Resource Operations
* **`list_resources()`**: Retrieves a list of static resources.
```python
resources = await client.list_resources()
# resources -> list[mcp.types.Resource]
```
* **`list_resource_templates()`**: Retrieves a list of resource templates.
```python
templates = await client.list_resource_templates()
# templates -> list[mcp.types.ResourceTemplate]
```
* **`read_resource(uri: str | AnyUrl)`**: Reads the content of a resource or a resolved template.
```python
# Read a static resource
readme_content = await client.read_resource("file:///path/to/README.md")
# readme_content -> list[mcp.types.TextResourceContents | mcp.types.BlobResourceContents]
print(readme_content[0].text) # Assuming text
# Read a resource generated from a template
weather_content = await client.read_resource("data://weather/london")
print(weather_content[0].text) # Assuming text JSON
```
#### Prompt Operations
* **`list_prompts()`**: Retrieves available prompt templates.
* **`get_prompt(name: str, arguments: dict[str, Any] | None = None)`**: Retrieves a rendered prompt message list.
### Callbacks
MCP allows servers to make requests *back* to the client for certain capabilities. The `Client` constructor accepts callback functions to handle these server requests:
#### Roots
* **`roots: RootsList | RootsHandler | None`**: Provides the server with a list of root directories the client grants access to. This can be a static list or a function that dynamically determines roots.
```python
from pathlib import Path
from fastmcp.client.roots import RootsHandler, RootsList
from mcp.shared.context import RequestContext # For type hint
# Option 1: Static list
static_roots: RootsList = [str(Path.home() / "Documents")]
# Option 2: Dynamic function
def dynamic_roots_handler(context: RequestContext) -> RootsList:
    # Logic to determine accessible roots based on context
    print(f"Server requested roots (Request ID: {context.request_id})")
    return [str(Path.home() / "Downloads")]

client_with_roots = Client(
    "my_server.py",
    roots=dynamic_roots_handler  # or roots=static_roots
)

# Tell the server the roots might have changed (if needed)
# async with client_with_roots:
#     await client_with_roots.send_roots_list_changed()
```
See `fastmcp.client.roots` for helpers.
#### LLM Sampling
* **`sampling_handler: SamplingHandler | None`**: Handles `sampling/createMessage` requests from the server. This callback receives messages from the server and should return an LLM completion.
```python
from fastmcp.client.sampling import SamplingHandler, MessageResult
from mcp.types import SamplingMessage, SamplingParams, TextContent
from mcp.shared.context import RequestContext # For type hint
async def my_llm_handler(
    messages: list[SamplingMessage],
    params: SamplingParams,
    context: RequestContext
) -> str | MessageResult:
    print(f"Server requested sampling (Request ID: {context.request_id})")

    # In a real scenario, call your LLM API here
    last_user_message = next((m for m in reversed(messages) if m.role == 'user'), None)
    prompt = (
        last_user_message.content.text
        if last_user_message and isinstance(last_user_message.content, TextContent)
        else "Default prompt"
    )

    # Simulate LLM response
    response_text = f"LLM processed: {prompt[:50]}..."

    # Return simple string (becomes TextContent) or a MessageResult object
    return response_text

client_with_sampling = Client(
    "my_server.py",
    sampling_handler=my_llm_handler
)
```
See `fastmcp.client.sampling` for helpers.
#### Logging
* **`log_handler: LoggingFnT | None`**: Receives log messages sent from the server (`ctx.info`, `ctx.error`, etc.).
```python
from mcp.client.session import LoggingFnT, LogLevel
def my_log_handler(level: LogLevel, message: str, logger_name: str | None):
    print(f"[Server Log - {level.upper()}] {logger_name or 'default'}: {message}")

client_with_logging = Client(
    "my_server.py",
    log_handler=my_log_handler
)
```
### Utility Methods
* **`ping()`**: Sends a ping request to the server to verify connectivity.
```python
async def check_connection():
    async with client:
        await client.ping()
        print("Server is reachable")
```
### Error Handling
When a `call_tool` request results in an error on the server (e.g., the tool function raised an exception), the `client.call_tool()` method will raise a `fastmcp.client.ClientError`.
```python
async def safe_call_tool():
    async with client:
        try:
            # Assume 'divide' tool exists and might raise ZeroDivisionError
            result = await client.call_tool("divide", {"a": 10, "b": 0})
            print(f"Result: {result}")
        except ClientError as e:
            print(f"Tool call failed: {e}")
        except ConnectionError as e:
            print(f"Connection failed: {e}")
        except Exception as e:
            print(f"An unexpected error occurred: {e}")

# Example Output if division by zero occurs:
# Tool call failed: Division by zero is not allowed.
```
Other errors, like connection failures, will raise standard Python exceptions (e.g., `ConnectionError`, `TimeoutError`).
The client transport often has its own error-handling mechanisms, so you cannot always trap errors like those raised by `call_tool` outside of the `async with` block. Instead, you can call `call_tool(..., _return_raw_result=True)` to get the raw `mcp.types.CallToolResult` object and handle errors yourself by checking its `isError` attribute.
## /docs/clients/transports.mdx
---
title: Client Transports
sidebarTitle: Transports
description: Understand the different ways FastMCP Clients can connect to servers.
icon: link
---
import { VersionBadge } from "/snippets/version-badge.mdx"
The FastMCP `Client` relies on a `ClientTransport` object to handle the specifics of connecting to and communicating with an MCP server. FastMCP provides several built-in transport implementations for common connection methods.
While the `Client` often infers the correct transport automatically (see [Client Overview](/clients/client#transport-inference)), you can also instantiate transports explicitly for more control.
## Stdio Transports
These transports manage an MCP server running as a subprocess, communicating with it via standard input (stdin) and standard output (stdout). This is the standard mechanism used by clients like Claude Desktop.
### Python Stdio
* **Class:** `fastmcp.client.transports.PythonStdioTransport`
* **Inferred From:** Paths to `.py` files.
* **Use Case:** Running a Python-based MCP server script (like one using FastMCP or the base `mcp` library) in a subprocess.
This is the most common way to interact with local FastMCP servers during development or when integrating with tools that expect to launch a server script.
```python
from fastmcp import Client
from fastmcp.client.transports import PythonStdioTransport
server_script = "my_mcp_server.py" # Assumes this file exists and runs mcp.run()
# Option 1: Inferred transport
client_inferred = Client(server_script)
# Option 2: Explicit transport (e.g., to use a specific python executable or add args)
transport_explicit = PythonStdioTransport(
script_path=server_script,
python_cmd="/usr/bin/python3.11", # Specify python version
# args=["--some-server-arg"], # Pass args to the script
# env={"MY_VAR": "value"}, # Set environment variables
# cwd="/path/to/run/in" # Set working directory
)
client_explicit = Client(transport_explicit)
async def use_stdio_client(client):
    async with client:
        tools = await client.list_tools()
        print(f"Connected via Python Stdio, found tools: {tools}")

# asyncio.run(use_stdio_client(client_inferred))
# asyncio.run(use_stdio_client(client_explicit))
```
The server script (`my_mcp_server.py` in the example) *must* include logic to start the MCP server and listen on stdio, typically via `mcp.run()` or `fastmcp.server.run()`. The `Client` only launches the script; it doesn't inject the server logic.
### Node.js Stdio
* **Class:** `fastmcp.client.transports.NodeStdioTransport`
* **Inferred From:** Paths to `.js` files.
* **Use Case:** Running a Node.js-based MCP server script in a subprocess.
Similar to the Python transport, but for JavaScript servers.
```python
from fastmcp import Client
from fastmcp.client.transports import NodeStdioTransport
node_server_script = "my_mcp_server.js" # Assumes this JS file starts an MCP server on stdio
# Option 1: Inferred transport
client_inferred = Client(node_server_script)
# Option 2: Explicit transport
transport_explicit = NodeStdioTransport(
script_path=node_server_script,
node_cmd="node" # Or specify path to Node executable
)
client_explicit = Client(transport_explicit)
# Usage is the same as other clients
# async with client_explicit:
# tools = await client_explicit.list_tools()
```
### UVX Stdio (Experimental)
* **Class:** `fastmcp.client.transports.UvxStdioTransport`
* **Inferred From:** Not automatically inferred. Must be instantiated explicitly.
* **Use Case:** Running an MCP server packaged as a Python tool using [`uvx`](https://docs.astral.sh/uv/reference/cli/#uvx) (part of the `uv` toolchain). This allows running tools without explicitly installing them into the current environment.
This is useful for executing MCP servers distributed as command-line tools or packages.
```python
from fastmcp.client.transports import UvxStdioTransport
# Example: Run a hypothetical 'cloud-analyzer-mcp' tool via uvx
# Assume this tool, when run, starts an MCP server on stdio
transport = UvxStdioTransport(
tool_name="cloud-analyzer-mcp",
# from_package="cloud-analyzer-cli", # Optionally specify package if tool name differs
# with_packages=["boto3", "requests"], # Add dependencies if needed
# tool_args=["--config", "prod.yaml"] # Pass args to the tool itself
)
client = Client(transport)
# async with client:
# analysis = await client.call_tool("analyze_bucket", {"name": "my-data"})
```
### NPX Stdio (Experimental)
* **Class:** `fastmcp.client.transports.NpxStdioTransport`
* **Inferred From:** Not automatically inferred. Must be instantiated explicitly.
* **Use Case:** Running an MCP server packaged as an NPM package using `npx`.
Similar to `UvxStdioTransport`, but for the Node.js ecosystem.
```python
from fastmcp.client.transports import NpxStdioTransport
# Example: Run a hypothetical 'npm-mcp-server-package' via npx
transport = NpxStdioTransport(
package="npm-mcp-server-package",
# args=["--port", "stdio"] # Args passed to the package script
)
client = Client(transport)
# async with client:
# response = await client.call_tool("get_npm_data", {})
```
## Network Transports
These transports connect to servers running over a network, typically long-running services accessible via URLs.
### SSE (Server-Sent Events)
* **Class:** `fastmcp.client.transports.SSETransport`
* **Inferred From:** `http://` or `https://` URLs
* **Use Case:** Connecting to persistent MCP servers exposed over HTTP/S, often using FastMCP's `mcp.run(transport="sse")` mode.
SSE is a simple, unidirectional protocol where the server pushes messages to the client over a standard HTTP connection.
```python
from fastmcp import Client
from fastmcp.client.transports import SSETransport
sse_url = "http://localhost:8000/sse"
# Option 1: Inferred transport
client_inferred = Client(sse_url)
# Option 2: Explicit transport (e.g., to add custom headers)
headers = {"Authorization": "Bearer mytoken"}
transport_explicit = SSETransport(url=sse_url, headers=headers)
client_explicit = Client(transport_explicit)
async def use_sse_client(client):
    async with client:
        tools = await client.list_tools()
        print(f"Connected via SSE, found tools: {tools}")

# asyncio.run(use_sse_client(client_inferred))
# asyncio.run(use_sse_client(client_explicit))
```
### WebSocket
* **Class:** `fastmcp.client.transports.WSTransport`
* **Inferred From:** `ws://` or `wss://` URLs
* **Use Case:** Connecting to MCP servers using the WebSocket protocol for bidirectional communication.
WebSockets provide a persistent, full-duplex connection between client and server.
```python
from fastmcp import Client
from fastmcp.client.transports import WSTransport
ws_url = "ws://localhost:9000"
# Option 1: Inferred transport
client_inferred = Client(ws_url)
# Option 2: Explicit transport
transport_explicit = WSTransport(url=ws_url)
client_explicit = Client(transport_explicit)
async def use_ws_client(client):
    async with client:
        tools = await client.list_tools()
        print(f"Connected via WebSocket, found tools: {tools}")

# asyncio.run(use_ws_client(client_inferred))
# asyncio.run(use_ws_client(client_explicit))
```
## In-Memory Transports
### FastMCP Transport
* **Class:** `fastmcp.client.transports.FastMCPTransport`
* **Inferred From:** An instance of `fastmcp.server.FastMCP`.
* **Use Case:** Connecting directly to a `FastMCP` server instance running in the *same Python process*.
This is extremely useful for:
* **Testing:** Writing unit or integration tests for your FastMCP server without needing subprocesses or network connections.
* **Embedding:** Using an MCP server as a component within a larger application.
```python
from fastmcp import FastMCP, Client
from fastmcp.client.transports import FastMCPTransport
# 1. Create your FastMCP server instance
server = FastMCP(name="InMemoryServer")
@server.tool()
def ping(): return "pong"
# 2. Create a client pointing directly to the server instance
# Option A: Inferred
client_inferred = Client(server)
# Option B: Explicit
transport_explicit = FastMCPTransport(mcp=server)
client_explicit = Client(transport_explicit)
# 3. Use the client (no subprocess or network involved)
async def test_in_memory():
async with client_inferred: # Or client_explicit
result = await client_inferred.call_tool("ping")
print(f"In-memory call result: {result[0].text}") # Output: pong
# asyncio.run(test_in_memory())
```
Communication happens through efficient in-memory queues, making it very fast.
## Choosing a Transport
* **Local Development/Testing:** Use `PythonStdioTransport` (inferred from `.py` files) or `FastMCPTransport` (for same-process testing).
* **Connecting to Remote/Persistent Servers:** Use `SSETransport` (for `http/s`) or `WSTransport` (for `ws/s`).
* **Running Packaged Tools:** Use `UvxStdioTransport` (Python/uv) or `NpxStdioTransport` (Node/npm) if you need to run MCP servers without local installation.
* **Integrating with Claude Desktop (or similar):** These tools typically expect to run a Python script, so your server should be runnable via `python your_server.py`, making `PythonStdioTransport` the relevant mechanism on the client side.
## /docs/docs.json
```json path="/docs/docs.json"
{
"$schema": "https://mintlify.com/docs.json",
"background": {
"color": {
"dark": "#222831",
"light": "#EEEEEE"
},
"decoration": "windows"
},
"colors": {
"dark": "#f72585",
"light": "#4cc9f0",
"primary": "#2d00f7"
},
"description": "The fast, Pythonic way to build MCP servers and clients.",
"footer": {
"socials": {
"bluesky": "https://bsky.app/profile/jlowin.dev",
"github": "https://github.com/jlowin/fastmcp",
"x": "https://x.com/jlowin"
}
},
"name": "FastMCP",
"navbar": {
"primary": {
"href": "https://github.com/jlowin/fastmcp",
"type": "github"
}
},
"navigation": {
"groups": [
{
"group": "Get Started",
"pages": [
"getting-started/welcome",
"getting-started/installation",
"getting-started/quickstart"
]
},
{
"group": "Servers",
"pages": [
"servers/fastmcp",
"servers/tools",
"servers/resources",
"servers/prompts",
"servers/context"
]
},
{
"group": "Clients",
"pages": [
"clients/client",
"clients/transports"
]
},
{
"group": "Patterns",
"pages": [
"patterns/proxy",
"patterns/composition",
"patterns/decorating-methods",
"patterns/openapi",
"patterns/fastapi",
"patterns/contrib",
"patterns/testing"
]
},
{
"group": "Deployment",
"pages": []
}
]
},
"theme": "mint"
}
```
## /docs/getting-started/installation.mdx
---
title: Installation
icon: arrow-down-to-line
---
## Install FastMCP
We recommend using [uv](https://docs.astral.sh/uv/getting-started/installation/) to install and manage FastMCP.
If you plan to use FastMCP in your project, you can add it as a dependency with:
```bash
uv add fastmcp
```
Alternatively, you can install it directly with `pip` or `uv pip`:
```bash uv
uv pip install fastmcp
```
```bash pip
pip install fastmcp
```
## Verify Installation
To verify that FastMCP is installed correctly, you can run the following command:
```bash
fastmcp version
```
You should see output like the following:
```bash
$ fastmcp version
FastMCP version: 0.4.2.dev41+ga077727.d20250410
MCP version: 1.6.0
Python version: 3.12.2
Platform: macOS-15.3.1-arm64-arm-64bit
FastMCP root path: ~/Developer/fastmcp
```
## Installing for Development
If you plan to contribute to FastMCP, you should begin by cloning the repository and using uv to install all dependencies.
```bash
git clone https://github.com/jlowin/fastmcp.git
cd fastmcp
uv sync
```
This will install all dependencies, including ones for development, and create a virtual environment.
To run the tests, use pytest:
```bash
pytest
```
## /docs/getting-started/quickstart.mdx
---
title: Quickstart
icon: rocket
---
Welcome! This guide will help you quickly set up FastMCP and run your first MCP server.
If you haven't already installed FastMCP, follow the [installation instructions](/getting-started/installation).
## Creating a FastMCP Server
A FastMCP server is a collection of tools, resources, and other MCP components. To create a server, start by instantiating the `FastMCP` class.
Create a new file called `my_server.py` and add the following code:
```python my_server.py
from fastmcp import FastMCP
mcp = FastMCP("My MCP Server")
```
That's it! You've created a FastMCP server, albeit a very boring one. Let's add a tool to make it more interesting.
## Adding a Tool
To add a tool that returns a simple greeting, write a function and decorate it with `@mcp.tool` to register it with the server:
```python my_server.py {5-7}
from fastmcp import FastMCP
mcp = FastMCP("My MCP Server")
@mcp.tool()
def greet(name: str) -> str:
return f"Hello, {name}!"
```
## Testing the Server
To test the server, create a FastMCP client and point it at the server object.
```python my_server.py {1, 9-16}
import asyncio
from fastmcp import FastMCP, Client
mcp = FastMCP("My MCP Server")
@mcp.tool()
def greet(name: str) -> str:
return f"Hello, {name}!"
client = Client(mcp)
async def call_tool(name: str):
async with client:
result = await client.call_tool("greet", {"name": name})
print(result)
asyncio.run(call_tool("Ford"))
```
There are a few things to note here:
- Clients are asynchronous, so we need to use `asyncio.run` to run the client.
- We must enter a client context (`async with client:`) before using the client. You can make multiple client calls within the same context.
## Running the server
In order to run the server with Python, we need to add a `run` statement to the `__main__` block of the server file.
```python my_server.py {9-10}
from fastmcp import FastMCP, Client
mcp = FastMCP("My MCP Server")
@mcp.tool()
def greet(name: str) -> str:
return f"Hello, {name}!"
if __name__ == "__main__":
mcp.run()
```
This lets us run the server with `python my_server.py`, using the default `stdio` transport, which is the standard way to expose an MCP server to a client.
Why do we need the `if __name__ == "__main__":` block?
Within the FastMCP ecosystem, this line may be unnecessary. However, including it ensures that your FastMCP server runs for all users and clients in a consistent way and is therefore recommended as best practice.
### Interacting with the Python server
Now that the server can be executed with `python my_server.py`, we can interact with it like any other MCP server.
In a new file, create a client and point it at the server file:
```python my_client.py
import asyncio
from fastmcp import Client
client = Client("my_server.py")
async def call_tool(name: str):
async with client:
result = await client.call_tool("greet", {"name": name})
print(result)
asyncio.run(call_tool("Ford"))
```
### Using the FastMCP CLI
To have FastMCP run the server for us, we can use the `fastmcp run` command. This will start the server and keep it running until it is stopped. By default, it will use the `stdio` transport, which is a simple text-based protocol for interacting with the server.
```bash
fastmcp run my_server.py:mcp
```
Note that FastMCP *does not* require the `__main__` block in the server file, and will ignore it if it is present. Instead, it looks for the server object provided in the CLI command (here, `mcp`). If no server object is provided, `fastmcp run` will automatically search for servers called "mcp", "app", or "server" in the file.
We pointed our client at the server file, which is recognized as a Python MCP server and executed with `python my_server.py` by default. This executes the `__main__` block of the server file. There are other ways to run the server, which are described in the [server configuration](/servers/fastmcp#running-the-server) guide.
## /docs/getting-started/welcome.mdx
---
title: "Welcome to FastMCP!"
sidebarTitle: "Welcome!"
description: The fast, Pythonic way to build MCP servers and clients.
icon: hand-wave
---
The [Model Context Protocol](https://modelcontextprotocol.io/) (MCP) is a new, standardized way to provide context and tools to your LLMs, and FastMCP makes building MCP servers and clients simple and intuitive. Create tools, expose resources, define prompts, and more with clean, Pythonic code:
```python {1, 3, 5, 11}
from fastmcp import FastMCP
mcp = FastMCP("Demo 🚀")
@mcp.tool()
def add(a: int, b: int) -> int:
"""Add two numbers"""
return a + b
if __name__ == "__main__":
mcp.run()
```
## What is MCP?
The Model Context Protocol lets you build servers that expose data and functionality to LLM applications in a secure, standardized way. It is often described as "the USB-C port for AI", providing a uniform way to connect LLMs to resources they can use. It may be easier to think of it as an API, but specifically designed for LLM interactions. MCP servers can:
- Expose data through `Resources` (think of these sort of like GET endpoints; they are used to load information into the LLM's context)
- Provide functionality through `Tools` (sort of like POST endpoints; they are used to execute code or otherwise produce a side effect)
- Define interaction patterns through `Prompts` (reusable templates for LLM interactions)
- And more!
There is a low-level Python SDK available for implementing the protocol directly, but FastMCP aims to make that easier by providing a high-level, Pythonic interface.
FastMCP 1.0 was so successful that it is now included as part of the official [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)!
## Why FastMCP?
The MCP protocol is powerful but implementing it involves a lot of boilerplate - server setup, protocol handlers, content types, error management. FastMCP handles all the complex protocol details and server management, so you can focus on building great tools. It's designed to be high-level and Pythonic; in most cases, decorating a function is all you need.
FastMCP aims to be:
🚀 **Fast**: High-level interface means less code and faster development
🍀 **Simple**: Build MCP servers with minimal boilerplate
🐍 **Pythonic**: Feels natural to Python developers
🔍 **Complete**: FastMCP aims to provide a full implementation of the core MCP specification
**FastMCP v1** focused on abstracting the most common boilerplate of exposing MCP server functionality, and is now included in the official MCP Python SDK. **FastMCP v2** expands on that foundation to introduce novel functionality mainly focused on simplifying server interactions, including flexible clients, proxying and composition, and deployment.
## /docs/patterns/composition.mdx
---
title: Server Composition
sidebarTitle: Composition
description: Combine multiple FastMCP servers into a single, larger application using mounting and importing.
icon: puzzle-piece
---
import { VersionBadge } from '/snippets/version-badge.mdx'
As your MCP applications grow, you might want to organize your tools, resources, and prompts into logical modules or reuse existing server components. FastMCP supports composition through two methods:
- **`import_server`**: For a one-time copy of components with prefixing (static composition).
- **`mount`**: For creating a live link where the main server delegates requests to the subserver (dynamic composition).
## Why Compose Servers?
- **Modularity**: Break down large applications into smaller, focused servers (e.g., a `WeatherServer`, a `DatabaseServer`, a `CalendarServer`).
- **Reusability**: Create common utility servers (e.g., a `TextProcessingServer`) and mount them wherever needed.
- **Teamwork**: Different teams can work on separate FastMCP servers that are later combined.
- **Organization**: Keep related functionality grouped together logically.
### Importing vs Mounting
The choice of importing or mounting depends on your use case and requirements. In general, importing is best for simpler cases because it copies the imported server's components into the main server, treating them as native integrations. Mounting is best for more complex cases where you need to delegate requests to the subserver at runtime.
| Feature | Importing | Mounting |
|---------|----------------|---------|
| **Method** | `FastMCP.import_server()` | `FastMCP.mount()` |
| **Composition Type** | One-time copy (static) | Live link (dynamic) |
| **Updates** | Changes to subserver NOT reflected | Changes to subserver immediately reflected |
| **Lifespan** | Not managed | Automatically managed |
| **Synchronicity** | Async (must be awaited) | Sync |
| **Best For** | Bundling finalized components | Modular runtime composition |
### Proxy Servers
FastMCP supports [MCP proxying](/patterns/proxy), which allows you to mirror a local or remote server in a local FastMCP instance. Proxies are fully compatible with both importing and mounting.
## Importing (Static Composition)
The `import_server()` method copies all components (tools, resources, templates, prompts) from one `FastMCP` instance (the *subserver*) into another (the *main server*). A `prefix` is added to avoid naming conflicts.
```python
from fastmcp import FastMCP
import asyncio
# --- Define Subservers ---
# Weather Service
weather_mcp = FastMCP(name="WeatherService")
@weather_mcp.tool()
def get_forecast(city: str) -> dict:
"""Get weather forecast."""
return {"city": city, "forecast": "Sunny"}
@weather_mcp.resource("data://cities/supported")
def list_supported_cities() -> list[str]:
"""List cities with weather support."""
return ["London", "Paris", "Tokyo"]
# Calculator Service
calc_mcp = FastMCP(name="CalculatorService")
@calc_mcp.tool()
def add(a: int, b: int) -> int:
"""Add two numbers."""
return a + b
@calc_mcp.prompt()
def explain_addition() -> str:
"""Explain the concept of addition."""
return "Addition is the process of combining two or more numbers."
# --- Define Main Server ---
main_mcp = FastMCP(name="MainApp")
# --- Import Subservers ---
async def setup():
# Import weather service with prefix "weather"
await main_mcp.import_server("weather", weather_mcp)
# Import calculator service with prefix "calc"
await main_mcp.import_server("calc", calc_mcp)
# --- Now, main_mcp contains *copied* components ---
# Tools:
# - "weather_get_forecast"
# - "calc_add"
# Resources:
# - "weather+data://cities/supported" (prefixed URI)
# Prompts:
# - "calc_explain_addition"
if __name__ == "__main__":
# In a real app, you might run this async or setup imports differently
asyncio.run(setup())
# Run the main server, which now includes components from both subservers
main_mcp.run()
```
### How Importing Works
When you call `await main_mcp.import_server(prefix, subserver)`:
1. **Tools**: All tools from `subserver` are added to `main_mcp`. Their names are automatically prefixed using the `prefix` and a default separator (`_`).
- `subserver.tool(name="my_tool")` becomes `main_mcp.tool(name="{prefix}_my_tool")`.
2. **Resources**: All resources from `subserver` are added. Their URIs are prefixed using the `prefix` and a default separator (`+`).
- `subserver.resource(uri="data://info")` becomes `main_mcp.resource(uri="{prefix}+data://info")`.
3. **Resource Templates**: All templates from `subserver` are added. Their URI *templates* are prefixed similarly to resources.
- `subserver.resource(uri="data://{id}")` becomes `main_mcp.resource(uri="{prefix}+data://{id}")`.
4. **Prompts**: All prompts from `subserver` are added, with names prefixed like tools.
- `subserver.prompt(name="my_prompt")` becomes `main_mcp.prompt(name="{prefix}_my_prompt")`.
Note that `import_server` performs a **one-time copy** of components from the `subserver` into the `main_mcp` instance at the time the method is called. Changes made to the `subserver` *after* `import_server` is called **will not** be reflected in `main_mcp`. Also, the `subserver`'s `lifespan` context is **not** executed by the main server when using `import_server`.
### Customizing Separators
You might prefer different separators for the prefixed names and URIs. You can customize these when calling `import_server()`:
```python
await main_mcp.import_server(
prefix="api",
app=some_subserver,
tool_separator="/", # Tool name becomes: "api/sub_tool_name"
resource_separator=":", # Resource URI becomes: "api:data://sub_resource"
prompt_separator="." # Prompt name becomes: "api.sub_prompt_name"
)
```
Be cautious when choosing separators. Some MCP clients (like Claude Desktop) might have restrictions on characters allowed in tool names (e.g., `/` might not be supported). The defaults (`_` for names, `+` for URIs) are generally safe.
## Mounting (Live Linking)
The `mount()` method creates a **live link** between the `main_mcp` server and the `subserver`. Instead of copying components, requests for components matching the `prefix` are **delegated** to the `subserver` at runtime.
```python
import asyncio
from fastmcp import FastMCP, Client
# --- Define Subserver ---
dynamic_mcp = FastMCP(name="DynamicService")
@dynamic_mcp.tool()
def initial_tool(): return "Initial Tool Exists"
# --- Define Main Server ---
main_mcp = FastMCP(name="MainAppLive")
# --- Mount Subserver (Sync operation) ---
main_mcp.mount("dynamic", dynamic_mcp)
print("Mounted dynamic_mcp.")
# --- Add a tool AFTER mounting ---
@dynamic_mcp.tool()
def added_later(): return "Tool Added Dynamically!"
print("Added 'added_later' tool to dynamic_mcp.")
# --- Test Access ---
async def test_dynamic_mount():
# Need to use await for get_tools now
tools_before = await main_mcp.get_tools()
print("Tools available via main_mcp:", list(tools_before.keys()))
# Expected: ['dynamic_initial_tool', 'dynamic_added_later']
async with Client(main_mcp) as client:
# Call the dynamically added tool via the main server
result = await client.call_tool("dynamic_added_later")
print("Result of calling dynamic_added_later:", result[0].text)
# Expected: Tool Added Dynamically!
if __name__ == "__main__":
# Need async context to test
asyncio.run(test_dynamic_mount())
# To run the server itself:
# main_mcp.run()
```
### How Mounting Works
When you call `main_mcp.mount(prefix, server)`:
1. **Live Link**: A live connection is established between `main_mcp` and the `subserver`.
2. **Dynamic Updates**: Changes made to the `subserver` (e.g., adding new tools) **will be reflected** immediately when accessing components through `main_mcp`.
3. **Lifespan Management**: The `subserver`'s `lifespan` context **is automatically managed** and executed within the `main_mcp`'s lifespan.
4. **Delegation**: Requests for components matching the prefix are delegated to the subserver at runtime.
The same prefixing rules apply as with `import_server` for naming tools, resources, templates, and prompts.
### Customizing Separators
Similar to `import_server`, you can customize the separators for the prefixed names and URIs:
```python
main_mcp.mount(
prefix="api",
app=some_subserver,
tool_separator="/", # Tool name becomes: "api/sub_tool_name"
resource_separator=":", # Resource URI becomes: "api:data://sub_resource"
prompt_separator="." # Prompt name becomes: "api.sub_prompt_name"
)
```
## Example: Modular Application
Here's how a modular application might use `import_server`:
```python main.py
from fastmcp import FastMCP
import asyncio
import random
# Import the servers (see other files)
from modules.text_server import text_mcp
from modules.data_server import data_mcp
app = FastMCP(name="MainApplication")
# Setup function for async imports
async def setup():
# Import the utility servers
await app.import_server("text", text_mcp)
await app.import_server("data", data_mcp)
@app.tool()
def process_and_analyze(record_id: int) -> str:
"""Fetches a record and analyzes its string representation."""
# In a real application, you'd use proper methods to interact between
# imported tools rather than accessing internal managers
# Get record data
record = {"id": record_id, "value": random.random()}
# Count words in the record string representation
word_count = len(str(record).split())
return (
f"Record {record_id} has {word_count} words in its string "
f"representation."
)
if __name__ == "__main__":
# Run async setup before starting the server
asyncio.run(setup())
# Run the server
app.run()
```
```python modules/text_server.py
from fastmcp import FastMCP
text_mcp = FastMCP(name="TextUtilities")
@text_mcp.tool()
def count_words(text: str) -> int:
"""Counts words in a text."""
return len(text.split())
@text_mcp.resource("resource://stopwords")
def get_stopwords() -> list[str]:
"""Return a list of common stopwords."""
return ["the", "a", "is", "in"]
```
```python modules/data_server.py
from fastmcp import FastMCP
import random
data_mcp = FastMCP(name="DataAPI")
@data_mcp.tool()
def fetch_record(record_id: int) -> dict:
"""Fetches a dummy data record."""
return {"id": record_id, "value": random.random()}
@data_mcp.resource("data://schema/{table}")
def get_table_schema(table: str) -> dict:
"""Provides a dummy schema for a table."""
return {"table": table, "columns": ["id", "value"]}
```
Now, running `main.py` starts a server that exposes:
- `text_count_words`
- `data_fetch_record`
- `process_and_analyze`
- `text+resource://stopwords`
- `data+data://schema/{table}` (template)
This pattern promotes code organization and reuse within your FastMCP projects.
## /docs/patterns/contrib.mdx
---
title: "Contrib Modules"
description: "Community-contributed modules extending FastMCP"
icon: "cubes"
---
import { VersionBadge } from "/snippets/version-badge.mdx"
FastMCP includes a `contrib` package that holds community-contributed modules. These modules extend FastMCP's functionality but aren't officially maintained by the core team.
Contrib modules provide additional features, integrations, or patterns that complement the core FastMCP library. They offer a way for the community to share useful extensions while keeping the core library focused and maintainable.
The available modules can be viewed in the [contrib directory](https://github.com/jlowin/fastmcp/tree/main/src/contrib).
## Usage
To use a contrib module, import it from the `fastmcp.contrib` package:
```python
from fastmcp.contrib import my_module
```
## Important Considerations
- **Stability**: Modules in `contrib` may have different testing requirements or stability guarantees compared to the core library.
- **Compatibility**: Changes to core FastMCP might break modules in `contrib` without explicit warnings in the main changelog.
- **Dependencies**: Contrib modules may have additional dependencies not required by the core library. These dependencies are typically documented in the module's README or separate requirements files.
## Contributing
We welcome contributions to the `contrib` package! If you have a module that extends FastMCP in a useful way, consider contributing it:
1. Create a new directory in `src/fastmcp/contrib/` for your module
2. Include comprehensive documentation in a README.md file, including usage and examples, as well as any additional dependencies or installation instructions
3. Add proper tests for your module in `tests/contrib/`
4. Submit a pull request
The ideal contrib module:
- Solves a specific use case or integration need
- Follows FastMCP coding standards
- Includes thorough documentation and examples
- Has comprehensive tests
- Specifies any additional dependencies
## /docs/patterns/decorating-methods.mdx
---
title: Decorating Methods
sidebarTitle: Decorating Methods
description: Properly use instance methods, class methods, and static methods with FastMCP decorators.
icon: at
---
FastMCP's decorator system is designed to work with functions, but you may see unexpected behavior if you try to decorate an instance or class method. This guide explains the correct approach for using methods with all FastMCP decorators (`@tool()`, `@resource()`, and `@prompt()`).
## Why Are Methods Hard?
When you apply a FastMCP decorator like `@tool()`, `@resource()`, or `@prompt()` to a method, the decorator captures the function at decoration time. For instance methods and class methods, this poses a challenge because:
1. For instance methods: The decorator gets the unbound method before any instance exists
2. For class methods: The decorator gets the function before it's bound to the class
This means directly decorating these methods doesn't work as expected. In practice, the LLM would see parameters like `self` or `cls` that it cannot provide values for.
## Recommended Patterns
### Instance Methods
**Don't do this** (it doesn't work properly):
```python
from fastmcp import FastMCP
mcp = FastMCP()
class MyClass:
@mcp.tool() # This won't work correctly
def add(self, x, y):
return x + y
@mcp.resource("resource://{param}") # This won't work correctly
def get_resource(self, param: str):
return f"Resource data for {param}"
```
When the decorator is applied this way, it captures the unbound method. When the LLM later tries to use this component, it will see `self` as a required parameter, but it won't know what to provide for it, causing errors or unexpected behavior.
**Do this instead**:
```python
from fastmcp import FastMCP
mcp = FastMCP()
class MyClass:
def add(self, x, y):
return x + y
def get_resource(self, param: str):
return f"Resource data for {param}"
# Create an instance first, then add the bound methods
obj = MyClass()
mcp.add_tool(obj.add)
mcp.add_resource_fn(obj.get_resource, uri="resource://{param}") # For resources or templates
# Note: FastMCP provides add_resource() for adding Resource objects directly and
# add_resource_fn() for adding functions that generate resources or templates
# Now you can call it (from within an async context) without
# 'self' showing up as a parameter
await mcp.call_tool('add', {'x': 1, 'y': 2}) # Returns 3
```
This approach works because:
1. You first create an instance of the class (`obj`)
2. When you access the method through the instance (`obj.add`), Python creates a bound method where `self` is already set to that instance
3. When you register this bound method, the system sees a callable that only expects the appropriate parameters, not `self`
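You can see this binding at work with plain Python introspection; it's exactly what makes registering `obj.add` safe:

```python
import inspect

class MyClass:
    def add(self, x, y):
        return x + y

obj = MyClass()

# The function accessed on the class still expects `self`...
assert list(inspect.signature(MyClass.add).parameters) == ["self", "x", "y"]

# ...but the bound method accessed via the instance does not,
# so only `x` and `y` are exposed when you register `obj.add`
assert list(inspect.signature(obj.add).parameters) == ["x", "y"]
assert obj.add(1, 2) == 3
```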
### Class Methods
Similar to instance methods, decorating class methods directly doesn't work properly:
**Don't do this**:
```python
from fastmcp import FastMCP
mcp = FastMCP()
class MyClass:
@classmethod
@mcp.tool() # This won't work correctly
def from_string(cls, s):
return cls(s)
```
The problem here is that the FastMCP decorator is applied before the `@classmethod` decorator (Python applies decorators bottom-to-top). So it captures the function before it's transformed into a class method, leading to incorrect behavior.
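A quick stdlib-only illustration of that bottom-to-top ordering (the decorator names here are purely for demonstration):

```python
applied = []

def outer(fn):
    applied.append("outer")
    return fn

def inner(fn):
    applied.append("inner")
    return fn

@outer
@inner
def example():
    pass

# The decorator closest to the function runs first - which is why,
# in the broken pattern above, the FastMCP decorator sees the raw
# function before @classmethod ever wraps it
assert applied == ["inner", "outer"]
```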
**Do this instead**:
```python
from fastmcp import FastMCP
mcp = FastMCP()
class MyClass:
@classmethod
def from_string(cls, s):
return cls(s)
# Add the class method after the class is defined
mcp.add_tool(MyClass.from_string)
```
This works because:
1. The `@classmethod` decorator is applied properly during class definition
2. When you access `MyClass.from_string`, Python provides a special method object that automatically binds the class to the `cls` parameter
3. When registered, only the appropriate parameters are exposed to the LLM, hiding the implementation detail of the `cls` parameter
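Again, plain introspection confirms that accessing the method through the class has already hidden `cls`:

```python
import inspect

class MyClass:
    @classmethod
    def from_string(cls, s):
        return f"{cls.__name__}:{s}"

# Accessing the classmethod through the class binds `cls`,
# leaving only `s` visible to callers (and to the LLM)
assert list(inspect.signature(MyClass.from_string).parameters) == ["s"]
assert MyClass.from_string("abc") == "MyClass:abc"
```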
### Static Methods
Unlike instance and class methods, static methods work fine with FastMCP decorators:
```python
from fastmcp import FastMCP
mcp = FastMCP()
class MyClass:
@staticmethod
@mcp.tool() # This works!
def utility(x, y):
return x + y
@staticmethod
@mcp.resource("resource://data") # This works too!
def get_data():
return "Static resource data"
```
This approach works because:
1. Python applies decorators bottom-to-top, so the FastMCP decorator runs first and captures what is still just a regular function
2. The `@staticmethod` decorator is then applied, turning the method into a plain function on the class
3. A static method doesn't have any binding requirements - it doesn't receive a `self` or `cls` parameter
Alternatively, you can use the same pattern as the other methods:
```python
from fastmcp import FastMCP
mcp = FastMCP()
class MyClass:
@staticmethod
def utility(x, y):
return x + y
# This also works
mcp.add_tool(MyClass.utility)
```
This works for the same reason - a static method is essentially just a function in a class namespace.
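You can verify this with the standard library: accessing a static method through the class yields an ordinary function with no hidden parameters:

```python
import inspect

class MyClass:
    @staticmethod
    def utility(x, y):
        return x + y

# A static method accessed through the class is a plain function -
# there is no `self`/`cls` parameter for the LLM to stumble over
assert inspect.isfunction(MyClass.utility)
assert list(inspect.signature(MyClass.utility).parameters) == ["x", "y"]
assert MyClass.utility(2, 3) == 5
```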
## Additional Patterns
### Creating Components at Class Initialization
You can automatically register instance methods when creating an object:
```python
from fastmcp import FastMCP
mcp = FastMCP()
class ComponentProvider:
def __init__(self, mcp_instance):
# Register methods
mcp_instance.add_tool(self.tool_method)
mcp_instance.add_resource_fn(self.resource_method, uri="resource://data")
def tool_method(self, x):
return x * 2
def resource_method(self):
return "Resource data"
# The methods are automatically registered when creating the instance
provider = ComponentProvider(mcp)
```
This pattern is useful when:
- You want to encapsulate registration logic within the class itself
- You have multiple related components that should be registered together
- You want to ensure that methods are always properly registered when creating an instance
The class automatically registers its methods during initialization, ensuring they're properly bound to the instance before registration.
## Summary
While FastMCP's decorator pattern works seamlessly with regular functions and static methods, for instance methods and class methods, you should add them after creating the instance or class. This ensures that the methods are properly bound before being registered.
These patterns apply to all FastMCP decorators and registration methods:
- `@tool()` and `add_tool()`
- `@resource()` and `add_resource_fn()`
- `@prompt()` and `add_prompt()`
Understanding these patterns allows you to effectively organize your components into classes while maintaining proper method binding, giving you the benefits of object-oriented design without sacrificing the simplicity of FastMCP's decorator system.
## /docs/patterns/fastapi.mdx
---
title: FastAPI Integration
sidebarTitle: FastAPI
description: Generate MCP servers from FastAPI apps
icon: square-bolt
---
import { VersionBadge } from '/snippets/version-badge.mdx'
FastMCP can automatically convert FastAPI applications into MCP servers.
FastMCP does *not* include FastAPI as a dependency; you must install it separately to run these examples.
```python {2, 22, 25}
from fastapi import FastAPI
from fastmcp import FastMCP
# A FastAPI app
app = FastAPI()
@app.get("/items")
def list_items():
return [{"id": 1, "name": "Item 1"}, {"id": 2, "name": "Item 2"}]
@app.get("/items/{item_id}")
def get_item(item_id: int):
return {"id": item_id, "name": f"Item {item_id}"}
@app.post("/items")
def create_item(name: str):
return {"id": 3, "name": name}
# Create an MCP server from your FastAPI app
mcp = FastMCP.from_fastapi(app=app)
if __name__ == "__main__":
mcp.run() # Start the MCP server
```
## Route Mapping
By default, FastMCP will map FastAPI routes to MCP components according to the following rules:
| FastAPI Route Type | FastAPI Example | MCP Component | Notes |
|--------------------|--------------|---------|-------|
| GET without path params | `@app.get("/stats")` | Resource | Simple resources for fetching data |
| GET with path params | `@app.get("/users/{id}")` | Resource Template | Path parameters become template parameters |
| POST, PUT, DELETE, etc. | `@app.post("/users")` | Tool | Operations that modify data |
For more details on route mapping or custom mapping rules, see the [OpenAPI integration documentation](/patterns/openapi#route-mapping); FastMCP uses the same mapping rules for both FastAPI and OpenAPI integrations.
## Complete Example
Here's a more detailed example with a data model:
```python
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from fastmcp import FastMCP, Client
# Define your Pydantic model
class Item(BaseModel):
name: str
price: float
# Create your FastAPI app
app = FastAPI()
items = {} # In-memory database
@app.get("/items")
def list_items():
"""List all items"""
return list(items.values())
@app.get("/items/{item_id}")
def get_item(item_id: int):
"""Get item by ID"""
if item_id not in items:
raise HTTPException(404, "Item not found")
return items[item_id]
@app.post("/items")
def create_item(item: Item):
"""Create a new item"""
item_id = len(items) + 1
items[item_id] = {"id": item_id, **item.model_dump()}
return items[item_id]
# Test your MCP server with a client
async def test():
# Create MCP server from FastAPI app
mcp = FastMCP.from_fastapi(app=app)
# List the components that were created
tools = await mcp.list_tools()
resources = await mcp.list_resources()
templates = await mcp.list_resource_templates()
print(f"Generated {len(tools)} tools")
print(f"Generated {len(resources)} resources")
print(f"Generated {len(templates)} templates")
# In a real scenario, you would run the server:
# mcp.run()
if __name__ == "__main__":
asyncio.run(test())
```
## Benefits
- **Leverage existing FastAPI apps** - No need to rewrite your API logic
- **Schema reuse** - FastAPI's Pydantic models and validation are inherited
- **Full feature support** - Works with FastAPI's authentication, dependencies, etc.
- **ASGI transport** - Direct communication without additional HTTP overhead
## /docs/patterns/openapi.mdx
---
title: OpenAPI Integration
sidebarTitle: OpenAPI
description: Generate MCP servers from OpenAPI specs
icon: code-branch
---
import { VersionBadge } from '/snippets/version-badge.mdx'
FastMCP can automatically generate an MCP server from an OpenAPI specification. Users only need to provide an OpenAPI specification (3.0 or 3.1) and an API client.
```python
import httpx
from fastmcp import FastMCP
# Create a client for your API
api_client = httpx.AsyncClient(base_url="https://api.example.com")
# Load your OpenAPI spec
spec = {...}
# Create an MCP server from your OpenAPI spec
mcp = FastMCP.from_openapi(openapi_spec=spec, client=api_client)
if __name__ == "__main__":
mcp.run()
```
## Route Mapping
By default, OpenAPI routes are mapped to MCP components based on these rules:
| OpenAPI Route | Example | MCP Component | Notes |
|---------------|---------|---------------|-------|
| `GET` without path params | `GET /stats` | Resource | Simple resources for fetching data |
| `GET` with path params | `GET /users/{id}` | Resource Template | Path parameters become template parameters |
| `POST`, `PUT`, `PATCH`, `DELETE`, etc. | `POST /users` | Tool | Operations that modify data |
Internally, FastMCP uses a priority-ordered set of `RouteMap` objects to determine the component type. Route maps indicate that a specific HTTP method (or methods) and path pattern should be treated as a specific component type. This is the default set of route maps:
```python
# Simplified version of the actual mapping rules
DEFAULT_ROUTE_MAPPINGS = [
# GET with path parameters -> ResourceTemplate
RouteMap(methods=["GET"], pattern=r".*\{.*\}.*",
route_type=RouteType.RESOURCE_TEMPLATE),
# GET without path parameters -> Resource
RouteMap(methods=["GET"], pattern=r".*",
route_type=RouteType.RESOURCE),
# All other methods -> Tool
RouteMap(methods=["POST", "PUT", "PATCH", "DELETE", "OPTIONS", "HEAD"],
pattern=r".*", route_type=RouteType.TOOL),
]
```
### Custom Route Maps
Users can add custom route maps to override the default mapping behavior. User-supplied route maps are always applied first, before the default route maps.
```python
from fastmcp.server.openapi import RouteMap, RouteType
# Custom mapping rules
custom_maps = [
# Force all analytics endpoints to be Tools
RouteMap(methods=["GET"],
pattern=r"^/analytics/.*",
route_type=RouteType.TOOL)
]
# Apply custom mappings
mcp = FastMCP.from_openapi(
openapi_spec=spec,
client=api_client,
route_maps=custom_maps
)
```
## How It Works
1. FastMCP parses your OpenAPI spec to extract routes and schemas
2. It applies mapping rules to categorize each route
3. When an MCP client calls a tool or accesses a resource:
- FastMCP constructs an HTTP request based on the OpenAPI definition
- It sends the request through the provided httpx client
- It translates the HTTP response to the appropriate MCP format
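Step 3 can be illustrated with a simplified, self-contained sketch of how a path template plus template parameters becomes a concrete request path (an illustration of the idea, not FastMCP's actual internals):

```python
import re

def fill_path_template(template: str, params: dict) -> str:
    """Substitute {name} placeholders in an OpenAPI-style path template."""
    return re.sub(r"\{(\w+)\}", lambda m: str(params[m.group(1)]), template)

# A template-resource read for `GET /pets/{petId}` with petId=42
# would target this concrete path:
print(fill_path_template("/pets/{petId}", {"petId": 42}))  # /pets/42
```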
## Complete Example
```python
import asyncio
import httpx
from fastmcp import FastMCP
# Sample OpenAPI spec for a Pet Store API
petstore_spec = {
"openapi": "3.0.0",
"paths": {
"/pets": {
"get": {
"operationId": "listPets",
"summary": "List all pets"
},
"post": {
"operationId": "createPet",
"summary": "Create a new pet"
}
},
"/pets/{petId}": {
"get": {
"operationId": "getPet",
"summary": "Get a pet by ID",
"parameters": [
{
"name": "petId",
"in": "path",
"required": True,
"schema": {"type": "string"}
}
]
}
}
}
}
async def main():
# Client for the Pet Store API
client = httpx.AsyncClient(base_url="https://petstore.example.com/api")
# Create the MCP server
mcp = FastMCP.from_openapi(
openapi_spec=petstore_spec,
client=client,
name="PetStore"
)
# List what components were created
tools = await mcp.list_tools()
resources = await mcp.list_resources()
templates = await mcp.list_resource_templates()
print(f"Tools: {len(tools)}") # Should include createPet
print(f"Resources: {len(resources)}") # Should include listPets
print(f"Templates: {len(templates)}") # Should include getPet
# In a real scenario, you would start the MCP server
# (outside this async function, since run() starts its own event loop):
# mcp.run()
if __name__ == "__main__":
asyncio.run(main())
```
## /docs/patterns/proxy.mdx
---
title: Proxying Servers
sidebarTitle: Proxying
description: Use FastMCP to act as an intermediary or change transport for other MCP servers.
icon: arrows-retweet
---
import { VersionBadge } from '/snippets/version-badge.mdx'
FastMCP provides a powerful proxying capability that allows one FastMCP server instance to act as a frontend for another MCP server (which could be remote, running on a different transport, or even another FastMCP instance). This is achieved using the `FastMCP.from_client()` class method.
## What is Proxying?
Proxying means setting up a FastMCP server that doesn't implement its own tools or resources directly. Instead, when it receives a request (like `tools/call` or `resources/read`), it forwards that request to a *backend* MCP server, receives the response, and then relays that response back to the original client.
```mermaid
sequenceDiagram
participant ClientApp as Your Client (e.g., Claude Desktop)
participant FastMCPProxy as FastMCP Proxy Server
participant BackendServer as Backend MCP Server (e.g., remote SSE)
ClientApp->>FastMCPProxy: MCP Request (e.g. stdio)
Note over FastMCPProxy, BackendServer: Proxy forwards the request
FastMCPProxy->>BackendServer: MCP Request (e.g. sse)
BackendServer-->>FastMCPProxy: MCP Response (e.g. sse)
Note over ClientApp, FastMCPProxy: Proxy relays the response
FastMCPProxy-->>ClientApp: MCP Response (e.g. stdio)
```
### Use Cases
- **Transport Bridging**: Expose a server running on one transport (e.g., a remote SSE server) via a different transport (e.g., local Stdio for Claude Desktop).
- **Adding Functionality**: Insert a layer in front of an existing server to add caching, logging, authentication, or modify requests/responses (though direct modification requires subclassing `FastMCPProxy`).
- **Security Boundary**: Use the proxy as a controlled gateway to an internal server.
- **Simplifying Client Configuration**: Provide a single, stable endpoint (the proxy) even if the backend server's location or transport changes.
## Creating a Proxy
The easiest way to create a proxy is using the `FastMCP.from_client()` class method. This creates a standard FastMCP server that forwards requests to another MCP server.
```python
from fastmcp import FastMCP, Client
# Create a client configured to talk to the backend server
# This could be any MCP server - remote, local, or using any transport
backend_client = Client("backend_server.py") # Could be "http://remote.server/sse", etc.
# Create the proxy server with from_client()
proxy_server = FastMCP.from_client(
backend_client,
name="MyProxyServer" # Optional settings for the proxy
)
# That's it! You now have a proxy FastMCP server that can be used
# with any transport (SSE, stdio, etc.) just like any other FastMCP server
```
**How `from_client` Works:**
1. It connects to the backend server using the provided client.
2. It discovers all the tools, resources, resource templates, and prompts available on the backend server.
3. It creates corresponding "proxy" components that forward requests to the backend.
4. It returns a standard `FastMCP` server instance that can be used like any other.
Currently, proxying focuses primarily on exposing the major MCP objects (tools, resources, templates, and prompts). Some advanced MCP features like notifications and sampling are not fully supported in proxies in the current version. Support for these additional features may be added in future releases.
### Bridging Transports
A common use case is to bridge transports. For example, making a remote SSE server available locally via Stdio:
```python
from fastmcp import FastMCP, Client
# Client targeting a remote SSE server
client = Client("http://example.com/mcp/sse")
# Create a proxy server - it's just a regular FastMCP server
proxy = FastMCP.from_client(client, name="SSE to Stdio Proxy")
# The proxy can now be used with any transport
# No special handling needed - it works like any FastMCP server
```
### In-Memory Proxies
You can also proxy an in-memory `FastMCP` instance, which is useful for adjusting the configuration or behavior of a server you don't completely control.
```python
from fastmcp import FastMCP
# Original server
original_server = FastMCP(name="Original")
@original_server.tool()
def tool_a() -> str:
return "A"
# Create a proxy of the original server
proxy = FastMCP.from_client(
original_server,
name="Proxy Server"
)
# proxy is now a regular FastMCP server that forwards
# requests to original_server
```
## `FastMCPProxy` Class
Internally, `FastMCP.from_client()` uses the `FastMCPProxy` class. You generally don't need to interact with this class directly, but it's available if needed.
Using the class directly might be necessary for advanced scenarios, like subclassing `FastMCPProxy` to add custom logic before or after forwarding requests.
## /docs/patterns/testing.mdx
---
title: Testing MCP Servers
sidebarTitle: Testing
description: Learn how to test your FastMCP servers effectively
icon: vial
---
Testing your MCP servers thoroughly is essential for ensuring they work correctly when deployed. FastMCP makes this easy through a variety of testing patterns.
## In-Memory Testing
The most efficient way to test an MCP server is to pass your FastMCP server instance directly to a Client. This enables in-memory testing without having to start a separate server process, which is particularly useful because managing an MCP server programmatically can be challenging.
Here is an example of using a `Client` to test a server with pytest:
```python
import pytest
from fastmcp import FastMCP, Client
@pytest.fixture
def mcp_server():
server = FastMCP("TestServer")
@server.tool()
def greet(name: str) -> str:
return f"Hello, {name}!"
return server
async def test_tool_functionality(mcp_server):
# Pass the server directly to the Client constructor
async with Client(mcp_server) as client:
result = await client.call_tool("greet", {"name": "World"})
assert "Hello, World!" in str(result[0])
```
This pattern creates a direct connection between the client and server, allowing you to test your server's functionality efficiently.
## /docs/servers/context.mdx
---
title: MCP Context
sidebarTitle: Context
description: Access MCP capabilities like logging, progress, and resources within your MCP objects.
icon: rectangle-code
---
import { VersionBadge } from '/snippets/version-badge.mdx'
When defining FastMCP [tools](/servers/tools), [resources](/servers/resources), resource templates, or [prompts](/servers/prompts), your functions might need to interact with the underlying MCP session or access server capabilities. FastMCP provides the `Context` object for this purpose.
## What Is Context?
The `Context` object provides a clean interface to access MCP features within your functions, including:
- **Logging**: Send debug, info, warning, and error messages back to the client
- **Progress Reporting**: Update the client on the progress of long-running operations
- **Resource Access**: Read data from resources registered with the server
- **LLM Sampling**: Request the client's LLM to generate text based on provided messages
- **Request Information**: Access metadata about the current request
- **Server Access**: When needed, access the underlying FastMCP server instance
## Accessing the Context
To use the context object within any of your functions, simply add a parameter to your function signature and type-hint it as `Context`. FastMCP will automatically inject the context instance when your function is called.
```python
from fastmcp import FastMCP, Context
mcp = FastMCP(name="ContextDemo")
@mcp.tool()
async def process_file(file_uri: str, ctx: Context) -> str:
"""Processes a file, using context for logging and resource access."""
request_id = ctx.request_id
await ctx.info(f"[{request_id}] Starting processing for {file_uri}")
try:
# Use context to read a resource
contents_list = await ctx.read_resource(file_uri)
if not contents_list:
await ctx.warning(f"Resource {file_uri} is empty.")
return "Resource empty"
data = contents_list[0].content # Assuming TextResourceContents
await ctx.debug(f"Read {len(data)} bytes from {file_uri}")
# Report progress
await ctx.report_progress(progress=50, total=100)
# Simulate work
processed_data = data.upper() # Example processing
await ctx.report_progress(progress=100, total=100)
await ctx.info(f"Processing complete for {file_uri}")
return f"Processed data length: {len(processed_data)}"
except Exception as e:
# Use context to log errors
await ctx.error(f"Error processing {file_uri}: {str(e)}")
raise # Re-raise to send error back to client
```
**Key Points:**
- The parameter name (e.g., `ctx`, `context`) doesn't matter; only the type hint `Context` is important.
- The context parameter can be placed anywhere in your function's signature.
- The context is optional - functions that don't need it can omit the parameter.
- Context is only available during a request; attempting to use context methods outside a request will raise errors.
- Context methods are async, so your function usually needs to be async as well.
## Context Capabilities
### Logging
Send log messages back to the MCP client. This is useful for debugging and providing visibility into function execution during a request.
```python
@mcp.tool()
async def analyze_data(data: list[float], ctx: Context) -> dict:
"""Analyze numerical data with logging."""
await ctx.debug("Starting analysis of numerical data")
await ctx.info(f"Analyzing {len(data)} data points")
try:
result = sum(data) / len(data)
await ctx.info(f"Analysis complete, average: {result}")
return {"average": result, "count": len(data)}
except ZeroDivisionError:
await ctx.warning("Empty data list provided")
return {"error": "Empty data list"}
except Exception as e:
await ctx.error(f"Analysis failed: {str(e)}")
raise
```
**Available Logging Methods:**
- **`ctx.debug(message: str)`**: Low-level details useful for debugging
- **`ctx.info(message: str)`**: General information about execution
- **`ctx.warning(message: str)`**: Potential issues that didn't prevent execution
- **`ctx.error(message: str)`**: Errors that occurred during execution
- **`ctx.log(level: Literal["debug", "info", "warning", "error"], message: str, logger_name: str | None = None)`**: Generic log method supporting custom logger names
### Progress Reporting
For long-running operations, notify the client about the progress. This allows clients to display progress indicators and provide a better user experience.
```python
@mcp.tool()
async def process_items(items: list[str], ctx: Context) -> dict:
"""Process a list of items with progress updates."""
total = len(items)
results = []
for i, item in enumerate(items):
# Report progress as percentage
await ctx.report_progress(progress=i, total=total)
# Process the item (simulated with a sleep)
await asyncio.sleep(0.1)
results.append(item.upper())
# Report 100% completion
await ctx.report_progress(progress=total, total=total)
return {"processed": len(results), "results": results}
```
**Method signature:**
- **`ctx.report_progress(progress: float, total: float | None = None)`**
- `progress`: Current progress value (e.g., 24)
- `total`: Optional total value (e.g., 100). If provided, clients may interpret this as a percentage.
Progress reporting requires the client to have sent a `progressToken` in the initial request. If the client doesn't support progress reporting, these calls will have no effect.
### Resource Access
Read data from resources registered with your FastMCP server. This allows functions to access files, configuration, or dynamically generated content.
```python
@mcp.tool()
async def summarize_document(document_uri: str, ctx: Context) -> str:
"""Summarize a document by its resource URI."""
# Read the document content
content_list = await ctx.read_resource(document_uri)
if not content_list:
return "Document is empty"
document_text = content_list[0].content
# Example: Generate a simple summary (length-based)
words = document_text.split()
total_words = len(words)
await ctx.info(f"Document has {total_words} words")
# Return a simple summary
if total_words > 100:
summary = " ".join(words[:100]) + "..."
return f"Summary ({total_words} words total): {summary}"
else:
return f"Full document ({total_words} words): {document_text}"
```
**Method signature:**
- **`ctx.read_resource(uri: str | AnyUrl) -> list[ReadResourceContents]`**
- `uri`: The resource URI to read
- Returns a list of resource content parts (usually containing just one item)
The returned content is typically accessed via `content_list[0].content` and can be text or binary data depending on the resource.
### LLM Sampling
Request the client's LLM to generate text based on provided messages. This is useful when your function needs to leverage the LLM's capabilities to process data or generate responses.
```python
@mcp.tool()
async def analyze_sentiment(text: str, ctx: Context) -> dict:
"""Analyze the sentiment of a text using the client's LLM."""
# Create a sampling prompt asking for sentiment analysis
prompt = f"Analyze the sentiment of the following text as positive, negative, or neutral. Just output a single word - 'positive', 'negative', or 'neutral'. Text to analyze: {text}"
# Send the sampling request to the client's LLM
response = await ctx.sample(prompt)
# Process the LLM's response
sentiment = response.text.strip().lower()
# Map to standard sentiment values
if "positive" in sentiment:
sentiment = "positive"
elif "negative" in sentiment:
sentiment = "negative"
else:
sentiment = "neutral"
return {"text": text, "sentiment": sentiment}
```
**Method signature:**
- **`ctx.sample(messages: str | list[str | SamplingMessage], system_prompt: str | None = None, temperature: float | None = None, max_tokens: int | None = None) -> TextContent | ImageContent`**
- `messages`: A string or list of strings/message objects to send to the LLM
- `system_prompt`: Optional system prompt to guide the LLM's behavior
- `temperature`: Optional sampling temperature (controls randomness)
- `max_tokens`: Optional maximum number of tokens to generate (defaults to 512)
- Returns the LLM's response as TextContent or ImageContent
When providing a simple string, it's treated as a user message. For more complex scenarios, you can provide a list of messages with different roles.
```python
@mcp.tool()
async def generate_example(concept: str, ctx: Context) -> str:
"""Generate a Python code example for a given concept."""
# Using a system prompt and a user message
response = await ctx.sample(
messages=f"Write a simple Python code example demonstrating '{concept}'.",
system_prompt="You are an expert Python programmer. Provide concise, working code examples without explanations.",
temperature=0.7,
max_tokens=300
)
code_example = response.text
return f"```python\n{code_example}\n```"
```
See [Client Sampling](/clients/client#llm-sampling) for more details on how clients handle these requests.
### Request Information
Access metadata about the current request and client.
```python
@mcp.tool()
async def request_info(ctx: Context) -> dict:
"""Return information about the current request."""
return {
"request_id": ctx.request_id,
"client_id": ctx.client_id or "Unknown client"
}
```
**Available Properties:**
- **`ctx.request_id -> str`**: Get the unique ID for the current MCP request
- **`ctx.client_id -> str | None`**: Get the ID of the client making the request, if provided during initialization
### Advanced Access
For advanced use cases, you can access the underlying MCP session and FastMCP server.
```python
@mcp.tool()
async def advanced_tool(ctx: Context) -> str:
"""Demonstrate advanced context access."""
# Access the FastMCP server instance
server_name = ctx.fastmcp.name
# Low-level session access (rarely needed)
session = ctx.session
request_context = ctx.request_context
return f"Server: {server_name}"
```
**Advanced Properties:**
- **`ctx.fastmcp -> FastMCP`**: Access the server instance the context belongs to
- **`ctx.session`**: Access the raw `mcp.server.session.ServerSession` object
- **`ctx.request_context`**: Access the raw `mcp.shared.context.RequestContext` object
Direct use of `session` or `request_context` requires understanding the low-level MCP Python SDK and may be less stable than using the methods provided directly on the `Context` object.
## Using Context in Different Components
All FastMCP components (tools, resources, templates, and prompts) can use the Context object following the same pattern - simply add a parameter with the `Context` type annotation.
### Context in Resources and Templates
Resources and resource templates can access context to customize their behavior:
```python
@mcp.resource("resource://user-data")
async def get_user_data(ctx: Context) -> dict:
"""Fetch personalized user data based on the request context."""
user_id = ctx.client_id or "anonymous"
await ctx.info(f"Fetching data for user {user_id}")
# Example of using context for dynamic resource generation
return {
"user_id": user_id,
"last_access": datetime.now().isoformat(),
"request_id": ctx.request_id
}
@mcp.resource("resource://users/{user_id}/profile")
async def get_user_profile(user_id: str, ctx: Context) -> dict:
"""Fetch user profile from database with context-aware logging."""
await ctx.info(f"Fetching profile for user {user_id}")
# Example of using context in a template resource
# In a real implementation, you might query a database
return {
"id": user_id,
"name": f"User {user_id}",
"request_id": ctx.request_id
}
```
### Context in Prompts
Prompts can use context to generate more dynamic templates:
```python
@mcp.prompt()
async def data_analysis_request(dataset: str, ctx: Context) -> str:
"""Generate a request to analyze data with contextual information."""
await ctx.info(f"Generating data analysis prompt for {dataset}")
# Could use context to read configuration or personalize the prompt
return f"""Please analyze the following dataset: {dataset}
Request initiated at: {datetime.now().isoformat()}
Request ID: {ctx.request_id}
"""
```
All FastMCP objects now support context injection using the same consistent pattern, making it easy to add session-aware capabilities to all aspects of your MCP server.
## /docs/servers/fastmcp.mdx
---
title: The FastMCP Server
sidebarTitle: FastMCP Server
description: Learn about the core FastMCP server class and how to run it.
icon: server
---
import { VersionBadge } from "/snippets/version-badge.mdx"
The central piece of a FastMCP application is the `FastMCP` server class. This class acts as the main container for your application's tools, resources, and prompts, and manages communication with MCP clients.
## Creating a Server
Instantiating a server is straightforward. You typically provide a name for your server, which helps identify it in client applications or logs.
```python
from fastmcp import FastMCP
# Create a basic server instance
mcp = FastMCP(name="MyAssistantServer")
# You can also add instructions for how to interact with the server
mcp_with_instructions = FastMCP(
name="HelpfulAssistant",
instructions="This server provides data analysis tools. Call get_average() to analyze numerical data."
)
```
The `FastMCP` constructor accepts several arguments:
* `name`: (Optional) A human-readable name for your server. Defaults to "FastMCP".
* `instructions`: (Optional) Description of how to interact with this server. These instructions help clients understand the server's purpose and available functionality.
* `lifespan`: (Optional) An async context manager function for server startup and shutdown logic.
* `tags`: (Optional) A set of strings to tag the server itself.
* `**settings`: Keyword arguments corresponding to additional `ServerSettings` configuration.
## Components
FastMCP servers expose several types of components to the client:
### Tools
Tools are functions that the client can call to perform actions or access external systems.
```python
@mcp.tool()
def multiply(a: float, b: float) -> float:
"""Multiplies two numbers together."""
return a * b
```
See [Tools](/servers/tools) for detailed documentation.
### Resources
Resources expose data sources that the client can read.
```python
@mcp.resource("data://config")
def get_config() -> dict:
"""Provides the application configuration."""
return {"theme": "dark", "version": "1.0"}
```
See [Resources & Templates](/servers/resources) for detailed documentation.
### Resource Templates
Resource templates are parameterized resources that allow the client to request specific data.
```python
@mcp.resource("users://{user_id}/profile")
def get_user_profile(user_id: int) -> dict:
"""Retrieves a user's profile by ID."""
# The {user_id} in the URI is extracted and passed to this function
return {"id": user_id, "name": f"User {user_id}", "status": "active"}
```
See [Resources & Templates](/servers/resources) for detailed documentation.
### Prompts
Prompts are reusable message templates for guiding the LLM.
```python
@mcp.prompt()
def analyze_data(data_points: list[float]) -> str:
"""Creates a prompt asking for analysis of numerical data."""
formatted_data = ", ".join(str(point) for point in data_points)
return f"Please analyze these data points: {formatted_data}"
```
See [Prompts](/servers/prompts) for detailed documentation.
## Running the Server
FastMCP servers need a transport mechanism to communicate with clients. In the MCP protocol, servers typically run as separate processes that clients connect to.
### The `__main__` Block Pattern
The standard way to make your server executable is to include a `run()` call inside an `if __name__ == "__main__":` block:
```python
# my_server.py
from fastmcp import FastMCP
mcp = FastMCP(name="MyServer")
@mcp.tool()
def greet(name: str) -> str:
"""Greet a user by name."""
return f"Hello, {name}!"
if __name__ == "__main__":
# This code only runs when the file is executed directly
# Basic run with default settings (stdio transport)
mcp.run()
# Or with specific transport and parameters
# mcp.run(transport="sse", host="127.0.0.1", port=9000)
```
This pattern is important because:
1. **Client Compatibility**: Standard MCP clients (like Claude Desktop) expect to execute your server file directly with `python my_server.py`
2. **Process Isolation**: Each server runs in its own process, allowing clients to manage multiple servers independently
3. **Import Safety**: The main block prevents the server from running when the file is imported by other code
While this pattern is technically optional when using FastMCP's CLI, it's considered a best practice for maximum compatibility with all MCP clients.
### Transport Options
FastMCP supports two transport mechanisms:
#### STDIO Transport (Default)
The standard input/output (STDIO) transport is the default and most widely compatible option:
```python
# Run with stdio (default)
mcp.run() # or explicitly: mcp.run(transport="stdio")
```
With STDIO:
- The client starts a new server process for each session
- Communication happens through standard input/output streams
- The server process terminates when the client disconnects
- This is ideal for integrations with tools like Claude Desktop, where each conversation gets its own server instance
#### SSE Transport (Server-Sent Events)
For long-running servers that serve multiple clients, FastMCP supports SSE:
```python
# Run with SSE on default host/port (0.0.0.0:8000)
mcp.run(transport="sse")
```
With SSE:
- The server runs as a persistent web server
- Multiple clients can connect simultaneously
- The server stays running until explicitly terminated
- This is ideal for remote access to services
You can configure transport parameters directly when running the server:
```python
# Configure with specific parameters
mcp.run(
transport="sse",
host="127.0.0.1", # Override default host
port=8888, # Override default port
log_level="debug" # Set logging level
)
# You can also run asynchronously with the same parameters
import asyncio
asyncio.run(
mcp.run_sse_async(
host="127.0.0.1",
port=8888,
log_level="debug"
)
)
```
Transport parameters passed to `run()` or `run_sse_async()` override any settings defined when creating the FastMCP instance. The most common parameters for SSE transport are:
- `host`: Host to bind to (default: "0.0.0.0")
- `port`: Port to bind to (default: 8000)
- `log_level`: Logging level (default: "INFO")
#### Advanced Transport Configuration
Under the hood, FastMCP's `run()` method accepts arbitrary keyword arguments (`**transport_kwargs`) that are passed to the transport-specific run methods:
```python
# For SSE transport, kwargs are passed to run_sse_async()
mcp.run(transport="sse", **transport_kwargs)
# For stdio transport, kwargs are passed to run_stdio_async()
mcp.run(transport="stdio", **transport_kwargs)
```
This means that any future transport-specific options will be automatically available through the same interface without requiring changes to your code.
### Using the FastMCP CLI
The FastMCP CLI provides a convenient way to run servers:
```bash
# Run a server (defaults to stdio transport)
fastmcp run my_server.py:mcp
# Explicitly specify a transport
fastmcp run my_server.py:mcp --transport sse
# Configure SSE transport with host and port
fastmcp run my_server.py:mcp --transport sse --host 127.0.0.1 --port 8888
# With log level
fastmcp run my_server.py:mcp --transport sse --log-level DEBUG
```
The CLI can dynamically find and run FastMCP server objects in your files, but including the `if __name__ == "__main__":` block ensures compatibility with all clients.
## Composing Servers
FastMCP supports composing multiple servers together using `import_server` (static copy) and `mount` (live link). This allows you to organize large applications into modular components or reuse existing servers.
See the [Server Composition](/patterns/composition) guide for full details, best practices, and examples.
```python
# Example: Mounting a subserver
from fastmcp import FastMCP
main = FastMCP(name="Main")
sub = FastMCP(name="Sub")
@sub.tool()
def hello():
return "hi"
main.mount("sub", sub)
```
## Proxying Servers
FastMCP can act as a proxy for any MCP server (local or remote) using `FastMCP.from_client`, letting you bridge transports or add a frontend to existing servers. For example, you can expose a remote SSE server locally via stdio, or vice versa.
See the [Proxying Servers](/patterns/proxy) guide for details and advanced usage.
```python
from fastmcp import FastMCP, Client
backend = Client("http://example.com/mcp/sse")
proxy = FastMCP.from_client(backend, name="ProxyServer")
# Now use the proxy like any FastMCP server
```
## Server Configuration
Server behavior, like transport settings (host, port for SSE) and how duplicate components are handled, can be configured via `ServerSettings`. These settings can be passed during `FastMCP` initialization, set via environment variables (prefixed with `FASTMCP_SERVER_`), or loaded from a `.env` file.
```python
from fastmcp import FastMCP
# Configure during initialization
mcp = FastMCP(
name="ConfiguredServer",
port=8080, # Directly maps to ServerSettings
on_duplicate_tools="error" # Set duplicate handling
)
# Settings are accessible via mcp.settings
print(mcp.settings.port) # Output: 8080
print(mcp.settings.on_duplicate_tools) # Output: "error"
```
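The same settings can also be supplied through the environment. As a sketch (variable names assume the `FASTMCP_SERVER_` prefix described above, applied to the settings shown in the Python example):

```bash
# Equivalent configuration via environment variables or a .env file
export FASTMCP_SERVER_PORT=8080
export FASTMCP_SERVER_ON_DUPLICATE_TOOLS=error
```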
### Key Configuration Options
- **`host`**: Host address for SSE transport (default: "0.0.0.0")
- **`port`**: Port number for SSE transport (default: 8000)
- **`log_level`**: Logging level (default: "INFO")
- **`on_duplicate_tools`**: How to handle duplicate tool registrations
- **`on_duplicate_resources`**: How to handle duplicate resource registrations
- **`on_duplicate_prompts`**: How to handle duplicate prompt registrations
All of these can be configured directly as parameters when creating the `FastMCP` instance.
## /docs/servers/prompts.mdx
---
title: Prompts
sidebarTitle: Prompts
description: Create reusable, parameterized prompt templates for MCP clients.
icon: message-lines
---
import { VersionBadge } from "/snippets/version-badge.mdx"
Prompts are reusable message templates that help LLMs generate structured, purposeful responses. FastMCP simplifies defining these templates, primarily using the `@mcp.prompt` decorator.
## What Are Prompts?
Prompts provide parameterized message templates for LLMs. When a client requests a prompt:
1. FastMCP finds the corresponding prompt definition.
2. If it has parameters, they are validated against your function signature.
3. Your function executes with the validated inputs.
4. The generated message(s) are returned to the LLM to guide its response.
This allows you to define consistent, reusable templates that LLMs can use across different clients and contexts.
## Prompts
### The `@prompt` Decorator
The most common way to define a prompt is by decorating a Python function. The decorator uses the function name as the prompt's identifier.
```python
from fastmcp import FastMCP
from fastmcp.prompts.prompt import UserMessage, AssistantMessage, Message
mcp = FastMCP(name="PromptServer")
# Basic prompt returning a string (converted to UserMessage)
@mcp.prompt()
def ask_about_topic(topic: str) -> str:
"""Generates a user message asking for an explanation of a topic."""
return f"Can you please explain the concept of '{topic}'?"
# Prompt returning a specific message type
@mcp.prompt()
def generate_code_request(language: str, task_description: str) -> UserMessage:
"""Generates a user message requesting code generation."""
content = f"Write a {language} function that performs the following task: {task_description}"
return UserMessage(content=content)
```
**Key Concepts:**
* **Name:** By default, the prompt name is taken from the function name.
* **Parameters:** The function parameters define the inputs needed to generate the prompt.
* **Inferred Metadata:** By default:
* Prompt Name: Taken from the function name (`ask_about_topic`).
* Prompt Description: Taken from the function's docstring.
### Return Values
FastMCP intelligently handles different return types from your prompt function:
- **`str`**: Automatically converted to a single `UserMessage`.
- **`Message`** (e.g., `UserMessage`, `AssistantMessage`): Used directly as provided.
- **`dict`**: Parsed as a `Message` object if it has the correct structure.
- **`list[Message]`**: Used as a sequence of messages (a conversation).
```python
@mcp.prompt()
def roleplay_scenario(character: str, situation: str) -> list[Message]:
"""Sets up a roleplaying scenario with initial messages."""
return [
UserMessage(f"Let's roleplay. You are {character}. The situation is: {situation}"),
AssistantMessage("Okay, I understand. I am ready. What happens next?")
]
@mcp.prompt()
def ask_for_feedback() -> dict:
"""Generates a user message asking for feedback."""
return {"role": "user", "content": "What did you think of my previous response?"}
```
### Type Annotations
Type annotations are important for prompts. They:
1. Inform FastMCP about the expected types for each parameter.
2. Allow validation of parameters received from clients.
3. Are used to generate the prompt's schema for the MCP protocol.
```python
from pydantic import Field
from typing import Literal, Optional
@mcp.prompt()
def generate_content_request(
topic: str = Field(description="The main subject to cover"),
format: Literal["blog", "email", "social"] = "blog",
tone: str = "professional",
word_count: Optional[int] = None
) -> str:
"""Create a request for generating content in a specific format."""
prompt = f"Please write a {format} post about {topic} in a {tone} tone."
if word_count:
prompt += f" It should be approximately {word_count} words long."
return prompt
```
### Required vs. Optional Parameters
Parameters in your function signature are considered **required** unless they have a default value.
```python
@mcp.prompt()
def data_analysis_prompt(
data_uri: str, # Required - no default value
analysis_type: str = "summary", # Optional - has default value
include_charts: bool = False # Optional - has default value
) -> str:
"""Creates a request to analyze data with specific parameters."""
prompt = f"Please perform a '{analysis_type}' analysis on the data found at {data_uri}."
if include_charts:
prompt += " Include relevant charts and visualizations."
return prompt
```
In this example, the client *must* provide `data_uri`. If `analysis_type` or `include_charts` are omitted, their default values will be used.
### Prompt Metadata
While FastMCP infers the name and description from your function, you can override these and add tags using arguments to the `@mcp.prompt` decorator:
```python
@mcp.prompt(
name="analyze_data_request", # Custom prompt name
description="Creates a request to analyze data with specific parameters", # Custom description
tags={"analysis", "data"} # Optional categorization tags
)
def data_analysis_prompt(
data_uri: str = Field(description="The URI of the resource containing the data."),
analysis_type: str = Field(default="summary", description="Type of analysis.")
) -> str:
"""This docstring is ignored when description is provided."""
return f"Please perform a '{analysis_type}' analysis on the data found at {data_uri}."
```
- **`name`**: Sets the explicit prompt name exposed via MCP.
- **`description`**: Provides the description exposed via MCP. If set, the function's docstring is ignored for this purpose.
- **`tags`**: A set of strings used to categorize the prompt. Clients *might* use tags to filter or group available prompts.
### Asynchronous Prompts
FastMCP seamlessly supports both standard (`def`) and asynchronous (`async def`) functions as prompts.
```python
import aiohttp

# Synchronous prompt
@mcp.prompt()
def simple_question(question: str) -> str:
"""Generates a simple question to ask the LLM."""
return f"Question: {question}"
# Asynchronous prompt
@mcp.prompt()
async def data_based_prompt(data_id: str) -> str:
"""Generates a prompt based on data that needs to be fetched."""
# In a real scenario, you might fetch data from a database or API
async with aiohttp.ClientSession() as session:
async with session.get(f"https://api.example.com/data/{data_id}") as response:
data = await response.json()
return f"Analyze this data: {data['content']}"
```
Use `async def` when your prompt function performs I/O operations like network requests, database queries, file I/O, or external service calls.
### Accessing MCP Context
Prompts can access additional MCP information and features through the `Context` object. To access it, add a parameter to your prompt function with a type annotation of `Context`:
```python {6}
from fastmcp import FastMCP, Context
mcp = FastMCP(name="PromptServer")
@mcp.prompt()
async def generate_report_request(report_type: str, ctx: Context) -> str:
"""Generates a request for a report."""
return f"Please create a {report_type} report. Request ID: {ctx.request_id}"
```
For full documentation on the Context object and all its capabilities, see the [Context documentation](/servers/context).
## Server Behavior
### Duplicate Prompts
You can configure how the FastMCP server handles attempts to register multiple prompts with the same name. Use the `on_duplicate_prompts` setting during `FastMCP` initialization.
```python
from fastmcp import FastMCP
mcp = FastMCP(
name="PromptServer",
on_duplicate_prompts="error" # Raise an error if a prompt name is duplicated
)
@mcp.prompt()
def greeting(): return "Hello, how can I help you today?"
# This registration attempt will raise a ValueError because
# "greeting" is already registered and the behavior is "error".
# @mcp.prompt()
# def greeting(): return "Hi there! What can I do for you?"
```
The duplicate behavior options are:
- `"warn"` (default): Logs a warning, and the new prompt replaces the old one.
- `"error"`: Raises a `ValueError`, preventing the duplicate registration.
- `"replace"`: Silently replaces the existing prompt with the new one.
- `"ignore"`: Keeps the original prompt and ignores the new registration attempt.
## /docs/servers/resources.mdx
---
title: Resources & Templates
sidebarTitle: Resources
description: Expose data sources and dynamic content generators to your MCP client.
icon: database
---
import { VersionBadge } from "/snippets/version-badge.mdx"
Resources represent data or files that an MCP client can read, and resource templates extend this concept by allowing clients to request dynamically generated resources based on parameters passed in the URI.
FastMCP simplifies defining both static and dynamic resources, primarily using the `@mcp.resource` decorator.
## What Are Resources?
Resources provide read-only access to data for the LLM or client application. When a client requests a resource URI:
1. FastMCP finds the corresponding resource definition.
2. If it's dynamic (defined by a function), the function is executed.
3. The content (text, JSON, binary data) is returned to the client.
This allows LLMs to access files, database content, configuration, or dynamically generated information relevant to the conversation.
## Resources
### The `@resource` Decorator
The most common way to define a resource is by decorating a Python function. The decorator requires the resource's unique URI.
```python
import json
from fastmcp import FastMCP
mcp = FastMCP(name="DataServer")
# Basic dynamic resource returning a string
@mcp.resource("resource://greeting")
def get_greeting() -> str:
"""Provides a simple greeting message."""
return "Hello from FastMCP Resources!"
# Resource returning JSON data (dict is auto-serialized)
@mcp.resource("data://config")
def get_config() -> dict:
"""Provides application configuration as JSON."""
return {
"theme": "dark",
"version": "1.2.0",
"features": ["tools", "resources"],
}
```
**Key Concepts:**
* **URI:** The first argument to `@resource` is the unique URI (e.g., `"resource://greeting"`) clients use to request this data.
* **Lazy Loading:** The decorated function (`get_greeting`, `get_config`) is only executed when a client specifically requests that resource URI via `resources/read`.
* **Inferred Metadata:** By default:
* Resource Name: Taken from the function name (`get_greeting`).
* Resource Description: Taken from the function's docstring.
### Return Values
FastMCP automatically converts your function's return value into the appropriate MCP resource content:
- **`str`**: Sent as `TextResourceContents` (with `mime_type="text/plain"` by default).
- **`dict`, `list`, `pydantic.BaseModel`**: Automatically serialized to a JSON string and sent as `TextResourceContents` (with `mime_type="application/json"` by default).
- **`bytes`**: Base64 encoded and sent as `BlobResourceContents`. You should specify an appropriate `mime_type` (e.g., `"image/png"`, `"application/octet-stream"`).
- **`None`**: Results in an empty resource content list being returned.
### Resource Metadata
You can customize the resource's properties using arguments in the decorator:
```python
from fastmcp import FastMCP
mcp = FastMCP(name="DataServer")
# Example specifying metadata
@mcp.resource(
uri="data://app-status", # Explicit URI (required)
name="ApplicationStatus", # Custom name
description="Provides the current status of the application.", # Custom description
mime_type="application/json", # Explicit MIME type
tags={"monitoring", "status"} # Categorization tags
)
def get_application_status() -> dict:
"""Internal function description (ignored if description is provided above)."""
return {"status": "ok", "uptime": 12345, "version": mcp.settings.version} # Example usage
```
- **`uri`**: The unique identifier for the resource (required).
- **`name`**: A human-readable name (defaults to function name).
- **`description`**: Explanation of the resource (defaults to docstring).
- **`mime_type`**: Specifies the content type (FastMCP often infers a default like `text/plain` or `application/json`, but explicit is better for non-text types).
- **`tags`**: A set of strings for categorization, potentially used by clients for filtering.
### Accessing MCP Context
Resources and resource templates can access additional MCP information and features through the `Context` object. To access it, add a parameter to your resource function with a type annotation of `Context`:
```python {6, 14}
from fastmcp import FastMCP, Context
mcp = FastMCP(name="DataServer")
@mcp.resource("resource://system-status")
async def get_system_status(ctx: Context) -> dict:
"""Provides system status information."""
return {
"status": "operational",
"request_id": ctx.request_id
}
@mcp.resource("resource://{name}/details")
async def get_details(name: str, ctx: Context) -> dict:
"""Get details for a specific name."""
return {
"name": name,
"accessed_at": ctx.request_id
}
```
For full documentation on the Context object and all its capabilities, see the [Context documentation](/servers/context).
### Asynchronous Resources
Use `async def` for resource functions that perform I/O operations (e.g., reading from a database or network) to avoid blocking the server.
```python
import aiofiles
from fastmcp import FastMCP
mcp = FastMCP(name="DataServer")
@mcp.resource("file:///app/data/important_log.txt", mime_type="text/plain")
async def read_important_log() -> str:
"""Reads content from a specific log file asynchronously."""
try:
async with aiofiles.open("/app/data/important_log.txt", mode="r") as f:
content = await f.read()
return content
except FileNotFoundError:
return "Log file not found."
```
### Resource Classes
While `@mcp.resource` is ideal for dynamic content, you can directly register pre-defined resources (like static files or simple text) using `mcp.add_resource()` and concrete `Resource` subclasses.
```python
from pathlib import Path
from fastmcp import FastMCP
from fastmcp.resources import FileResource, TextResource, DirectoryResource
mcp = FastMCP(name="DataServer")
# 1. Exposing a static file directly
readme_path = Path("./README.md").resolve()
if readme_path.exists():
# Use a file:// URI scheme
readme_resource = FileResource(
uri=f"file://{readme_path.as_posix()}",
path=readme_path, # Path to the actual file
name="README File",
description="The project's README.",
mime_type="text/markdown",
tags={"documentation"}
)
mcp.add_resource(readme_resource)
# 2. Exposing simple, predefined text
notice_resource = TextResource(
uri="resource://notice",
name="Important Notice",
text="System maintenance scheduled for Sunday.",
tags={"notification"}
)
mcp.add_resource(notice_resource)
# 3. Using a custom key different from the URI
special_resource = TextResource(
uri="resource://common-notice",
name="Special Notice",
text="This is a special notice with a custom storage key.",
)
mcp.add_resource(special_resource, key="resource://custom-key")
# 4. Exposing a directory listing
data_dir_path = Path("./app_data").resolve()
if data_dir_path.is_dir():
data_listing_resource = DirectoryResource(
uri="resource://data-files",
path=data_dir_path, # Path to the directory
name="Data Directory Listing",
description="Lists files available in the data directory.",
recursive=False # Set to True to list subdirectories
)
    mcp.add_resource(data_listing_resource)  # Reading this resource returns a JSON list of files
```
**Common Resource Classes:**
- `TextResource`: For simple string content.
- `BinaryResource`: For raw `bytes` content.
- `FileResource`: Reads content from a local file path. Handles text/binary modes and lazy reading.
- `HttpResource`: Fetches content from an HTTP(S) URL (requires `httpx`).
- `DirectoryResource`: Lists files in a local directory (returns JSON).
- (`FunctionResource`: Internal class used by `@mcp.resource`).
Use these when the content is static or sourced directly from a file/URL, bypassing the need for a dedicated Python function.
#### Custom Resource Keys
When adding resources directly with `mcp.add_resource()`, you can optionally provide a custom storage key:
```python
# Creating a resource with standard URI as the key
resource = TextResource(uri="resource://data")
mcp.add_resource(resource) # Will be stored and accessed using "resource://data"
# Creating a resource with a custom key
special_resource = TextResource(uri="resource://special-data")
mcp.add_resource(special_resource, key="internal://data-v2") # Will be stored and accessed using "internal://data-v2"
```
Note that this parameter is only available when using `add_resource()` directly and not through the `@resource` decorator, as URIs are provided explicitly when using the decorator.
## Resource Templates
Resource Templates allow clients to request resources whose content depends on parameters embedded in the URI. Define a template using the **same `@mcp.resource` decorator**, but include `{parameter_name}` placeholders in the URI string and add corresponding arguments to your function signature.
Resource templates share most configuration options with regular resources (name, description, mime_type, tags), but add the ability to define URI parameters that map to function parameters.
Resource templates generate a new resource for each unique set of parameters, which means that resources can be dynamically created on-demand. For example, if the resource template `"user://profile/{name}"` is registered, MCP clients could request `"user://profile/ford"` or `"user://profile/marvin"` to retrieve either of those two user profiles as resources, without having to register each resource individually.
Here is a complete example that shows how to define two resource templates:
```python
from fastmcp import FastMCP
mcp = FastMCP(name="DataServer")
# Template URI includes {city} placeholder
@mcp.resource("weather://{city}/current")
def get_weather(city: str) -> dict:
"""Provides weather information for a specific city."""
# In a real implementation, this would call a weather API
# Here we're using simplified logic for example purposes
return {
"city": city.capitalize(),
"temperature": 22,
"condition": "Sunny",
"unit": "celsius"
}
# Template with multiple parameters
@mcp.resource("repos://{owner}/{repo}/info")
def get_repo_info(owner: str, repo: str) -> dict:
"""Retrieves information about a GitHub repository."""
# In a real implementation, this would call the GitHub API
return {
"owner": owner,
"name": repo,
"full_name": f"{owner}/{repo}",
"stars": 120,
"forks": 48
}
```
With these two templates defined, clients can request a variety of resources:
- `weather://london/current` → Returns weather for London
- `weather://paris/current` → Returns weather for Paris
- `repos://jlowin/fastmcp/info` → Returns info about the jlowin/fastmcp repository
- `repos://prefecthq/prefect/info` → Returns info about the prefecthq/prefect repository
### Wildcard Parameters
Please note: FastMCP's support for wildcard parameters is an **extension** of the Model Context Protocol standard, which otherwise follows RFC 6570. Since all template processing happens in the FastMCP server, this should not cause any compatibility issues with other MCP implementations.
Resource templates support wildcard parameters that can match multiple path segments. While standard parameters (`{param}`) only match a single path segment and don't cross "/" boundaries, wildcard parameters (`{param*}`) can capture multiple segments including slashes. Wildcards capture all subsequent path segments *up until* the defined part of the URI template (whether literal or another parameter). This allows you to have multiple wildcard parameters in a single URI template.
```python {15, 23}
from fastmcp import FastMCP
mcp = FastMCP(name="DataServer")
# Standard parameter only matches one segment
@mcp.resource("files://{filename}")
def get_file(filename: str) -> str:
    """Retrieves a file by name."""
    # Will only match a single segment, e.g. files://readme.txt (not files://dir/readme.txt)
    return f"File content for: {filename}"
# Wildcard parameter can match multiple segments
@mcp.resource("path://{filepath*}")
def get_path_content(filepath: str) -> str:
"""Retrieves content at a specific path."""
# Can match path://docs/server/resources.mdx
return f"Content at path: {filepath}"
# Mixing standard and wildcard parameters
@mcp.resource("repo://{owner}/{path*}/template.py")
def get_template_file(owner: str, path: str) -> dict:
"""Retrieves a file from a specific repository and path, but
only if the resource ends with `template.py`"""
# Can match repo://jlowin/fastmcp/src/resources/template.py
return {
"owner": owner,
"path": path + "/template.py",
"content": f"File at {path}/template.py in {owner}'s repository"
}
```
Wildcard parameters are useful when:
- Working with file paths or hierarchical data
- Creating APIs that need to capture variable-length path segments
- Building URL-like patterns similar to REST APIs
Note that like regular parameters, each wildcard parameter must still be a named parameter in your function signature, and all required function parameters must appear in the URI template.
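To build intuition for these matching rules, here is a standalone sketch that models them with regular expressions — purely illustrative, not FastMCP's actual implementation:

```python
import re

def template_to_regex(template: str) -> re.Pattern:
    """Illustrative: {param} matches one path segment, {param*} spans segments."""
    pattern = re.escape(template)
    # Wildcard parameters ({param*}) may cross "/" boundaries...
    pattern = re.sub(r"\\\{(\w+)\\\*\\\}", r"(?P<\1>.+)", pattern)
    # ...while standard parameters ({param}) match a single segment only.
    pattern = re.sub(r"\\\{(\w+)\\\}", r"(?P<\1>[^/]+)", pattern)
    return re.compile(f"^{pattern}$")

match = template_to_regex("path://{filepath*}").match("path://docs/server/resources.mdx")
print(match.group("filepath"))  # docs/server/resources.mdx
```

With this model, `files://{filename}` rejects `files://a/b` (the `[^/]+` group cannot cross the slash), while `path://{filepath*}` captures the whole remainder.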
### Default Values
When creating resource templates, FastMCP enforces two rules for the relationship between URI template parameters and function parameters:
1. **Required Function Parameters:** All function parameters without default values (required parameters) must appear in the URI template.
2. **URI Parameters:** All URI template parameters must exist as function parameters.
However, function parameters with default values don't need to be included in the URI template. When a client requests a resource, FastMCP will:
- Extract parameter values from the URI for parameters included in the template
- Use default values for any function parameters not in the URI template
This allows for flexible API designs. For example, a simple search template with optional parameters:
```python
from fastmcp import FastMCP
mcp = FastMCP(name="DataServer")
@mcp.resource("search://{query}")
def search_resources(query: str, max_results: int = 10, include_archived: bool = False) -> dict:
"""Search for resources matching the query string."""
# Only 'query' is required in the URI, the other parameters use their defaults
    results = perform_search(query, limit=max_results, archived=include_archived)  # pseudocode
return {
"query": query,
"max_results": max_results,
"include_archived": include_archived,
"results": results
}
```
With this template, clients can request `search://python` and the function will be called with `query="python", max_results=10, include_archived=False`. Developers can still call the underlying `search_resources` function directly with more specific parameters.
An even more powerful pattern is registering a single function with multiple URI templates, allowing different ways to access the same data:
```python
from fastmcp import FastMCP
mcp = FastMCP(name="DataServer")
# Define a user lookup function that can be accessed by different identifiers
@mcp.resource("users://email/{email}")
@mcp.resource("users://name/{name}")
def lookup_user(name: str | None = None, email: str | None = None) -> dict:
"""Look up a user by either name or email."""
if email:
return find_user_by_email(email) # pseudocode
elif name:
return find_user_by_name(name) # pseudocode
else:
return {"error": "No lookup parameters provided"}
```
Now an LLM or client can retrieve user information in two different ways:
- `users://email/alice@example.com` → Looks up user by email (with name=None)
- `users://name/Bob` → Looks up user by name (with email=None)
In this stacked decorator pattern:
- The `name` parameter is only provided when using the `users://name/{name}` template
- The `email` parameter is only provided when using the `users://email/{email}` template
- Each parameter defaults to `None` when not included in the URI
- The function logic handles whichever parameter is provided
**How Templates Work:**
1. **Definition:** When FastMCP sees `{...}` placeholders in the `@resource` URI and matching function parameters, it registers a `ResourceTemplate`.
2. **Discovery:** Clients list available templates via the MCP `resources/templates/list` request.
3. **Request & Matching:** A client requests a specific URI, e.g., `weather://london/current`. FastMCP matches this to the `weather://{city}/current` template.
4. **Parameter Extraction:** It extracts the parameter value: `city="london"`.
5. **Type Conversion & Function Call:** It converts extracted values to the types hinted in the function and calls `get_weather(city="london")`.
6. **Default Values:** For any function parameters with default values not included in the URI template, FastMCP uses the default values.
7. **Response:** The function's return value is formatted (e.g., dict to JSON) and sent back as the resource content.
Templates provide a powerful way to expose parameterized data access points following REST-like principles.
## Server Behavior
### Duplicate Resources
You can configure how the FastMCP server handles attempts to register multiple resources or templates with the same URI. Use the `on_duplicate_resources` setting during `FastMCP` initialization.
```python
from fastmcp import FastMCP
mcp = FastMCP(
name="ResourceServer",
on_duplicate_resources="error" # Raise error on duplicates
)
@mcp.resource("data://config")
def get_config_v1(): return {"version": 1}
# This registration attempt will raise a ValueError because
# "data://config" is already registered and the behavior is "error".
# @mcp.resource("data://config")
# def get_config_v2(): return {"version": 2}
```
The duplicate behavior options are:
- `"warn"` (default): Logs a warning, and the new resource/template replaces the old one.
- `"error"`: Raises a `ValueError`, preventing the duplicate registration.
- `"replace"`: Silently replaces the existing resource/template with the new one.
- `"ignore"`: Keeps the original resource/template and ignores the new registration attempt.
## /docs/servers/tools.mdx
---
title: Tools
sidebarTitle: Tools
description: Expose functions as executable capabilities for your MCP client.
icon: wrench
---
Tools are the core building blocks that allow your LLM to interact with external systems, execute code, and access data that isn't in its training data. In FastMCP, tools are Python functions exposed to LLMs through the MCP protocol.
## What Are Tools?
Tools in FastMCP transform regular Python functions into capabilities that LLMs can invoke during conversations. When an LLM decides to use a tool:
1. It sends a request with parameters based on the tool's schema.
2. FastMCP validates these parameters against your function's signature.
3. Your function executes with the validated inputs.
4. The result is returned to the LLM, which can use it in its response.
This allows LLMs to perform tasks like querying databases, calling APIs, making calculations, or accessing files—extending their capabilities beyond what's in their training data.
## Tools
### The `@tool` Decorator
Creating a tool is as simple as decorating a Python function with `@mcp.tool()`:
```python
from fastmcp import FastMCP
mcp = FastMCP(name="CalculatorServer")
@mcp.tool()
def add(a: int, b: int) -> int:
"""Adds two integer numbers together."""
return a + b
```
When this tool is registered, FastMCP automatically:
- Uses the function name (`add`) as the tool name.
- Uses the function's docstring (`Adds two integer numbers...`) as the tool description.
- Generates an input schema based on the function's parameters and type annotations.
- Handles parameter validation and error reporting.
The way you define your Python function dictates how the tool appears and behaves for the LLM client.
### Parameters
#### Annotations
Type annotations for parameters are essential for proper tool functionality. They:
1. Inform the LLM about the expected data types for each parameter
2. Enable FastMCP to validate input data from clients
3. Generate accurate JSON schemas for the MCP protocol
Use standard Python type annotations for parameters:
```python
@mcp.tool()
def analyze_text(
text: str,
max_tokens: int = 100,
language: str | None = None
) -> dict:
"""Analyze the provided text."""
# Implementation...
```
#### Parameter Metadata
You can provide additional metadata about parameters using Pydantic's `Field` class with `Annotated`. This approach is preferred as it's more modern and keeps type hints separate from validation rules:
```python
from typing import Annotated, Literal
from pydantic import Field
@mcp.tool()
def process_image(
image_url: Annotated[str, Field(description="URL of the image to process")],
resize: Annotated[bool, Field(description="Whether to resize the image")] = False,
width: Annotated[int, Field(description="Target width in pixels", ge=1, le=2000)] = 800,
format: Annotated[
Literal["jpeg", "png", "webp"],
Field(description="Output image format")
] = "jpeg"
) -> dict:
"""Process an image with optional resizing."""
# Implementation...
```
You can also use the Field as a default value, though the Annotated approach is preferred:
```python
@mcp.tool()
def search_database(
query: str = Field(description="Search query string"),
limit: int = Field(10, description="Maximum number of results", ge=1, le=100)
) -> list:
"""Search the database with the provided query."""
# Implementation...
```
Field provides several validation and documentation features:
- `description`: Human-readable explanation of the parameter (shown to LLMs)
- `ge`/`gt`/`le`/`lt`: Greater/less than (or equal) constraints
- `min_length`/`max_length`: String or collection length constraints
- `pattern`: Regex pattern for string validation
- `default`: Default value if parameter is omitted
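To see what these constraints do at runtime, here is how Pydantic itself enforces them — plain Pydantic outside FastMCP, for illustration only:

```python
from typing import Annotated
from pydantic import BaseModel, Field, ValidationError

class SearchParams(BaseModel):
    query: Annotated[str, Field(min_length=1, description="Search query string")]
    limit: Annotated[int, Field(ge=1, le=100)] = 10

print(SearchParams(query="python").limit)  # 10

try:
    SearchParams(query="python", limit=500)
except ValidationError:
    print("rejected: limit must be between 1 and 100")
```

FastMCP applies the same validation to tool arguments before your function runs, so out-of-range values are rejected with an error rather than reaching your code.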
#### Supported Types
FastMCP supports a wide range of type annotations, including all Pydantic types:
| Type Annotation | Example | Description |
| :---------------------- | :---------------------------- | :---------------------------------- |
| Basic types | `int`, `float`, `str`, `bool` | Simple scalar values - see [Built-in Types](#built-in-types) |
| Binary data | `bytes` | Binary content - see [Binary Data](#binary-data) |
| Date and Time | `datetime`, `date`, `timedelta` | Date and time objects - see [Date and Time Types](#date-and-time-types) |
| Collection types | `list[str]`, `dict[str, int]`, `set[int]` | Collections of items - see [Collection Types](#collection-types) |
| Optional types | `float \| None`, `Optional[float]`| Parameters that may be null/omitted - see [Union and Optional Types](#union-and-optional-types) |
| Union types | `str \| int`, `Union[str, int]`| Parameters accepting multiple types - see [Union and Optional Types](#union-and-optional-types) |
| Constrained types | `Literal["A", "B"]`, `Enum` | Parameters with specific allowed values - see [Constrained Types](#constrained-types) |
| Paths | `Path` | File system paths - see [Paths](#paths) |
| UUIDs | `UUID` | Universally unique identifiers - see [UUIDs](#uuids) |
| Pydantic models | `UserData` | Complex structured data - see [Pydantic Models](#pydantic-models) |
For detailed information and examples covering these and additional type annotations, see the [Parameter Types](#parameter-types) section below.
#### Optional Arguments
FastMCP follows Python's standard function parameter conventions. Parameters without default values are required, while those with default values are optional.
```python
@mcp.tool()
def search_products(
query: str, # Required - no default value
max_results: int = 10, # Optional - has default value
sort_by: str = "relevance", # Optional - has default value
category: str | None = None # Optional - can be None
) -> list[dict]:
"""Search the product catalog."""
# Implementation...
```
In this example, the LLM must provide a `query` parameter, while `max_results`, `sort_by`, and `category` will use their default values if not explicitly provided.
### Metadata
While FastMCP infers the name and description from your function, you can override these and add tags using arguments to the `@mcp.tool` decorator:
```python
@mcp.tool(
name="find_products", # Custom tool name for the LLM
description="Search the product catalog with optional category filtering.", # Custom description
tags={"catalog", "search"} # Optional tags for organization/filtering
)
def search_products_implementation(query: str, category: str | None = None) -> list[dict]:
"""Internal function description (ignored if description is provided above)."""
# Implementation...
print(f"Searching for '{query}' in category '{category}'")
return [{"id": 2, "name": "Another Product"}]
```
- **`name`**: Sets the explicit tool name exposed via MCP.
- **`description`**: Provides the description exposed via MCP. If set, the function's docstring is ignored for this purpose.
- **`tags`**: A set of strings used to categorize the tool. Clients *might* use tags to filter or group available tools.
### Async Tools
FastMCP seamlessly supports both standard (`def`) and asynchronous (`async def`) functions as tools.
```python
# Synchronous tool (suitable for CPU-bound or quick tasks)
@mcp.tool()
def calculate_distance(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
"""Calculate the distance between two coordinates."""
# Implementation...
return 42.5
import aiohttp  # third-party HTTP client used in the async example below

# Asynchronous tool (ideal for I/O-bound operations)
@mcp.tool()
async def fetch_weather(city: str) -> dict:
"""Retrieve current weather conditions for a city."""
# Use 'async def' for operations involving network calls, file I/O, etc.
# This prevents blocking the server while waiting for external operations.
async with aiohttp.ClientSession() as session:
async with session.get(f"https://api.example.com/weather/{city}") as response:
# Check response status before returning
response.raise_for_status()
return await response.json()
```
Use `async def` when your tool needs to perform operations that might wait for external systems (network requests, database queries, file access) to keep your server responsive.
### Return Values
FastMCP automatically converts the value returned by your function into the appropriate MCP content format for the client:
- **`str`**: Sent as `TextContent`.
- **`dict`, `list`, Pydantic `BaseModel`**: Serialized to a JSON string and sent as `TextContent`.
- **`bytes`**: Base64 encoded and sent as `BlobResourceContents` (often within an `EmbeddedResource`).
- **`fastmcp.Image`**: A helper class for easily returning image data. Sent as `ImageContent`.
- **`None`**: Results in an empty response (no content is sent back to the client).
FastMCP will attempt to serialize other types to a string if possible.
At this time, FastMCP responds only to your tool's return *value*, not its return *annotation*.
```python
from fastmcp import FastMCP, Image
import io
try:
from PIL import Image as PILImage
except ImportError:
raise ImportError("Please install the `pillow` library to run this example.")
mcp = FastMCP("Image Demo")
@mcp.tool()
def generate_image(width: int, height: int, color: str) -> Image:
"""Generates a solid color image."""
# Create image using Pillow
img = PILImage.new("RGB", (width, height), color=color)
# Save to a bytes buffer
buffer = io.BytesIO()
img.save(buffer, format="PNG")
img_bytes = buffer.getvalue()
# Return using FastMCP's Image helper
return Image(data=img_bytes, format="png")
@mcp.tool()
def do_nothing() -> None:
"""This tool performs an action but returns no data."""
print("Performing a side effect...")
return None
```
### Error Handling
If your tool encounters an error, simply raise a standard Python exception (`ValueError`, `TypeError`, `FileNotFoundError`, custom exceptions, etc.).
```python
@mcp.tool()
def divide(a: float, b: float) -> float:
"""Divide a by b."""
if b == 0:
# Raise a standard exception
raise ValueError("Division by zero is not allowed.")
if not isinstance(a, (int, float)) or not isinstance(b, (int, float)):
raise TypeError("Both arguments must be numbers.")
return a / b
```
FastMCP automatically catches exceptions raised within your tool function:
1. It converts the exception into an MCP error response, typically including the exception type and message.
2. This error response is sent back to the client/LLM.
3. The LLM can then inform the user or potentially try the tool again with different arguments.
Using informative exceptions helps the LLM understand failures and react appropriately.
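The catch-and-convert flow can be sketched in plain Python; note the payload shape below is illustrative only, not the exact MCP wire format:

```python
def call_tool_safely(fn, **arguments):
    """Invoke a tool, converting any exception into an error payload
    instead of crashing the server (illustrative payload shape)."""
    try:
        return {"is_error": False, "content": fn(**arguments)}
    except Exception as exc:
        return {"is_error": True, "content": f"{type(exc).__name__}: {exc}"}

def divide(a: float, b: float) -> float:
    """Divide a by b."""
    if b == 0:
        raise ValueError("Division by zero is not allowed.")
    return a / b

print(call_tool_safely(divide, a=10, b=2))
# {'is_error': False, 'content': 5.0}
print(call_tool_safely(divide, a=1, b=0))
# {'is_error': True, 'content': 'ValueError: Division by zero is not allowed.'}
```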
## MCP Context
Tools can access MCP features like logging, reading resources, or reporting progress through the `Context` object. To use it, add a parameter to your tool function with the type hint `Context`.
```python
from fastmcp import FastMCP, Context
mcp = FastMCP(name="ContextDemo")
@mcp.tool()
async def process_data(data_uri: str, ctx: Context) -> dict:
"""Process data from a resource with progress reporting."""
await ctx.info(f"Processing data from {data_uri}")
# Read a resource
resource = await ctx.read_resource(data_uri)
data = resource[0].content if resource else ""
# Report progress
await ctx.report_progress(progress=50, total=100)
# Example request to the client's LLM for help
summary = await ctx.sample(f"Summarize this in 10 words: {data[:200]}")
await ctx.report_progress(progress=100, total=100)
return {
"length": len(data),
"summary": summary.text
}
```
The Context object provides access to:
- **Logging**: `ctx.debug()`, `ctx.info()`, `ctx.warning()`, `ctx.error()`
- **Progress Reporting**: `ctx.report_progress(progress, total)`
- **Resource Access**: `ctx.read_resource(uri)`
- **LLM Sampling**: `ctx.sample(...)`
- **Request Information**: `ctx.request_id`, `ctx.client_id`
For full documentation on the Context object and all its capabilities, see the [Context documentation](/servers/context).
## Parameter Types
FastMCP supports a wide variety of parameter types to give you flexibility when designing your tools.
FastMCP generally supports all types that Pydantic supports as fields, including all Pydantic custom types. This means you can use any type that can be validated and parsed by Pydantic in your tool parameters.
FastMCP supports **type coercion** when possible. This means that if a client sends data that doesn't match the expected type, FastMCP will attempt to convert it to the appropriate type. For example, if a client sends a string for a parameter annotated as `int`, FastMCP will attempt to convert it to an integer. If the conversion is not possible, FastMCP will return a validation error.
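As a rough stdlib-only sketch of what coercion means here (FastMCP delegates the real work to Pydantic, which handles far more cases and produces structured validation errors):

```python
from typing import get_type_hints

def coerce_arguments(fn, arguments: dict) -> dict:
    """Best-effort coercion of incoming values to the annotated types.
    A failed conversion raises, which a server would report as a validation error."""
    hints = get_type_hints(fn)
    coerced = {}
    for name, value in arguments.items():
        target = hints.get(name)
        if isinstance(target, type) and not isinstance(value, target):
            value = target(value)
        coerced[name] = value
    return coerced

def scale(count: int, factor: float) -> float:
    return count * factor

# A client sends strings; the annotations drive the conversion
args = coerce_arguments(scale, {"count": "42", "factor": "1.5"})
print(args)           # {'count': 42, 'factor': 1.5}
print(scale(**args))  # 63.0
```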
### Built-in Types
The most common parameter types are Python's built-in scalar types:
```python
@mcp.tool()
def process_values(
name: str, # Text data
count: int, # Integer numbers
amount: float, # Floating point numbers
enabled: bool # Boolean values (True/False)
):
"""Process various value types."""
# Implementation...
```
These types provide clear expectations to the LLM about what values are acceptable and allow FastMCP to validate inputs properly. Even if a client provides a string like "42", it will be coerced to an integer for parameters annotated as `int`.
### Date and Time Types
FastMCP supports various date and time types from the `datetime` module:
```python
from datetime import datetime, date, timedelta
@mcp.tool()
def process_date_time(
event_date: date, # ISO format date string or date object
event_time: datetime, # ISO format datetime string or datetime object
duration: timedelta = timedelta(hours=1) # Integer seconds or timedelta
) -> str:
"""Process date and time information."""
# Types are automatically converted from strings
assert isinstance(event_date, date)
assert isinstance(event_time, datetime)
assert isinstance(duration, timedelta)
return f"Event on {event_date} at {event_time} for {duration}"
```
- `datetime` - Accepts ISO format strings (e.g., "2023-04-15T14:30:00")
- `date` - Accepts ISO format date strings (e.g., "2023-04-15")
- `timedelta` - Accepts integer seconds or timedelta objects
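These string and integer forms map directly onto stdlib parsing; the conversions a validator performs look like:

```python
from datetime import date, datetime, timedelta

# What happens to the incoming client values:
event_date = date.fromisoformat("2023-04-15")
event_time = datetime.fromisoformat("2023-04-15T14:30:00")
duration = timedelta(seconds=5400)  # integer seconds -> timedelta

print(event_date)  # 2023-04-15
print(event_time)  # 2023-04-15 14:30:00
print(duration)    # 1:30:00
```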
### Collection Types
FastMCP supports all standard Python collection types:
```python
@mcp.tool()
def analyze_data(
values: list[float], # List of numbers
properties: dict[str, str], # Dictionary with string keys and values
unique_ids: set[int], # Set of unique integers
coordinates: tuple[float, float], # Tuple with fixed structure
mixed_data: dict[str, list[int]] # Nested collections
):
"""Analyze collections of data."""
# Implementation...
```
All collection types can be used as parameter annotations:
- `list[T]` - Ordered sequence of items
- `dict[K, V]` - Key-value mapping
- `set[T]` - Unordered collection of unique items
- `tuple[T1, T2, ...]` - Fixed-length sequence with potentially different types
Collection types can be nested and combined to represent complex data structures. JSON strings that match the expected structure will be automatically parsed and converted to the appropriate Python collection type.
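For instance, a client might send the arguments of `analyze_data` as a JSON string; the parse-then-convert step looks like this with the stdlib (Pydantic additionally validates every element against the annotated item types):

```python
import json

# A client-supplied JSON string matching the expected structure
raw = '{"values": [1.5, 2.5], "unique_ids": [1, 2, 2], "coordinates": [40.7, -74.0]}'
payload = json.loads(raw)

values = [float(v) for v in payload["values"]]   # -> list[float]
unique_ids = set(payload["unique_ids"])          # -> set[int]; duplicates collapse
coordinates = tuple(payload["coordinates"])      # -> tuple[float, float]

print(values)       # [1.5, 2.5]
print(unique_ids)   # {1, 2}
print(coordinates)  # (40.7, -74.0)
```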
### Union and Optional Types
For parameters that can accept multiple types or may be omitted:
```python
@mcp.tool()
def flexible_search(
query: str | int, # Can be either string or integer
filters: dict[str, str] | None = None, # Optional dictionary
sort_field: str | None = None # Optional string
):
"""Search with flexible parameter types."""
# Implementation...
```
Modern Python syntax (`str | int`) is preferred over older `Union[str, int]` forms. Similarly, `str | None` is preferred over `Optional[str]`.
### Constrained Types
When a parameter must be one of a predefined set of values, you can use either Literal types or Enums:
#### Literals
Literals constrain parameters to a specific set of values:
```python
from typing import Literal
@mcp.tool()
def sort_data(
data: list[float],
order: Literal["ascending", "descending"] = "ascending",
algorithm: Literal["quicksort", "mergesort", "heapsort"] = "quicksort"
):
"""Sort data using specific options."""
# Implementation...
```
Literal types:
- Specify exact allowable values directly in the type annotation
- Help LLMs understand exactly which values are acceptable
- Provide input validation (errors for invalid values)
- Create clear schemas for clients
#### Enums
For more structured sets of constrained values, use Python's Enum class:
```python
from enum import Enum
class Color(Enum):
RED = "red"
GREEN = "green"
BLUE = "blue"
@mcp.tool()
def process_image(
image_path: str,
color_filter: Color = Color.RED
):
"""Process an image with a color filter."""
# Implementation...
# color_filter will be a Color enum member
```
When using Enum types:
- Clients should provide the enum's value (e.g., "red"), not the enum member name (e.g., "RED")
- FastMCP automatically coerces the string value into the appropriate Enum object
- Your function receives the actual Enum member (e.g., `Color.RED`)
- Validation errors are raised for values not in the enum
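The value-to-member coercion mirrors Python's own `Enum` call syntax, which is also where the validation error comes from:

```python
from enum import Enum

class Color(Enum):
    RED = "red"
    GREEN = "green"
    BLUE = "blue"

# A client sends the *value* "red"; coercion is the Enum call
member = Color("red")
print(member)  # Color.RED

# A value outside the enum raises ValueError -> surfaced as a validation error
try:
    Color("purple")
except ValueError as exc:
    print(exc)  # 'purple' is not a valid Color
```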
### Binary Data
There are two approaches to handling binary data in tool parameters:
#### Bytes
```python
@mcp.tool()
def process_binary(data: bytes):
"""Process binary data directly.
The client can send a binary string, which will be
converted directly to bytes.
"""
# Implementation using binary data
data_length = len(data)
# ...
```
When you annotate a parameter as `bytes`, FastMCP will:
- Convert raw strings directly to bytes
- Validate that the input can be properly represented as bytes
FastMCP does not automatically decode base64-encoded strings for bytes parameters. If you need to accept base64-encoded data, you should handle the decoding manually as shown below.
#### Base64-encoded strings
```python
from typing import Annotated
from pydantic import Field
@mcp.tool()
def process_image_data(
image_data: Annotated[str, Field(description="Base64-encoded image data")]
):
"""Process an image from base64-encoded string.
The client is expected to provide base64-encoded data as a string.
You'll need to decode it manually.
"""
# Manual base64 decoding
import base64
binary_data = base64.b64decode(image_data)
# Process binary_data...
```
This approach is recommended when you expect to receive base64-encoded binary data from clients.
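Pairing a client-side encode with the server-side decode shown above gives the full roundtrip (stdlib only; the byte payload here is just a stand-in):

```python
import base64

original = b"\x89PNG\r\n\x1a\n fake image bytes"  # stand-in binary payload

# Client side: binary -> base64 text, safe to embed in a JSON argument
image_data = base64.b64encode(original).decode("ascii")

# Server side (inside the tool): base64 text -> binary
binary_data = base64.b64decode(image_data)

print(binary_data == original)  # True
```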
### Paths
The `Path` type from the `pathlib` module can be used for file system paths:
```python
from pathlib import Path
@mcp.tool()
def process_file(path: Path) -> str:
"""Process a file at the given path."""
assert isinstance(path, Path) # Path is properly converted
return f"Processing file at {path}"
```
When a client sends a string path, FastMCP automatically converts it to a `Path` object.
### UUIDs
The `UUID` type from the `uuid` module can be used for unique identifiers:
```python
import uuid
@mcp.tool()
def process_item(
item_id: uuid.UUID # String UUID or UUID object
) -> str:
"""Process an item with the given UUID."""
assert isinstance(item_id, uuid.UUID) # Properly converted to UUID
return f"Processing item {item_id}"
```
When a client sends a string UUID (e.g., "123e4567-e89b-12d3-a456-426614174000"), FastMCP automatically converts it to a `UUID` object.
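Under the hood this is simply the `uuid.UUID` constructor, which raises `ValueError` for malformed strings (surfaced to the client as a validation error):

```python
import uuid

# String from the client -> UUID object
item_id = uuid.UUID("123e4567-e89b-12d3-a456-426614174000")
print(item_id.version)  # 1

# Malformed strings raise ValueError
try:
    uuid.UUID("not-a-uuid")
except ValueError:
    print("rejected as a validation error")
```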
### Pydantic Models
For complex, structured data with nested fields and validation, use Pydantic models:
```python
from pydantic import BaseModel, Field
from typing import Optional
class User(BaseModel):
username: str
email: str = Field(description="User's email address")
age: int | None = None
is_active: bool = True
@mcp.tool()
def create_user(user: User):
"""Create a new user in the system."""
# The input is automatically validated against the User model
# Even if provided as a JSON string or dict
# Implementation...
```
Using Pydantic models provides:
- Clear, self-documenting structure for complex inputs
- Built-in data validation
- Automatic generation of detailed JSON schemas for the LLM
- Automatic conversion from dict/JSON input
Clients can provide data for Pydantic model parameters as either:
- A JSON object (string)
- A dictionary with the appropriate structure
- Nested parameters in the appropriate format
### Pydantic Fields
FastMCP supports robust parameter validation through Pydantic's `Field` class. This is especially useful to ensure that input values meet specific requirements beyond just their type.
Note that fields can be used *outside* Pydantic models to provide metadata and validation constraints. The preferred approach is using `Annotated` with `Field`:
```python
from typing import Annotated
from pydantic import Field
@mcp.tool()
def analyze_metrics(
# Numbers with range constraints
count: Annotated[int, Field(ge=0, le=100)], # 0 <= count <= 100
ratio: Annotated[float, Field(gt=0, lt=1.0)], # 0 < ratio < 1.0
# String with pattern and length constraints
user_id: Annotated[str, Field(
pattern=r"^[A-Z]{2}\d{4}$", # Must match regex pattern
description="User ID in format XX0000"
)],
# String with length constraints
comment: Annotated[str, Field(min_length=3, max_length=500)] = "",
# Numeric constraints
factor: Annotated[int, Field(multiple_of=5)] = 10, # Must be multiple of 5
):
"""Analyze metrics with validated parameters."""
# Implementation...
```
You can also use `Field` as a default value, though the `Annotated` approach is preferred:
```python
@mcp.tool()
def validate_data(
# Value constraints
age: int = Field(ge=0, lt=120), # 0 <= age < 120
# String constraints
email: str = Field(pattern=r"^[\w\.-]+@[\w\.-]+\.\w+$"), # Email pattern
# Collection constraints
tags: list[str] = Field(min_length=1, max_length=10) # 1-10 tags
):
"""Process data with field validations."""
# Implementation...
```
Common validation options include:
| Validation | Type | Description |
| :--------- | :--- | :---------- |
| `ge`, `gt` | Number | Greater than (or equal) constraint |
| `le`, `lt` | Number | Less than (or equal) constraint |
| `multiple_of` | Number | Value must be a multiple of this number |
| `min_length`, `max_length` | String, List, etc. | Length constraints |
| `pattern` | String | Regular expression pattern constraint |
| `description` | Any | Human-readable description (appears in schema) |
When a client sends invalid data, FastMCP will return a validation error explaining why the parameter failed validation.
## Server Behavior
### Duplicate Tools
You can control how the FastMCP server behaves if you try to register multiple tools with the same name. This is configured using the `on_duplicate_tools` argument when creating the `FastMCP` instance.
```python
from fastmcp import FastMCP
mcp = FastMCP(
name="StrictServer",
# Configure behavior for duplicate tool names
on_duplicate_tools="error"
)
@mcp.tool()
def my_tool(): return "Version 1"
# This will now raise a ValueError because 'my_tool' already exists
# and on_duplicate_tools is set to "error".
# @mcp.tool()
# def my_tool(): return "Version 2"
```
The duplicate behavior options are:
- `"warn"` (default): Logs a warning and the new tool replaces the old one.
- `"error"`: Raises a `ValueError`, preventing the duplicate registration.
- `"replace"`: Silently replaces the existing tool with the new one.
- `"ignore"`: Keeps the original tool and ignores the new registration attempt.
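As a toy illustration of these four behaviors (a deliberate simplification of what `ToolManager` does internally; the class and method names here are hypothetical):

```python
import warnings

class ToolRegistry:
    """Toy registry mimicking the on_duplicate_tools options."""

    def __init__(self, on_duplicate: str = "warn"):
        self.on_duplicate = on_duplicate
        self.tools: dict[str, object] = {}

    def register(self, name: str, fn) -> None:
        if name in self.tools:
            if self.on_duplicate == "error":
                raise ValueError(f"Tool {name!r} already exists")
            if self.on_duplicate == "ignore":
                return  # keep the original registration
            if self.on_duplicate == "warn":
                warnings.warn(f"Tool {name!r} is being replaced")
            # "warn" and "replace" both fall through and overwrite
        self.tools[name] = fn

registry = ToolRegistry(on_duplicate="ignore")
registry.register("my_tool", lambda: "Version 1")
registry.register("my_tool", lambda: "Version 2")  # ignored
print(registry.tools["my_tool"]())  # Version 1
```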
## /docs/snippets/version-badge.mdx
```jsx path="/docs/snippets/version-badge.mdx"
export const VersionBadge = ({ version }) => {
  return (
    <p className="version-badge-container">
      <span className="version-badge">New in version: {version}</span>
    </p>
  );
};
```
## /docs/style.css
```css path="/docs/style.css"
/* Code highlighting -- target only inline code elements, not code blocks */
p code:not(pre code),
table code:not(pre code),
li code:not(pre code),
h1 code:not(pre code),
h2 code:not(pre code),
h3 code:not(pre code),
h4 code:not(pre code),
h5 code:not(pre code),
h6 code:not(pre code) {
color: #f72585 !important;
background-color: rgba(247, 37, 133, 0.09);
}
/* Version badge -- display a badge with the current version of the documentation */
.version-badge {
display: inline-block;
align-items: center;
gap: 0.3em;
padding: 0.2em 0.8em;
font-size: 1.1em;
font-weight: 400;
font-family: "Inter", sans-serif;
letter-spacing: 0.025em;
color: #ff5400;
background: #ffeee6;
border: 1px solid rgba(255, 84, 0, 0.5);
border-radius: 6px;
box-shadow: none;
vertical-align: middle;
position: relative;
transition: box-shadow 0.2s, transform 0.15s;
}
.version-badge-container {
margin: 0;
padding: 0;
}
.version-badge:hover {
box-shadow: 0 2px 8px 0 rgba(160, 132, 252, 0.1);
transform: translateY(-1px) scale(1.03);
}
.dark .version-badge {
color: #fff;
background: #312e81;
border: 1.5px solid #a78bfa;
}
```
## /examples/complex_inputs.py
```py path="/examples/complex_inputs.py"
"""
FastMCP Complex inputs Example
Demonstrates validation via pydantic with complex models.
"""
from typing import Annotated
from pydantic import BaseModel, Field
from fastmcp.server import FastMCP
mcp = FastMCP("Shrimp Tank")
class ShrimpTank(BaseModel):
class Shrimp(BaseModel):
name: Annotated[str, Field(max_length=10)]
shrimp: list[Shrimp]
@mcp.tool()
def name_shrimp(
tank: ShrimpTank,
# You can use pydantic Field in function signatures for validation.
extra_names: Annotated[list[str], Field(max_length=10)],
) -> list[str]:
"""List all shrimp names in the tank"""
return [shrimp.name for shrimp in tank.shrimp] + extra_names
```
## /examples/desktop.py
```py path="/examples/desktop.py"
"""
FastMCP Desktop Example
A simple example that exposes the desktop directory as a resource.
"""
from pathlib import Path
from fastmcp.server import FastMCP
# Create server
mcp = FastMCP("Demo")
@mcp.resource("dir://desktop")
def desktop() -> list[str]:
"""List the files in the user's desktop"""
desktop = Path.home() / "Desktop"
return [str(f) for f in desktop.iterdir()]
@mcp.tool()
def add(a: int, b: int) -> int:
"""Add two numbers"""
return a + b
```
## /examples/echo.py
```py path="/examples/echo.py"
"""
FastMCP Echo Server
"""
from fastmcp import FastMCP
# Create server
mcp = FastMCP("Echo Server")
@mcp.tool()
def echo_tool(text: str) -> str:
"""Echo the input text"""
return text
@mcp.resource("echo://static")
def echo_resource() -> str:
return "Echo!"
@mcp.resource("echo://{text}")
def echo_template(text: str) -> str:
"""Echo the input text"""
return f"Echo: {text}"
@mcp.prompt("echo")
def echo_prompt(text: str) -> str:
return text
```
## /examples/memory.py
```py path="/examples/memory.py"
# /// script
# dependencies = ["pydantic-ai-slim[openai]", "asyncpg", "numpy", "pgvector", "fastmcp"]
# ///
# uv pip install 'pydantic-ai-slim[openai]' asyncpg numpy pgvector fastmcp
"""
Recursive memory system inspired by the human brain's clustering of memories.
Uses OpenAI's 'text-embedding-3-small' model and pgvector for efficient similarity search.
"""
import asyncio
import math
import os
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path
from typing import Annotated, Self
import asyncpg
import numpy as np
from openai import AsyncOpenAI
from pgvector.asyncpg import register_vector # Import register_vector
from pydantic import BaseModel, Field
from pydantic_ai import Agent
from fastmcp import FastMCP
MAX_DEPTH = 5
SIMILARITY_THRESHOLD = 0.7
DECAY_FACTOR = 0.99
REINFORCEMENT_FACTOR = 1.1
DEFAULT_LLM_MODEL = "openai:gpt-4o"
DEFAULT_EMBEDDING_MODEL = "text-embedding-3-small"
mcp = FastMCP(
"memory",
dependencies=[
"pydantic-ai-slim[openai]",
"asyncpg",
"numpy",
"pgvector",
],
)
DB_DSN = "postgresql://postgres:postgres@localhost:54320/memory_db"
# reset memory with rm ~/.fastmcp/{USER}/memory/*
PROFILE_DIR = (
Path.home() / ".fastmcp" / os.environ.get("USER", "anon") / "memory"
).resolve()
PROFILE_DIR.mkdir(parents=True, exist_ok=True)
def cosine_similarity(a: list[float], b: list[float]) -> float:
a_array = np.array(a, dtype=np.float64)
b_array = np.array(b, dtype=np.float64)
return np.dot(a_array, b_array) / (
np.linalg.norm(a_array) * np.linalg.norm(b_array)
)
async def do_ai[T](
user_prompt: str,
system_prompt: str,
result_type: type[T] | Annotated,
deps=None,
) -> T:
agent = Agent(
DEFAULT_LLM_MODEL,
system_prompt=system_prompt,
result_type=result_type,
)
result = await agent.run(user_prompt, deps=deps)
return result.data
@dataclass
class Deps:
openai: AsyncOpenAI
pool: asyncpg.Pool
async def get_db_pool() -> asyncpg.Pool:
async def init(conn):
await conn.execute("CREATE EXTENSION IF NOT EXISTS vector;")
await register_vector(conn)
pool = await asyncpg.create_pool(DB_DSN, init=init)
return pool
class MemoryNode(BaseModel):
id: int | None = None
content: str
summary: str = ""
importance: float = 1.0
access_count: int = 0
timestamp: float = Field(
default_factory=lambda: datetime.now(timezone.utc).timestamp()
)
embedding: list[float]
@classmethod
async def from_content(cls, content: str, deps: Deps):
embedding = await get_embedding(content, deps)
return cls(content=content, embedding=embedding)
async def save(self, deps: Deps):
async with deps.pool.acquire() as conn:
if self.id is None:
result = await conn.fetchrow(
"""
INSERT INTO memories (content, summary, importance, access_count, timestamp, embedding)
VALUES ($1, $2, $3, $4, $5, $6)
RETURNING id
""",
self.content,
self.summary,
self.importance,
self.access_count,
self.timestamp,
self.embedding,
)
self.id = result["id"]
else:
await conn.execute(
"""
UPDATE memories
SET content = $1, summary = $2, importance = $3,
access_count = $4, timestamp = $5, embedding = $6
WHERE id = $7
""",
self.content,
self.summary,
self.importance,
self.access_count,
self.timestamp,
self.embedding,
self.id,
)
async def merge_with(self, other: Self, deps: Deps):
self.content = await do_ai(
f"{self.content}\n\n{other.content}",
"Combine the following two texts into a single, coherent text.",
str,
deps,
)
self.importance += other.importance
self.access_count += other.access_count
self.embedding = [(a + b) / 2 for a, b in zip(self.embedding, other.embedding)]
self.summary = await do_ai(
self.content, "Summarize the following text concisely.", str, deps
)
await self.save(deps)
# Delete the merged node from the database
if other.id is not None:
await delete_memory(other.id, deps)
def get_effective_importance(self):
return self.importance * (1 + math.log(self.access_count + 1))
async def get_embedding(text: str, deps: Deps) -> list[float]:
embedding_response = await deps.openai.embeddings.create(
input=text,
model=DEFAULT_EMBEDDING_MODEL,
)
return embedding_response.data[0].embedding
async def delete_memory(memory_id: int, deps: Deps):
async with deps.pool.acquire() as conn:
await conn.execute("DELETE FROM memories WHERE id = $1", memory_id)
async def add_memory(content: str, deps: Deps):
new_memory = await MemoryNode.from_content(content, deps)
await new_memory.save(deps)
similar_memories = await find_similar_memories(new_memory.embedding, deps)
for memory in similar_memories:
if memory.id != new_memory.id:
await new_memory.merge_with(memory, deps)
await update_importance(new_memory.embedding, deps)
await prune_memories(deps)
return f"Remembered: {content}"
async def find_similar_memories(embedding: list[float], deps: Deps) -> list[MemoryNode]:
async with deps.pool.acquire() as conn:
rows = await conn.fetch(
"""
SELECT id, content, summary, importance, access_count, timestamp, embedding
FROM memories
ORDER BY embedding <-> $1
LIMIT 5
""",
embedding,
)
memories = [
MemoryNode(
id=row["id"],
content=row["content"],
summary=row["summary"],
importance=row["importance"],
access_count=row["access_count"],
timestamp=row["timestamp"],
embedding=row["embedding"],
)
for row in rows
]
return memories
async def update_importance(user_embedding: list[float], deps: Deps):
async with deps.pool.acquire() as conn:
rows = await conn.fetch(
"SELECT id, importance, access_count, embedding FROM memories"
)
for row in rows:
memory_embedding = row["embedding"]
similarity = cosine_similarity(user_embedding, memory_embedding)
if similarity > SIMILARITY_THRESHOLD:
new_importance = row["importance"] * REINFORCEMENT_FACTOR
new_access_count = row["access_count"] + 1
else:
new_importance = row["importance"] * DECAY_FACTOR
new_access_count = row["access_count"]
await conn.execute(
"""
UPDATE memories
SET importance = $1, access_count = $2
WHERE id = $3
""",
new_importance,
new_access_count,
row["id"],
)
async def prune_memories(deps: Deps):
async with deps.pool.acquire() as conn:
rows = await conn.fetch(
"""
SELECT id, importance, access_count
FROM memories
ORDER BY importance DESC
OFFSET $1
""",
MAX_DEPTH,
)
for row in rows:
await conn.execute("DELETE FROM memories WHERE id = $1", row["id"])
async def display_memory_tree(deps: Deps) -> str:
async with deps.pool.acquire() as conn:
rows = await conn.fetch(
"""
SELECT content, summary, importance, access_count
FROM memories
ORDER BY importance DESC
LIMIT $1
""",
MAX_DEPTH,
)
result = ""
for row in rows:
effective_importance = row["importance"] * (
1 + math.log(row["access_count"] + 1)
)
summary = row["summary"] or row["content"]
result += f"- {summary} (Importance: {effective_importance:.2f})\n"
return result
@mcp.tool()
async def remember(
contents: list[str] = Field(
description="List of observations or memories to store"
),
):
deps = Deps(openai=AsyncOpenAI(), pool=await get_db_pool())
try:
return "\n".join(
await asyncio.gather(*[add_memory(content, deps) for content in contents])
)
finally:
await deps.pool.close()
@mcp.tool()
async def read_profile() -> str:
deps = Deps(openai=AsyncOpenAI(), pool=await get_db_pool())
profile = await display_memory_tree(deps)
await deps.pool.close()
return profile
async def initialize_database():
pool = await asyncpg.create_pool(
"postgresql://postgres:postgres@localhost:54320/postgres"
)
try:
async with pool.acquire() as conn:
await conn.execute("""
SELECT pg_terminate_backend(pg_stat_activity.pid)
FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'memory_db'
AND pid <> pg_backend_pid();
""")
await conn.execute("DROP DATABASE IF EXISTS memory_db;")
await conn.execute("CREATE DATABASE memory_db;")
finally:
await pool.close()
pool = await asyncpg.create_pool(DB_DSN)
try:
async with pool.acquire() as conn:
await conn.execute("CREATE EXTENSION IF NOT EXISTS vector;")
await register_vector(conn)
await conn.execute("""
CREATE TABLE IF NOT EXISTS memories (
id SERIAL PRIMARY KEY,
content TEXT NOT NULL,
summary TEXT,
importance REAL NOT NULL,
access_count INT NOT NULL,
timestamp DOUBLE PRECISION NOT NULL,
embedding vector(1536) NOT NULL
);
CREATE INDEX IF NOT EXISTS idx_memories_embedding ON memories USING hnsw (embedding vector_l2_ops);
""")
finally:
await pool.close()
if __name__ == "__main__":
asyncio.run(initialize_database())
```
## /examples/mount_example.py
```py path="/examples/mount_example.py"
"""Example of mounting FastMCP apps together.
This example demonstrates how to mount FastMCP apps together using
the ToolManager's import_tools functionality. It shows how to:
1. Create sub-applications for different domains
2. Mount those sub-applications to a main application
3. Access tools with prefixed names and resources with prefixed URIs
"""
import asyncio
from fastmcp import FastMCP
# Weather sub-application
weather_app = FastMCP("Weather App")
@weather_app.tool()
def get_weather_forecast(location: str) -> str:
"""Get the weather forecast for a location."""
return f"Sunny skies for {location} today!"
@weather_app.resource(uri="weather://forecast")
async def weather_data():
"""Return current weather data."""
return {"temperature": 72, "conditions": "sunny", "humidity": 45, "wind_speed": 5}
# News sub-application
news_app = FastMCP("News App")
@news_app.tool()
def get_news_headlines() -> list[str]:
"""Get the latest news headlines."""
return [
"Tech company launches new product",
"Local team wins championship",
"Scientists make breakthrough discovery",
]
@news_app.resource(uri="news://headlines")
async def news_data():
"""Return latest news data."""
return {
"top_story": "Breaking news: Important event happened",
"categories": ["politics", "sports", "technology"],
"sources": ["AP", "Reuters", "Local Sources"],
}
# Main application
app = FastMCP(
"Main App", dependencies=["fastmcp@git+https://github.com/jlowin/fastmcp.git"]
)
@app.tool()
def check_app_status() -> dict[str, str]:
"""Check the status of the main application."""
return {"status": "running", "version": "1.0.0", "uptime": "3h 24m"}
# Mount sub-applications
app.mount("weather", weather_app)
app.mount("news", news_app)
async def get_server_details():
"""Print information about mounted resources."""
# Print available tools
tools = app._tool_manager.list_tools()
print(f"\nAvailable tools ({len(tools)}):")
for tool in tools:
print(f" - {tool.name}: {tool.description}")
# Print available resources
print("\nAvailable resources:")
# Distinguish between native and imported resources
# Native resources would be those directly in the main app (not prefixed)
native_resources = [
uri
for uri in app._resource_manager._resources
if not (uri.startswith("weather+") or uri.startswith("news+"))
]
# Imported resources - categorized by source app
weather_resources = [
uri for uri in app._resource_manager._resources if uri.startswith("weather+")
]
news_resources = [
uri for uri in app._resource_manager._resources if uri.startswith("news+")
]
print(f" - Native app resources: {native_resources}")
print(f" - Imported from weather app: {weather_resources}")
print(f" - Imported from news app: {news_resources}")
# Let's try to access resources using the prefixed URI
weather_data = await app.read_resource("weather+weather://forecast")
print(f"\nWeather data from prefixed URI: {weather_data}")
if __name__ == "__main__":
# First run our async function to display info
asyncio.run(get_server_details())
# Then start the server (uncomment to run the server)
app.run()
```
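The mounting example above relies on namespacing: everything imported from a sub-app picks up a `weather+`/`news+` style prefix. The idea can be sketched in plain Python (a toy helper, not fastmcp's actual implementation):

```python
# Toy sketch of mount-time URI prefixing -- NOT fastmcp internals, just an
# illustration of why "weather://forecast" becomes "weather+weather://forecast".

def prefix_resource_uris(resources: dict[str, str], prefix: str) -> dict[str, str]:
    """Return a copy of a {uri: description} mapping with prefixed URIs."""
    return {f"{prefix}+{uri}": desc for uri, desc in resources.items()}

weather = {"weather://forecast": "Current weather data"}
news = {"news://headlines": "Latest news data"}

# A "main app" collecting resources from two mounted sub-apps:
mounted: dict[str, str] = {}
mounted.update(prefix_resource_uris(weather, "weather"))
mounted.update(prefix_resource_uris(news, "news"))

print(sorted(mounted))
# ['news+news://headlines', 'weather+weather://forecast']
```

This is also why `get_server_details` can separate native from imported resources with simple `uri.startswith("weather+")` checks.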
## /examples/readme-quickstart.py
```py path="/examples/readme-quickstart.py"
from fastmcp import FastMCP
# Create an MCP server
mcp = FastMCP("Demo")
# Add an addition tool
@mcp.tool()
def add(a: int, b: int) -> int:
"""Add two numbers"""
return a + b
# Add a dynamic greeting resource
@mcp.resource("greeting://{name}")
def get_greeting(name: str) -> str:
"""Get a personalized greeting"""
return f"Hello, {name}!"
```
## /examples/sampling.py
```py path="/examples/sampling.py"
"""
Example of using sampling to request an LLM completion via Marvin
"""
import asyncio
import marvin
from mcp.types import TextContent
from fastmcp import Client, Context, FastMCP
from fastmcp.client.sampling import RequestContext, SamplingMessage, SamplingParams
# -- Create a server that sends a sampling request to the LLM
mcp = FastMCP("Sampling Example")
@mcp.tool()
async def example_tool(prompt: str, context: Context) -> str:
"""Sample a completion from the LLM."""
response = await context.sample(
"What is your favorite programming language?",
system_prompt="You love languages named after snakes.",
)
assert isinstance(response, TextContent)
return response.text
# -- Create a client that can handle the sampling request
async def sampling_fn(
messages: list[SamplingMessage],
params: SamplingParams,
ctx: RequestContext,
) -> str:
return await marvin.say_async(
message=[m.content.text for m in messages],
instructions=params.systemPrompt,
)
async def run():
async with Client(mcp, sampling_handler=sampling_fn) as client:
result = await client.call_tool(
"example_tool", {"prompt": "What is the best programming language?"}
)
print(result)
if __name__ == "__main__":
asyncio.run(run())
```
## /examples/screenshot.py
```py path="/examples/screenshot.py"
"""
FastMCP Screenshot Example
Give Claude a tool to capture and view screenshots.
"""
import io
from fastmcp import FastMCP, Image
# Create server
mcp = FastMCP("Screenshot Demo", dependencies=["pyautogui", "Pillow"])
@mcp.tool()
def take_screenshot() -> Image:
"""
Take a screenshot of the user's screen and return it as an image. Use
this tool anytime the user wants you to look at something they're doing.
"""
import pyautogui
buffer = io.BytesIO()
# if the file exceeds ~1MB, it will be rejected by Claude
screenshot = pyautogui.screenshot()
screenshot.convert("RGB").save(buffer, format="JPEG", quality=60, optimize=True)
return Image(data=buffer.getvalue(), format="jpeg")
```
## /examples/simple_echo.py
```py path="/examples/simple_echo.py"
"""
FastMCP Echo Server
"""
from fastmcp import FastMCP
# Create server
mcp = FastMCP("Echo Server")
@mcp.tool()
def echo(text: str) -> str:
"""Echo the input text"""
return text
```
## /examples/smart_home/README.md
## /examples/smart_home/pyproject.toml
```toml path="/examples/smart_home/pyproject.toml"
[project]
name = "smart-home"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
authors = [{ name = "zzstoatzz", email = "thrast36@gmail.com" }]
requires-python = ">=3.12"
dependencies = ["fastmcp@git+https://github.com/jlowin/fastmcp.git", "phue2"]
[project.scripts]
smart-home = "smart_home.__main__:main"
[dependency-groups]
dev = ["ruff", "ipython"]
[build-system]
requires = ["uv_build"]
build-backend = "uv_build"
```
## /examples/smart_home/src/smart_home/__init__.py
```py path="/examples/smart_home/src/smart_home/__init__.py"
from smart_home.settings import settings
__all__ = ["settings"]
```
## /examples/smart_home/src/smart_home/__main__.py
```py path="/examples/smart_home/src/smart_home/__main__.py"
from smart_home.hub import hub_mcp
def main():
hub_mcp.run()
if __name__ == "__main__":
main()
```
## /examples/smart_home/src/smart_home/hub.py
```py path="/examples/smart_home/src/smart_home/hub.py"
from phue2 import Bridge
from fastmcp import FastMCP
from smart_home.lights.server import lights_mcp
from smart_home.settings import settings
hub_mcp = FastMCP(
"Smart Home Hub (phue2)",
dependencies=[
"smart_home@git+https://github.com/jlowin/fastmcp.git#subdirectory=examples/smart_home",
],
)
# Mount the lights service under the 'hue' prefix
hub_mcp.mount("hue", lights_mcp)
# Add a status check for the hub
@hub_mcp.tool()
def hub_status() -> str:
"""Checks the status of the main hub and connections."""
try:
bridge = Bridge(
ip=str(settings.hue_bridge_ip),
username=settings.hue_bridge_username,
save_config=False,
)
bridge.connect()
return "Hub OK. Hue Bridge Connected (via phue2)."
except Exception as e:
return f"Hub Warning: Hue Bridge connection failed or not attempted: {e}"
# Add mounting points for other services later
# hub_mcp.mount("thermo", thermostat_mcp)
```
## /examples/smart_home/src/smart_home/lights/__init__.py
```py path="/examples/smart_home/src/smart_home/lights/__init__.py"
```
## /examples/smart_home/src/smart_home/lights/hue_utils.py
```py path="/examples/smart_home/src/smart_home/lights/hue_utils.py"
from typing import Any
from phue2 import Bridge
from phue2.exceptions import PhueException
from smart_home.settings import settings
def _get_bridge() -> Bridge | None:
"""Attempts to connect to the Hue bridge using settings."""
try:
return Bridge(
ip=str(settings.hue_bridge_ip),
username=settings.hue_bridge_username,
save_config=False,
)
except Exception:
# Broad exception to catch potential connection issues
# TODO: Add more specific logging or error handling
return None
def handle_phue_error(
light_or_group: str, operation: str, error: Exception
) -> dict[str, Any]:
"""Creates a standardized error response for phue2 operations."""
base_info = {"target": light_or_group, "operation": operation, "success": False}
if isinstance(error, KeyError):
base_info["error"] = f"Target '{light_or_group}' not found"
elif isinstance(error, PhueException):
base_info["error"] = f"phue2 error during {operation}: {error}"
else:
base_info["error"] = f"Unexpected error during {operation}: {error}"
return base_info
```
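Every tool in the lights server funnels failures through `handle_phue_error`, so error payloads share one shape. The branching can be exercised standalone; this sketch mirrors its logic with a stand-in exception class so it does not require the phue2 dependency:

```python
from typing import Any

class FakePhueException(Exception):
    """Stand-in for phue2.exceptions.PhueException in this sketch."""

def categorize(target: str, operation: str, error: Exception) -> dict[str, Any]:
    """Mirror of handle_phue_error's branching, minus the phue2 import."""
    info: dict[str, Any] = {"target": target, "operation": operation, "success": False}
    if isinstance(error, KeyError):
        info["error"] = f"Target '{target}' not found"
    elif isinstance(error, FakePhueException):
        info["error"] = f"phue2 error during {operation}: {error}"
    else:
        info["error"] = f"Unexpected error during {operation}: {error}"
    return info

print(categorize("desk_lamp", "toggle_light", KeyError("desk_lamp"))["error"])
# Target 'desk_lamp' not found
```

Checking `isinstance` in most-specific-first order is what lets one handler serve every tool while still distinguishing "not found" from bridge-level failures.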
## /examples/smart_home/src/smart_home/lights/server.py
```py path="/examples/smart_home/src/smart_home/lights/server.py"
from typing import Annotated, Any, Literal, TypedDict
from phue2.exceptions import PhueException
from pydantic import Field
from typing_extensions import NotRequired
from fastmcp import FastMCP
from smart_home.lights.hue_utils import _get_bridge, handle_phue_error
class HueAttributes(TypedDict, total=False):
"""TypedDict for optional light attributes."""
on: NotRequired[Annotated[bool, Field(description="on/off state")]]
bri: NotRequired[Annotated[int, Field(ge=0, le=254, description="brightness")]]
    hue: NotRequired[
        Annotated[
            int,
            Field(
                ge=0,
                le=65535,
                description="hue",
            ),
        ]
    ]
    sat: NotRequired[Annotated[int, Field(ge=0, le=254, description="saturation")]]
xy: NotRequired[Annotated[list[float], Field(description="xy color coordinates")]]
ct: NotRequired[
Annotated[
int,
Field(ge=153, le=500, description="color temperature"),
]
]
alert: NotRequired[Literal["none", "select", "lselect"]]
effect: NotRequired[Literal["none", "colorloop"]]
transitiontime: NotRequired[Annotated[int, Field(description="deciseconds")]]
lights_mcp = FastMCP(
"Hue Lights Service (phue2)",
dependencies=[
"smart_home@git+https://github.com/jlowin/fastmcp.git#subdirectory=examples/smart_home",
],
)
@lights_mcp.tool()
def read_all_lights() -> list[str]:
"""Lists the names of all available Hue lights using phue2."""
if not (bridge := _get_bridge()):
return ["Error: Bridge not connected"]
try:
        lights = bridge.get_light_objects("list")
        return [light.name for light in lights]
    except Exception as e:
# Simplified error handling for list return type
return [f"Error listing lights: {e}"]
# --- Tools ---
@lights_mcp.tool()
def toggle_light(light_name: str, state: bool) -> dict[str, Any]:
"""Turns a specific light on (true) or off (false) using phue2."""
if not (bridge := _get_bridge()):
return {"error": "Bridge not connected", "success": False}
try:
result = bridge.set_light(light_name, "on", state)
return {
"light": light_name,
"set_on_state": state,
"success": True,
"phue2_result": result,
}
    except Exception as e:
return handle_phue_error(light_name, "toggle_light", e)
@lights_mcp.tool()
def set_brightness(light_name: str, brightness: int) -> dict[str, Any]:
"""Sets the brightness of a specific light (0-254) using phue2."""
if not (bridge := _get_bridge()):
return {"error": "Bridge not connected", "success": False}
if not 0 <= brightness <= 254:
# Keep specific input validation error here
return {
"light": light_name,
"error": "Brightness must be between 0 and 254",
"success": False,
}
try:
result = bridge.set_light(light_name, "bri", brightness)
return {
"light": light_name,
"set_brightness": brightness,
"success": True,
"phue2_result": result,
}
    except Exception as e:
return handle_phue_error(light_name, "set_brightness", e)
@lights_mcp.tool()
def list_groups() -> list[str]:
"""Lists the names of all available Hue light groups."""
if not (bridge := _get_bridge()):
return ["Error: Bridge not connected"]
try:
# phue2 get_group() returns a dict {id: {details}} including name
groups = bridge.get_group()
return [group_details["name"] for group_details in groups.values()]
    except Exception as e:
return [f"Error listing groups: {e}"]
@lights_mcp.tool()
def list_scenes() -> dict[str, list[str]] | list[str]:
"""Lists Hue scenes, grouped by the light group they belong to.
Returns:
dict[str, list[str]]: A dictionary mapping group names to a list of scene names within that group.
list[str]: An error message list if the bridge connection fails or an error occurs.
"""
if not (bridge := _get_bridge()):
return ["Error: Bridge not connected"]
try:
scenes_data = bridge.get_scene() # Returns dict {scene_id: {details...}}
groups_data = bridge.get_group() # Returns dict {group_id: {details...}}
# Create a lookup for group name by group ID
group_id_to_name = {gid: ginfo["name"] for gid, ginfo in groups_data.items()}
scenes_by_group: dict[str, list[str]] = {}
for scene_id, scene_details in scenes_data.items():
scene_name = scene_details.get("name")
# Scenes might be associated with a group via 'group' key or lights
# Using 'group' key if available is more direct for group scenes
group_id = scene_details.get("group")
if scene_name and group_id and group_id in group_id_to_name:
group_name = group_id_to_name[group_id]
if group_name not in scenes_by_group:
scenes_by_group[group_name] = []
# Avoid duplicate scene names within a group listing (though unlikely)
if scene_name not in scenes_by_group[group_name]:
scenes_by_group[group_name].append(scene_name)
# Sort scenes within each group for consistent output
for group_name in scenes_by_group:
scenes_by_group[group_name].sort()
return scenes_by_group
    except Exception as e:
# Return error as list to match other list-returning tools on error
return [f"Error listing scenes by group: {e}"]
@lights_mcp.tool()
def activate_scene(group_name: str, scene_name: str) -> dict[str, Any]:
"""Activates a specific scene within a specified light group, verifying the scene belongs to the group."""
if not (bridge := _get_bridge()):
return {"error": "Bridge not connected", "success": False}
try:
# 1. Find the target group ID
groups_data = bridge.get_group()
target_group_id = None
for gid, ginfo in groups_data.items():
if ginfo.get("name") == group_name:
target_group_id = gid
break
if not target_group_id:
return {"error": f"Group '{group_name}' not found", "success": False}
# 2. Find the target scene and check its group association
scenes_data = bridge.get_scene()
scene_found = False
scene_in_correct_group = False
for sinfo in scenes_data.values():
if sinfo.get("name") == scene_name:
scene_found = True
# Check if this scene is associated with the target group ID
if sinfo.get("group") == target_group_id:
scene_in_correct_group = True
break # Found the scene in the correct group
if not scene_found:
return {"error": f"Scene '{scene_name}' not found", "success": False}
if not scene_in_correct_group:
return {
"error": f"Scene '{scene_name}' does not belong to group '{group_name}'",
"success": False,
}
# 3. Activate the scene (now that we've verified it)
result = bridge.run_scene(group_name=group_name, scene_name=scene_name)
if result:
return {
"group": group_name,
"activated_scene": scene_name,
"success": True,
"phue2_result": result,
}
else:
# This case might indicate the scene/group exists but activation failed internally
return {
"group": group_name,
"scene": scene_name,
"error": "Scene activation failed (phue2 returned False)",
"success": False,
}
    except Exception as e:
# Handle potential errors during bridge communication or data parsing
return handle_phue_error(f"{group_name}/{scene_name}", "activate_scene", e)
@lights_mcp.tool()
def set_light_attributes(light_name: str, attributes: HueAttributes) -> dict[str, Any]:
"""Sets multiple attributes (e.g., hue, sat, bri, ct, xy, transitiontime) for a specific light."""
if not (bridge := _get_bridge()):
return {"error": "Bridge not connected", "success": False}
# Basic validation (more specific validation could be added)
if not isinstance(attributes, dict) or not attributes:
return {
"error": "Attributes must be a non-empty dictionary",
"success": False,
"light": light_name,
}
try:
result = bridge.set_light(light_name, dict(attributes))
return {
"light": light_name,
"set_attributes": attributes,
"success": True,
"phue2_result": result,
}
    except Exception as e:
# ValueError might occur for invalid attribute values
return handle_phue_error(light_name, "set_light_attributes", e)
@lights_mcp.tool()
def set_group_attributes(group_name: str, attributes: HueAttributes) -> dict[str, Any]:
"""Sets multiple attributes for all lights within a specific group."""
if not (bridge := _get_bridge()):
return {"error": "Bridge not connected", "success": False}
if not isinstance(attributes, dict) or not attributes:
return {
"error": "Attributes must be a non-empty dictionary",
"success": False,
"group": group_name,
}
try:
result = bridge.set_group(group_name, dict(attributes))
return {
"group": group_name,
"set_attributes": attributes,
"success": True,
"phue2_result": result,
}
    except Exception as e:
return handle_phue_error(group_name, "set_group_attributes", e)
@lights_mcp.tool()
def list_lights_by_group() -> dict[str, list[str]] | list[str]:
"""Lists Hue lights, grouped by the room/group they belong to.
Returns:
dict[str, list[str]]: A dictionary mapping group names to a list of light names within that group.
list[str]: An error message list if the bridge connection fails or an error occurs.
"""
if not (bridge := _get_bridge()):
return ["Error: Bridge not connected"]
try:
groups_data = bridge.get_group() # dict {group_id: {details}}
lights_data = bridge.get_light_objects("id") # dict {light_id: {details}}
lights_by_group: dict[str, list[str]] = {}
for group_details in groups_data.values():
group_name = group_details.get("name")
light_ids = group_details.get("lights", [])
if group_name and light_ids:
light_names = []
for light_id in light_ids:
                    # phue2 uses string IDs for lights in a group, but int keys in get_light_objects
light_id_int = int(light_id)
if light_id_int in lights_data:
light_name = lights_data[light_id_int].name
if light_name:
light_names.append(light_name)
if light_names:
light_names.sort()
lights_by_group[group_name] = light_names
return lights_by_group
    except Exception as e:
return [f"Error listing lights by group: {e}"]
```
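The grouping done in `list_scenes` (and similarly `list_lights_by_group`) is plain dict manipulation over the bridge's responses. Here it is isolated with invented sample data shaped like `bridge.get_scene()` / `bridge.get_group()` output:

```python
def scenes_by_group(scenes: dict, groups: dict) -> dict[str, list[str]]:
    """Group scene names by the group (room) they belong to."""
    group_names = {gid: ginfo["name"] for gid, ginfo in groups.items()}
    out: dict[str, list[str]] = {}
    for scene in scenes.values():
        name, gid = scene.get("name"), scene.get("group")
        if name and gid in group_names:
            bucket = out.setdefault(group_names[gid], [])
            if name not in bucket:  # skip duplicate scene names in a group
                bucket.append(name)
    for names in out.values():
        names.sort()  # sort for consistent output
    return out

groups = {"1": {"name": "Office"}, "2": {"name": "Bedroom"}}
scenes = {
    "abc": {"name": "Focus", "group": "1"},
    "def": {"name": "Relax", "group": "2"},
    "ghi": {"name": "Bright", "group": "1"},
}
print(scenes_by_group(scenes, groups))
# {'Office': ['Bright', 'Focus'], 'Bedroom': ['Relax']}
```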
## /examples/smart_home/src/smart_home/py.typed
```typed path="/examples/smart_home/src/smart_home/py.typed"
```
## /examples/smart_home/src/smart_home/settings.py
```py path="/examples/smart_home/src/smart_home/settings.py"
from pydantic import Field, IPvAnyAddress
from pydantic_settings import BaseSettings, SettingsConfigDict
class Settings(BaseSettings):
model_config = SettingsConfigDict(env_file=".env", extra="ignore")
hue_bridge_ip: IPvAnyAddress = Field(default=...)
hue_bridge_username: str = Field(default=...)
settings = Settings()
```
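pydantic-settings resolves `hue_bridge_ip` and `hue_bridge_username` from a `.env` file or the environment (matching variable names case-insensitively) and fails fast when either is missing. A rough stdlib-only sketch of that required-field behavior (it skips the `IPvAnyAddress` validation the real class performs):

```python
def load_settings(env: dict[str, str]) -> dict[str, str]:
    """Minimal stand-in for the Settings class above: both fields required."""
    required = ("HUE_BRIDGE_IP", "HUE_BRIDGE_USERNAME")
    missing = [key for key in required if key not in env]
    if missing:
        raise ValueError(f"Missing required settings: {missing}")
    return {
        "hue_bridge_ip": env["HUE_BRIDGE_IP"],
        "hue_bridge_username": env["HUE_BRIDGE_USERNAME"],
    }

# In real use you would pass os.environ; a literal dict keeps the sketch testable.
print(load_settings({"HUE_BRIDGE_IP": "192.168.1.2", "HUE_BRIDGE_USERNAME": "hueuser"}))
```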