# MCP

The Model Context Protocol (MCP) is a standard for 🧰 Tools.
## Configuration
The `--mcp` CLI argument specifies which MCP servers are available to Agents. If not specified, the built-in `mcp.yaml` is used by default.
The `command`, `args` & `env` fields are self-explanatory; `origin` is just for documentation.
The boolean `roots` flag controls whether the current working directory is exposed; it defaults to `false`.
The `log` field controls the logging level of the MCP server, and can be set to `debug`, `info`, `notice`, `warning`, `error`, `critical`, `alert` or `emergency`. If unspecified, it defaults to the `warning` level. This only controls what the MCP server sends; to actually see all log messages on the client, you must start Enola with `-vvvvv`.
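Putting these fields together, a single server entry in `mcp.yaml` might look roughly as follows. The field names (`command`, `args`, `env`, `origin`, `roots`, `log`) are those described above, but the server name and all values here are purely illustrative; check the built-in `mcp.yaml` for the exact schema:

```yaml
servers:
  modelcontextprotocol/fetch:  # the name which Agents reference under tools:
    command: uvx               # executable used to launch the MCP server
    args:
      - mcp-server-fetch       # arguments passed to the command
    env:
      EXAMPLE_VAR: example     # environment variables for the server process (illustrative)
    origin: https://github.com/modelcontextprotocol/servers  # documentation only
    roots: false               # do not expose the current working directory (the default)
    log: warning               # server-side log level (the default)
```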
Use the names under the `servers:` key of an `mcp.yaml` in the `tools:` of Agents.
MCP servers are only started (or connected to), and queried for their 🧰 Tools, if any of the loaded `--agents` use them.
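As a sketch of this wiring, assuming `mcp.yaml` keys a server as `modelcontextprotocol/fetch` (as in the examples below), an Agent enables it simply by listing that name:

```yaml
# mcp.yaml (sketch) – the key under servers: is the name Agents use
servers:
  modelcontextprotocol/fetch:
    command: uvx
    args:
      - mcp-server-fetch
---
# some.agent.yaml (sketch) – enable that server by name under tools:
tools:
  - modelcontextprotocol/fetch
```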
## Examples

### Fetch
```yaml
$schema: https://enola.dev/ai/agent
model: google://?model=gemini-2.5-flash
description: >
  Demo Agent with access to a fetch tool.
  CAUTION: This server can access local/internal IP addresses and may represent a security risk.
  Exercise caution when using this MCP server to ensure this does not expose any sensitive data!
instruction: >
  You are a helpful agent who can answer user questions about any web page
  by fetching its content and then summarizing it in 3 sentences.
tools:
  - modelcontextprotocol/fetch
```
The `fetch` MCP server can fetch a webpage, and extract its contents as Markdown:

```bash
enola ai -a test/agents/fetch.agent.yaml --in="What is on https://docs.enola.dev/tutorial/agents/ ?"
```
This needs `uvx` to be available; first test whether launching `uvx mcp-server-fetch` works.
CAUTION: This server can access local/internal IP addresses, which may represent a security risk. Exercise caution when using this MCP server to ensure this does not expose any sensitive data!
### Filesystem
```yaml
$schema: https://enola.dev/ai/agent
model: google://?model=gemini-2.5-flash-lite
tools:
  - modelcontextprotocol/filesystem
```

```bash
enola ai -a test/agents/filesystem.agent.yaml --in="list the files in $PWD"
```
This currently needs `npx` to be available; first test whether launching `npx @modelcontextprotocol/server-filesystem` works.
### Git
```yaml
$schema: https://enola.dev/ai/agent
model: google://?model=gemini-2.5-flash
description: Demo Agent with access to the git CLI tool.
instruction: You are a helpful agent who can use Git.
tools:
  - modelcontextprotocol/git
```

```bash
enola ai --agents=test/agents/git.agent.yaml --in "Write a proposed commit message for the uncommitted files in $PWD"
```
CAUTION: This server is inherently insecure; you should carefully evaluate if it meets your needs.
This needs `uvx` to be available; first test whether launching `uvx mcp-server-git` works.
### Memory
```yaml
$schema: https://enola.dev/ai/agent
model: google://?model=gemini-2.5-flash
instruction: >
  Follow these steps for each interaction:
  1. User Identification:
     - You should assume that you are interacting with default_user
     - If you have not identified default_user, proactively try to do so.
  2. Memory Retrieval:
     - Always begin your chat by saying only "Remembering..." and retrieve all relevant information from your knowledge graph
     - Always refer to your knowledge graph as your "memory"
  3. Memory:
     - While conversing with the user, be attentive to any new information that falls into these categories:
       a) Basic Identity (age, gender, location, job title, education level, etc.)
       b) Behaviors (interests, habits, etc.)
       c) Preferences (communication style, preferred language, etc.)
       d) Goals (goals, targets, aspirations, etc.)
       e) Relationships (personal and professional relationships up to 3 degrees of separation)
  4. Memory Update:
     - If any new information was gathered during the interaction, update your memory as follows:
       a) Create entities for recurring organizations, people, and significant events
       b) Connect them to the current entities using relations
       c) Store facts about them as observations
tools:
  - modelcontextprotocol/memory
```
Memory can remember things:

```
$ enola -vv ai --agents=test/agents/memory.agent.yaml --in "John Smith is a person who speaks fluent Spanish."
I have noted that John Smith is a person who speaks fluent Spanish.
```
`cat ~/memory.json` lets you see the memory 🧠 cells! Now, perhaps another day:
```
$ enola -v ai --agents=test/agents/memory.agent.yaml --in "Does John Smith speak Italian?"
Remembering... Based on my memory, John Smith speaks fluent Spanish. I do not have any information indicating that he speaks Italian.
```
This needs `npx` to be available; first test whether launching `npx @modelcontextprotocol/server-memory` works.
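For reference, the upstream memory server persists its knowledge graph as newline-delimited JSON objects of entities and relations; the exact format is defined by `@modelcontextprotocol/server-memory` and may change, so the record below is only an illustrative sketch of what the John Smith example above could produce:

```json
{"type": "entity", "name": "John Smith", "entityType": "person", "observations": ["Speaks fluent Spanish"]}
```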
### Everything

The `everything` MCP server has a number of tools useful for debugging and testing the MCP protocol:

```bash
enola ai --agents=test/agents/everything.agent.yaml --in "Print environment variables to debug MCP"
```
## CLI for Debugging
To debug MCP, use the dedicated MCP CLI commands.
## Directories
- https://glama.ai/mcp/servers == https://github.com/punkpeye/awesome-mcp-servers
- https://github.com/wong2/awesome-mcp-servers
- https://hub.docker.com/mcp
- https://mcp.so == https://github.com/chatmcp/mcpso
- https://mcpservers.org
- https://www.mcp.run/registry
- https://cursor.directory/mcp
- https://cline.bot/mcp-marketplace
- https://www.claudemcp.com/servers
- https://www.pulsemcp.com/servers
- https://smithery.ai
- https://mcpmarket.com
- https://www.awesomemcp.com
- https://www.mcpserverfinder.com
- https://mcp.higress.ai
- https://github.com/appcypher/awesome-mcp-servers
- https://github.com/pipedreamhq/awesome-mcp-servers
- https://github.com/MobinX/awesome-mcp-list
- https://github.com/toolsdk-ai/awesome-mcp-registry
- https://github.com/modelcontextprotocol/servers