AI Command¶
The ai command runs Agents with input from the CLI:
$ enola ai --llm=echo:/ --prompt="hello, world"
hello, world
$ enola ai --llm="google://?model=gemini-2.5-flash-lite" --prompt="hello, world"
Hello, world! How can I help you today?
You can, of course, also use the Chat Web UI or the Console Chat to interact with Agents.
All three of these commands (ai, chat, and server) support the following CLI flags / options.
Agents¶
--agents loads AI Agents. It can be repeated to specify multiple agents.
If a “short name” ([a-zA-Z0-9\-]+) is given, e.g. --agents=weather, then it is implicitly mapped to https://raw.githubusercontent.com/enola-dev/$NAME-agent/refs/heads/main/enola.agent.yaml. Otherwise, it loads the agent definition from the given local file (e.g. --agents=dir/example.yaml) or fetches it from a non-file remote URL (e.g. --agents=https://example.com/agent.yaml).
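The short-name expansion described above can be sketched as follows; this is an illustration of the documented mapping rule, not Enola's actual implementation:

```python
import re

def expand_agent_ref(ref: str) -> str:
    # Short names matching [a-zA-Z0-9\-]+ are mapped to the enola-dev
    # GitHub convention; anything else is a file path or URL, used as-is.
    if re.fullmatch(r"[a-zA-Z0-9\-]+", ref):
        return (
            "https://raw.githubusercontent.com/enola-dev/"
            f"{ref}-agent/refs/heads/main/enola.agent.yaml"
        )
    return ref

# "weather" expands to the GitHub raw URL; a path stays unchanged:
print(expand_agent_ref("weather"))
print(expand_agent_ref("dir/example.yaml"))
```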
See the tutorial for usage examples.
Default Agent¶
--default-agent specifies which of the loaded AI Agents to use if none is otherwise selected.
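A combined invocation might look like this; the agent names are illustrative, not real agent repositories:

```shell
# Load two agents by short name and make "weather" the default one:
$ enola ai --agents=weather --agents=other-agent \
    --default-agent=weather --prompt="Will it rain tomorrow?"
```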
LLM¶
--llm needs to be a valid AI LM URI. It is optional, because Agents can set this via model: as well.
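For example, an agent definition might pin its own model like this; a minimal, hypothetical fragment where only the model: key is taken from the text above and the other fields are illustrative:

```yaml
# Hypothetical agent definition; only "model:" is documented above.
name: example-agent
model: google://?model=gemini-2.5-flash-lite
```

With an agent like this loaded, the --llm flag can be omitted.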
Prompt¶
--prompt is the input prompt to the LLM / Agent.
Attach¶
--attach allows attaching files to the LLM / Agent prompt. It can be repeated to attach multiple files (e.g. --attach=image.png --attach=document.pdf).
The files are referenced by URL, and support relative URLs for local files and fetching remote URLs with various schemes.
Example with image attachment:
$ enola ai --llm="google://?model=gemini-2.5-flash-lite" \
--prompt="What do you see in this image?" \
--attach=test/mystery.jpeg
Supported image formats, understanding, token cost, maximum number of images per request, and their maximum sizes depend on the LLM used.
MCP¶
--mcp enables MCP (Model Context Protocol) for the Agent(s).