# AI Command
The `ai` command runs Agents with input from the CLI:

```shell
$ ./enola ai --llm=echo:/ --in="hello, world"
hello, world
$ ./enola ai --llm="google://?model=gemini-2.5-flash-lite" --in="hello, world"
Hello, world! How can I help you today?
```
You can, of course, also use the Chat Web UI or the Console Chat to interact with Agents.
All three of these commands (`ai`, `chat`, and `server`) support the following CLI flags / options.
## Agents

`--agents` loads AI Agents. Multiple agents can be specified by repeating the flag.
If a “short name” (`[a-zA-Z0-9\-]+`) is given, e.g. `--agents=weather`, then it is implicitly mapped to `https://raw.githubusercontent.com/enola-dev/$NAME-agent/refs/heads/main/enola.agent.yaml`. Otherwise, the agent definition is loaded from the given local file (e.g. `--agents=dir/example.yaml`) or fetched from a remote (non-file) URL (e.g. `--agents=https://example.com/agent.yaml`).
See the tutorial for usage examples.
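The short-name resolution rule above can be sketched as follows; this is an illustrative sketch of the documented mapping, not the actual implementation, and the function name `resolve_agent_ref` is ours:

```python
import re

# Short names match [a-zA-Z0-9\-]+ per the docs; anything else is
# treated as a local file path or a remote URL and used as-is.
SHORT_NAME = re.compile(r"^[a-zA-Z0-9\-]+$")
TEMPLATE = (
    "https://raw.githubusercontent.com/enola-dev/"
    "{name}-agent/refs/heads/main/enola.agent.yaml"
)

def resolve_agent_ref(ref: str) -> str:
    """Map a --agents value to the location the agent is loaded from."""
    if SHORT_NAME.match(ref):
        return TEMPLATE.format(name=ref)
    return ref  # local file path or remote URL, unchanged
```

So `resolve_agent_ref("weather")` expands to the `enola-dev/weather-agent` URL on GitHub, while paths and URLs pass through untouched.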
## Default Agent

`--default-agent` specifies which of the loaded AI Agents to use if none is otherwise selected.
## LLM

`--llm` must be a valid AI LM URI. It is optional, because Agents can also set this via `model:`.
## Prompt

`--in` is the input prompt for the LLM / Agent.

`--inURL` is an alternative to `--in` which reads the prompt from a local file or fetches it from a remote URL.
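The `--inURL` behavior can be sketched like this; a minimal illustration of "local file or remote URL", with the name `load_prompt` assumed by us (the real CLI may handle more schemes and encodings):

```python
import pathlib
import urllib.request
from urllib.parse import urlparse

def load_prompt(ref: str) -> str:
    """Return the prompt text from a local file path or a remote URL."""
    if urlparse(ref).scheme in ("http", "https"):
        # Remote URL: fetch the prompt over HTTP(S).
        with urllib.request.urlopen(ref) as response:
            return response.read().decode("utf-8")
    # Anything else is treated as a local file path.
    return pathlib.Path(ref).read_text(encoding="utf-8")
```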
## MCP

`--mcp` enables MCP (Model Context Protocol) for the Agent(s).