# Tutorial: using Linkar from agents and automation
Prefer the core helpers or the local API when an agent needs to inspect templates, run work, and read structured results.
Linkar is designed for two interfaces:
- a short CLI for humans
- a structured core, local API, and MCP bridge for machines and AI agents
If you are building an agent or automation layer, prefer the machine-facing interfaces over shell scraping.
## Best order of operations
1. Inspect available templates
2. Inspect project runs and outputs
3. Choose or resolve params explicitly
4. Trigger execution
5. Read back metadata and outputs from the structured result
## Core helper path
For Python-based agents, use the core helpers directly:
```python
from linkar.core import collect_run_outputs, inspect_run, list_templates, render_template, run_template

templates = list_templates(pack_refs=["./examples/packs/basic"])

result = run_template(
    "simple_echo",
    params={"name": "Agent"},
    pack_refs=["./examples/packs/basic"],
)
metadata = inspect_run(result["outdir"])

bundle = render_template(
    "simple_echo",
    params={"name": "Agent"},
    pack_refs=["./examples/packs/basic"],
    outdir="./simple_echo_bundle",
)
collect_run_outputs("./simple_echo_bundle")
```
This avoids terminal parsing and keeps the semantics aligned with the CLI.
## Local API path
If the agent is outside Python or needs process isolation, start the local API:
```shell
linkar serve --port 8000 --api-token local-dev:read,resolve,execute
```
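The `--api-token` value appears to pair a bearer token with a comma-separated scope list. A minimal sketch of parsing that `TOKEN:scope1,scope2` shape (the format is inferred from the example above, not from Linkar documentation):

```python
def parse_api_token(spec: str) -> tuple[str, list[str]]:
    """Split a TOKEN:scope1,scope2 spec into (token, scopes)."""
    token, _, scope_part = spec.partition(":")
    scopes = scope_part.split(",") if scope_part else []
    return token, scopes
```

For example, `parse_api_token("local-dev:read,resolve,execute")` yields the token `local-dev` with the three scopes used in the serve command.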
Start with discovery:
```shell
curl -s -H 'Authorization: Bearer local-dev' \
  "http://127.0.0.1:8000/v1"
curl -s -H 'Authorization: Bearer local-dev' \
  "http://127.0.0.1:8000/v1/schema"
curl -s -H 'Authorization: Bearer local-dev' \
  "http://127.0.0.1:8000/v1/templates?pack=./examples/packs/basic"
curl -s -H 'Authorization: Bearer local-dev' \
  "http://127.0.0.1:8000/v1/templates/simple_echo?pack=./examples/packs/basic"
```
Then resolve before running:
```shell
curl -s -X POST "http://127.0.0.1:8000/v1/templates/simple_echo:resolve" \
  -H 'Authorization: Bearer local-dev' \
  -H "Content-Type: application/json" \
  -d '{"pack_refs":["./examples/packs/basic"],"params":{"name":"Agent"}}'
```
When the response reports `"ready": true`, take the returned `resolve_token` and confirm the run:
```shell
curl -s -X POST "http://127.0.0.1:8000/v1/templates/simple_echo:run" \
  -H 'Authorization: Bearer local-dev' \
  -H "Content-Type: application/json" \
  -d '{"resolve_token":"TOKEN_FROM_RESOLVE","confirm":true}'
```
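The resolve-then-run handshake above can be wrapped in small helpers that build the two request bodies. A hedged sketch that uses only the fields shown in the curl examples (any other fields the server may accept are out of scope here):

```python
import json

def resolve_body(pack_refs: list[str], params: dict) -> str:
    """JSON body for POST /v1/templates/{template_id}:resolve."""
    return json.dumps({"pack_refs": pack_refs, "params": params})

def run_body(resolve_token: str) -> str:
    """JSON body for POST /v1/templates/{template_id}:run, confirming the resolved plan."""
    return json.dumps({"resolve_token": resolve_token, "confirm": True})
```

An agent posts `resolve_body(...)` first, reads the `resolve_token` out of a ready response, then posts `run_body(...)` with that token.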
If you want a staged bundle without execution:
```shell
curl -s -X POST "http://127.0.0.1:8000/v1/templates/simple_echo:render" \
  -H 'Authorization: Bearer local-dev' \
  -H "Content-Type: application/json" \
  -d '{"pack_refs":["./examples/packs/basic"],"params":{"name":"Agent"},"outdir":"./simple_echo_bundle"}'
```
If you run inside a real project instead of using only `pack_refs`, inspect the project and its recorded runs:
```shell
curl -s -H 'Authorization: Bearer local-dev' \
  "http://127.0.0.1:8000/v1/projects/current?project=./study"
curl -s -H 'Authorization: Bearer local-dev' \
  "http://127.0.0.1:8000/v1/projects/current/runs?project=./study"
curl -s -H 'Authorization: Bearer local-dev' \
  "http://127.0.0.1:8000/v1/runs/simple_echo_001/outputs?project=./study"
curl -s -H 'Authorization: Bearer local-dev' \
  "http://127.0.0.1:8000/v1/runs/simple_echo_001/status?project=./study"
```
Recommended local API surface:

- `GET /v1`
- `GET /v1/schema`
- `GET /v1/projects/current`
- `GET /v1/projects/current/runs`
- `GET /v1/templates`
- `GET /v1/templates/{template_id}`
- `POST /v1/templates/{template_id}:resolve`
- `POST /v1/templates/{template_id}:run`
- `POST /v1/templates/{template_id}:render`
- `POST /v1/templates/{template_id}:test`
- `GET /v1/runs/{run_ref}`
- `GET /v1/runs/{run_ref}/outputs`
- `GET /v1/runs/{run_ref}/status`
- `GET /v1/runs/{run_ref}/runtime`
Success responses use:

```json
{"ok": true, "data": {...}}
```

Errors use:

```json
{"ok": false, "error": {"code": "param_resolution_error", "message": "..."}}
```
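Because every response carries this ok/data/error envelope, an agent can unwrap it in one place. A minimal sketch (the choice of `RuntimeError` is mine, not part of Linkar):

```python
def unwrap(envelope: dict):
    """Return data from a success envelope; raise with the typed code on an error envelope."""
    if envelope.get("ok"):
        return envelope.get("data")
    error = envelope.get("error", {})
    code = error.get("code", "unknown_error")
    message = error.get("message", "")
    raise RuntimeError(f"{code}: {message}")
```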
The v1 additions that matter most for agents are:

- `kind` on major detail responses
- `items` and `count` on collection responses
- `param_provenance`, `warnings`, and `confirmation` on `:resolve`
- short-lived `resolve_token` support for `:run`
## Why this is better than shell scraping
- parameters are structured
- outputs are structured
- errors are typed JSON or typed Python exceptions
- the same runtime path is used by CLI, core, and API
- the run artifact still lives on disk in a normal directory
## Human fallback
The CLI is still the right interface for quick interactive work:
```shell
linkar run simple_echo --pack ./examples/packs/basic --param name=Human
```
But once an agent needs repeated inspection and execution, the core helpers or the local API are the cleaner path.
## MCP path for tool-oriented clients
If the client already speaks MCP, use Linkar’s stdio MCP server instead of wrapping the CLI.
Install the optional dependency:
```shell
pip install 'linkar[mcp]'
```
Then start the server:
```shell
linkar mcp serve
```

or:

```shell
linkar-mcp
```
For Codex, register it once in the shared Codex config:
```shell
codex mcp add linkar -- linkar mcp serve
```
If you are running from a local checkout instead of an installed CLI:
```shell
codex mcp add linkar \
  --env PYTHONPATH=/home/ckuo/github/linkar/src \
  -- python3 -m linkar.mcp_server
```
Then confirm the server is registered:
```shell
codex mcp list
codex mcp get linkar
```
After restarting the Codex session in VS Code, the agent can use the Linkar MCP tools directly.
The MCP tool surface mirrors the same high-value operations:
- `linkar_list_templates`
- `linkar_describe_template`
- `linkar_resolve`
- `linkar_run`
- `linkar_render`
- `linkar_collect`
- `linkar_test`
- `linkar_inspect_run`
- `linkar_get_run_outputs`
- `linkar_get_run_runtime`
This is the cleanest path for Codex-style clients because it exposes small, explicit tools instead of forcing shell parsing or a second wrapper layer over the HTTP API.
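Under the hood, MCP clients invoke these tools with JSON-RPC 2.0 `tools/call` requests over stdio. A sketch of building such a request (the method and params shape follow the MCP specification; the argument keys passed to the tool are illustrative assumptions, not Linkar's documented schema):

```python
import json

def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 tools/call request for an MCP stdio server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })
```

For example, `mcp_tool_call("linkar_list_templates", {"pack_refs": ["./examples/packs/basic"]})` is the kind of message a client like Codex sends on the server's stdin.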
## Pack-side discovery
In some environments the agent also needs help finding likely project paths, FASTQ runs, or local references before it can call Linkar.
That kind of facility-specific knowledge does not have to go into Linkar core. A site pack can carry a separate discovery layer instead.
A good split looks like:
- `templates/` for reusable workflows
- `functions/` for binding-time param resolution
- `discovery/` for read-only, site-specific inventory helpers
That lets an agent:
- discover likely project or dataset candidates from the pack
- choose the right one with the user
- use the Linkar API or MCP tools to resolve and run workflows
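A helper in that `discovery/` layer can stay read-only and independent of Linkar core. A hypothetical sketch of one such helper that inventories FASTQ run directories (the function name and file pattern are illustrative, not part of Linkar):

```python
from pathlib import Path

def find_fastq_run_dirs(root: str) -> list[Path]:
    """Read-only inventory: directories under root containing gzipped FASTQ files."""
    return sorted({path.parent for path in Path(root).rglob("*.fastq.gz")})
```

An agent can present these candidates to the user, then hand the chosen path to the Linkar API or MCP tools as a param.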