✦ ContextPack · JudgeLoop · Promotion by Evidence

Turn coding agents into governed engineering runs.

Salacia is the operating harness for AI coding agents: it builds the right context, enforces boundaries, verifies the outcome, and only then promotes the patch.

Open source · CLI-first · Eval-native · Release gates · MCP-ready
$ npx salacia init

Claude Code · Codex · Cursor · Cline · OpenCode · Antigravity

Run shape
📋
Program

Goal, mutable surface, verification, promotion policy.

🗺️
ContextPack

Repo map, working set, history, guardrails.

⚖️
Judge

Accept, reject, or block with traceable evidence.

accept · reject · blocked

What the runtime actually emits.

Not a hand-drawn mock. A real trace from a blueprint-backed run.

[Screenshot: a real Salacia trace, showing the judge report from an accepted run]
ContextPack
repo map · working set · history · guardrails
JudgeLoop
accept · reject · blocked
Evidence
runtime, eval, and release gate speak one language

Agents are impressive. Runs are still fragile.

Salacia turns a raw agent loop into an engineering system: bounded context, explicit policy, hard verification, and a real promotion decision.

Raw agent run

  • 🔍 The agent re-discovers the repository from scratch on every run
  • 🔄 Execution and judgment collapse into one opaque loop
  • A generated patch can quietly become a promoted patch
  • 🧾 Product runs and benchmark runs produce incompatible evidence

Salacia harnessed run

  • 🗺️ ContextPack gives the agent a bounded, high-signal map
  • ⚖️ JudgeLoop turns verification and policy into a hard verdict
  • 🧪 Promotion is explicit, not implied
  • 📦 Runtime, eval, and release share one evidence model

One project, three ways to use it.

This is where Salacia stops looking like a demo and starts looking like infrastructure.

Drop it in front of the agent you already use.

Stop every run from starting at zero.

Use Salacia when the model is smart enough to solve the task but still loads the wrong files, changes too much, or ships changes without a clear verdict.

  • 🧭 Bounded context instead of a giant prompt
  • 🧪 Verification-backed promotion
  • 📚 Traceable evidence after every run

Use Salacia as the execution layer.

Put a runtime underneath your coding agent.

If you are building an agent, IDE bridge, or internal assistant, Salacia gives you a control plane, a context plane, and a judge loop without forcing you to replace your agent model.

  • 📦 `program.md` + blueprint as a control plane
  • 🔌 CLI, MCP, and release-gate surfaces
  • ⚖️ Judge reports you can consume programmatically
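The judge-report schema is not documented on this page, so purely as an illustration: assuming the judge emits a JSON report with a `verdict` field (`accept` / `reject` / `blocked`) and a list of `evidence` entries, a downstream tool might gate promotion like this. Every field name below is a hypothetical, not the real Salacia format.

```python
import json

# Hypothetical judge report; the real Salacia schema may differ.
SAMPLE_REPORT = """
{
  "verdict": "accept",
  "evidence": [
    {"kind": "verification", "detail": "test suite passed"},
    {"kind": "policy", "detail": "mutable surface respected"}
  ]
}
"""

def should_promote(report_json: str) -> bool:
    """Promote only on an explicit 'accept' verdict; 'reject' and
    'blocked' both keep the patch out of the release channel."""
    report = json.loads(report_json)
    return report.get("verdict") == "accept"

print(should_promote(SAMPLE_REPORT))  # an accepted run promotes
```

The point of the sketch is the shape of the decision: promotion is a boolean derived from an explicit verdict, never implied by the patch merely existing.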

Turn agent runs into team workflows.

Bring eval and release policy closer to production.

Salacia is useful when your problem is no longer “can the model code,” but “how do we make these runs reviewable, comparable, and safe to promote.”

  • 📈 Runtime, eval, and release share one evidence model
  • 🛡️ Mutable surface and protected paths are explicit
  • 🚦 Accept / reject / blocked is a policy output, not a vibe

From repository to promoted patch.

One control file. One bounded run. One verdict.

1

Write `program.md`

Declare the goal, mutable surface, verification, and promotion policy in a form the runtime can enforce.

npx salacia init
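The exact `program.md` schema is not shown on this page, so the following is only an illustrative sketch of the four declarations this step names. Every heading and value below is hypothetical.

```markdown
## Goal
Fix the flaky retry logic in the upload client.

## Mutable surface
src/upload/** may change; src/api/** and all lockfiles are protected.

## Verification
The test suite and linter must pass in the isolated workspace.

## Promotion
Promote only on an accept verdict with verification evidence attached.
```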

2

Build context

Compile the program into a blueprint and build a bounded ContextPack instead of dumping the whole repo into the prompt.

salacia design

3

Run the agent

Dispatch the agent in an isolated workspace with the generated context, budget, and guardrails.

salacia run

4

Judge and inspect

Accept, reject, or block the patch, then inspect the full trace and evidence behind that decision.

salacia judge && salacia trace
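In CI, the verdict can gate a release step. This sketch assumes `salacia judge` exits non-zero on a reject or blocked verdict, which is not stated on this page; the workflow shape is otherwise ordinary GitHub Actions, and the install step is a guess.

```yaml
# Hypothetical CI gate: the release step runs only after an accepting verdict.
jobs:
  governed-run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npx salacia design   # compile program.md into a blueprint + ContextPack
      - run: npx salacia run      # dispatch the agent in an isolated workspace
      - run: npx salacia judge    # assumed to fail the job on reject / blocked
      - run: npx salacia trace    # attach the evidence trail to the job log
```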

Smarter agents help.
Governed runs scale.

Open source. Apache 2.0. Built for teams that want coding agents to behave like engineering systems, not demos.

⭐ Star on GitHub · 📦 View on npm