Everything you need to run production agents

Omium gives you execution history, traces, checkpoints, and replay so your LangGraph, CrewAI, or custom Python workflows stay debuggable when things go wrong. Use the Python SDK for one-line instrumentation, the CLI for day-two operations, or the HTTP API from any stack.

Instrument your stack

Pick the tab that matches how you run agents today. The patterns below are copy-paste friendly; swap in your own graph or crew afterwards.
import omium

omium.init(api_key="omium_xxx")  # or omium.init() after `omium init`
omium.instrument_langgraph()

from langgraph.graph import StateGraph

# Your graph compiles and runs unchanged; Omium records traces and checkpoints.
graph = StateGraph(dict)
# … add nodes and edges …
app = graph.compile()
result = app.invoke({"input": "Hello"})
Get an API key from the dashboard. Prefer omium init so credentials land in ~/.omium/config.json and stay out of source control.

Choose how you build

Python SDK

Auto-instrument LangGraph and CrewAI, or use @omium.trace and @omium.checkpoint on your own code.
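To see what decorator-based instrumentation buys you, here is a local sketch in plain Python (this is illustrative, not the Omium SDK): a wrapper records each call's name, inputs, result, and duration, which is roughly the kind of data @omium.trace captures on your behalf.

```python
import functools
import time

TRACE_LOG = []  # in-memory stand-in for the trace store the platform provides

def trace(fn):
    """Record inputs, output, and duration of each call (illustrative only)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACE_LOG.append({
            "step": fn.__name__,
            "args": args,
            "result": result,
            "seconds": time.perf_counter() - start,
        })
        return result
    return wrapper

# Two hypothetical agent steps, traced without any logging glue inside them.
@trace
def plan(task):
    return f"plan: {task}"

@trace
def execute(plan_text):
    return f"done ({plan_text})"

execute(plan("summarise the report"))
print([entry["step"] for entry in TRACE_LOG])  # → ['plan', 'execute']
```

With the real SDK the decorators report to the platform instead of a local list, but the shape of what gets recorded per step is the same idea.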

CLI

Configure auth, run scripts with tracing, list executions, stream logs, and push projects to Automations.

REST API

List runs, manage checkpoints, and hook Omium into services that are not Python-first.
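The wire format is ordinary bearer auth over JSON, so any HTTP client works. As a sketch in Python's standard library (the base URL and the /executions path here are placeholders; check the API reference for the real resource names), a list-runs request looks like:

```python
import json
import urllib.request

API_BASE = "https://api.omium.example/v1"  # placeholder; see the API reference
API_KEY = "omium_xxx"

# Build a request to list recent executions (endpoint path is illustrative).
req = urllib.request.Request(
    f"{API_BASE}/executions?limit=20",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Accept": "application/json",
    },
)
# With a real base URL and key, send it and parse the JSON body:
# runs = json.load(urllib.request.urlopen(req))
print(req.get_header("Authorization"))  # → Bearer omium_xxx
```

The same header-plus-JSON pattern applies from Go, TypeScript, or anything else that can speak HTTP.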

Dashboard

Watch runs, open traces, and replay from checkpoints in the browser.

From idea to production

Follow the path in order or jump to what you need.
Stage       | Goal                                      | Start here
Get started | Install, authenticate, run something once | Installation, Configure, Quickstart
Build       | Deep integration and behaviour            | Platform capabilities, LangGraph, CrewAI
Operate     | Projects in prod, cost, and keys          | First project, Automations, API keys & billing
Automate    | HTTP for your own control plane           | API overview and the reference pages

Why teams use Omium

Automatic tracing

See each step without hand-written logging glue.

Checkpointing & replay

Resume after failures instead of restarting long runs from scratch.
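The idea can be illustrated with a file-based sketch (plain Python, not the Omium implementation): persist state after every completed step, and on restart skip straight to the first unfinished one instead of redoing work.

```python
import json
from pathlib import Path

CHECKPOINT = Path("run_state.json")  # stand-in for the platform's checkpoint store

def load_state():
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())
    return {"completed": [], "data": {}}

def save_state(state):
    CHECKPOINT.write_text(json.dumps(state))

def run(steps):
    state = load_state()
    for name, fn in steps:
        if name in state["completed"]:
            continue  # finished in a previous run; skip on resume
        state["data"][name] = fn(state["data"])
        state["completed"].append(name)
        save_state(state)  # a crash after this line loses no completed work
    return state

# Hypothetical two-step pipeline; a crash between steps resumes at "summarise".
steps = [
    ("fetch", lambda data: "raw"),
    ("summarise", lambda data: f"summary of {data['fetch']}"),
]
final = run(steps)
print(final["completed"])  # → ['fetch', 'summarise']
```

Omium does the equivalent for your graph or crew automatically, with the checkpoints stored server-side so replay works from the dashboard or API, not just the original machine.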

Framework-friendly

Keep your existing LangGraph or CrewAI structure; add a couple of lines at startup.

Production dashboard

One place for executions, traces, and recovery actions.

How it fits together

The SDK talks to Omium while your agents run. The platform stores traces and checkpoints so the dashboard and API stay useful after the process exits.

  1. Make your first traced run from your laptop.
  2. Read Platform capabilities so you know which knobs exist.
  3. Wire your real framework: LangGraph or CrewAI.
  4. Push a project when you want it on Automations.

Keep learning

Examples

Patterns and sample flows as we publish them.

Release notes

What changed in the platform and docs.

Open source

Report issues and follow development.

REST API

Base URL, auth, and links to every resource.