Omium provides zero-code instrumentation for LangGraph. All `invoke()`, `ainvoke()`, `stream()`, and `astream()` calls are automatically traced.
## Quick Start

```python
import omium

# 1. Initialize Omium
omium.init(api_key="om_xxx")

# 2. Enable auto-instrumentation
omium.instrument_langgraph()

# 3. Your existing LangGraph code works unchanged
from langgraph.graph import StateGraph
from typing import TypedDict

class MyState(TypedDict):
    input: str
    output: str

def process_node(state: MyState) -> MyState:
    return {"output": f"Processed: {state['input']}"}

# Build the graph
graph = StateGraph(MyState)
graph.add_node("process", process_node)
graph.set_entry_point("process")
graph.set_finish_point("process")
app = graph.compile()

# This execution is automatically traced!
result = app.invoke({"input": "Hello, World!"})
```
View the trace at app.omium.ai.
## What Gets Captured

| Data | Description |
|---|---|
| Execution ID | Unique ID for each `invoke()`/`ainvoke()` call |
| Input | The input state passed to the graph |
| Output | The final state returned by the graph |
| Duration | Total execution time |
| Node Count | Number of nodes in the graph |
| Errors | Any exceptions raised during execution |
For streaming executions (`stream()`, `astream()`), chunk counts are also captured.
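As an illustration, a single captured execution could be represented as a record like the following. The field names here are hypothetical, chosen to mirror the table above; they are not Omium's actual schema.

```python
# Hypothetical shape of one captured execution record.
# Field names are illustrative only; check your Omium dashboard
# for the actual schema.
trace_record = {
    "execution_id": "exec_7f3a9c",        # unique per invoke()/ainvoke() call
    "input": {"input": "Hello, World!"},  # input state passed to the graph
    "output": {"output": "Processed: Hello, World!"},  # final state
    "duration_ms": 42.7,                  # total execution time
    "node_count": 1,                      # nodes in the graph
    "error": None,                        # exception info, if any
}
```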
## Instrumentation Methods

### instrument_langgraph()

Enable auto-instrumentation. Call once at application startup.

```python
import omium

omium.init(api_key="om_xxx")
omium.instrument_langgraph()

# All subsequent LangGraph executions are traced
```

### uninstrument_langgraph()

Disable instrumentation (restores original methods).

```python
import omium

omium.uninstrument_langgraph()
```
## Async Support

Works with both sync and async execution:

```python
import asyncio
import omium

omium.init(api_key="om_xxx")
omium.instrument_langgraph()

# `app` is the compiled graph from Quick Start

async def main():
    # Async invoke - automatically traced
    result = await app.ainvoke({"input": "Hello"})

    # Async streaming - automatically traced
    async for chunk in app.astream({"input": "Hello"}):
        print(chunk)

asyncio.run(main())
```
## Streaming

Streaming executions are also traced:

```python
import omium

omium.init(api_key="om_xxx")
omium.instrument_langgraph()

# `app` is the compiled graph from Quick Start

# Sync streaming
for chunk in app.stream({"input": "Hello"}):
    print(chunk)

# Async streaming (inside an async function)
async def stream_async():
    async for chunk in app.astream({"input": "Hello"}):
        print(chunk)
```
Omium captures:

- Total chunks streamed
- Progress events every 10 chunks
- Any errors raised during streaming
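The chunk accounting above can be pictured as a thin wrapper around the stream. A sketch of the general technique, not Omium's code; the `emit_every=10` interval mirrors the progress events described above:

```python
def traced_stream(chunks, emit_every=10):
    """Yield chunks unchanged while counting them,
    emitting a progress event every `emit_every` chunks."""
    count = 0
    try:
        for chunk in chunks:
            count += 1
            if count % emit_every == 0:
                print(f"progress: {count} chunks streamed")
            yield chunk
    except Exception as exc:
        print(f"error after {count} chunks: {exc}")
        raise
    finally:
        print(f"total chunks streamed: {count}")

# Example: wrap any iterable of chunks
for chunk in traced_stream(range(25)):
    pass
```

Because the wrapper just re-yields each chunk, the consuming code sees exactly the same stream it would without tracing.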
## Configuration Options

```python
import omium
from omium import OmiumConfig

config = OmiumConfig(
    api_key="om_xxx",
    project="my-langgraph-app",  # Optional: group executions by project
    auto_trace=True,             # Enable auto-tracing (default: True)
    auto_checkpoint=True,        # Enable auto-checkpointing (default: True)
)

omium.configure(config)
omium.instrument_langgraph()
```
## Using with Callbacks

You can also use the callback handler for more control:

```python
from omium import OmiumCallbackHandler

handler = OmiumCallbackHandler()

# Pass the callback to LangGraph
result = app.invoke(
    {"input": "Hello"},
    config={"callbacks": [handler]},
)
```
## Example: Multi-Node Graph

```python
import omium
from langgraph.graph import StateGraph
from typing import TypedDict

omium.init(api_key="om_xxx")
omium.instrument_langgraph()

class AgentState(TypedDict):
    messages: list
    next_step: str

def agent_node(state):
    # Your agent logic
    return {"messages": state["messages"] + ["Agent thinking..."]}

def tool_node(state):
    # Your tool logic
    return {"messages": state["messages"] + ["Tool executed"]}

def should_continue(state):
    return "tool" if "call_tool" in state.get("next_step", "") else "end"

# Build the multi-node graph
graph = StateGraph(AgentState)
graph.add_node("agent", agent_node)
graph.add_node("tool", tool_node)
graph.add_conditional_edges("agent", should_continue, {"tool": "tool", "end": "__end__"})
graph.add_edge("tool", "agent")
graph.set_entry_point("agent")
app = graph.compile()

# All node transitions are automatically traced
result = app.invoke({"messages": [], "next_step": ""})
```
## Troubleshooting

### ImportError: LangGraph is not installed

Install LangGraph in the environment where Omium runs:

```shell
pip install langgraph
```

### Traces not appearing

- Verify `omium.init()` was called before `instrument_langgraph()`
- Check that your API key is valid
- Ensure you have network access to api.omium.ai

### Disable tracing temporarily

```python
omium.uninstrument_langgraph()

# Run without tracing
result = app.invoke(input)

# Re-enable
omium.instrument_langgraph()
```
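If you toggle tracing often, the disable/re-enable dance can be wrapped in a context manager so tracing is guaranteed to come back on even if the wrapped code raises. A generic sketch of the pattern; the disable/enable callables stand in for `omium.uninstrument_langgraph` and `omium.instrument_langgraph`:

```python
from contextlib import contextmanager

@contextmanager
def tracing_disabled(disable, enable):
    """Temporarily disable tracing, guaranteeing re-enable on exit."""
    disable()
    try:
        yield
    finally:
        enable()

# Usage with stand-in callables; with Omium you would pass
# omium.uninstrument_langgraph and omium.instrument_langgraph.
events = []
with tracing_disabled(lambda: events.append("off"), lambda: events.append("on")):
    events.append("work")
```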
## Next Steps