# Example: LangGraph Python

Durable backlog orchestration with human-in-the-loop approval interrupts — each graph step maps to a kanban-lite card.

The runnable app lives in `examples/langgraph-python/`. It uses a small LangGraph state machine to fetch backlog cards, generate proposed field changes, pause for human review, and then write only the approved updates back through the public kanban-lite REST API.
## What you will build

A Python backlog-review agent that follows this graph:

```
fetch_backlog -> propose_updates -> human_approval -> apply_updates -> END
                                          ^
                               interrupt() pauses here
```
The shipped example demonstrates:

- Typed LangGraph state shared across four nodes
- A human-approval checkpoint created with `interrupt()`
- Resume behavior via `Command(resume=...)` and a stable `THREAD_ID`
- A thin kanban-lite API client that reads tasks and applies partial updates
## Before you start

- Python 3.11 or later
- A local clone of this repository
- A running kanban-lite server, because the example talks to the live REST API rather than mocks
1. **Start kanban-lite in one terminal**

   ```bash
   cd /path/to/kanban-light
   kl serve
   ```

   The Python example expects the standalone server at `http://localhost:3000` by default. If you prefer, the alternative binary noted in the example comments is `kanban-md`.

2. **Create a virtual environment inside the example folder**

   ```bash
   cd /path/to/kanban-light/examples/langgraph-python
   python -m venv .venv
   source .venv/bin/activate
   ```

   On Windows, activate with `.venv\Scripts\activate`.

3. **Install the Python dependencies**

   ```bash
   pip install -r requirements.txt
   ```

   The shipped dependencies are intentionally small: `langgraph`, `python-dotenv`, and `requests`. You do not need an LLM provider key for the default rule-based proposal engine.

4. **Copy the environment template**

   ```bash
   cp .env.example .env
   ```

   The example reads `KANBAN_LITE_URL` and `THREAD_ID` from `.env`. Optional provider keys stay commented out unless you replace the default proposal function with an LLM-backed one.

5. **Run the graph**

   ```bash
   python main.py
   ```

   The entrypoint prints three phases: fetch/propose, human review, and apply. For a safe rehearsal that fetches tasks and generates proposals without prompting for approval or writing changes, run `python main.py --dry-run` instead.

6. **Approve, skip, or apply everything**

   When proposals exist, the terminal prompt accepts the same formats implemented in `main.py`:

   ```
   all      # approve every proposal
   none     # skip every proposal
   0, 2, 3  # approve specific proposal indices
            # blank input also skips everything
   ```

   After you respond, the graph resumes from the paused approval node and sends the approved field changes to kanban-lite.
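The prompt grammar above is small enough to sketch as a standalone parser. This is an illustrative reimplementation, not the shipped code; the function name `parse_approval` is invented:

```python
def parse_approval(raw: str, proposal_count: int) -> list[int]:
    """Turn operator input into a list of approved proposal indices.

    Mirrors the formats documented above: 'all', 'none', a comma-separated
    index list, or blank input (treated the same as 'none').
    """
    text = raw.strip().lower()
    if text == "all":
        return list(range(proposal_count))
    if text in ("", "none"):
        return []
    # Comma-separated indices; silently drop non-numeric and
    # out-of-range entries rather than raising.
    indices: list[int] = []
    for part in text.split(","):
        part = part.strip()
        if part.isdigit() and int(part) < proposal_count:
            indices.append(int(part))
    return indices
```

The shipped `main.py` may handle edge cases differently, but the accepted input shapes are the same.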
## Environment variables

| Variable | Default | How the example uses it |
|---|---|---|
| `KANBAN_LITE_URL` | `http://localhost:3000` | Base URL for the live kanban-lite server that serves `GET /api/tasks` and `PUT /api/tasks/{id}`. |
| `THREAD_ID` | `backlog-review-001` | Execution-session key passed into LangGraph config so pause/resume behavior can retrieve the same checkpoint. |
| `OPENAI_API_KEY` / `ANTHROPIC_API_KEY` | commented out | Unused by the shipped example. Only enable them if you replace the rule-based proposal engine described in the README. |
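As a rough sketch of how these knobs are typically consumed (the shipped example loads `.env` with `python-dotenv`; the fallbacks below mirror the defaults in the table):

```python
import os

# python-dotenv would normally populate os.environ from .env first:
#   from dotenv import load_dotenv; load_dotenv()
KANBAN_LITE_URL = os.environ.get("KANBAN_LITE_URL", "http://localhost:3000")
THREAD_ID = os.environ.get("THREAD_ID", "backlog-review-001")

# The thread id becomes the LangGraph checkpoint key for pause/resume:
config = {"configurable": {"thread_id": THREAD_ID}}
```

Keeping `THREAD_ID` stable across the fetch, review, and apply phases is what lets the graph find its paused checkpoint again.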
## What happens during a run?

The example is easiest to understand as a three-phase loop:

1. **Phase 1 — fetch and propose.** `fetch_backlog` calls the kanban-lite client to list tasks. `propose_updates` then applies the built-in rules from `graph.py`: urgent titles become high priority, and otherwise untriaged backlog items get promoted to medium priority.
2. **Phase 2 — interrupt for review.** `human_approval` calls `interrupt(payload)`. LangGraph pauses execution and exposes the payload through `graph.get_state(config)`, which `main.py` reads to print each proposal in the terminal.
3. **Phase 3 — resume and apply.** `main.py` parses your terminal input, resumes the graph with `Command(resume=approved_indices)`, and `apply_updates` writes only the approved field changes back to kanban-lite.
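The Phase 1 rule engine can be approximated in a few lines. This is an illustrative sketch, not the shipped code: it assumes task dicts with `id`, `title`, `priority`, and `status` fields, and the function name `propose_rule_based` is invented.

```python
def propose_rule_based(tasks: list[dict]) -> list[dict]:
    """Approximation of the built-in proposal rules described above."""
    proposals = []
    for task in tasks:
        title = task.get("title", "").lower()
        if "urgent" in title and task.get("priority") != "high":
            # Urgent titles become high priority.
            proposals.append({"task_id": task["id"], "changes": {"priority": "high"}})
        elif task.get("status") == "backlog" and not task.get("priority"):
            # Untriaged backlog items get promoted to medium priority.
            proposals.append({"task_id": task["id"], "changes": {"priority": "medium"}})
    return proposals
```

The real rules live in `graph.py`, where the proposal dicts also carry the task title so the review prompt can print something readable.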
## Key code

The graph is assembled in `graph.py` using a typed `StateGraph` with an `interrupt()` gate for human-in-the-loop approval. Here is the condensed pattern:
```python
# graph.py — StateGraph assembly with interrupt gate
from typing import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import END, StateGraph
from langgraph.types import interrupt


class BacklogState(TypedDict):
    tasks: list[dict]
    proposals: list[dict]
    approved_proposals: list[dict]
    applied_results: list[dict]


def human_approval(state: BacklogState) -> dict:
    """Pause for operator review via interrupt()."""
    approved_indices: list[int] = interrupt({
        "message": "Review proposals and reply with indices to apply",
        "proposals": state["proposals"],
    })
    approved = [state["proposals"][i] for i in approved_indices]
    return {"approved_proposals": approved}


def build_graph():
    # fetch_backlog, propose_updates, and apply_updates are defined
    # elsewhere in graph.py; only the assembly is shown here.
    builder = StateGraph(BacklogState)
    builder.add_node("fetch_backlog", fetch_backlog)
    builder.add_node("propose_updates", propose_updates)
    builder.add_node("human_approval", human_approval)
    builder.add_node("apply_updates", apply_updates)
    builder.set_entry_point("fetch_backlog")
    builder.add_edge("fetch_backlog", "propose_updates")
    builder.add_edge("propose_updates", "human_approval")
    builder.add_edge("human_approval", "apply_updates")
    builder.add_edge("apply_updates", END)
    return builder.compile(checkpointer=MemorySaver())
```
## Key files

| File | Why it matters |
|---|---|
| `examples/langgraph-python/main.py` | CLI-style entrypoint that runs the graph, prints proposals, accepts operator input, and resumes execution. |
| `examples/langgraph-python/graph.py` | Defines `BacklogState`, the four nodes, the rule-based proposal logic, and the compiled graph with its checkpointer. |
| `examples/langgraph-python/kanban_lite_client.py` | Small synchronous REST client that keeps the kanban-lite integration seam obvious and replaceable. |
| `examples/langgraph-python/.env.example` | Documents the runtime knobs the example actually supports today. |
| `examples/langgraph-python/README.md` | Companion source-of-truth reference with the same commands plus notes about LLM providers and persistent checkpoint storage. |
## Durable execution and interrupts

The example is durable in the LangGraph sense because the graph is compiled with a checkpointer and resumed by `THREAD_ID`. In the shipped files, `build_graph()` uses `MemorySaver`, which keeps checkpointed state in RAM for the life of the current process.
- Reusing the same `THREAD_ID` within the same process lets LangGraph resume the paused approval gate.
- Restarting the Python process resets the in-memory checkpoint store, so the default example starts clean again.
- The README includes the exact `SqliteSaver` swap if you want persistence across process restarts without changing the node logic.
The interrupt boundary itself lives in `human_approval()`. That node sends a structured payload containing proposal indices, task IDs, task titles, and field changes. When `main.py` resumes with approved indices, LangGraph returns that value from `interrupt()` and the same node produces `approved_proposals` for the final write step.
## kanban-lite integration seam
The example deliberately keeps kanban-lite integration boring and explicit. Everything goes through the public REST API, so you can inspect, replace, or extend the seam without touching LangGraph internals.
| API call | Used for | Where |
|---|---|---|
| `GET /api/tasks` | Load the current backlog from the default board. | `KanbanLiteClient.list_tasks()` → `fetch_backlog` |
| `PUT /api/tasks/{id}` | Apply the approved field updates with server-side partial-update semantics. | `KanbanLiteClient.update_task()` → `apply_updates` |
That means kanban-lite acts as the work queue and source of truth for task state, while LangGraph handles orchestration, pause/resume behavior, and operator review.
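The seam is thin enough to approximate with the standard library alone. This sketch is illustrative (the shipped client uses `requests`, and the exact signatures in `kanban_lite_client.py` may differ):

```python
import json
import urllib.request


class KanbanLiteClient:
    """Stdlib-only approximation of the example's synchronous REST client."""

    def __init__(self, base_url: str = "http://localhost:3000"):
        self.base_url = base_url.rstrip("/")

    def _endpoint(self, path: str) -> str:
        return f"{self.base_url}/api/{path}"

    def list_tasks(self) -> list[dict]:
        # GET /api/tasks (used by fetch_backlog)
        with urllib.request.urlopen(self._endpoint("tasks")) as resp:
            return json.loads(resp.read())

    def update_task(self, task_id: str, changes: dict) -> dict:
        # PUT /api/tasks/{id} with a partial body (used by apply_updates)
        req = urllib.request.Request(
            self._endpoint(f"tasks/{task_id}"),
            data=json.dumps(changes).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())
```

Because the whole seam is two methods over plain HTTP, swapping in retries, auth headers, or a different board API only touches this file.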
## Extension ideas

- Swap the rule-based `_generate_proposals()` function for an LLM-backed proposal step using the optional provider keys.
- Replace `MemorySaver` with the README’s `SqliteSaver` example to keep interrupts resumable across process restarts.
- Extend `kanban_lite_client.py` with a `board_id` parameter if you want the graph to target a non-default board.
- Propose more than priority changes — for example assignee, labels, or metadata — while keeping the same human approval gate.
## Related references

If you want to go deeper after the example is running, jump to the CLI, SDK, or examples hub next.