# Guides
Pick a starting point. Each guide is short and self-contained.
## If you're working on AI for your own work
You've got prompts, skills, and maybe a custom GPT or two. You want to make them work for your team.
- For operators → The pitch for people who've already built their own AI setup and want to deploy it.
- How to build an AI agent → A six-step walkthrough for turning a workflow you do every week into something your team can run.
- Turn a skill into a runbook → Already have a Claude skill or a custom GPT? The fastest path to making it reliable.
## If you lead a team that's trying to operationalize AI
You're past the experimentation phase and trying to figure out how to scale this responsibly.
- For teams → Visibility, security, and vendor portability for the AI workflows your team relies on.
- Model- and agent-agnostic → Why your runbooks should outlive any one vendor's pricing decision.
## If you're a builder
You're shipping AI features and looking for the right primitive.
- For builders → Reliable agents, durable execution, an OpenAI-compatible API, and no framework to stand up.
## Concepts
The thinking behind how Jetty works.
- What is a runbook? → The format. Skills + standards = runbooks.
- The folder is the agent → The thesis. Building on Kieran Klaassen's framing.
- Jetty vs LangGraph and Managed Agents → An honest comparison. When each is the right tool.
## Install & setup
Wire Jetty into your coding agent in five minutes.
- Install the agent skill → Claude Code, Codex, Gemini CLI, Cursor, VS Code, Windsurf, Zed, or any MCP-compatible client.
## Technical documentation
For the API reference, agent configuration, sandbox snapshots, and webhook payloads — everything beyond the conceptual material — see the developer docs.