Your AI workflows should outlive any one vendor.
Anthropic is winning today. Six months ago it was OpenAI. Next quarter, it's anyone's guess. If you've built something real on an AI, the most important question isn't which model to pick — it's how to stay portable when the answer changes.
What the last eighteen months have looked like
In the last eighteen months, the leading model has changed four times. Prices have dropped by an order of magnitude. Agents that didn't exist a year ago — Claude Code, Codex, Gemini CLI, Cursor agent, Goose, Antigravity — are now standard tools. A model that was state-of-the-art in August was a midcard by February.
If you built a workflow against a specific model's quirks, you've rewritten it. Probably twice. If you wrote a Python client hardcoded to one provider's SDK, you're maintaining glue code you didn't plan for.
This is the pattern. The AI landscape hasn't settled, and it's going to keep moving. The question isn't whether it'll move. The question is what you want to be rewriting when it does.
The part that shouldn't change
The work you want done doesn't change when a new model ships.
- “Review this content against our brand voice guide” is the same task whether the AI behind it is Claude, GPT, or whatever comes next.
- “Extract these fields from this PDF and verify them” doesn't depend on the provider.
- “Run a code review and block the merge if quality drops” is model-agnostic by definition.
Your runbook — the instructions, the standards, the checks — is the part that captures your institutional knowledge. It's what you've invested time in. It's what your team has refined. It should be the constant. The model underneath should be the variable.
Most AI platforms get this backwards. They wrap their API around a specific model family, and when you want to switch you rewrite. They treat the model as permanent and your workflow as the replaceable part.
How Jetty inverts it
Jetty is built so the runbook is the thing you own. The model and the agent are configuration.
One API. 100+ providers.
Jetty speaks the OpenAI Chat Completions protocol. Your SDKs work unchanged. To switch from Claude to GPT to Gemini, you change one field: the model parameter. That's it. The runbook stays the same. The workspace stays the same. The trajectory format stays the same.
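What "one field" means at the wire level, as a stdlib-only sketch of the Chat Completions protocol. The host, key, and model names below are illustrative, not Jetty's actual values:

```python
import json
import urllib.request

# Hypothetical Jetty-compatible endpoint; substitute your real host and key.
JETTY_URL = "https://jetty.example.com/v1/chat/completions"

def build_request(model: str, content: str) -> urllib.request.Request:
    # The request body is standard Chat Completions JSON. The runbook text
    # (here, the system message) and everything else stay identical across
    # providers; only the "model" field changes.
    body = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Review this content against our brand voice guide."},
            {"role": "user", "content": content},
        ],
    }
    return urllib.request.Request(
        JETTY_URL,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer <key>"},
    )

# Swapping providers is one string:
req_claude = build_request("claude-sonnet-4", "draft.md contents")
req_gpt = build_request("gpt-4o", "draft.md contents")
```

In practice you'd use whatever OpenAI-compatible SDK you already have and point its base URL at Jetty; the sketch just makes the contract visible.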
Agent-agnostic sandbox.
The sandbox runs whichever agent CLI you specify. Claude Code, Codex, and Gemini CLI are first-class today. The pattern is extensible — any tool that takes a set of instructions and produces files can be a Jetty agent. When a better agent ships next quarter, you switch to it with a config change.
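The contract is deliberately thin: instructions in, files out. A minimal sketch of that pattern, assuming nothing about Jetty's internals; `run_agent` and its signature are hypothetical, and the agent is just a command list you can swap:

```python
import pathlib
import subprocess

def run_agent(agent_cmd: list, instructions: str, workspace) -> list:
    """Run any agent CLI against a workspace: instructions in, files out."""
    workspace = pathlib.Path(workspace)
    prompt = workspace / "INSTRUCTIONS.md"
    prompt.write_text(instructions)
    before = set(workspace.iterdir())
    # The agent is only a command. Switching from one agent CLI to another
    # means changing this list -- the surrounding pipeline doesn't move.
    subprocess.run(agent_cmd + [str(prompt)], cwd=workspace, check=True)
    # Report what the agent produced.
    return sorted(set(workspace.iterdir()) - before)
```

Any tool that fits this shape (Claude Code, Codex, Gemini CLI, or next quarter's agent) slots into the same call.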
The runbook is markdown.
It's plain text. Not a YAML graph you'd have to rewrite for a different runtime. Not a Python class coupled to one vendor's SDK. Markdown works with every model on the market today and every model on the market tomorrow. It'll still be readable in five years when half the current vendors don't exist.
A/B testing on your real work.
Because every run produces a trajectory — inputs, outputs, cost, tokens, timing — you can run the same runbook through three models and compare them on your actual work. Not on a benchmark someone else wrote. On the thing you care about. When the pricing curve shifts, you move.
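A sketch of what that comparison looks like once trajectories are data. The field names and model names here are illustrative, not Jetty's trajectory schema:

```python
from dataclasses import dataclass

@dataclass
class Trajectory:
    # Mirrors what a run captures: model, cost, tokens, timing, and
    # whether the runbook's own checks passed. Names are illustrative.
    model: str
    cost_usd: float
    tokens: int
    seconds: float
    passed_checks: bool

def cheapest_passing(runs: list) -> Trajectory:
    """Pick the cheapest model among those that passed the runbook's checks."""
    passing = [r for r in runs if r.passed_checks]
    return min(passing, key=lambda r: r.cost_usd)

# Same runbook, three models, your real work -- not someone else's benchmark.
runs = [
    Trajectory("claude-sonnet-4", 0.42, 18_000, 31.0, True),
    Trajectory("gpt-4o", 0.18, 15_500, 24.0, True),
    Trajectory("gemini-2.0-flash", 0.05, 16_200, 19.0, False),
]
best = cheapest_passing(runs)  # gpt-4o: cheapest of the models that passed
```

The decision rule is yours; the point is that the trajectory data makes it a query, not a migration.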
What this changes, for two different readers
If you're the team lead
The person asking “which AI vendor do we bet on” is asking the wrong question. You don't want to bet on a vendor. You want to bet on a format that survives whichever vendor wins.
Your runbook library is an asset. Your team's time building it is an investment. Both should be protected from a single-vendor pricing change, a model deprecation, or a shift in the competitive landscape. That's what the markdown-as-contract gives you.
When Anthropic raises prices or OpenAI ships something twice as good, you're not rewriting. You're changing a setting. Read more about how this plays out in practice on the for-teams page.
If you're the builder
You already know the cost of provider lock-in. The wrapper SDK you wrote for Claude-specific tool calling. The prompts calibrated for one model's quirks. The Temporal worker that knew too much about which agent it was running.
Jetty lets you keep one runbook, one sandbox abstraction, one trajectory format, and vary the model and agent orthogonally. That's a cleaner architecture to reason about, maintain, and evolve. Read more about the technical shape on the for-builders page.
The honest caveats
Model-agnostic doesn't mean model-identical. Different models have different strengths. Claude handles long context differently than GPT. Gemini's tool-calling is its own thing. A runbook that runs on all three might score slightly differently on each. That's why Jetty captures every trajectory — so you can see the differences on your real work and decide which model is worth it for which runbook.
Agent-agnostic doesn't mean agent-interchangeable. Claude Code and Codex behave differently. What Jetty gives you is the ability to try them side by side without rebuilding your pipeline. The decision about which agent to use for which runbook is still yours.
And there are cases where you'll want to pick a specific model on purpose — latency, regional availability, fine-tuning, specific tool support. Jetty doesn't force you off that. It just makes sure you can move off it when you want to.
Ready to see it?
Bring a workflow where you've felt vendor lock-in. We'll show you what it looks like on Jetty — and how to swap the model when you're ready.
The docs cover the /v1/chat/completions endpoint, the runbook format, and the full agent configuration.
Related reading
- What is a runbook? — The thing that stays constant.
- The folder is the agent — Why this works: markdown outlives runtimes.
- For teams — Governance and portability from a team lead's point of view.
- For builders — The technical details.