v1.0 · shipping to enterprise pilots now

Agentic coding that stays inside your perimeter.

Taicli is the workspace for long-running coding agents that your security team can actually approve. Run it fully on hardware you already own, or connect to any major AI provider through the same interface. Agents draft overnight, hand work off to each other, and pause at every destructive step for a human to sign off.

On-prem · your network
Switch providers in 1 click
Agents with approval gates
sess · taicli-7f3a
taicli@workstation ctx · payments-svc
LIVE · MIRRORED FROM CLI
AC RK JL +4
Drafting idempotency-key handler
3 reviewers attached
MODEL claude-sonnet-4.7
CONTEXT payments-svc/ openapi.yaml
TOKENS
handlers/idempotency.ts · proposed
// reject replays
- const key = req.headers['Idempotency-Key']
+ const key = getIdempotencyKey(req)
+ if (!key) return badRequest(res, 'missing')
+ const cached = await store.get(key)
tests 12/12 approved · RK queued · staging
Deployed inside platform & infrastructure teams at
Northwind Labs
Meridian
<kernel/>
Ostraca
Verdant
halide.io

Every provider. One surface. Zero lock-in.

Run fully on-premises — your code never leaves your network, no SaaS account, no telemetry. Or connect to any major provider through the same interface. When a provider has an outage, raises prices, or deprecates a model mid-project, your team keeps working.

Anthropic

Claude
Sonnet & Opus

Long-context refactors, design docs, senior-level code review. Default for architecture work.

200k ctx · tools
OpenAI

Enterprise
frontier models

Step-by-step reasoning, test-case generation, and the broadest tool ecosystem for automation.

fn calling · vision
Google

Long-context
& multimodal

Cost-efficient batch work, long-document ingest, and multi-modal inputs for spec & diagram review.

2M ctx · multimodal
Local / Self-hosted

On-prem
open-weight models

Run everything inside your perimeter. Route regulated traffic to models that physically never leave your network.

air-gapped · your hardware

One session.
Many models. Routed by the work.

Every turn in a Taicli session goes to the model best suited to the work: complex refactors to Opus, boilerplate to Haiku, anything touching sensitive files to your local stack. Rules you can read, budgets you can set, overrides when you need them. Teams typically see 40–60% lower model spend without giving up quality on the turns that matter.

taicli.policy.yaml · routing rules 5 active · v12
if task = refactor && files > 20 → claude-opus
if task = architecture || design-doc → claude-opus
if context > 120k tokens → claude-sonnet-4.7
if files.tagged("sensitive") → local / llama-70b
if task = rename || lint-fix || typo → claude-haiku
else default → claude-sonnet-4.7
session · taicli-7f3a · payments-svc 6 turns · 14 min
01 Rewrite 34 handlers to new retry interface multi-file refactor · 34 files opus
02 Explain why the idempotency test is flaky reasoning · default sonnet
03 Rename userId → customer_id across repo mechanical · no reasoning needed haiku
04 Audit PII handling in auth/ module files tagged sensitive · stays on-prem local
05 Design doc for new billing ledger architecture · long horizon opus
06 Fix 7 eslint errors in checkout/ lint-fix haiku
session spend · $4.18 single-model baseline · $11.07 ↓ 62% saved
Overrides are first-class. Pin a model per project, per file pattern, or per turn. Run taicli --model opus when you want the heavy one; set policy.default: local when your repo must never leave the building. The router is a default, never a cage.
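The rules above behave like a first-match policy: the router walks the list top to bottom and the first matching rule picks the model. A rough sketch of that behavior, illustrative only; `Task` and its fields are assumed shapes, not Taicli's API:

```typescript
// Illustrative first-match router over the policy rules shown above.
// Task is an assumed shape; model names mirror the policy file.
type Task = {
  kind: string;           // e.g. "refactor", "design-doc", "lint-fix"
  fileCount: number;
  contextTokens: number;
  sensitive: boolean;     // true if any touched file is tagged "sensitive"
};

function routeModel(t: Task): string {
  if (t.kind === "refactor" && t.fileCount > 20) return "claude-opus";
  if (t.kind === "architecture" || t.kind === "design-doc") return "claude-opus";
  if (t.contextTokens > 120_000) return "claude-sonnet-4.7";
  if (t.sensitive) return "local/llama-70b";
  if (["rename", "lint-fix", "typo"].includes(t.kind)) return "claude-haiku";
  return "claude-sonnet-4.7"; // default
}
```

First match wins, so rule order matters: a 34-file refactor routes to Opus via the first rule even when it would also satisfy a later one.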

Type in the terminal.
Review in the browser.
Same brain.

Engineers live in the CLI. Staff engineers, TPMs, and security reviewers live in the browser. Taicli gives them the same session with the same context — edits in one show up live in the other.

  • Shared context window. Files, env, prior turns — mirrored across CLI and Web in real time.
  • Inline approval. Gate destructive actions behind a second human reviewer without breaking flow.
  • Deep-link replay. Every prompt, diff, and decision is addressable. Paste a URL into a PR and reviewers see the trail.
$ TERMINAL · ANDREA
taicli draft "rate limit middleware"
thinking · claude-sonnet-4.7
✓ wrote middleware/rateLimit.ts (+42)
✓ wrote middleware/rateLimit.test.ts (+68)
taicli share --with reviewers
→ link: taicli.app/s/7f3a
WEB · RAHUL joined
Review: rate limit middleware
Andrea · just now · 2 files
middleware/rateLimit.ts
+ export const rateLimit = (opts) => {
+   const buckets = new Map()
+   return async (req, res, next) => {

Many hands.
One codebase.
No surprises.

When several developers — and the long-running agents they dispatch — operate on the same repository at once, the usual outcome is conflicting edits and lost work. Taicli serializes the writes, attributes every action to the engineer (or agent) who triggered it, and keeps the full history queryable long after the session ends.

  • Strict work queue. No overwrites, no race conditions, no surprises. The queue is visible to every teammate in real time.
  • Per-developer attribution. Every prompt, decision, and tool execution is signed to the engineer who initiated it.
  • Audit-grade history. Every session signed, searchable, exportable — with full provenance from day one.
payments-svc · live sessions
last 60 min
AC Andrea C.
draft · idempotency
RK Rahul K.
review · 2 files
JL Jade L.
tests
openapi.yaml
MP Mira P.
bench · on-prem
● DRAFT ● REVIEW ● TEST ● BENCH
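The strict work queue above amounts to serializing writes and recording who triggered each one. A minimal sketch with assumed types, not Taicli's real implementation:

```typescript
// Illustrative serialized write queue with per-author attribution.
// Write and the history shape are assumptions for this sketch.
type Write = { author: string; apply: () => Promise<void> };

class WriteQueue {
  private tail: Promise<void> = Promise.resolve();
  readonly history: { author: string; at: number }[] = [];

  // Chain each write onto the previous one so edits apply strictly in order,
  // and record who triggered it once it lands.
  enqueue(w: Write): Promise<void> {
    this.tail = this.tail.then(async () => {
      await w.apply();
      this.history.push({ author: w.author, at: Date.now() });
    });
    return this.tail;
  }
}
```

Because every write chains onto the tail, two developers (or a developer and an overnight agent) can submit concurrently without clobbering each other, and the history answers "who did this?" afterwards.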

Where the code
and the conversation
live together.

Every Taicli project has its own threaded chat — so the debate about a design, the suggestion that sparked it, and the diff it produced all stay in one place. The AI doesn’t arrive at each thread empty-handed; it already knows the history of the project it’s being asked about.

  • Per-project channels. Chat, threads, mentions, and reactions scoped to the repo — not another org-wide firehose.
  • Paste a session, not a screenshot. Drop a Taicli session link and teammates see the prompt, diff, and model — inline, replayable.
  • The AI is a teammate. @taicli in any thread to draft, explain, or review — with the channel’s context already attached.
# payments-svc · 24 members
AC RK JL +5
AC
Andrea · 10:24
Shipped the idempotency handler draft — anyone got a minute to look?
taicli session · handlers/idempotency.ts
claude-sonnet-4.7 · +38 / −2 · tests 12/12
open
RK
Rahul · 10:26
Looks good — nit: should getIdempotencyKey lowercase the header name?
👀 2 · ✅ 1
taicli AI · 10:27
Good catch. HTTP header names are case-insensitive per RFC 7230 — normalizing to lowercase and adding tests for IDEMPOTENCY-KEY, Idempotency-Key, and idempotency-key.
JL Jade is typing…
@taicli rerun tests and summarize failures

Run Taicli
as a shared service
for the whole team.

Point every laptop, CI job, and background agent at one Taicli server. Centralize model credentials, context caches, and policy — so the person who onboards on Friday has the same environment as the one who joined last year, and every agent inherits the same governance.

  • Central credentials. Rotate an API key once; every engineer and CI job picks it up immediately.
  • Multi-project isolation. One server, many projects, each with its own context, policies, and budget.
  • Warm context pools. Shared embedding and file caches serve warm prompts up to 3× faster.
Deployment guide Reference architecture
taicli server eu-prod-01 ● 14 sessions
AC Andrea · CLI
payments-svc
sonnet-4.7
RK Rahul · Web
payments-svc
reviewing
JL Jade · CLI
billing-ui
via terminal
CI agents · CI / nightly
search-svc
background
$ taicli serve --policy ./taicli.policy.yaml
→ listening on :4177 · mTLS required · 3 projects loaded

Planned upgrades leave no scar.

Restarts are invisible to your developers. Conversations, pending work, and connected sessions survive any upgrade — no lost context, no interrupted workflow. Your finance team gets predictability; your engineers get warnings before they overspend.

01 · Continuity

Sessions survive
every restart

Planned upgrades and unexpected restarts leave no scar. Browsers reconnect, in-flight conversations resume mid-sentence, pending work stays queued. What looks routine is an engineering choice your developers never have to notice.

02 · Observability

Live insight
into every session

Connect to the monitoring stack your operations team already uses. See live activity, hardware utilization, and per-user cost in real time.

03 · Cost control

Predictable
by design

Per-user and per-session spending budgets with daily aggregates. Finance gets predictability; developers get a warning before they overspend.
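A budget gate like this reduces to comparing running spend against a cap and warning before it is hit. A hypothetical sketch; the 80% warning threshold is an assumption for illustration, not a Taicli default:

```typescript
// Illustrative per-user daily budget check. The 80% warning threshold
// is an assumption for this sketch, not Taicli configuration.
function budgetStatus(
  spentToday: number,
  dailyCap: number,
): "ok" | "warn" | "blocked" {
  if (spentToday >= dailyCap) return "blocked";
  if (spentToday >= 0.8 * dailyCap) return "warn"; // warn before the cap
  return "ok";
}
```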

Plugs into the stack
your organization already uses.

Taicli plugs into the identity, audit, and security tooling your organization already uses. Single sign-on through your existing provider. Every developer action attributed and recorded in an append-only audit trail. Tiered permissions — read-only, workspace-write, or full access — with approval gates for anything destructive.

01 · Deployment
Self-contained · single binary
Small, auditable scope. No sprawling dependency tree your security team has to vet. Fully air-gapped reference architecture available.
02 · Identity
SSO via your IdP · no passwords held
Okta, Azure AD, Google, GitHub — or anything else your organization already uses. Taicli never holds credentials.
03 · Audit
Audit-ready out of the box
Complete, attributed history of every developer action. Exports to your existing security monitoring — nothing new for your team to run.
04 · Agent governance
Approval gates for every destructive step
Long-running agents run under tiered execution policies — read-only, workspace-write, or full access — with a human in the loop for anything irreversible. Every agent action signed, attributable, replayable.

Six familiar concerns.
Six clear answers.

Enterprise AI procurement usually stalls on the same questions. Here is how Taicli answers them before the call.

Will our IP leak?
Code can stay 100% on your infrastructure. No SaaS dependency, no phone-home.
Will our regulator approve?
Full audit trail, per-user attribution, on-prem option, integrates with the monitoring you already run.
What if the AI vendor changes?
Switch providers — or to local — with one click. No rewrites, no retraining.
What happens when agents run overnight?
They run under the same policies as humans — signed, attributable, gated at destructive steps. You wake up to a proposed diff, not a surprise deploy.
What will it cost to run?
Predictable. Local mode has no per-token marginal cost. Commercial mode shows per-user spend in real time with budget caps.
How fast can we deploy?
Single self-contained download. Your platform team can have it running the same day.
We replaced four in-house wrappers with one Taicli server. Deployment velocity is up 2.4×, and for the first time our security team is an advocate for AI tooling.
DA
Dana Ashford
VP Engineering · Meridian Financial

Priced for teams,
not per-seat surprise.

One predictable platform fee covers the CLI, Web, and server. Model spend is pass-through — you bring your keys, we bring the workspace.

Starter

Team

$24 / user / month
For product engineering teams up to 40 people shipping in shared repos.
  • CLI + Web workspace
  • Bring your own provider keys
  • Shared context workspaces
  • 90-day audit retention
Start 30-day trial
Sovereign

Air-gapped

Contact
For defense, public sector, and regulated industries that require fully offline operation.
  • 100% offline reference architecture
  • Bundled open-weight model stack
  • FedRAMP-aligned controls
  • On-site onboarding & training
Request briefing

Put the whole team
on the same model.

A 30-minute pilot reveals more than a month of evaluation spreadsheets. Bring a real repo; we'll bring the environment.
