arbitor
v2.5 Self-hosted state backend

Lock the resource,
not the whole state.

Arbitor replaces your S3 + DynamoDB backend with one container against your own PostgreSQL. Plans run lock-free. Applies lock only the resources they touch — so two engineers working in the same state don't wait on each other.

Drop-in for Terraform · OpenTofu · BSL-licensed · self-hosted
live · Accord parallel applies
terraform.tfstate → accepting connections
alice@blackmesa
us-east-1
aws_lb.api applying…
aws_lb_listener.https applying…
aws_target_group.api applying…
aws_security_group.web applying…
bob@blackmesa
us-west-2
aws_rds.primary applying…
aws_rds_param_group applying…
aws_security_group.db applying…
aws_db_subnet_group.main applying…
Both plans accepted. No lock contention.
no waiting
Two engineers. Disjoint resources. Both succeed.
60s install footprint
resource lock granularity
1 container · 1 database
0 plans blocked
Drop-in replacement

The same Terraform you already use.

Point your backend config at arbitor and you're done. No rewrites, no migration tool, no downtime. Same commands, faster results.

BEFORE
# main.tf
terraform {
  backend "s3" {
    bucket         = "tfstate"
    key            = "prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tflock"
  }
}
AFTER
# main.tf
terraform {
  backend "http" {
    address = "https://arbitor.internal/state"
  }
}
One config change. No state rewrite.
IMPORT EXISTING STATE
arbitor states import -n NAME FILE # from a local file
arbitor states import -n NAME --pull # from terraform state pull
arbitor states import -n NAME --s3 s3://... # from S3
Bring existing tfstate files. No starting from scratch.
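Put together, a migration from an S3 backend might look like the sketch below. The state name `prod` and the snapshot filename are placeholders; the `arbitor states import` flags are the ones shown above:

```sh
# 1. Snapshot the current state from the old backend.
terraform state pull > prod.tfstate

# 2. Import it into arbitor under a state name of your choosing.
arbitor states import -n prod prod.tfstate

# 3. Switch the backend block to arbitor (the one-line config change
#    shown above), then re-initialize against the new backend.
terraform init -reconfigure
```

After the re-init, plan and apply run against arbitor; the old S3 bucket can be kept around as a cold backup until you are confident in the cutover.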
~/infrastructure — arbitor
$ arbitor plan
Reading state from postgres://state.internal/arbitor
Refreshing 3 changed resources only
 
Plan: 2 to add, 1 to change, 0 to destroy.
+ aws_lb_listener.api_https
+ aws_target_group.api
~ aws_lb.api
 
Plan complete.
$ arbitor apply
Acquiring resource locks (3)... done
Applying alongside 1 concurrent operation
Apply complete. Resources: 2 added, 1 changed, 0 destroyed.
How it works

One container. Your database. Done.

Replace S3 + DynamoDB with one container. Engineers and CI/CD pipelines apply in parallel — applies lock only the resources they touch. State stays in a PostgreSQL you own.

[Diagram] Three engineers, each with their own CI/CD pipeline, run `arbitor apply` concurrently. All three lanes converge on the arbitor state backend — a drop-in for S3 + DynamoDB — which writes state through to a self-hosted PostgreSQL.
Self-hosted
Runs entirely on your infrastructure.
Your data
State and infra metadata stay in your PostgreSQL.
No backdoor
Outbound license check only. No inbound access required.
01
Deploy
Run the arbitor server in a container. Point it at your PostgreSQL. Done in 60 seconds.
02
Connect
Update backend config in your existing Terraform. Or import an existing tfstate with one command.
03
Apply
Run terraform plan and apply as you always have. Plans never block. Applies lock only what they change.
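The three steps above can be sketched end to end. This is a minimal sketch, not documented usage: the image name `arbitor/arbitor`, the port, and the `DATABASE_URL` variable are assumptions — check the self-hosting guide for the real values:

```sh
# 01 Deploy: one container, pointed at your PostgreSQL.
#    (Image name, port, and env var are assumed, not documented.)
docker run -d --name arbitor \
  -e DATABASE_URL=postgres://arbitor@state.internal/arbitor \
  -p 8080:8080 \
  arbitor/arbitor:latest

# 02 Connect: point your backend config at it, or import an
#    existing tfstate (see the import commands above).

# 03 Apply: same commands as always.
terraform plan
terraform apply
```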
Deployment

Run it where it has to run.

Some teams can't put state on someone else's servers — not for any vendor, not for any reason. Arbitor ships in two shapes so the answer is always yes.

YOUR NETWORK · ARBITOR
Self-hosted
Run the arbitor server inside your own perimeter. State and infrastructure metadata stay in a PostgreSQL you operate. The only outbound traffic is a small periodic license check — no inbound access to your network required. Air-gapped license available on Enterprise.
Self-hosting guide
CLOUD REGION · YOURS
Private Cloud
Dedicated stack on dedicated hardware. Pick the region, or bring your own cloud account. Nothing is co-mingled with another customer at any layer.
Private Cloud
What you get

Everything Terraform should be.

Six capabilities your current state backend can't give you. No new tools to learn — same Terraform, faster, safer.

Plans in seconds
Only changed resources refresh. Skip the 247 you didn't touch.
Accord parallel applies
Two engineers, disjoint resources, both succeed. Same-resource conflicts surface immediately.
Upstream protection
Other engineers' work stays scoped. Out-of-scope changes blocked.
Visual dependency map
See blast radius before you apply. Understand what touches what.
Self-hosted
The arbitor server runs in your network. State stored in PostgreSQL you own.
Resource-level locking
Lock individual resources, not entire states. Fine-grained control.
The platform

Built for teams, not just for state.

Arbitor isn't only a state backend. It comes with a web UI, change history, checkouts, and admin controls — so multi-engineer teams can see what's happening and stay out of each other's way.

arbitor.internal/states
prod-us-east-1 / resources
live · admin: locked
  • aws_lb.api idle
  • aws_target_group.api idle
  • aws_rds.primary @alice · feat/db-migration · 4 resources held
  • aws_security_group.web idle
Know who you collided with.
Override events surface the user, branch, and commit SHA that touched your resources. Your next plan tells you what changed.
Hold what you're working on.
Checkouts reserve resources for hours, not minutes. Branch-aware, so git worktrees just work.
Admin controls when you need them.
Lock down an entire state during incidents. Write-protect individual resources to prevent regressions.
Live state, no refreshing.
The dashboard reflects state writes as they land. No polling, no stale views.
Free Community tier

Replace your state backend in 60 seconds.

Self-hosted: arbitor runs in your network, against a PostgreSQL you own. No credit card. No catch.