What is BotINFRA?
BotINFRA is compute for agents. You bring a Docker image — we run it in production with hard spend caps, zero shared tenancy, and a remote kill switch.
It is runtime-agnostic. OpenClaw, Manus, LangChain, AutoGPT, CrewAI, or your own Dockerfile — if it runs in Docker, it runs on BotINFRA.
Deploy in 60 seconds
Install the CLI and point it at your image.
# Install
npm install -g @botinfra/cli
# Authenticate
botinfra login
# Deploy your agent
botinfra deploy --image my-agent:latest --budget 500
✓ Live at https://agent-7x2k.botinfra.ai

--budget 500 sets a hard monthly cap of $500. Your agent stops if it hits the limit — not your credit card.
Supported runtimes
Managed profiles are pre-configured Docker images with sensible defaults. You can also bring any image via the Custom tier.
- Zero config. Best for shipping today.
- Persistent multi-step execution for revenue workflows.
- Tight ceilings and strict isolation.
- Versioned, auditable. Rollback on demand.
- Zero cross-instance exposure. Regulated workloads.
- Bring your existing LangChain stack.
- Goal-driven. Set the objective, set the guardrails.
- Coordinate crews of agents toward a shared goal.
- Event-driven agent communication and routing.
- Your Dockerfile. Your IP. Dedicated infrastructure.
Budget caps
Every deployment requires a --budget flag. This sets the maximum monthly spend in USD. When your agent hits the cap, it receives a SIGTERM and stops.
Budget enforcement runs at the proxy layer — token counts from every LLM call are written to Redis and compared against the cap in real time.
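The accounting logic amounts to a running total compared against the cap on every call. A toy sketch of that idea — a dict stands in for Redis, and the pricing function and names are hypothetical:

```python
# Toy sketch of proxy-side budget enforcement. A plain dict stands in
# for Redis, and the per-token cost model is illustrative only.
spend_usd = {}  # agent_id -> accumulated spend in USD

def record_llm_call(agent_id: str, tokens: int,
                    usd_per_1k_tokens: float, budget_usd: float) -> bool:
    """Add this call's cost to the running total.

    Returns False once the agent has reached its cap and must be stopped.
    """
    cost = tokens / 1000 * usd_per_1k_tokens
    spend_usd[agent_id] = spend_usd.get(agent_id, 0.0) + cost
    return spend_usd[agent_id] < budget_usd
```

In the real system the increment-and-compare would need to be atomic (e.g. a single Redis operation) so concurrent LLM calls cannot race past the cap.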
# $100/mo budget
botinfra deploy --image agent:latest --budget 100
# Update budget on a running agent
botinfra budget --agent agent-7x2k --set 250

Agent Spec v1
Any HTTP server that implements two endpoints can run on BotINFRA.
/health

Returns 200 when the agent is ready to accept messages.
// response
{ "status": "ok", "version": "1.0.0" }

/message

Accepts a task and returns a response. Supports streaming via Server-Sent Events.
// request
{ "task": "Summarize this document", "context": {...} }
// response
{ "output": "...", "done": true }
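A spec-conforming agent can be sketched with nothing but the Python standard library. The echo "agent" below is a placeholder for real task logic, and the SSE streaming path is omitted for brevity:

```python
# Minimal Agent Spec v1 server: GET /health and POST /message.
# The agent logic here is a placeholder that echoes the task back.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_response() -> dict:
    return {"status": "ok", "version": "1.0.0"}

def message_response(payload: dict) -> dict:
    # Placeholder agent: echo the task back as the output.
    return {"output": f"Handled: {payload.get('task', '')}", "done": True}

class AgentHandler(BaseHTTPRequestHandler):
    def _send_json(self, body: dict, status: int = 200) -> None:
        data = json.dumps(body).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def do_GET(self):
        if self.path == "/health":
            self._send_json(health_response())
        else:
            self._send_json({"error": "not found"}, 404)

    def do_POST(self):
        if self.path == "/message":
            length = int(self.headers.get("Content-Length", 0))
            payload = json.loads(self.rfile.read(length) or b"{}")
            self._send_json(message_response(payload))
        else:
            self._send_json({"error": "not found"}, 404)

# To run: HTTPServer(("0.0.0.0", 8080), AgentHandler).serve_forever()
```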