Overview
AIronClaw exposes a REST management API that you can call programmatically with a Personal Access Token. Use it to provision LLM and MCP proxies, rotate keys, fetch logs and read usage from your own systems.
What is this API
The management API is the same surface the AIronClaw dashboard uses internally. Every action you can perform from the UI — creating an LLM proxy, listing logs, rotating a key, updating a rule — is also available as a JSON HTTP endpoint.
This means you can wire AIronClaw into Terraform, GitOps pipelines, internal portals and audit jobs without screen-scraping. The API is stable, versioned (see Versioning below), and authenticated via a Personal Access Token (PAT) issued from your profile page.
The management API documented here lets you configure AIronClaw. It is separate from the data-plane proxies (LLM Proxy and MCP Firewall) that your applications and agents send model traffic through. Those speak the OpenAI/Anthropic/MCP wire formats — see the LLM Proxies and MCP Proxies docs.
Base URL
All management endpoints live under the same origin as your AIronClaw dashboard, prefixed with /api:
https://app.aironclaw.com/api

If you self-host AIronClaw, replace the host with your own domain (e.g. https://aifw.your-company.com/api). HTTPS is required; plain HTTP requests are rejected.
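If you script against both the cloud service and a self-hosted instance, it is convenient to keep the base URL in an environment variable. The variable name below is just a convention used by the examples on this page, not something the API requires:

export AIFW_BASE_URL="https://app.aironclaw.com/api"   # or your self-hosted origin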
Authentication
Every request must carry a Personal Access Token in the Authorization header:
curl https://app.aironclaw.com/api/llm \
  -H "Authorization: Bearer aifw_pat_xxxxxxxxxxxxxxxxxxxxxxxx"

See Authentication for how to issue, rotate and revoke a PAT, and what it can and cannot do.
Request format
All endpoints accept and return JSON. Request bodies should be sent with Content-Type: application/json. Empty bodies are fine for GET and DELETE.
Path parameters are written as :name in this documentation (e.g. /api/llm/:id). Replace them with the actual resource UUID at request time.
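As a sketch of both conventions, here is a hypothetical create-then-fetch sequence. The verb, endpoint and body fields are illustrative (take the real ones from the LLM Proxies reference); what matters here is the Content-Type header on the write and the UUID substituted for :id on the read:

# Hypothetical create; body fields are illustrative, see the LLM Proxies reference
curl -X POST "$AIFW_BASE_URL/llm" \
  -H "Authorization: Bearer $AIFW_PAT" \
  -H "Content-Type: application/json" \
  -d '{"name": "production-openai", "provider": "openai"}'

# Path parameters: substitute the resource UUID for :id in /api/llm/:id
PROXY_ID="<uuid from the create response>"
curl "$AIFW_BASE_URL/llm/$PROXY_ID" \
  -H "Authorization: Bearer $AIFW_PAT"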
Response format
Successful responses return either a single resource wrapped in a named key, or a list under a plural key. The shapes are stable per endpoint and never include internal fields like raw provider keys or encrypted ciphertext.
{
"proxy": {
"id": "8b3f...d21",
"name": "production-openai",
"provider": "openai",
"createdAt": 1735689600
}
}
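Because resources come back under a stable named key, extracting fields with jq is straightforward. This sketch assumes a single-resource GET returning the {"proxy": ...} shape shown above:

curl -s "$AIFW_BASE_URL/llm/$PROXY_ID" \
  -H "Authorization: Bearer $AIFW_PAT" | jq -r '.proxy.name'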
Errors

Errors return a non-2xx HTTP status and a JSON body with an error field describing the failure. Treat the HTTP status as the source of truth; the message is informational.
{
"error": "name is required"
}
Pagination

Most list endpoints return all of the caller's resources in a single response, since a typical tenant has tens (not thousands) of proxies and rules. The two endpoints that do page, the logs endpoints, use cursor-based pagination via a cursor query parameter and a nextCursor field in the response. See the Logs reference for details.
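A minimal paging loop looks like the sketch below. The path and the name of the list key are placeholders (take the real ones from the Logs reference); paging stops when nextCursor is absent:

url="$AIFW_BASE_URL/logs"            # assumed path; see the Logs reference
while : ; do
  page=$(curl -s "$url" -H "Authorization: Bearer $AIFW_PAT")
  echo "$page" | jq -c '.logs[]'     # assumed list key
  cursor=$(echo "$page" | jq -r '.nextCursor // empty')
  [ -z "$cursor" ] && break
  url="$AIFW_BASE_URL/logs?cursor=$cursor"
done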
Rate limits
The management API is gated behind Kong with a per-PAT default of 120 requests/minute and 2000 requests/hour. Requests over the limit are rejected with HTTP 429 and a Retry-After header.
On Pro and Enterprise plans the per-PAT limits are configurable. Ping us on Discord if you need to provision automation that runs at higher rates.
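If your automation can hit these ceilings, back off when you see 429. A minimal sketch that honors Retry-After, falling back to one second if the header is absent:

while : ; do
  status=$(curl -s -o body.json -D headers.txt -w '%{http_code}' \
    -H "Authorization: Bearer $AIFW_PAT" "$AIFW_BASE_URL/llm")
  [ "$status" != "429" ] && break
  wait=$(grep -i '^retry-after:' headers.txt | tr -d '\r' | awk '{print $2}')
  sleep "${wait:-1}"                 # fall back to 1s if the header is absent
done
cat body.json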
Versioning
The management API is currently v1 and that is implicit in every URL — there is no version segment in the path. Breaking changes will introduce a parallel /api/v2/... surface; the existing endpoints will keep working unchanged for at least 12 months after a v2 launches.
Additive changes (new endpoints, new optional fields, new enum values) are not considered breaking and roll out continuously.