Evaluate, govern, and certify any AI output against constitutional principles. Free. Open. Infrastructure.
"Safety that lives in teams dies when teams dissolve. Safety that lives in infrastructure survives. You cannot fire a Constitution."
The Article 11 API runs every input through 6 constitutional verification stages. It checks AI outputs against 5 governing principles, 52 injection patterns across 11 OWASP categories, and returns human-readable explanations for every flag. Not just "blocked" — why it was blocked and how to fix it.
This is what other AI safety systems don't do. They say "content filtered." We say which article you violated and which principle applies.
https://api.article11.ai/api/v1
/api/v1/evaluate
Check any text for constitutional compliance. Is this safe? Is it honest? Does it exploit?
{ "text": "The AI output you want to verify" }

/api/v1/govern
Verify a prompt+response pair. Full input AND output analysis.
{ "prompt": "The user's input", "response": "The AI's response to verify" }

/api/v1/certify
Get a signed constitutional attestation witnessed to the IRONLEDGER.
{ "prompt": "The user's input", "response": "The AI's response to certify" }

/api/v1
API documentation and endpoint discovery.
# Evaluate any text
curl -X POST https://api.article11.ai/api/v1/evaluate \
-H "Content-Type: application/json" \
-d '{"text":"Your AI output here"}'
# Verify a prompt+response pair
curl -X POST https://api.article11.ai/api/v1/govern \
-H "Content-Type: application/json" \
-d '{"prompt":"User question","response":"AI answer to verify"}'
# Get a signed certification on the IRONLEDGER
curl -X POST https://api.article11.ai/api/v1/certify \
-H "Content-Type: application/json" \
-d '{"prompt":"User question","response":"AI answer to certify"}'
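The same calls can be scripted. A minimal Python sketch using only the standard library; the request shapes come from the examples above, but the response schema is not documented here, so the helper simply returns the parsed JSON:

```python
import json
import urllib.request

API_BASE = "https://api.article11.ai/api/v1"

def post_json(endpoint: str, body: dict) -> dict:
    """POST a JSON body to an Article 11 endpoint and parse the JSON reply."""
    req = urllib.request.Request(
        f"{API_BASE}/{endpoint}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Request bodies matching the documented schemas:
def evaluate_body(text: str) -> dict:
    return {"text": text}

def govern_body(prompt: str, response: str) -> dict:
    return {"prompt": prompt, "response": response}

# Usage (requires network access):
# result = post_json("evaluate", evaluate_body("Your AI output here"))
# result = post_json("govern", govern_body("User question", "AI answer to verify"))
```

The body-building helpers mirror the JSON shown for each endpoint above; nothing here assumes anything about what the API returns.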
1. Truth over outcome
2. Choice over control
3. Care over exploitation
4. Memory over oblivion
5. Partnership over domination
- No API key needed.
- API key. Priority routing.
- SLA. Custom patterns.
The Constitutional Intelligence Pipeline (CIP) runs 6 verification stages on every request. Safety lives in the infrastructure — outside the AI model, in the Cloudflare Worker. The model cannot override the Worker.
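The "safety outside the model" pattern above can be sketched as a gate that sits between the model and the caller. This is an illustrative sketch, not the Worker's actual code: the `verify` callable stands in for a call to /api/v1/govern, and the `allowed` and `explanation` field names are assumptions for illustration, not the API's real schema.

```python
from typing import Callable

def governed_reply(
    prompt: str,
    model: Callable[[str], str],
    verify: Callable[[str, str], dict],
) -> str:
    """Generate a reply, then verify it in infrastructure the model cannot touch.

    `verify` stands in for a /api/v1/govern call; the `allowed` and
    `explanation` fields are illustrative assumptions, not the real schema.
    """
    response = model(prompt)
    verdict = verify(prompt, response)
    if verdict.get("allowed", False):
        return response
    # Per the design above, the caller learns *why* the reply was blocked.
    return f"Blocked: {verdict.get('explanation', 'no explanation provided')}"

# Stubs standing in for a real model and a real verification call:
stub_model = lambda p: "echo: " + p
stub_verify = lambda p, r: {
    "allowed": "exploit" not in r,
    "explanation": "Violates Principle 3: care over exploitation",
}
```

Because the gate runs outside the model, nothing the model emits can skip or rewrite the verification step; the model only ever sees the prompt, never the verdict logic.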
The Constitution is CC0. Read it, fork it: article11.ai/constitution