COS Developer Portal (v2.0)

Sign in to manage your API keys and integrate COS into your apps.

Dashboard

Your COS API at a glance.

Active Keys: -
Total API Calls: -
Avg Confidence: -

Getting Started

1. Create an API key
2. Make your first API call
3. Install the Python SDK

Recent Activity

No API calls yet. Make your first call to see activity here.

API Keys

Create and manage your COS API keys. A key's full value is shown only once, at creation, so save it immediately.

Quick Start

Get COS running in your app in under 5 minutes.

1. Get your API key

Go to the API Keys tab and create a new key.

2. Install the Python SDK

pip install cos-sdk

3. Validate AI text

from cos_sdk import COS

client = COS(api_key="cos_live_your_key_here")

result = client.validate(
    text="According to a 2024 study, 90% of companies use AI...",
    tier="bamboo",  # recommended — fine-tuned model, 97.5% precision
)

print(f"Confidence: {result.confidence_score}")
print(f"Risk: {result.risk_level}")
for claim in result.flagged_claims:
    print(f"  Flag: {claim.claim}")
    print(f"  Why: {claim.reason}")

4. Or use curl directly

curl -X POST https://cos.protofine.ai/api/v2/validate \
  -H "Authorization: Bearer cos_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "90% of experts agree that AI is transformative.",
    "tier": "bamboo"
  }'

5. Stream results for faster UX

# Get COS results instantly (~5ms), then deeper analysis
for event in client.validate_stream("AI text", tier="tier_3"):
    if event.event == "t1_result":
        print("Quick check:", event.result.confidence_score)
    elif event.event == "validation_complete":
        print("Full result:", event.result.confidence_score)

API Reference

All v2 endpoints require: Authorization: Bearer cos_live_xxx

POST /api/v2/validate
Validate AI-generated text. Returns confidence score, risk level, and flagged claims.

Request Body

{
  "text": "The AI-generated text to validate",  // required
  "context": "The original user prompt",        // optional
  "tier": "bamboo"  // tier_1, tier_2, tier_3, bamboo, cyna, or auto
}

Tier options

tier_1 — Heuristic, ~1ms, free, no model call
tier_2 — Single Gemini Flash review, ~2s
tier_3 — Multi-model consensus (2 Gemini + Gemma), ~3s
bamboo — Fine-tuned model, deep validation, ~3s (recommended for batch)
cyna — Distilled inline validator, sentence-by-sentence (recommended for streaming UIs)
auto — Cognitive Cycles: starts cheap, escalates only when needed

Response

{
  "confidence_score": 0.72,  // 0.0 to 1.0
  "risk_level": "medium",    // low, medium, high
  "flagged_claims": [
    {
      "claim": "90% of experts agree",
      "reason": "Statistic with no source cited",
      "severity": "medium",
      "correction": "According to a 2023 Gartner survey, 55%..."
    }
  ],
  "tier_used": "tier_2",
  "summary": "Found 1 potential issue.",
  "verdicts": null  // COS³ only
}
POST /api/v2/validate/stream
Same as /validate, but streams results via Server-Sent Events (SSE). The tier-1 (COS) result arrives almost instantly (~5ms); deeper tiers follow in 2-5 seconds.

SSE Events (in order)

event: t1_result
data: {"confidence_score": 0.95, "risk_level": "low", ...}

event: validation_complete
data: {"confidence_score": 0.72, "risk_level": "medium", ...}

event: done
data: {}
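The event stream can be consumed with any SSE client. As a minimal sketch, assuming only the standard `event:` / `data:` framing shown above, a small parser:

```python
import json

def parse_sse(lines):
    """Yield (event, data) pairs from an SSE line stream.

    Assumes the standard `event:` / `data:` framing; a blank
    line terminates each event.
    """
    event, data = None, []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "" and event is not None:
            yield event, json.loads("".join(data) or "{}")
            event, data = None, []

# Feed it decoded lines from the streaming endpoint, e.g. with requests:
#   resp = requests.post("https://cos.protofine.ai/api/v2/validate/stream",
#                        headers=..., json=..., stream=True)
#   for evt, payload in parse_sse(resp.iter_lines(decode_unicode=True)):
#       ...
```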

Error Responses

401 — Missing or invalid API key
403 — Tier not allowed for this key
429 — Rate limit exceeded (includes retry_after)
500 — Internal server error
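Because a 429 response includes retry_after, clients should wait that long rather than use a fixed backoff. A minimal retry wrapper, as a sketch (the RateLimited exception and its retry_after attribute here are illustrative, not part of the SDK):

```python
import time

class RateLimited(Exception):
    """Illustrative exception carrying the retry_after value from a 429."""
    def __init__(self, retry_after):
        super().__init__(f"rate limited, retry after {retry_after}s")
        self.retry_after = retry_after

def call_with_retry(fn, max_attempts=3, sleep=time.sleep):
    """Call fn(), honoring retry_after on rate limits, up to max_attempts tries."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RateLimited as exc:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            sleep(exc.retry_after)
```

The injectable sleep parameter exists so the wait can be stubbed out in tests.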
POST /api/v2/shield
Prompt Integrity Shield — scans prompts for injection attacks, jailbreaks, and manipulation attempts.

Request Body

{
  "prompt": "The user prompt to scan",   // required
  "system_prompt": "Your system prompt"  // optional, for context
}
POST /api/v2/compliance/scan
Compliance Engine — scans AI output against regulatory frameworks (GDPR, HIPAA, EU AI Act, SOC 2).

Request Body

{
  "text": "AI output to scan",     // required
  "frameworks": ["gdpr", "hipaa"]  // optional, default: all
}
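Since frameworks defaults to scanning against all frameworks when omitted, a payload builder should drop the key entirely rather than send an empty list. A sketch, using only the fields documented above:

```python
def build_scan_payload(text, frameworks=None):
    """Build a /api/v2/compliance/scan request body.

    Omitting `frameworks` (rather than sending []) preserves the
    documented default of scanning against all frameworks.
    """
    payload = {"text": text}
    if frameworks:
        payload["frameworks"] = list(frameworks)
    return payload

# requests.post("https://cos.protofine.ai/api/v2/compliance/scan",
#               headers={"Authorization": "Bearer cos_live_xxx"},
#               json=build_scan_payload("AI output to scan", ["gdpr", "hipaa"]))
```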
POST /api/v2/cycle
Cognitive Cycle — adaptive tier escalation. Starts at the cheapest tier and automatically escalates when deeper analysis is needed.

Request Body

{
  "text": "AI text to validate",  // required
  "max_tier": "tier_3"            // optional, max escalation depth
}
POST /api/v2/cycle/stream
Streaming Cognitive Cycle — same as /cycle but streams tier results via SSE as each tier completes.
CRUD /api/v2/rules
Custom Business Rules — create, read, update, delete custom validation rules for your organization.

Methods

GET /api/v2/rules — list all rules
POST /api/v2/rules — create a rule
PUT /api/v2/rules/{id} — update a rule
DELETE /api/v2/rules/{id} — delete a rule
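All four methods map onto one URL pattern. A thin helper that builds each request, as a sketch (the rule body schema is not documented here, so it is passed through opaquely):

```python
BASE = "https://cos.protofine.ai/api/v2/rules"

def rules_request(action, rule_id=None, body=None):
    """Return (method, url, body) for a custom-rules call."""
    if action == "list":
        return ("GET", BASE, None)
    if action == "create":
        return ("POST", BASE, body)
    if action == "update":
        return ("PUT", f"{BASE}/{rule_id}", body)
    if action == "delete":
        return ("DELETE", f"{BASE}/{rule_id}", None)
    raise ValueError(f"unknown action: {action}")
```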
GET/DELETE /api/v2/memory
Episodic Memory — retrieve or clear stored validation context for stateful validation across requests.

OpenAI-Compatible Proxy

POST /v1/chat/completions
Drop-in replacement for the OpenAI Chat Completions API. Change base_url to https://cos.protofine.ai; every call is then routed to the right provider (OpenAI, Anthropic, Gemini, or Mistral, based on model prefix), validated through Cognitive Cycles, and returned in the OpenAI response shape with signed receipt headers.

Request

# Standard OpenAI format. No code change beyond base_url.
{
  "model": "gpt-4",
  "messages": [{"role": "user", "content": "What is Apple's Q3 revenue?"}],
  "cos": { "mode": "guard" }  // optional: "guard" (inline) or "monitor" (async)
}

Response Headers

x-cos-receipt-id — signed receipt for audit
x-cos-confidence — 0.0 to 1.0 confidence score
x-cos-risk-level — low / medium / high
x-cos-cache — hit / miss
x-cos-monitor-id — async validation job (monitor mode only)
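With the official openai Python client, pointing base_url at the proxy is the only code change; the receipt headers can then be read via with_raw_response. A sketch, assuming the header names and value formats listed above:

```python
def parse_cos_headers(headers):
    """Extract the x-cos-* receipt headers into a plain dict."""
    h = {k.lower(): v for k, v in headers.items()}
    return {
        "receipt_id": h.get("x-cos-receipt-id"),
        "confidence": float(h["x-cos-confidence"]) if "x-cos-confidence" in h else None,
        "risk_level": h.get("x-cos-risk-level"),
        "cache": h.get("x-cos-cache"),
    }

# Usage with the openai client (pip install openai):
#   from openai import OpenAI
#   client = OpenAI(base_url="https://cos.protofine.ai/v1", api_key="cos_live_xxx")
#   raw = client.chat.completions.with_raw_response.create(
#       model="gpt-4",
#       messages=[{"role": "user", "content": "What is Apple's Q3 revenue?"}],
#   )
#   receipt = parse_cos_headers(raw.headers)
#   completion = raw.parse()
```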

Customer provider keys are stored on your api_keys doc; Gemini usage is billed by COS. Set COS_DISABLE_OPENAI_PROXY=true to make the proxy return 503 without a redeploy.

Memory Hygiene — GDPR / HIPAA / SOC 2

Compliance-by-default endpoints. Filter sensitive content before it reaches a model, audit what comes out, and prove deletion when a record needs to be forgotten. Every call returns a signed receipt.

POST /api/v2/memory/classify
Classify content sensitivity before it’s sent to a model. Returns the risk class so you can decide whether to redact, hold, or pass through.

Request Body

{ "content": "text or structured payload to classify" }
POST /api/v2/memory/audit-egress
Audit a model output for content that should not have been emitted. Returns any violations and a signed receipt for the audit itself.

Request Body

{
  "output": "the model response to audit",
  "policy_id": "optional policy reference"
}
POST /api/v2/memory/forget
Issue a forget request across every configured model target. Returns a job_id for status polling. Independent verification runs after each forget call.

Request Body

{
  "content_description": "description of what to forget",
  "customer_forget_url": "https://your.app/forget"  // optional, custom adapter
}

Opt in via forget_enabled=true on your api_keys doc. It defaults to off, so existing customers see no change.

GET /api/v2/memory/forget/{job_id}
Poll forget job status. Returns attestations from each target plus verification results.

Response

{
  "job_id": "...",
  "status": "verified",  // pending, verified, residual_detected, skipped
  "attestations": [...],
  "completed_at": "2026-04-28T12:00:00Z"
}
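Forget jobs are asynchronous, so callers poll until the status leaves pending. A sketch with an injectable fetch function (the terminal statuses are the ones listed above):

```python
import time

TERMINAL = {"verified", "residual_detected", "skipped"}

def poll_forget_job(fetch_status, interval=2.0, timeout=300.0, sleep=time.sleep):
    """Poll fetch_status() until the job reaches a terminal status.

    fetch_status should GET /api/v2/memory/forget/{job_id} and return
    the decoded JSON body shown above.
    """
    deadline = time.monotonic() + timeout
    while True:
        job = fetch_status()
        if job["status"] in TERMINAL:
            return job
        if time.monotonic() >= deadline:
            raise TimeoutError(f"forget job still {job['status']} after {timeout}s")
        sleep(interval)
```

Injecting fetch_status and sleep keeps the loop testable without a live endpoint.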

Validation Tiers

COS Heuristic (~1ms, free) — 9 symbolic rules for fake stats, suspicious URLs, hedging
COS² Gemini Flash (~2s) — Single AI model fact-checks the text
COS³ Multi-Model (~3s) — Gemini 2.5 Flash + Gemini 2.0 Flash consensus
Bamboo (~3s, recommended) — Fine-tuned model, 97.5% precision, 90% recall
Cyna — Distilled inline validator, sentence-by-sentence (recommended for streaming UIs)
Auto — Smart routing: starts cheap (COS), escalates only when needed
