Quickstart
Quick guide to making your first verified request with Ollm.
This guide walks you through creating an API key, making your first request, and verifying that your request was executed inside a Trusted Execution Environment (TEE).
You’ll be up and running with Ollm in just a few minutes.
Prerequisites
- An Ollm account
- Access to the Ollm dashboard
- Basic familiarity with HTTP or JavaScript
Create an API Key
All requests to the Ollm API require authentication using an API key.
Create a key from the dashboard
- Open the Dashboard in the Ollm console.
- Navigate to API Keys.
- Click Generate Key.
- Enter a descriptive name (for example, production-key or local-dev).
- Click Create Key.
Your API key will be shown once. Copy it and store it securely.
Important Keep your API key secret. Do not expose it in frontend code or commit it to version control.
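One common way to keep the key out of source code is to read it from an environment variable at runtime. The variable name OLLM_API_KEY below is an illustrative choice, not an Ollm requirement:

```javascript
// Build auth headers from an environment variable so the key never
// appears in source code or version control. The variable name
// OLLM_API_KEY is a convention chosen here, not mandated by Ollm.
function buildHeaders(apiKey = process.env.OLLM_API_KEY) {
  if (!apiKey) {
    throw new Error("OLLM_API_KEY is not set");
  }
  return {
    Authorization: `Bearer ${apiKey}`,
    "Content-Type": "application/json",
  };
}
```

Set the variable in your shell (for example, `export OLLM_API_KEY=...`) rather than hardcoding it.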
Authenticate Requests
Ollm uses Bearer token authentication. Send your API key in the Authorization header.
Required headers
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
Make Your First Request
Ollm is OpenAI-compatible. You explicitly choose the model in your request.
Example: Chat completion (curl)
curl -X POST https://api.ollm.com/v1/chat/completions \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "phala/gemma-3-27b-it",
"messages": [
{ "role": "user", "content": "Hello" }
]
}'
Example: Chat completion (JavaScript)
const res = await fetch("https://api.ollm.com/v1/chat/completions", {
method: "POST",
headers: {
Authorization: "Bearer YOUR_API_KEY",
"Content-Type": "application/json",
},
body: JSON.stringify({
model: "phala/gemma-3-27b-it",
messages: [{ role: "user", content: "Hello" }],
}),
});
const data = await res.json();
console.log(data);
If the request is successful, you’ll receive a model response along with metadata tied to that specific request.
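Because the API is OpenAI-compatible, the assistant's reply should follow the standard response shape, with the message text at choices[0].message.content. A small helper to pull it out of the parsed JSON (defensive checks included in case the response is an error body):

```javascript
// Extract the assistant's reply from an OpenAI-compatible
// chat completion response. Assumes the standard shape:
// { choices: [ { message: { content: "..." } } ] }
function extractReply(data) {
  const choice = data.choices && data.choices[0];
  if (!choice || !choice.message) {
    throw new Error(`Unexpected response shape: ${JSON.stringify(data)}`);
  }
  return choice.message.content;
}
```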
View and Verify the Request
After sending a request, you can inspect it directly in the Ollm dashboard.
From the Messages view, you can:
- See request status (Success, Failed, Pending)
- View the selected model and provider
- Confirm TEE attestation status (Verified / Pending / Failed)
- Inspect cryptographic verification details
What you can see
- Request ID
- Model and provider
- Token usage, latency, and cost
- Attestation status
- Cryptographic hashes and signatures
What you cannot see
- Prompt contents
- Model outputs
- Any plaintext inference data
This design ensures transparency and auditability without exposing sensitive data.
Understand Attestation Status
Each request includes a TEE attestation result:
- Verified The request was executed inside a verified Trusted Execution Environment. Cryptographic proof is available.
- Pending Attestation is still being finalized.
- Failed The request did not meet verification requirements.
Attestation details can be independently verified using the provided hashes and signatures.
Error Handling
Invalid API key
If your API key is missing or invalid, the API returns:
401 Unauthorized
Ensure:
- The Authorization header is present
- The key is correctly formatted
- The key has not been revoked
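In client code, it helps to check the HTTP status before parsing the body, so a 401 surfaces as an actionable error. A minimal sketch:

```javascript
// Translate HTTP status codes into actionable errors before
// attempting to parse the response body.
function checkStatus(res) {
  if (res.status === 401) {
    throw new Error(
      "401 Unauthorized: check that the Authorization header is present, " +
      "the key is formatted as 'Bearer <key>', and the key has not been revoked"
    );
  }
  if (!res.ok) {
    throw new Error(`Request failed with status ${res.status}`);
  }
  return res;
}
```

Call it on the fetch result (for example, `checkStatus(res).json()`) so failures are raised before you read the body.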
What’s Next
Now that you’ve made your first verified request, you can:
- Explore available models in the Models view
- Integrate Ollm using existing OpenAI SDKs
- Inspect attestation artifacts for audit or compliance needs