Error Handling
How to handle errors and edge cases when using the API.
Error Response Format
All API errors return a JSON response with a detail field containing the error message.
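For example, a 400 response body might look like this (only the detail field is documented here; treat any other fields as assumptions):

```json
{
  "detail": "Prompt cannot be empty"
}
```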
The HTTP status code (400, 404, 429, 500, etc.) indicates the error type.
Common Errors
400 Bad Request
Cause: Invalid request parameters
Common scenarios:
- Unsupported Model (error: "Model not supported")
- Empty Prompt (error: "Prompt cannot be empty")
- Malformed Request
How to handle:
Python:

import requests

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        "https://app.beginswithai.com/v1/ai",
        json={"model": "gpt-5-1", "prompt": "Hello"},
        headers=headers
    )
    response.raise_for_status()
    data = response.json()
except requests.exceptions.HTTPError as e:
    if e.response.status_code == 400:
        error_detail = e.response.json().get("detail", "Unknown error")
        print(f"Bad request: {error_detail}")
        # Fix your request parameters
    else:
        raise
JavaScript:

const headers = {
  "Authorization": `Bearer YOUR_API_KEY`,
  "Content-Type": "application/json"
};

try {
  const response = await fetch("https://app.beginswithai.com/v1/ai", {
    method: "POST",
    headers: headers,
    body: JSON.stringify({
      model: "gpt-5-1",
      prompt: "Hello"
    })
  });

  if (response.status === 400) {
    const error = await response.json();
    console.log(`Bad request: ${error.detail}`);
    // Fix your request parameters
    return;
  }

  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }

  const data = await response.json();
} catch (error) {
  console.error(error);
}
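Many 400s can be caught before the request is ever sent. A minimal pre-flight check for the two documented causes above; note that SUPPORTED_MODELS here is a placeholder set, not the service's real model list:

```python
# Client-side pre-flight checks for the documented 400 causes.
# SUPPORTED_MODELS is a placeholder, not the service's real list.
SUPPORTED_MODELS = {"gpt-5-1"}

def validate_request(model, prompt):
    """Return a list of problems that would trigger a 400; empty if OK."""
    problems = []
    if model not in SUPPORTED_MODELS:
        problems.append(f"Model not supported: {model!r}")
    if not prompt or not prompt.strip():
        problems.append("Prompt cannot be empty")
    return problems
```

Running this before the POST turns a round-trip failure into an instant local error message.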
401 Unauthorized / 403 Forbidden
Cause: Authentication failure
Common scenarios:
- Missing Authorization header
- Invalid API key
- Deleted or revoked API key
- Malformed header format
How to handle:
Python:

import requests

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

try:
    response = requests.post(
        "https://app.beginswithai.com/v1/ai",
        json={"model": "gpt-5-1", "prompt": "Hello"},
        headers=headers
    )
    response.raise_for_status()
except requests.exceptions.HTTPError as e:
    if e.response.status_code in [401, 403]:
        print("Authentication failed. Check your API key.")
        # Regenerate API key at /apikeys.html
    else:
        raise
JavaScript:

const headers = {
  "Authorization": `Bearer YOUR_API_KEY`,
  "Content-Type": "application/json"
};

try {
  const response = await fetch("https://app.beginswithai.com/v1/ai", {
    method: "POST",
    headers: headers,
    body: JSON.stringify({
      model: "gpt-5-1",
      prompt: "Hello"
    })
  });

  if (response.status === 401 || response.status === 403) {
    console.error("Authentication failed. Check your API key.");
    // Regenerate API key at /apikeys.html
    return;
  }

  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }

  const data = await response.json();
} catch (error) {
  console.error(error);
}
404 Not Found
Cause: Request ID doesn't exist
Common scenarios:
- Typo in the req_id parameter
- Request expired (results are kept for 1 hour)
- Request never existed
How to handle:
Python:

import requests

headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json"
}

try:
    response = requests.get(
        "https://app.beginswithai.com/v1/ai",
        params={"req_id": request_id},
        headers=headers
    )
    response.raise_for_status()
    data = response.json()
except requests.exceptions.HTTPError as e:
    if e.response.status_code == 404:
        print("Request not found. It may have expired.")
        # Submit a new request
    else:
        raise
JavaScript:

const headers = {
  "Authorization": `Bearer YOUR_API_KEY`,
  "Content-Type": "application/json"
};

try {
  const response = await fetch(
    `https://app.beginswithai.com/v1/ai?req_id=${requestId}`,
    { headers: headers }
  );

  if (response.status === 404) {
    console.error("Request not found. It may have expired.");
    // Submit a new request
    return;
  }

  if (!response.ok) {
    throw new Error(`HTTP ${response.status}`);
  }

  const data = await response.json();
} catch (error) {
  console.error(error);
}
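Since results are kept for only 1 hour, a common recovery pattern is to resubmit when a lookup 404s. A minimal sketch; fetch_result and submit_request are hypothetical stand-ins for the GET and POST calls shown above, not part of the API:

```python
# Sketch of a "fetch or resubmit" pattern for expired request IDs.
# fetch_result(req_id) should return the result, or None on a 404;
# submit_request() should POST a new request and return its ID.
# Both names are hypothetical stand-ins for the calls shown above.
def result_or_resubmit(req_id, fetch_result, submit_request):
    """Return (result, req_id), resubmitting once if the ID has expired."""
    result = fetch_result(req_id)
    if result is not None:
        return result, req_id
    # 404: results are only kept for 1 hour, so submit a fresh request
    new_id = submit_request()
    return fetch_result(new_id), new_id
```

Persisting the new request ID alongside the old one keeps later lookups from repeating the resubmission.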
429 Too Many Requests
Cause: Rate limit exceeded
How to handle:
Python:

import requests
import time

def make_request_with_backoff(api_key, model, prompt, max_retries=3):
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }

    for attempt in range(max_retries):
        try:
            response = requests.post(
                "https://app.beginswithai.com/v1/ai",
                json={"model": model, "prompt": prompt},
                headers=headers
            )

            if response.status_code == 429:
                wait_time = 60 * (attempt + 1)  # 60s, 120s, 180s
                print(f"Rate limited. Waiting {wait_time}s...")
                time.sleep(wait_time)
                continue

            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            if attempt == max_retries - 1:
                raise
            time.sleep(5)

    raise Exception("Max retries exceeded")
JavaScript:

async function makeRequestWithBackoff(apiKey, model, prompt, maxRetries = 3) {
  const headers = {
    "Authorization": `Bearer ${apiKey}`,
    "Content-Type": "application/json"
  };

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch("https://app.beginswithai.com/v1/ai", {
        method: "POST",
        headers: headers,
        body: JSON.stringify({ model, prompt })
      });

      if (response.status === 429) {
        const waitTime = 60000 * (attempt + 1); // 60s, 120s, 180s
        console.log(`Rate limited. Waiting ${waitTime/1000}s...`);
        await new Promise(r => setTimeout(r, waitTime));
        continue;
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }

      return await response.json();
    } catch (error) {
      if (attempt === maxRetries - 1) {
        throw error;
      }
      await new Promise(r => setTimeout(r, 5000));
    }
  }
  throw new Error("Max retries exceeded");
}
See Rate Limits for more strategies.
500 Internal Server Error
Cause: Server-side processing error
Common scenarios:
- Temporary backend unavailability
- Upstream model provider issues
- Unexpected processing failure
How to handle:
Python:

import requests
import time

def make_request_with_retry(api_key, model, prompt, max_retries=3):
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }

    for attempt in range(max_retries):
        try:
            response = requests.post(
                "https://app.beginswithai.com/v1/ai",
                json={"model": model, "prompt": prompt},
                headers=headers
            )

            if response.status_code == 500:
                if attempt < max_retries - 1:
                    wait_time = 5 * (2 ** attempt)  # 5s, 10s, 20s
                    print(f"Server error. Retrying in {wait_time}s...")
                    time.sleep(wait_time)
                    continue

            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            if attempt == max_retries - 1:
                raise
            time.sleep(5)

    raise Exception("Max retries exceeded")
JavaScript:

async function makeRequestWithRetry(apiKey, model, prompt, maxRetries = 3) {
  const headers = {
    "Authorization": `Bearer ${apiKey}`,
    "Content-Type": "application/json"
  };

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch("https://app.beginswithai.com/v1/ai", {
        method: "POST",
        headers: headers,
        body: JSON.stringify({ model, prompt })
      });

      if (response.status === 500) {
        if (attempt < maxRetries - 1) {
          const waitTime = 5000 * (2 ** attempt); // 5s, 10s, 20s
          console.log(`Server error. Retrying in ${waitTime/1000}s...`);
          await new Promise(r => setTimeout(r, waitTime));
          continue;
        }
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }

      return await response.json();
    } catch (error) {
      if (attempt === maxRetries - 1) {
        throw error;
      }
      await new Promise(r => setTimeout(r, 5000));
    }
  }
  throw new Error("Max retries exceeded");
}
Complete Error Handling Example
Python:

import requests
import time

class APIClient:
    def __init__(self, api_key):
        self.api_key = api_key
        self.base_url = "https://app.beginswithai.com/v1"
        self.headers = {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        }

    def submit_request(self, model, prompt, max_retries=3):
        for attempt in range(max_retries):
            try:
                response = requests.post(
                    f"{self.base_url}/ai",
                    json={"model": model, "prompt": prompt},
                    headers=self.headers,
                    timeout=30
                )

                # Handle specific status codes
                if response.status_code == 429:
                    print("Rate limited. Waiting 60s...")
                    time.sleep(60)
                    continue
                elif response.status_code == 500:
                    if attempt < max_retries - 1:
                        wait_time = 5 * (2 ** attempt)
                        print(f"Server error. Retrying in {wait_time}s...")
                        time.sleep(wait_time)
                        continue

                response.raise_for_status()
                return response.json()["request_id"]
            except requests.exceptions.HTTPError as e:
                if e.response.status_code in [400, 401, 403, 404]:
                    # Client errors - don't retry
                    error_detail = e.response.json().get("detail", "Unknown error")
                    print(f"Client error: {error_detail}")
                    raise
                elif attempt == max_retries - 1:
                    raise
            except requests.exceptions.RequestException as e:
                if attempt == max_retries - 1:
                    raise
                time.sleep(5)

        raise Exception("Max retries exceeded")

    def poll_result(self, request_id, poll_interval=5, timeout=300):
        start_time = time.time()
        while time.time() - start_time < timeout:
            try:
                response = requests.get(
                    f"{self.base_url}/ai",
                    params={"req_id": request_id},
                    headers=self.headers,
                    timeout=30
                )

                if response.status_code == 404:
                    print("Request not found or expired")
                    return None

                response.raise_for_status()
                data = response.json()
                if "response" in data:
                    return data["response"]

                time.sleep(poll_interval)
            except requests.exceptions.RequestException as e:
                print(f"Polling error: {e}")
                time.sleep(poll_interval)

        print("Timeout waiting for result")
        return None

# Usage
client = APIClient("your_api_key_here")

try:
    request_id = client.submit_request("gpt-5-1", "Explain async APIs")
    print(f"Request submitted: {request_id}")

    result = client.poll_result(request_id)
    if result:
        print(f"Result: {result}")
    else:
        print("Failed to get result")
except Exception as e:
    print(f"Error: {e}")
JavaScript:

class APIClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseUrl = "https://app.beginswithai.com/v1";
    this.headers = {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json"
    };
  }

  async submitRequest(model, prompt, maxRetries = 3) {
    for (let attempt = 0; attempt < maxRetries; attempt++) {
      try {
        const response = await fetch(`${this.baseUrl}/ai`, {
          method: "POST",
          headers: this.headers,
          body: JSON.stringify({ model, prompt })
        });

        // Handle specific status codes
        if (response.status === 429) {
          console.log("Rate limited. Waiting 60s...");
          await new Promise(r => setTimeout(r, 60000));
          continue;
        } else if (response.status === 500) {
          if (attempt < maxRetries - 1) {
            const waitTime = 5000 * (2 ** attempt);
            console.log(`Server error. Retrying in ${waitTime/1000}s...`);
            await new Promise(r => setTimeout(r, waitTime));
            continue;
          }
        }

        if (!response.ok) {
          if ([400, 401, 403, 404].includes(response.status)) {
            const error = await response.text();
            throw new Error(`Client error: ${error}`);
          }
          throw new Error(`HTTP ${response.status}`);
        }

        const data = await response.json();
        return data.request_id;
      } catch (error) {
        // Client errors - don't retry
        if (String(error.message).startsWith("Client error:")) {
          throw error;
        }
        if (attempt === maxRetries - 1) {
          throw error;
        }
        await new Promise(r => setTimeout(r, 5000));
      }
    }
    throw new Error("Max retries exceeded");
  }

  async pollResult(requestId, pollInterval = 5000, timeout = 300000) {
    const startTime = Date.now();
    while (Date.now() - startTime < timeout) {
      try {
        const response = await fetch(
          `${this.baseUrl}/ai?req_id=${requestId}`,
          { headers: this.headers }
        );

        if (response.status === 404) {
          console.log("Request not found or expired");
          return null;
        }

        if (!response.ok) {
          throw new Error(`HTTP ${response.status}`);
        }

        const data = await response.json();
        if (data.response) {
          return data.response;
        }

        await new Promise(r => setTimeout(r, pollInterval));
      } catch (error) {
        console.log(`Polling error: ${error}`);
        await new Promise(r => setTimeout(r, pollInterval));
      }
    }
    console.log("Timeout waiting for result");
    return null;
  }
}

// Usage
const client = new APIClient("your_api_key_here");

try {
  const requestId = await client.submitRequest("gpt-5-1", "Explain async APIs");
  console.log(`Request submitted: ${requestId}`);

  const result = await client.pollResult(requestId);
  if (result) {
    console.log(`Result: ${result}`);
  } else {
    console.log("Failed to get result");
  }
} catch (error) {
  console.error(`Error: ${error}`);
}
Best Practices
- Always implement retry logic for transient errors (500, 429)
- Don't retry client errors (400, 401, 403, 404) - fix your request instead
- Use exponential backoff for retries to avoid overwhelming the API
- Set reasonable timeouts for both requests and polling
- Log errors properly for debugging
- Handle network errors (connection timeouts, DNS failures)
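The fixed 60s/120s/180s waits in the examples above work, but adding random jitter prevents many clients from retrying in lockstep after the same rate-limit window. A generic "full jitter" helper, not specific to this API:

```python
import random

# Exponential backoff with "full jitter": wait a random time between 0
# and base * 2**attempt seconds, capped. Randomizing spreads retries
# out so many clients don't hammer the API at the same instant.
def backoff_delay(attempt, base=5.0, cap=120.0):
    """Seconds to sleep before retry number `attempt` (0-based)."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))
```

In the retry loops above, time.sleep(backoff_delay(attempt)) can replace the fixed waits.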
Processing Philosophy
Our system is designed to eventually process every valid request, unless:
- The upstream model provider is down
- The specific model has issues
Otherwise, your request will complete once resources are available. We don't currently support fallback models, though this may be added in the future.
If a request is taking unusually long:
- Check the model you're using (thinking models are slower)
- Check queue depth in the dashboard, if available
- Try a lighter, faster model if speed is critical
Getting Help
If you encounter persistent errors:
- Check this documentation for solutions
- Verify your API key is valid at api-keys
- Confirm you're not hitting rate limits
- Use the feedback button in the dashboard
- Contact support with your request ID and error details