Begins w/ AI API
What's new: GPT 5.2 and Gemini 3 Flash are now available
- Make your first request in under 2 minutes
- Full endpoint documentation
- Available models and how to choose
- Complete working code (Python + JavaScript)
What is this?
This is an async AI API. You send a request, we queue it, you poll for the result.
There's no streaming. No webhooks. Just HTTP requests and a polling loop.
How it works
```mermaid
sequenceDiagram
    participant Client
    participant API as BWA API
    participant Queue as Processing Queue
    Client->>API: POST /v1/ai<br/>{model, prompt}
    API->>Queue: Queue request
    API-->>Client: 202 Accepted<br/>{request_id}
    Note over Client: Wait & Poll
    loop Poll until complete
        Client->>API: GET /v1/ai?req_id=xxx
        alt Still processing
            API-->>Client: {status: "queued"}
        else Complete
            API-->>Client: {status: "completed", result}
        end
    end
    Note over Client: Use result
```
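In JSON terms: the 202 reply carries only the `request_id`, and a poll that is still in the queue returns `{"status": "queued"}`. A completed poll looks roughly like this (a sketch built from the fields shown in the diagram; anything beyond `status` and `result` is an assumption):

```json
{
  "status": "completed",
  "result": "…model output goes here…"
}
```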
The flow is dead simple:
- POST your request to `/v1/ai` with a model and prompt
- Get back a `request_id` immediately (HTTP 202)
- Poll `/v1/ai?req_id=<request_id>` until the status changes from `queued` to `completed`
- Retrieve your result from the response
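Putting those four steps into code, a minimal polling client looks like this. This is a sketch, not the official SDK: the base URL, the bearer-token auth header, and the `gpt-5.2` model identifier are assumptions; the endpoint paths and the `request_id`, `status`, and `result` fields come from the flow above.

```python
import time
import requests

BASE_URL = "https://api.example.com"   # assumed base URL
API_KEY = "YOUR_API_KEY"               # assumed auth scheme

headers = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: POST your request with a model and prompt
resp = requests.post(
    f"{BASE_URL}/v1/ai",
    json={"model": "gpt-5.2", "prompt": "Write a haiku about queues."},
    headers=headers,
)
resp.raise_for_status()                 # expect 202 Accepted
request_id = resp.json()["request_id"]  # Step 2: grab the request_id

# Step 3: poll until status flips from "queued" to "completed"
while True:
    poll = requests.get(
        f"{BASE_URL}/v1/ai",
        params={"req_id": request_id},
        headers=headers,
    )
    poll.raise_for_status()
    body = poll.json()
    if body["status"] == "completed":
        break
    time.sleep(2)                       # wait between polls

# Step 4: use the result
print(body["result"])
```

In production you'd want a timeout and backoff on the loop instead of polling forever, but the request/poll shape stays the same.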