predev-api is the official Python client for pre.dev. One package and one key cover both the Architect and Browser Agents surfaces.

Installation

pip install predev-api

Quick Start

from predev_api import PredevAPI

predev = PredevAPI(api_key="YOUR_API_KEY")

result = predev.browser_tasks([
    {
        "url": "https://example.com",
        "instruction": "Extract the page heading.",
        "output": {
            "type": "object",
            "properties": { "heading": { "type": "string" } },
            "required": ["heading"]
        }
    }
])

print(result["results"][0]["data"])
# → { "heading": "Example Domain" }

Authentication

Get your API key

Grab a key from the Browser Agents dashboard.
predev = PredevAPI(api_key="YOUR_API_KEY")

API Methods

browser_tasks()

Submit one or more browser tasks and get the results back.
result = predev.browser_tasks(
    tasks,
    concurrency=None,     # parallel workers, 1–20, default 5
    stream=False,         # set True to get an SSE iterator
    run_async=False,      # set True to return immediately with a batch id
)
Parameters:
  • tasks (required): List[Dict] — see Task shape below
  • concurrency (optional): int — parallel workers within this batch (1–20, default 5)
  • stream (optional): bool — when True, returns an iterator yielding SSE events instead of a final result
  • run_async (optional): bool — when True, returns immediately with { id, status: "processing" }; poll with get_browser_tasks(id)
Returns:
  • Default: Dict with id, total, completed, results, totalCreditsUsed, status
  • With stream=True: Iterator[Dict] yielding { "event": ..., "data": ... } frames
  • With run_async=True: Dict with id and status: "processing" — poll get_browser_tasks(id)

Task shape

{
    "url": "https://...",                 # required
    "instruction": "Extract the price.",  # natural-language goal
    "input": { "query": "laptops" },      # string inputs the agent uses during the run
    "output": {                           # JSON Schema describing the data to extract
        "type": "object",
        "properties": { "price": { "type": "number" } },
        "required": ["price"]
    }
}
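Before submitting a batch it can help to sanity-check each task dict locally. A minimal sketch of such a check (the `validate_task` helper is hypothetical, not part of predev-api) against the shape above:

```python
# Hypothetical pre-submission check for the task shape above;
# not part of predev-api.
def validate_task(task: dict) -> list[str]:
    """Return a list of problems; an empty list means the task looks well-formed."""
    problems = []
    url = task.get("url")
    if not isinstance(url, str) or not url.startswith(("http://", "https://")):
        problems.append("url is required and must be an http(s) URL")
    if "instruction" in task and not isinstance(task["instruction"], str):
        problems.append("instruction must be a string")
    if "output" in task and not isinstance(task["output"], dict):
        problems.append("output must be a JSON Schema object")
    return problems

task = {
    "url": "https://example.com",
    "instruction": "Extract the price.",
    "output": {"type": "object", "properties": {"price": {"type": "number"}}},
}
print(validate_task(task))  # → []
```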

get_browser_tasks()

Fetch the status and results of a task submission by id. Works for in-progress and completed submissions.
result = predev.get_browser_tasks(batch_id, include_events=False)
Parameters:
  • batch_id (required): str — id returned from browser_tasks(..., run_async=True) or from streaming
  • include_events (optional): bool — when True, includes the full per-step event timeline (navigation, plan, screenshot, action, validation) for each task. Payloads can be large.
Returns: Dict with the same shape as browser_tasks() sync response, plus events[] on each task (or liveEvents[] for tasks still in flight) when include_events=True.

Response Shape

{
    "id": "65f8a9d2c1e4b5a6f7e8d9c0",
    "total": 2,
    "completed": 2,
    "results": [
        {
            "url": "https://example.com",
            "instruction": "Extract the page heading.",
            "status": "SUCCESS",          # SUCCESS | ERROR | TIMEOUT | BLOCKED | CAPTCHA_FAILED | LOOP | NO_TARGET | PENDING
            "data": { "heading": "Example Domain" },
            "creditsUsed": 0.11,
            "durationMs": 4820
        },
        {
            "url": "https://news.ycombinator.com",
            "status": "SUCCESS",
            "data": { "stories": ["..."] },
            "creditsUsed": 0.18,
            "durationMs": 8132
        }
    ],
    "totalCreditsUsed": 0.29,
    "status": "completed",                # processing | completed | failed
    "createdAt": "2026-04-16T18:22:10.224Z",
    "completedAt": "2026-04-16T18:22:27.510Z"
}
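Since the response is a plain dict, summarizing a batch takes only a few lines. A sketch (field names taken from the shape above; the `summarize` helper is not part of the client) that tallies per-task statuses and total credits:

```python
from collections import Counter

def summarize(batch: dict) -> dict:
    """Tally per-task statuses and credits from a browser_tasks response dict."""
    statuses = Counter(r["status"] for r in batch["results"])
    return {
        "completed": batch["completed"],
        "total": batch["total"],
        "statuses": dict(statuses),
        "credits": batch["totalCreditsUsed"],
    }

batch = {
    "total": 2,
    "completed": 2,
    "results": [{"status": "SUCCESS"}, {"status": "TIMEOUT"}],
    "totalCreditsUsed": 0.29,
}
print(summarize(batch))
```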

Examples

Sync — wait for every task

result = predev.browser_tasks([
    { "url": "https://news.ycombinator.com", "instruction": "Extract the top 5 story titles." },
    { "url": "https://www.reddit.com/r/programming", "instruction": "Extract the top 5 post titles." },
], concurrency=2)

for r in result["results"]:
    print(r["status"], r.get("data"))

Async — poll for completion

import time

batch = predev.browser_tasks(tasks, run_async=True)
batch_id = batch["id"]

while True:
    status = predev.get_browser_tasks(batch_id)
    if status["status"] in ("completed", "failed"):
        break
    print(f"{status['completed']}/{status['total']} done")
    time.sleep(5)

for r in status["results"]:
    print(r["status"], r.get("data"))
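The bare loop above polls forever; in practice you may want a deadline. A generic sketch (the `poll_until_done` helper is hypothetical) that takes any fetch callable returning a dict with a `status` key, as `get_browser_tasks(batch_id)` does:

```python
import time

def poll_until_done(fetch, interval=5.0, timeout=600.0, sleep=time.sleep):
    """Call fetch() until its status is terminal or the deadline passes.

    fetch must return a dict with a 'status' key; 'completed' and 'failed'
    are treated as terminal, matching the batch statuses documented above.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = fetch()
        if result["status"] in ("completed", "failed"):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("batch did not finish in time")
        sleep(interval)

# With the real client this would be:
# result = poll_until_done(lambda: predev.get_browser_tasks(batch_id))
```

Injecting `sleep` keeps the helper testable without real waiting.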

Streaming — live per-step events

for msg in predev.browser_tasks(tasks, stream=True):
    if msg["event"] == "task_event":
        e = msg["data"]
        print(f"[task {e['taskIndex']}] {e['type']}")
    elif msg["event"] == "task_result":
        r = msg["data"]
        print(f"[task {r['taskIndex']}] {r['status']} {r.get('data')}")
    elif msg["event"] == "done":
        print("batch done:", msg["data"]["id"], msg["data"]["totalCreditsUsed"], "credits")
    elif msg["event"] == "error":
        print("error:", msg["data"]["error"])
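The chain of `elif` branches can also be written as a small handler registry. A sketch assuming only the frame shape shown above (`{"event": ..., "data": ...}`); the `dispatch` helper is hypothetical:

```python
def dispatch(frames, handlers):
    """Route each streamed frame to a handler keyed by its event name."""
    for msg in frames:
        handler = handlers.get(msg["event"])
        if handler is not None:
            handler(msg["data"])

seen = []
handlers = {
    "task_result": lambda d: seen.append((d["taskIndex"], d["status"])),
    "done": lambda d: seen.append(("done", d["id"])),
}

# With the real client this would be:
# dispatch(predev.browser_tasks(tasks, stream=True), handlers)
frames = [
    {"event": "task_result", "data": {"taskIndex": 0, "status": "SUCCESS"}},
    {"event": "done", "data": {"id": "abc", "totalCreditsUsed": 0.1}},
]
dispatch(frames, handlers)
print(seen)  # → [(0, 'SUCCESS'), ('done', 'abc')]
```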

Full timeline for a past submission

result = predev.get_browser_tasks(batch_id, include_events=True)

for task in result["results"]:
    print(task["status"], task["url"])
    for event in task.get("events", []):
        print("  -", event["type"])
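With include_events=True the per-step timelines can get long; counting event types gives a quick profile of what the agent spent its steps on. A sketch over the events[] field described above (the `event_profile` helper is not part of the client):

```python
from collections import Counter

def event_profile(task: dict) -> Counter:
    """Count how many times each event type occurred for one task."""
    return Counter(e["type"] for e in task.get("events", []))

task = {
    "status": "SUCCESS",
    "events": [
        {"type": "navigation"},
        {"type": "action"},
        {"type": "action"},
        {"type": "validation"},
    ],
}
print(event_profile(task))
```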

Error Handling

from predev_api import (
    PredevAPIError,
    AuthenticationError,
    RateLimitError,
)

try:
    result = predev.browser_tasks(tasks)
except AuthenticationError:
    # 401 — invalid or missing API key
    ...
except RateLimitError:
    # 429 — per-user in-flight queue cap exceeded
    ...
except PredevAPIError as e:
    # Any other API error (400, 402 insufficient credits, 500, etc.)
    print(e)
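A 429 from the in-flight queue cap is usually transient, so a bounded retry can make sense. A generic sketch (the `with_retries` helper is hypothetical; it is parameterized on the exception class so it can be shown without the client installed):

```python
import time

def with_retries(call, retry_on, attempts=3, backoff=2.0, sleep=time.sleep):
    """Invoke call(), retrying on the given exception with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            sleep(backoff * 2 ** attempt)

# With the real client and exceptions this would be:
# result = with_retries(lambda: predev.browser_tasks(tasks), RateLimitError)
```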