Agent Quickstart

curl -X POST https://api.pre.dev/api/v1/browser-agents \
  -H "Authorization: Bearer $PREDEV_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "tasks": [{
      "url": "https://example.com",
      "instruction": "Extract the page heading.",
      "output": {
        "type": "object",
        "properties": { "heading": { "type": "string" } },
        "required": ["heading"]
      }
    }]
  }'
# → { "results": [{ "status": "SUCCESS", "data": { "heading": "Example Domain" } }] }
That’s the whole thing. Single call, structured data out, ~3 seconds wall time on a warm sandbox.
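The response is plain JSON, so downstream handling is a few lines in any language. A minimal sketch in Python, using the example response above as a literal (field names are exactly those shown in the quickstart response):

```python
import json

# Example response body from the call above
body = '{"results": [{"status": "SUCCESS", "data": {"heading": "Example Domain"}}]}'

resp = json.loads(body)
# Keep only tasks that succeeded, then pull the structured field
headings = [r["data"]["heading"] for r in resp["results"] if r["status"] == "SUCCESS"]
print(headings)  # → ['Example Domain']
```

Because the `output` schema you send defines the shape of `data`, the extraction line changes only when your schema does.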

When to use Browser Agents

  • Scrape authed SaaS dashboards your agent would otherwise need you to log into manually
  • Bulk enrichment — drop a CSV of URLs in, get structured JSON back for each row
  • Multi-step flows — click “Next”, paginate, extract from page 2
  • Form submissions — fill contact forms at scale with custom per-row values
  • QA your own flows — hit your staging site end-to-end on a cron
  • As a tool for your own agent — the MCP integration means your Claude/GPT agent can call Browser Agents without you writing any orchestration code
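For the bulk-enrichment case, note that `tasks` is an array, so one request can carry a row per URL. A hedged sketch in Python that builds such a payload (the two URLs here are stand-ins for a CSV column, and whether to send your whole file in one request or batch it is an assumption to check against your plan's limits):

```python
import json

urls = ["https://example.com", "https://example.org"]  # stand-in for a CSV column

payload = {
    "tasks": [
        {
            "url": url,
            "instruction": "Extract the page heading.",
            "output": {
                "type": "object",
                "properties": {"heading": {"type": "string"}},
                "required": ["heading"],
            },
        }
        for url in urls
    ]
}

# POST json.dumps(payload) as the request body, with the same
# Authorization and Content-Type headers as the curl example above.
print(len(payload["tasks"]))  # → 2
```

Each entry in `results` then corresponds positionally to an entry in `tasks`.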

How it differs from the alternatives

                              Browser Agents      Browser Use Cloud   Firecrawl Interact   Claude Code + Chrome
Pass rate on 101 tasks        100/101             88/101              84/101               interactive only
Quality (LLM judge, 0-100)    79.9                72.2                71.0                 [varies]
Wins (out of 101 judged)      70                  60                  66                   self-reported
$ / passing task              $0.010              $0.126              $0.0092              per-session
Concurrency                   1,200               limited             50/account           1 at a time
Packaging                     REST + MCP + SDK    REST                REST                 IDE tool
Agent-friendly                                                                            requires setup
Numbers are from run 2026-04-15T15-40-08: all 101 tasks, scored by a uniform LLM judge comparing anonymized answers against freshly fetched ground-truth page text. Fully reproducible from the public browser-agents-benchmark repo.

Next