Each research function takes a Stagehand instance, navigates to a source, and uses stagehand.extract() to pull structured data from the page with AI. Here's an example that searches DuckDuckGo and visits the top results:
```typescript
import { z } from "zod";

async function researchGoogle(
  stagehand: Stagehand,
  query: string,
  onFinding: (finding: Finding) => void
) {
  const page = stagehand.context.activePage()!;
  await page.goto(`https://duckduckgo.com/?q=${encodeURIComponent(query)}`);
  await page.waitForTimeout(2000);

  const searchResults = await stagehand.extract(
    "Extract the top 5 organic search result links with their titles and URLs. Skip any ads.",
    z.object({
      results: z
        .array(
          z.object({
            title: z.string(),
            url: z.string(),
          })
        )
        .max(5),
    })
  );

  for (const result of searchResults.results.slice(0, 3)) {
    if (!result.url || result.url.includes("duckduckgo.com")) continue;

    await page.goto(result.url, { waitUntil: "domcontentloaded", timeoutMs: 15000 });

    const content = await stagehand.extract(
      `Extract the key information about "${query}" from this article.`,
      z.object({
        summary: z.string(),
        keyFacts: z.array(z.string()),
      })
    );

    if (content.summary) {
      onFinding({
        title: result.title,
        source: new URL(result.url).hostname.replace("www.", ""),
        url: result.url,
        summary: content.summary,
        relevance: "high",
      });
    }
  }
}
```
You can create similar functions for Wikipedia, YouTube, Hacker News, and Google News — each using stagehand.extract() with different schemas. See the full template for all five research functions.
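To see why the per-source variation stays mechanical, here is a rough sketch of a source registry. The names and the non-DuckDuckGo URL patterns are assumptions for illustration (only the DuckDuckGo URL comes from the example above); each real research function would pair an entry like this with its own extraction schema:

```typescript
// Hypothetical registry sketch: each research source differs only in where
// it navigates and what extraction prompt/schema it uses.
type ResearchSource = {
  name: string;
  searchUrl: (query: string) => string; // builds the page to visit
  extractPrompt: string; // instruction passed to stagehand.extract()
};

const sources: ResearchSource[] = [
  {
    name: "Search",
    searchUrl: (q) => `https://duckduckgo.com/?q=${encodeURIComponent(q)}`,
    extractPrompt: "Extract the top 5 organic search result links. Skip any ads.",
  },
  {
    name: "Hacker News",
    // Assumed URL pattern for Algolia's HN search front end
    searchUrl: (q) => `https://hn.algolia.com/?q=${encodeURIComponent(q)}`,
    extractPrompt: "Extract the top stories with titles, points, and links.",
  },
  {
    name: "Wikipedia",
    searchUrl: (q) =>
      `https://en.wikipedia.org/w/index.php?search=${encodeURIComponent(q)}`,
    extractPrompt: "Extract the article summary and key facts.",
  },
];

// Queries with spaces or special characters are safely encoded:
console.log(sources[0].searchUrl("edge AI chips"));
// → https://duckduckgo.com/?q=edge%20AI%20chips
```

Keeping the navigation targets and prompts in one table like this makes it easy to add a sixth source without touching the orchestration code.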
5. Create the API route with SSE streaming
Create app/api/research/route.ts to handle research requests. This route creates parallel Stagehand sessions and streams findings back via Server-Sent Events.
```typescript
import { z } from "zod";
import { generateObject } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export const maxDuration = 300;

const ResearchSummarySchema = z.object({
  overview: z.string().describe("2-3 sentence direct answer to the query"),
  keyFacts: z
    .array(z.string())
    .describe("3-6 specific facts with dates, numbers, or names"),
  recentDevelopments: z.string().nullable().describe("Latest news if applicable"),
  sourcesSummary: z.string().describe("Brief note on the types of sources consulted"),
});

export async function POST(req: Request) {
  const { query } = await req.json();

  const encoder = new TextEncoder();
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();

  const sendEvent = async (event: string, data: unknown) => {
    await writer.write(
      encoder.encode(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`)
    );
  };

  (async () => {
    const sessions = [];
    const allFindings: Finding[] = [];

    const researchFunctions = [
      { source: "News", fn: researchGoogleNews },
      { source: "Hacker News", fn: researchHackerNews },
      { source: "YouTube", fn: researchYouTube },
      { source: "Wikipedia", fn: researchWikipedia },
      { source: "Search", fn: researchGoogle },
    ];

    try {
      await sendEvent("status", { message: "Starting browser sessions...", phase: "init" });

      // Create all Stagehand sessions in parallel
      const sessionPromises = researchFunctions.map(({ source }) =>
        createStagehandSession(source)
      );
      const createdSessions = await Promise.all(sessionPromises);
      sessions.push(...createdSessions);

      // Send live view URLs to frontend
      await sendEvent("liveViews", {
        sessions: sessions.map((s) => ({
          source: s.source,
          liveViewUrl: s.liveViewUrl,
          sessionId: s.sessionId,
        })),
      });

      // Run all research in parallel
      await Promise.allSettled(
        researchFunctions.map(({ source, fn }, index) =>
          fn(sessions[index].stagehand, query, (finding) => {
            allFindings.push(finding);
            sendEvent("findings", { findings: allFindings });
          })
        )
      );

      // Synthesize findings with AI
      if (allFindings.length > 0) {
        const findingsText = allFindings
          .map((f) => `Source: ${f.source}\n${f.summary}`)
          .join("\n\n---\n\n");

        const { object: summary } = await generateObject({
          model: anthropic("claude-sonnet-4-6"),
          schema: ResearchSummarySchema,
          prompt: `Based on these research findings about "${query}", create a structured summary.\n\n${findingsText}`,
        });

        await sendEvent("complete", { findings: allFindings, summary });
      }
    } finally {
      for (const session of sessions) {
        try {
          await session.stagehand.close();
        } catch {}
      }
      await writer.close();
    }
  })();

  return new Response(stream.readable, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```
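On the client side, the stream this route returns is plain text/event-stream data. As a minimal sketch (the full template's frontend may consume the stream differently, e.g. via a library), each chunk can be split on blank lines and parsed into event records:

```typescript
// Minimal SSE parser sketch: turns raw text/event-stream text into
// { event, data } records matching the format the route emits.
type SseEvent = { event: string; data: unknown };

function parseSseChunk(chunk: string): SseEvent[] {
  const events: SseEvent[] = [];
  // Events are separated by a blank line
  for (const block of chunk.split("\n\n")) {
    let event = "message";
    let data = "";
    for (const line of block.split("\n")) {
      if (line.startsWith("event: ")) event = line.slice(7);
      else if (line.startsWith("data: ")) data = line.slice(6);
    }
    if (data) events.push({ event, data: JSON.parse(data) });
  }
  return events;
}

const raw =
  'event: status\ndata: {"message":"Starting browser sessions...","phase":"init"}\n\n';
console.log(parseSseChunk(raw));
// → [ { event: "status", data: { message: "Starting browser sessions...", phase: "init" } } ]
```

This mirrors the `event:`/`data:` lines written by sendEvent above, so the frontend can switch on `event` ("status", "liveViews", "findings", "complete") to update its UI.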
6. Handle concurrency limits
Free Browserbase plans have a concurrency limit of 1. The template automatically detects this and falls back to running sessions sequentially:
```typescript
async function getProjectConcurrency(): Promise<number> {
  const projects = await browserbase.projects.list();
  if (!projects?.length) return 1;
  const project = await browserbase.projects.retrieve(projects[0].id);
  return project.concurrency ?? 1;
}

// In your POST handler:
const concurrency = await getProjectConcurrency();

if (concurrency === 1) {
  // Run browsers one at a time, closing each before starting the next
  for (const { source, fn } of researchFunctions) {
    const session = await createStagehandSession(source);
    await fn(session.stagehand, query, onFinding);
    await session.stagehand.close();
  }
} else {
  // Run all browsers in parallel
  const sessions = await Promise.all(
    researchFunctions.map(({ source }) => createStagehandSession(source))
  );
  await Promise.allSettled(
    researchFunctions.map(({ fn }, i) =>
      fn(sessions[i].stagehand, query, onFinding)
    )
  );
}
```
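If your plan allows more than one session but fewer than five, a middle ground is to cap parallelism instead of choosing between one-at-a-time and all-at-once. This generic limiter is not part of the template, just a sketch of how that could look with the same researchFunctions array:

```typescript
// Sketch of a generic concurrency limiter: runs tasks with at most
// `limit` in flight at once, preserving result order.
async function runWithLimit<T>(
  tasks: (() => Promise<T>)[],
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;

  // Each worker pulls the next unstarted task until none remain
  async function worker() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, () => worker())
  );
  return results;
}

// Usage sketch: wrap each research run in a thunk that owns its session,
// so a session is only created once its slot frees up.
// await runWithLimit(
//   researchFunctions.map(({ source, fn }) => async () => {
//     const session = await createStagehandSession(source);
//     try {
//       await fn(session.stagehand, query, onFinding);
//     } finally {
//       await session.stagehand.close();
//     }
//   }),
//   concurrency
// );
```

Because each browser session is created inside its thunk, at most `concurrency` Browserbase sessions exist at any moment, which keeps you under the plan limit without serializing everything.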
Congratulations! You've built an AI research agent that runs parallel browser sessions with Stagehand and Browserbase on Vercel. For the complete implementation, including the frontend UI with live browser views, check out the full template:
Full Template on GitHub
Browse the complete source code with frontend components, SSE streaming, and live browser views.
Deploy to Vercel
One-click deploy with automatic Browserbase setup via the Vercel Marketplace.