Build a Crew that finds the best roundtrip flights on the given dates.
Flight booking automation presents significant challenges, primarily due to the scarcity of public APIs. Consequently, the process of searching for flights often requires simulating human-like interactions with web interfaces.
Fortunately, the combination of CrewAI and Browserbase only requires a few dozen lines of code to automate this complex task.
By following this tutorial, you’ll learn how to build a CrewAI program that searches for a roundtrip flight from a simple human input:
CrewAI helps developers build AI Agents with 4 core concepts: Crews, Agents, Tasks, and Tools:
- A Crew is a team of Agents working together to accomplish some tasks.
- A Task, such as “Search flights according to criteria”, is a goal assigned to a specialized Agent (e.g., a Flight Booking Agent).
- An Agent can be seen as a specialized text-only GPT that receives a set of Tools to perform actions (e.g., search on Google, navigate to this URL).

Here is an example of a Crew assembled to research a given topic and write an article.
The Agents: A Researcher and a Writer
First, let’s define 2 Agents, one specialized in researching a topic and another in writing articles:
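A minimal sketch of what those two Agents can look like is shown below; the role, goal, and backstory wording and the variable names are illustrative rather than verbatim, and SerperDevTool assumes a SERPER_API_KEY environment variable:

```python
# A minimal sketch of the two Agents; the wording of each property is illustrative.
from crewai import Agent
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()  # Google Search via the Serper.dev API

researcher = Agent(
    role="Senior Researcher",
    goal="Uncover the latest developments on a given topic",
    backstory="A curious analyst who digs up reliable sources on any subject.",
    tools=[search_tool],
    verbose=True,
)

writer = Agent(
    role="Tech Writer",
    goal="Write a clear, engaging article about the researched topic",
    backstory="A writer who turns research notes into readable prose.",
    tools=[search_tool],
    verbose=True,
)
```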
Each Agent gets:

- A role that helps the Crew select the best Agent for a given Task.
- A goal that frames the Agent’s decision-making process when iterating on a Task.
- A backstory providing context to the Agent’s role and goal.

Both Agents get access to a search_tool (a SerperDevTool instance) to perform searches with Google Search.
The Tasks: writing and researching
Let’s now define 2 tasks: researching a topic and writing an article.
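A sketch of those two Tasks, reusing the researcher and writer Agents defined above (the descriptions and expected outputs are illustrative):

```python
# A sketch of the two Tasks, reusing the Agents and search_tool defined above.
from crewai import Task

research_task = Task(
    description="Research the latest trends on the given topic and list the key findings.",
    expected_output="A bullet-point summary of the most relevant findings, with sources.",
    agent=researcher,
    tools=[search_tool],
)

write_task = Task(
    description="Write an article based on the research findings.",
    expected_output="A 4-paragraph article in markdown format.",
    agent=writer,
)
```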
A Task’s description can be compared to a prompt, while the expected_output helps format the result of the Task.
As expected, the write_task gets assigned to the writer Agent and the research_task to the researcher Agent.
Agents and Tasks look very similar: do I need both?
Indeed, in a simple example like this one, the Agent and Task look alike. In real-world applications, an Agent gets to perform multiple tasks. An Agent then represents the expertise (goal, backstory) with a set of skills (tools), while a Task is a goal to accomplish.
Assembling the Crew
As covered earlier, a Crew defines a set of Tasks to be performed sequentially by a team of Agents.
Note that Tasks share a context, which explains why the research task comes before the writing task.
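Assembled into a Crew, this might look like the following sketch, where the sequential process mirrors the task order described above:

```python
# A sketch of the article-writing Crew: the research task runs first so that
# its output is available as context for the writing task.
from crewai import Crew, Process

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],  # executed in this order
    process=Process.sequential,
)
```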
Let’s now build our Flight Booking Crew with these fresh new concepts!
Before jumping into the setup and code, let’s step back and look at how to assemble a Crew that helps book flights.
From a user input like “San Francisco to New York one-way on 21st September”, our Flight Booking Crew should print the top 5 flights as follows:
To achieve this goal, our Crew will navigate to https://www.kayak.com, perform a search, and extract each flight detail, which translates to the following steps:

1. Build a valid Kayak search URL from the user’s request.
2. Load the search results page with a headless browser and extract the list of flights.
3. Load each flight individually to find its booking providers and links.
4. Consolidate everything into a list of the top 5 flights.
To perform those steps, we will create 2 Agents: a “Search Flights” (Flights) Agent and a Summarize Agent.

The “Search Flights” Agent will need:

- A Browserbase tool to load web pages and extract their content as text.
- A Kayak tool to translate the user input into a valid Kayak search URL.

Finally, we will define 2 tasks: “Search Flights” and “Search Booking Providers”.
We can visualize our Flight Booking Crew as follows:
Our Crew comprises 2 Agents, 2 Tools, and 2 Tasks.
Let’s implement our Crew!
Let’s set up the project by installing the required dependencies:
Create a .env
file with the following variables and their respective values:
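The exact variable names depend on your code, but assuming the standard OpenAI and Browserbase environment variables, the file might look like this:

```
OPENAI_API_KEY=<your OpenAI API key>
BROWSERBASE_API_KEY=<your Browserbase API key>
```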
Where can I find my OpenAI and Browserbase API Keys?
While CrewAI provides a wide range of tools (e.g., the SerperDevTool to perform searches with Google Search), our “Search Flights” Agent needs 2 custom tools:

- A Browserbase tool to load web pages and extract their content as text.
- A Kayak tool to assemble a valid Kayak search URL.

The Kayak website relies heavily on JavaScript and performs a live flight search, making it hard to interact with:
The page is fully loaded; however, the flights are still being searched.
Fortunately, leveraging Browserbase’s headless browsers makes loading and interacting with such websites easier while benefiting from its Stealth features.
Let’s take a look at our custom Browserbase Tool implementation:
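Below is a sketch of what such a tool can look like, assuming the crewai_tools tool decorator, the playwright and html2text packages, and a BROWSERBASE_API_KEY environment variable; the 25-second wait is an arbitrary buffer for Kayak’s live search:

```python
# A sketch of the Browserbase tool; the timeout value is an assumption.
import os

import html2text
from crewai_tools import tool
from playwright.sync_api import sync_playwright


@tool("Browserbase tool")
def browserbase(url: str) -> str:
    """
    Loads a URL in a headless browser and returns its content as text.

    :param url: The URL to load
    :return: The text content of the page
    """
    with sync_playwright() as playwright:
        # Connect to a Browserbase-hosted browser through the Connect API (CDP).
        browser = playwright.chromium.connect_over_cdp(
            "wss://connect.browserbase.com?apiKey=" + os.environ["BROWSERBASE_API_KEY"]
        )
        context = browser.contexts[0]
        page = context.pages[0]
        page.goto(url)
        # Give Kayak's client-side flight search time to settle.
        page.wait_for_timeout(25000)
        content = html2text.html2text(page.content())
        browser.close()
        return content
```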
Custom Tool definition
A custom Tool is composed of 3 elements:

- The @tool("name") decorator
- A description, provided as a multi-line comment
- The function’s parameters and body, which implement the tool’s logic

The description is used by the Agents to evaluate the best-fitted Tool to help complete a given Task.
A description can also provide instructions on the parameters. Here, we instruct that the unique url parameter should be a URL.
Browserbase Tool Logic
The Browserbase tool utilizes the playwright library along with the Browserbase Connect API to initiate a headless browser session, which lets it load and interact with web pages as shown in the sketch above.
Then, it leverages the html2text library to convert the webpage’s content to text and return it to the Agent for processing.
Agents are capable of reasoning but cannot build a valid Kayak search URL from the ground up.
To help our “Flights” Agent, we will create a simple Kayak Tool below:
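Here is a sketch of such a tool; the URL pattern and parameter names are assumptions based on Kayak’s public flight-search URLs:

```python
# A sketch of the Kayak tool; it only assembles a URL, no network calls.
from typing import Optional

from crewai_tools import tool


@tool("Kayak tool")
def kayak(departure: str, destination: str, date: str, return_date: Optional[str] = None) -> str:
    """
    Generates a Kayak search URL for flights between two airports.

    :param departure: The departure airport code (e.g., 'SFO')
    :param destination: The destination airport code (e.g., 'JFK')
    :param date: The date of the flight in the format 'YYYY-MM-DD'
    :param return_date: Only for roundtrips: the return date in the format 'YYYY-MM-DD'
    :return: The Kayak URL to search for the flights
    """
    url = f"https://www.kayak.com/flights/{departure}-{destination}/{date}"
    if return_date:
        url += f"/{return_date}"
    return url
```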
The Kayak tool describes multiple parameters with specific format instructions.
For example: date: The date of the flight in the format 'YYYY-MM-DD'
This illustrates the flexibility of Tools, which can rely on the Agents’ powerful reasoning capabilities to solve formatting challenges that would generally require some preprocessing.
Our Flights Agent now has the tools to navigate the Kayak website from a high-level user input (“San Francisco to New York one-way on 21st September”).
Let’s now set up our 2 Agents:
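A sketch of both Agents, assuming the browserbase and kayak tools defined above (the property wording is illustrative):

```python
# A sketch of the Flights and Summarize Agents; the wording is illustrative.
from crewai import Agent

flights_agent = Agent(
    role="Flights",
    goal="Search flights",
    backstory="I am an agent that can search for flights and list them clearly.",
    tools=[kayak, browserbase],
    allow_delegation=False,
)

summarize_agent = Agent(
    role="Summarize",
    goal="Summarize content",
    backstory="I am an agent that can summarize text into a structured list.",
    allow_delegation=False,
)
```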
As outlined in the introduction, an Agent needs 3 properties: a role, a goal, and a backstory.
The role of our two Agents is to orchestrate the tools (build the URL, then navigate to it) and extract the information from the webpages’ text. For this reason, their definition is straightforward.
What is the role of the Summarize Agent?
Through our iterations in building this Flight Booker, we realized that the Crew, with a single Flights Agent, was struggling to distinguish flights from flight providers (booking links).
The Summarize Agent, as we will cover in the next section, is not assigned to any task. It is created and assigned to the Crew to help digest the text extracted from the web pages and distinguish the flights from the providers (booking links).
Let’s now define the core part of our Flight Booking Crew: the Tasks.
From given flight criteria, our Crew should print the first 5 available flights with their associated booking links. To achieve such a result, our Crew needs to:

1. Search for flights matching the criteria and list the top results.
2. Load each flight individually to gather its booking providers and links.
Our Search flights Task is bound to our Flights Agent, getting access to our custom tools:
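A sketch of this Task; the {request} and {current_year} placeholders are filled in at kickoff time, and output_search_example is only an illustrative template of the expected format:

```python
# A sketch of the Search Flights Task, bound to the flights_agent defined above.
from crewai import Task

output_search_example = """
Here are our top 5 flights from <origin> to <destination> on <date>:
1. <airline>: departure <time>, arrival <time>, duration <duration>, price <price>
...
"""

search_task = Task(
    description=(
        "Search flights according to criteria {request}. Current year: {current_year}"
    ),
    expected_output=output_search_example,
    agent=flights_agent,
)
```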
The description will be provided to the Flights Agent, who will call:

- The Kayak tool to build a valid search URL
- The Browserbase tool to load the search results

The expected_output is set to output_search_example; with the help of the Summarize Agent, the Task will return a list of 5 flights.

Why do we provide the current_year?
Most users will prompt a relative date, for example: “San Francisco to New York one-way on 21st September”.
An Agent’s reasoning relies on OpenAI, which lacks some intuition on relative dates (OpenAI will always think we are in 2022).
For this reason, we need to specify the current year in the prompt (the Task’s description).
The Search Booking Providers Task relies heavily on the Agent’s reasoning capabilities:
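A sketch of this second Task, again with an illustrative output_providers_example template:

```python
# A sketch of the Search Booking Providers Task, reusing Task and flights_agent
# from the previous snippet.
output_providers_example = """
Here are our top 5 flights from <origin> to <destination> on <date>:
1. <airline>: departure <time>, arrival <time>, price <price>,
   booking providers: <provider> (<booking link>)
...
"""

search_booking_providers_task = Task(
    description="Load every flight individually and find available booking providers",
    expected_output=output_providers_example,
    agent=flights_agent,
)
```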
By asking to “Load every flight individually”, the Flights Agent will understand that it needs to locate a URL to navigate to for each flight result.
The Search Booking Providers Task will indirectly rely on the Summarize Agent to consolidate the flight results and the individual flights’ provider results, as showcased in output_providers_example.
It is time to assemble our Crew by arranging the Tasks in the correct order (search flights, then gather providers and booking links):
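A sketch of the assembled Crew; note that summarize_agent appears in the agents list even though no Task is bound to it:

```python
# A sketch of the Flight Booking Crew: tasks run sequentially, and the
# Summarize Agent is available to the Crew without owning a Task.
from crewai import Crew, Process

crew = Crew(
    agents=[flights_agent, summarize_agent],
    tasks=[search_task, search_booking_providers_task],
    process=Process.sequential,
    verbose=True,
)
```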
The Crew must complete the Search Flights task followed by the Search Booking Providers task.
As covered earlier, the Summarize Agent gets assigned to the Crew, not to a Task, to help consolidate the flights and providers into a simple list.
Let the Crew kick off!
A Crew process starts by calling the kickoff() method.
Our Crew needs 2 inputs: the user input (“San Francisco to New York one-way on 21st September”) and the current year.
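A sketch of the kickoff call, assuming the {request} and {current_year} placeholders used in the Task descriptions above:

```python
# Kick off the Crew with the user request and the current year as inputs.
from datetime import date

result = crew.kickoff(
    inputs={
        "request": "San Francisco to New York one-way on 21st September",
        "current_year": date.today().year,
    }
)
print(result)
```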
Our CrewAI program is now complete!
Let’s give it a try and look at its execution steps in detail.
OpenAI cost
Expect each run of the program to cost around $0.50 in OpenAI credits.
The Agent’s reasoning relies heavily on OpenAI and sends large chunks of text (the webpages), resulting in significant contexts (~50k context tokens per run).
Let’s search for a one-way flight from New York to San Francisco by running:
As the program starts running in verbose mode, you should see some logs stream in your terminal; let’s take a closer look at the steps.
Looking at the debugging logs streamed to the terminal helps us understand how our crew works.
Let’s explore the logs in the following steps:
1. Kick off the first task: Search flights
We can already see the magic of the Flights Agent reasoning in action.
Given the Task definition and the 2 tools available, the Flights Agent concludes “I need to generate a URL using the Kayak tool for the flight search”.
1.1 Use the Kayak tool to generate a valid search URL
The Action Input shows that our Flights Agent successfully parsed the user input as valid parameters.
Once the URL is generated, our Agent immediately reaches the next step: fetching the flight list using the URL.
1.2 Use the Browserbase tool to extract the flights list
In this step, Flights Agent retrieves the Kayak webpage as text and leverages OpenAI to extract a flight list. This is the program’s slowest and most costly action, as OpenAI takes up to multiple minutes to process the request.
Once the flight list is generated, our Crew marks the first Task (“Search for flights”) as completed (“Finished chain.”) and moves to the next one.
2.x Iterate on each flight to extract provider and booking link
The second Task is impressive as the Agent realizes that it needs to loop over the 5 flights to retrieve the booking provider:
3. Format the consolidated list of 5 flights
Once the booking links of each flight have been retrieved, the Agent completes a final step by summarizing the list:
Once finished, our program prints the final answer returned by the Crew:
CrewAI provides a powerful way to develop AI Agents. The traditional approach of Prompt Engineering is replaced by instructions that leverage the Agent’s reasoning capabilities.
As we covered in this example, Agents are capable of completing Tasks defined with high-level instructions (e.g., “Load every flight individually and find available booking providers”).
Combined with Browserbase headless browsers, CrewAI helps create powerful AI Agents that automate human tasks or help access data that is not available through public APIs.
Check out the repo!