
🌐 Browser-use is the easiest way to connect your AI agents with the browser.
💡 See what others are building and share your projects in our Discord! Want Swag? Check out our Merch store.
🌤️ Skip the setup - try our hosted version for instant browser automation! Try the cloud ☁︎.
With pip (Python>=3.11):
```bash
pip install browser-use
```

For memory functionality (requires Python<3.13 due to PyTorch compatibility):

```bash
pip install "browser-use[memory]"
```

Install Playwright:

```bash
playwright install chromium
```

Spin up your agent:
```python
from langchain_openai import ChatOpenAI
from browser_use import Agent
import asyncio
from dotenv import load_dotenv
load_dotenv()

async def main():
    agent = Agent(
        task="Compare the price of gpt-4o and DeepSeek-V3",
        llm=ChatOpenAI(model="gpt-4o"),
    )
    await agent.run()

asyncio.run(main())
```

For complex browser automation tasks requiring full Playwright capabilities:
```python
from langchain_openai import ChatOpenAI
from browser_use import Agent
from browser_use.browser.browser import Browser, BrowserConfig
import asyncio
from dotenv import load_dotenv
load_dotenv()

async def main():
    # Enable advanced mode with full Playwright capabilities
    browser_config = BrowserConfig(
        advanced_mode=True,  # Enable JavaScript execution, iframe support, etc.
        headless=False,  # Set to True for headless operation
    )
    browser = Browser(config=browser_config)

    agent = Agent(
        task="Navigate complex web interfaces with iframes and dynamic content",
        llm=ChatOpenAI(model="gpt-4o"),
        browser=browser,
    )
    await agent.run()

asyncio.run(main())
```

Add your API keys for the provider you want to use to your .env file.
```bash
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
AZURE_OPENAI_ENDPOINT=
AZURE_OPENAI_KEY=
AZURE_OPENAI_API_VERSION=
GEMINI_API_KEY=
DEEPSEEK_API_KEY=
GROK_API_KEY=
NOVITA_API_KEY=
```

The library supports command-line flags for flexible configuration:
```bash
# Restaurant standee detection
python examples/use-cases/restaurant_standee_detection.py

# With command-line flags
python examples/use-cases/restaurant_standee_detection.py --cdp-port 9222 --no-headless --model gpt-4o --task "Your custom task"

# Using Azure OpenAI
python examples/use-cases/restaurant_standee_detection.py --use-azure --model gpt-4o

# With screenshots enabled
python examples/use-cases/restaurant_standee_detection.py --screenshots --no-headless --advanced-mode

# Bulk restaurant image collection
python examples/use-cases/bulk_restaurant_image_scraper.py --max-restaurants 50 --auth
```

Available flags:
- `--cdp-port PORT`: CDP port for connecting to an existing browser
- `--headless` / `--no-headless`: Run the browser in headless or visible mode
- `--advanced-mode` / `--no-advanced-mode`: Enable or disable advanced Playwright capabilities
- `--model MODEL`: Model to use (default: gpt-4o)
- `--use-azure`: Use Azure OpenAI instead of OpenAI
- `--task "TASK"`: Custom task to perform
- `--screenshots`: Take screenshots during automation
- `--debug`: Enable debug logging
Example scripts support various command-line arguments:
```bash
python examples/use-cases/restaurant_standee_detection.py [options]
```

Options:
- `--use-azure`: Use Azure OpenAI instead of OpenAI
- `--model MODEL`: Specify the model name (default: gpt-4o)
- `--no-headless`: Run in visible browser mode
- `--advanced-mode`: Enable advanced Playwright capabilities
- `--cdp-port PORT`: Specify the CDP port for the browser connection
- `--screenshots`: Take screenshots during navigation
- `--debug`: Enable debug logging
Example with all options:
```bash
python examples/use-cases/restaurant_standee_detection.py --no-headless --advanced-mode --use-azure --model gpt-4o --cdp-port 9222 --screenshots --debug
```

The browser-use library provides screenshot capabilities in two ways:
- Base64-encoded screenshots: the `BrowserContext.take_screenshot()` method returns a base64-encoded screenshot (it does not save to a file)
- File-based screenshots: example scripts like `restaurant_standee_detection.py` include custom functions to save screenshots to files

Important: Screenshots are not saved to files by default. When running examples that support screenshots (like `restaurant_standee_detection.py`), you must explicitly enable them with the `--screenshots` flag:
```bash
python examples/use-cases/restaurant_standee_detection.py --screenshots --no-headless --advanced-mode
```

Screenshots are saved to the `~/screenshots/` directory using platform-independent paths. The directory is created automatically if it does not exist, on both Linux and macOS.
Note: The screenshot functionality has no environment-specific dependencies and works across all platforms using standard Python and Playwright functionality.
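The platform-independent path handling described above can be sketched as follows (the helper name is illustrative, not part of the library's API):

```python
import os

def screenshot_path(filename: str) -> str:
    """Return a path under ~/screenshots/, creating the directory if needed."""
    directory = os.path.join(os.path.expanduser("~"), "screenshots")
    os.makedirs(directory, exist_ok=True)  # no error if it already exists
    return os.path.join(directory, filename)
```

`os.path.expanduser` resolves `~` on Linux, macOS, and Windows alike, which is what makes the resulting path platform-independent.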
```bash
# Extract CDP port
CDP_PORT=$(ps -ax | grep -o '\-\-remote-debugging-port=[0-9]\+' | awk -F= '{print $2}' | head -1)

# Run with visible browser window
python examples/use-cases/restaurant_standee_detection.py --no-headless --task "Your custom task"

# Run with advanced mode and custom model
python examples/use-cases/restaurant_standee_detection.py --no-headless --advanced-mode --model gpt-4o

# For Naver Maps photo navigation on Mac
python examples/use-cases/restaurant_standee_detection.py --cdp-port $CDP_PORT --no-headless --advanced-mode --model gpt-4o

# With screenshots enabled
python examples/use-cases/restaurant_standee_detection.py --no-headless --advanced-mode --screenshots
```

When running on macOS:
- The screenshots directory is created at `~/screenshots/` using platform-independent paths
- You must explicitly enable screenshots with the `--screenshots` flag
- Screenshots are saved with timestamps and descriptive names
- The directory is created automatically if it does not exist
```bash
# Extract CDP port for connecting to an existing browser
CDP_PORT=$(ps aux | grep -o '\-\-remote-debugging-port=[0-9]\+' | awk -F= '{print $2}' | head -1)

# Run with CDP port connection
python examples/use-cases/restaurant_standee_detection.py --cdp-port $CDP_PORT --task "Your custom task"

# Run with Azure OpenAI
python examples/use-cases/restaurant_standee_detection.py --cdp-port $CDP_PORT --use-azure --model gpt-4o
```

For other settings, models, and more, check out the documentation 📕.
The advanced mode (`advanced_mode=True`) enables full Playwright capabilities:

- JavaScript execution: Run custom JavaScript via `page.evaluate()` for complex interactions
- Iframe support: Access and traverse nested iframes using `page.frames()`
- Enhanced UI interaction: Full locator-strategy support and React component interaction
- Korean language support: Improved handling of Korean text elements
- Dynamic content handling: Better waiting for JavaScript page loads and state changes
Advanced mode includes specialized methods for Korean websites:
```python
# Example: Using enhanced Korean text detection
await context.get_element_by_korean_text("외부")  # Find element by Korean text
await context.get_naver_photo_elements(search_in_frames=True)  # Find photo elements in Naver Maps
await context.get_element_by_photo_category("외부")  # Find photo category by name
```

Advanced mode provides improved iframe handling:
```python
# Example: Working with iframes
frames = await context.get_frames()  # Get all frames on the page
place_frame = await context.get_frame_by_url_pattern("pcmap.place.naver.com")  # Find frame by URL pattern
await context.switch_to_frame(place_frame)  # Switch context to a specific frame
```

Advanced mode provides improved methods for handling dynamic elements:
```python
# Example: Using dynamic element selection
# Find elements with retry logic
element = await context.get_element_with_retry(selector=".dynamic-class", retry_count=5, wait_time=1000)

# Find elements by text content with fuzzy matching
element = await context.get_element_by_text_content("Approximate text", fuzzy_match=True)
```

Advanced mode includes improved timing and state management:
```python
# Example: Using enhanced timing and state management
# Wait for the network to be idle
await context.wait_for_network_idle(timeout=10000)

# Wait for an element to be visible, with a custom timeout
await context.wait_for_element_visible(".selector", timeout=5000)

# Wait for page navigation to finish
await context.wait_for_navigation_complete()
```

Advanced mode provides enhanced screenshot capabilities:
```python
# Take a screenshot and get base64-encoded data
screenshot_b64 = await context.take_screenshot(full_page=True)

# In example scripts, enable screenshots with the --screenshots flag:
# python examples/use-cases/restaurant_standee_detection.py --screenshots --no-headless --advanced-mode [other options]
```

Screenshots are saved to the `~/screenshots/` directory with timestamps and descriptive names. The directory is created if it does not exist.
Example screenshot usage in custom scripts:
```python
import base64
import os

# Save a screenshot to a file (note: this doesn't require the --screenshots flag)
page = await context.get_current_page()
# Use os.path.expanduser to ensure cross-platform compatibility
screenshot_path = os.path.join(os.path.expanduser("~"), "screenshots", "my_screenshot.png")
os.makedirs(os.path.dirname(screenshot_path), exist_ok=True)
await page.screenshot(path=screenshot_path)

# Process a base64-encoded screenshot (available in both normal and advanced mode)
screenshot_b64 = await context.take_screenshot()
screenshot_bytes = base64.b64decode(screenshot_b64)
screenshot_path = os.path.join(os.path.expanduser("~"), "screenshots", "decoded_screenshot.png")
os.makedirs(os.path.dirname(screenshot_path), exist_ok=True)
with open(screenshot_path, "wb") as f:
    f.write(screenshot_bytes)
```

You can test browser-use with a UI repository.
Or simply run the gradio example:
```bash
uv pip install gradio
python examples/ui/gradio_demo.py
```

Task: Add grocery items to cart, and checkout.
Prompt: Add my latest LinkedIn follower to my leads in Salesforce.
Prompt: Read my CV & find ML jobs, save them to a file, and then start applying for them in new tabs; if you need help, ask me.
(Demo video: apply.to.jobs.8x.mp4)
Prompt: Write a letter in Google Docs to my Papa, thanking him for everything, and save the document as a PDF.
Prompt: Look up models with a license of cc-by-sa-4.0 and sort by most likes on Hugging face, save top 5 to file.
(Demo video: hugging_face_high_quality.mp4)
For more examples see the examples folder or join the Discord and show off your project.
Tell your computer what to do, and it gets it done.
- Improve agent memory (summarize, compress, RAG, etc.)
- Enhance planning capabilities (load website specific context)
- Reduce token consumption (system prompt, DOM state)
- Improve extraction for datepickers, dropdowns, special elements
- Improve state representation for UI elements
- LLM as fallback
- Make it easy to define workflow templates where LLM fills in the details
- Return playwright script from the agent
- Create datasets for complex tasks
- Benchmark various models against each other
- Fine-tuning models for specific tasks
- Human-in-the-loop execution
- Improve the generated GIF quality
- Create various demos for tutorial execution, job application, QA testing, social media, etc.
We love contributions! Feel free to open issues for bugs or feature requests. To contribute to the docs, check out the /docs folder.
To learn more about the library, check out the local setup 📕.
`main` is the primary development branch with frequent changes. For production use, install a stable versioned release instead.
The core package containing all the browser automation functionality.
Contains the LLM agent implementation that processes tasks and controls browser automation.
- `service.py`: Core agent service implementation
- `prompts/`: System and agent prompts
- `views.py`: Data structures for agent state
- `memory/`: Agent memory management
- `message_manager/`: Handles message formatting and processing
Manages browser instances and interactions with Playwright.
- `browser.py`: Browser configuration and initialization
- `context.py`: Browser context management
- `chrome.py`: Chrome-specific implementation
- `utils/`: Browser utility functions
Handles DOM tree building, element selection, and interaction.
- `buildDomTree.js`: JavaScript for DOM tree extraction
- `service.py`: DOM service implementation
- `clickable_element_processor/`: Processes clickable elements
- `history_tree_processor/`: Manages navigation history
Manages the control flow between agent and browser.
- `registry/`: Registry for controller components
Handles usage tracking and analytics.
Contains tools that can be used by the agent.
- `registry.py`: Central registry for tool registration
- `standee_detection/`: Standee detection tool implementation
Project documentation and architecture diagrams.
- `cloud/`, `customize/`, `development/`: Documentation sections
Example scripts demonstrating library usage.
- `simple.py`: Basic usage example
- `restaurant_standee_detection.py`: Advanced restaurant standee detection with full Playwright capabilities
- `bulk_restaurant_image_scraper.py`: Bulk collection of restaurant images from Naver Maps
Unit and integration tests.
- `tools/`: Tests for tool implementations
- `mind2web_data/`: Test data
Static files used by the project.
We are forming a commission to define best practices for UI/UX design for browser agents. Together, we're exploring how software redesign improves the performance of AI agents and gives companies a competitive advantage by putting their existing software at the forefront of the agent age.
Email Toby to apply for a seat on the committee.
Want to show off your Browser-use swag? Check out our Merch store. Good contributors will receive swag for free 👀.
If you use Browser Use in your research or project, please cite:
```bibtex
@software{browser_use2024,
  author    = {Müller, Magnus and Žunič, Gregor},
  title     = {Browser Use: Enable AI to control your browser},
  year      = {2024},
  publisher = {GitHub},
  url       = {https://github.com/browser-use/browser-use}
}
```


