How to use Selenium in Python
Learn how to use Selenium in Python with our guide. Discover tips, real-world applications, and how to debug common errors.

Selenium is a powerful tool for web automation in Python. It lets you control a web browser programmatically, making it ideal for testing web applications and scraping dynamic data.
In this article, we'll explore techniques, tips, and real-world applications, along with advice on debugging your code so you can build robust automation scripts.
Getting started with Selenium
```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()))
driver.get("https://www.python.org")
print("Title:", driver.title)
driver.quit()
```

Output:

```
Title: Welcome to Python.org
```
This script automates a simple browser task. The most important step is creating a webdriver instance. Using ChromeDriverManager().install() is a modern best practice—it automatically downloads and configures the correct browser driver, which saves you from manually managing version compatibility issues.
Once the driver is running, you can use methods like get() to navigate to a page and access properties like title. The quit() method is essential for closing the browser and ending the session cleanly.
Basic Selenium techniques
Beyond just opening a webpage, Selenium's real power lies in interacting with elements, handling forms, and navigating through the site's pages.
Locating elements with different locators
```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.python.org")
search_box = driver.find_element(By.NAME, "q")
submit_button = driver.find_element(By.ID, "submit")
print(f"Found elements: Search box: {search_box.tag_name}, Button: {submit_button.tag_name}")
driver.quit()
```

Output:

```
Found elements: Search box: input, Button: button
```
To interact with a page, you first need to locate its elements. The find_element() method is your main tool for this, used with the By class to define your search strategy. This approach makes your automation scripts clear and maintainable.
- By.NAME finds an element using its HTML name attribute.
- By.ID locates an element by its unique id, which is often the most reliable method.
Selenium offers other locators too, like By.CSS_SELECTOR or By.XPATH, giving you flexible ways to target exactly what you need on a page.
Working with forms and input fields
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get("https://www.python.org")
search_box = driver.find_element(By.NAME, "q")
search_box.send_keys("documentation")
search_box.send_keys(Keys.RETURN)
print("Search results URL:", driver.current_url)
driver.quit()
```

Output:

```
Search results URL: https://www.python.org/search/?q=documentation&submit=
```
Automating forms is straightforward. Once you've located an input field, use the send_keys() method to type text into it, just like a user would. This is the primary way to populate text boxes, search bars, and other input elements.
- To handle form submissions or other keyboard actions, you can import the Keys class. It lets you simulate pressing non-text keys, such as Keys.RETURN to submit a form or Keys.TAB to navigate between fields.
Navigating between pages with back() and forward()
```python
from selenium import webdriver
import time

driver = webdriver.Chrome()
driver.get("https://www.python.org")
time.sleep(1)
driver.get("https://docs.python.org")
time.sleep(1)
driver.back()
print("After going back:", driver.title)
driver.forward()
print("After going forward:", driver.title)
driver.quit()
```

Output:

```
After going back: Welcome to Python.org
After going forward: Python documentation
```
You can navigate the browser's session history programmatically, just like clicking the back and forward buttons. This is essential for testing user flows that jump between different pages. The example uses time.sleep() to add short delays between navigation actions.
- The driver.back() method moves you to the previously visited URL.
- After navigating back, driver.forward() takes you to the next page in your history, effectively undoing the "back" action.
These commands give you precise control over the browser's navigation stack, allowing you to simulate a user moving through their browsing history.
Advanced Selenium techniques
Building on these fundamentals, you can create more robust scripts by learning to handle dynamic content, execute JavaScript, and run browsers invisibly for efficiency.
Implementing explicit waits with WebDriverWait
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.python.org")
wait = WebDriverWait(driver, 10)
element = wait.until(EC.presence_of_element_located((By.ID, "id-search-field")))
print(f"Element found after waiting: {element.get_attribute('placeholder')}")
driver.quit()
```

Output:

```
Element found after waiting: Search
```
Modern web pages often load content dynamically, which can cause your script to fail if it tries to find an element that hasn't appeared yet. Instead of unreliable fixed delays like time.sleep(), use explicit waits: WebDriverWait pauses your script until a specific condition is met, polling the page rather than blocking for a fixed interval. To understand various waiting mechanisms, see our guide on how to wait in Python.
- The wait.until() method, combined with expected_conditions (like EC.presence_of_element_located), tells Selenium to wait for up to 10 seconds for the element to appear.
- This makes your script more reliable, as it adapts to the page's loading speed.
Executing JavaScript with execute_script()
```python
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.python.org")
title = driver.execute_script("return document.title;")
driver.execute_script("window.scrollTo(0, document.body.scrollHeight/2);")
print(f"Title via JavaScript: {title}")
driver.quit()
```

Output:

```
Title via JavaScript: Welcome to Python.org
```
When Selenium's standard tools aren't enough, you can use execute_script() to run JavaScript directly. This gives you a powerful way to interact with the page, performing actions that are tricky with Python alone.
- You can execute JavaScript to perform actions, such as using window.scrollTo() to scroll the page.
- If your JavaScript code includes a return statement, the value is passed back to your Python script. This is how the example captures the page title.
Running Selenium in headless mode
```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

chrome_options = Options()
chrome_options.add_argument("--headless")
driver = webdriver.Chrome(options=chrome_options)
driver.get("https://www.python.org")
print(f"Page title in headless mode: {driver.title}")
print(f"Page source length: {len(driver.page_source)} characters")
driver.quit()
```

Output:

```
Page title in headless mode: Welcome to Python.org
Page source length: 53842 characters
```
You can run your automation scripts without a visible browser window using headless mode. It's faster and ideal for servers or automated testing where a GUI isn't necessary. Your script still interacts with the page fully, just without the visual overhead.
- To enable it, you create an Options object and use the add_argument("--headless") method.
- You then pass these settings to your webdriver.Chrome instance when it's created.
Move faster with Replit
Replit is an AI-powered development platform where you can start coding Python instantly. All the necessary dependencies come pre-installed, so you can forget about environment setup and focus on building.
While learning individual techniques like find_element() and WebDriverWait is a great start, building a full application is the next step. This is where Agent 4 comes in, taking your project from an idea to a working product by handling the code, databases, APIs, and even deployment. For example, you could build:
- A price monitoring tool that automatically scrapes product data from e-commerce sites.
- An automated testing script that fills and submits web forms to validate user registration flows.
- A website health dashboard that periodically checks a list of URLs and confirms they load the correct title.
Simply describe your app, and Replit will write the code, test it, and fix issues automatically, all within your browser.
Common errors and challenges
Even with best practices in place, you'll run into some common exceptions. Here's how to debug the most frequent ones you'll encounter.
Debugging NoSuchElementException
This error is exactly what it sounds like: Selenium couldn't find the element you asked for. It’s one of the most common issues and usually happens for a couple of reasons.
- Timing issues: The page may not have fully loaded, so the element doesn't exist yet. The best fix is an explicit wait with WebDriverWait to pause your script until the element is present.
- Incorrect locator: Your selector (e.g., ID, class name, or XPath) might be wrong or may have changed. Double-check the element in your browser's developer tools to confirm you're using the right locator strategy.
Handling StaleElementReferenceException
A StaleElementReferenceException occurs when you successfully locate an element but the page changes before you can interact with it. The reference you were holding is now "stale" because the element is no longer attached to the page's DOM. This often happens when the page content is dynamically updated via JavaScript.
The simplest solution is to re-find the element right before you perform an action on it. If you're iterating through a list of elements, for example, locate the list again inside your loop to ensure you always have a fresh reference.
Fixing ElementClickInterceptedException
This exception means Selenium tried to click an element, but something else got in the way. The click was "intercepted" by another element, like a cookie consent banner, a pop-up ad, or a sticky header that overlays your target.
To fix this, you can try a few things. You can add a wait condition for the intercepting element to disappear, or you can programmatically close it first. Sometimes, scrolling the target element into view helps. As a last resort, you can use execute_script() to trigger a JavaScript click, which can often bypass the UI obstruction.
Debugging NoSuchElementException
This exception is thrown when find_element() comes up empty. It’s a frequent hurdle, usually pointing to a typo in your locator or the element not being loaded yet. The code below demonstrates this by searching for a non-existent element ID.
```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.python.org")

# This will fail if the element doesn't exist or the ID is wrong
download_button = driver.find_element(By.ID, "download-button")
download_button.click()
driver.quit()
```
The script fails because it searches for an element with the ID "download-button", which doesn't exist on the page, immediately raising a NoSuchElementException. The corrected code below shows how to handle this scenario gracefully.
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException, TimeoutException
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.python.org")
try:
    wait = WebDriverWait(driver, 10)
    download_button = wait.until(EC.presence_of_element_located(
        (By.CSS_SELECTOR, ".download-button")))
    download_button.click()
except (NoSuchElementException, TimeoutException):
    # wait.until() raises TimeoutException when the element never appears
    print("Could not find the download button")
driver.quit()
```
The corrected code handles this by wrapping the action in a try...except block. It uses WebDriverWait to pause for up to 10 seconds, waiting for the element to become available. If the element doesn't appear within the time limit, the except block catches the resulting exception and prints a message instead of crashing. This combination of explicit waits and error handling is essential for building reliable scripts that can handle dynamic page content. For more details on try and except in Python, check out our dedicated guide.
Handling StaleElementReferenceException
This exception happens when you find an element, but the page refreshes or changes before you can use it. Your variable now points to something that's gone. The code below shows this by navigating away after finding an element, making it stale.
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
import time

driver = webdriver.Chrome()
driver.get("https://www.python.org")
menu_item = driver.find_element(By.ID, "documentation")
driver.get("https://www.python.org/about/")
time.sleep(2)
menu_item.click()  # This will fail - element is now stale
driver.quit()
```
The script stores a reference to the menu_item element, but then navigates to a new page. This makes the original reference invalid, so calling click() on the stale variable fails. The corrected code below shows how to handle this.
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://www.python.org")
driver.get("https://www.python.org/about/")
wait = WebDriverWait(driver, 10)
menu_item = wait.until(EC.element_to_be_clickable((By.ID, "documentation")))
menu_item.click()
driver.quit()
```
The solution is to re-locate the element right before you interact with it, especially after a page navigation or dynamic update. The corrected code waits for the new page to load, then uses WebDriverWait with EC.element_to_be_clickable to find a fresh reference. This ensures the element is both present and interactive, preventing the stale reference error. Keep an eye out for this when your script navigates or triggers content refreshes.
Fixing ElementClickInterceptedException
This exception is thrown when your click is blocked by another element, such as a pop-up ad or a cookie consent banner. Your script finds the target, but something else intercepts the click. The code below shows how this can easily happen.
```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.python.org")
download_link = driver.find_element(By.LINK_TEXT, "Downloads")
download_link.click()  # Might fail if there's an overlay
driver.quit()
```
The script's direct click() action is risky because it doesn't account for overlays that might appear on page load. An element like a cookie banner can block the click, triggering the exception. See the corrected approach below.
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import ElementClickInterceptedException

driver = webdriver.Chrome()
driver.get("https://www.python.org")
download_link = driver.find_element(By.LINK_TEXT, "Downloads")
try:
    download_link.click()
except ElementClickInterceptedException:
    driver.execute_script("arguments[0].click();", download_link)
driver.quit()
```
The corrected code handles the interception by wrapping the click() in a try...except block. If a standard click fails, it catches the ElementClickInterceptedException. As a fallback, it uses driver.execute_script() to trigger a JavaScript click directly on the element. This often bypasses the overlay that was blocking the interaction. Use this approach when dealing with cookie banners or pop-ups that might cover your target element.
Real-world applications
Now that you can navigate common errors, you can apply these techniques to practical tasks like scraping product data and automating forms, or explore vibe coding for rapid prototyping. You might also combine Selenium with techniques for calling APIs in Python to build comprehensive automation workflows.
Scraping product information with find_elements()
To scrape multiple items at once, like all the product titles on a page, you'll use the find_elements() method, which returns a list of all elements matching your selector.
```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://books.toscrape.com/")
book_titles = driver.find_elements(By.CSS_SELECTOR, ".product_pod h3 a")
for i, title in enumerate(book_titles[:3], 1):
    print(f"Book {i}: {title.get_attribute('title')}")
driver.quit()
```
This script efficiently gathers multiple pieces of data from a webpage. Instead of finding one element, find_elements() returns a list of all elements matching the CSS selector. This approach is ideal for scraping repeating items like products or articles. For static content scraping, you might also consider Beautiful Soup in Python as an alternative.
- The code uses a loop to process the first three book titles from the collected list.
- Inside the loop, get_attribute('title') extracts the full book title from each link's title attribute, which is then printed.
Automating form submission with submit() method
While the submit() method offers a straightforward way to submit whole forms, specialized elements like dropdown menus are best handled with the Select class, as the example below shows.
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

driver = webdriver.Chrome()
driver.get("https://www.seleniumeasy.com/test/basic-select-dropdown-demo.html")
select = Select(driver.find_element(By.ID, "select-demo"))
select.select_by_visible_text("Tuesday")
print("Selected option:", select.first_selected_option.text)
driver.quit()
```
Automating dropdown menus involves a few specific steps. After locating the <select> element, you wrap it in a Select object. This gives you access to selection methods that are more reliable than simple clicks.
- The script uses select_by_visible_text() to pick an option by its display text, simulating a user's choice.
- Finally, it confirms the action was successful by reading first_selected_option.text to retrieve and print the text of the chosen option.
Get started with Replit
Now that you've learned the techniques, build a real tool with Replit Agent. Try prompts like, “Build a script to scrape flight prices for a specific route” or “Automate logging into a web app to check for new messages.”
Replit Agent writes the code, tests for errors, and even deploys the app. Start building with Replit.
Describe what you want to build, and Replit Agent writes the code, handles the infrastructure, and ships it live. Go from idea to real product, all in your browser.