How to use Selenium for web scraping and use proxies for privacy

SwiftProxy
By - Linh Tran
2025-01-15 20:30:11

Selenium is a powerful tool that lets you scrape data by simulating user actions in the browser, such as clicking, filling out forms, and submitting them. When running a web crawler, a proxy can hide your real IP address and help you avoid being blocked by the target website for making too many requests. This article explains how to use Selenium for web scraping and how to combine it with a proxy to protect your privacy.

Environment Setup

First, make sure that Python and the Selenium library are installed on your computer. You can install Selenium with the following command:

pip install selenium

Next, download the driver that matches your browser (ChromeDriver for Chrome, GeckoDriver for Firefox, etc.) and add it to your system's PATH environment variable.
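If you are on Selenium 4.6 or newer, Selenium Manager can download a matching driver automatically, so the manual PATH step is often optional. A quick smoke test to confirm the setup works:

from selenium import webdriver

# If this opens and closes a Chrome window without errors, Selenium
# (and, on 4.6+, the automatically managed driver) is set up correctly.
driver = webdriver.Chrome()
print(driver.capabilities["browserVersion"])
driver.quit()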

Web scraping using Selenium

Here is a simple example of scraping web data using Selenium:


from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Configure the ChromeDriver path
chrome_driver_path = 'path/to/chromedriver'  # Replace with your ChromeDriver path

# Initialize the ChromeDriver service
service = Service(chrome_driver_path)

# Browser configuration
options = Options()
options.add_argument("--start-maximized")  # Maximize the window on startup

# Initialize the WebDriver
driver = webdriver.Chrome(service=service, options=options)

# Open the target web page
driver.get('https://example.com')

# Locate elements and extract their data (adjust the XPath to match the target page;
# find_elements_by_xpath was removed in Selenium 4, so use find_elements with By)
elements = driver.find_elements(By.XPATH, "//div[@class='example']")
for element in elements:
    data = element.text
    print(data)  # Or save the data to a file, database, etc.

# Close the browser
driver.quit()
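Note that driver.get only waits for the initial page load; content rendered by JavaScript may appear later. A minimal sketch using Selenium's explicit waits, reusing the example XPath from above:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 10 seconds for the elements to be present in the DOM
wait = WebDriverWait(driver, 10)
elements = wait.until(
    EC.presence_of_all_elements_located((By.XPATH, "//div[@class='example']"))
)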

Combine with proxy to protect privacy

When crawling the web, a proxy hides your real IP address and helps you avoid being blocked by the target website for making too many requests. Here are the steps to use a proxy in Selenium:

1. Get the proxy server address and port

You can obtain a proxy server address and port by purchasing one or by using a free proxy service, although free proxies are not recommended for security and speed reasons.

2. Configure ChromeDriver to use a proxy

In Selenium, you can set the proxy through the browser's startup arguments. The following is sample code:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Configure the ChromeDriver path
chrome_driver_path = 'path/to/chromedriver'  # Replace with your ChromeDriver path

# Initialize the ChromeDriver service
service = Service(chrome_driver_path)

# Browser configuration
options = Options()
options.add_argument("--start-maximized")  # Maximize the window on startup

# Set up the proxy on the same Options object that is passed to the driver
proxy_server = "http://proxy_server_address:port"  # Replace with your proxy server address and port
options.add_argument(f'--proxy-server={proxy_server}')

# Initialize the WebDriver
driver = webdriver.Chrome(service=service, options=options)

# Open the target webpage
driver.get('https://example.com')

# Locate elements and extract their data (adjust the XPath to match the target page)
elements = driver.find_elements(By.XPATH, "//div[@class='example']")
for element in elements:
    data = element.text
    print(data)  # Or save the data to a file, database, etc.

# Close the browser
driver.quit()
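Chrome's --proxy-server flag does not accept a username and password embedded in the URL. If your provider requires authentication, one common workaround is the third-party selenium-wire package (pip install selenium-wire). A minimal sketch, assuming placeholder credentials:

from seleniumwire import webdriver  # third-party package: pip install selenium-wire

# user, password, proxy_server_address, and port below are placeholders
seleniumwire_options = {
    'proxy': {
        'http': 'http://user:password@proxy_server_address:port',
        'https': 'http://user:password@proxy_server_address:port',
    }
}

driver = webdriver.Chrome(seleniumwire_options=seleniumwire_options)
driver.get('https://example.com')
driver.quit()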

Notes

  • Proxy availability: Make sure the proxy server you use is available and stable; an unreliable proxy will slow down or break your scraping runs. You can verify that traffic actually goes through the proxy with the sketch after this list.
  • Anti-crawler mechanisms: Many websites use anti-crawler measures such as CAPTCHAs and IP blocking. When scraping with Selenium, take care to avoid triggering them, for example by limiting request frequency.
  • Compliance: Comply with the target website's terms of use and with applicable laws and regulations, and do not use scraped data for illegal purposes.
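To confirm that requests really go through the proxy, load an IP-echo service and check that the reported address belongs to the proxy, not to you. A minimal sketch, assuming the driver was configured with a proxy as above (httpbin.org/ip is one public echo service):

from selenium.webdriver.common.by import By

# The echoed IP should be the proxy's address, not your real one
driver.get('https://httpbin.org/ip')
print(driver.find_element(By.TAG_NAME, 'body').text)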

About the author

Linh Tran
Senior Technology Analyst at Swiftproxy
Linh Tran is a Hong Kong-based technology writer with a background in computer science and over eight years of experience in the digital infrastructure space. At Swiftproxy, she specializes in making complex proxy technologies accessible, offering clear, actionable insights for businesses navigating the fast-evolving data landscape across Asia and beyond.
The content provided on the Swiftproxy Blog is intended solely for informational purposes and is presented without warranty of any kind. Swiftproxy does not guarantee the accuracy, completeness, or legal compliance of the information contained herein, nor does it assume any responsibility for content on third-party websites referenced in the blog. Prior to engaging in any web scraping or automated data collection activities, readers are strongly advised to consult with qualified legal counsel and to review the applicable terms of service of the target website. In certain cases, explicit authorization or a scraping permit may be required.