How to Scrape Google Images with Python

Images aren’t just decoration—they’re data. They power machine learning models, enhance research, and bring projects to life. But collecting them manually? A tedious, time-sucking nightmare. What if you could automate the whole process, fetching hundreds of images in minutes instead of hours? That’s exactly what we’ll cover. We’ll show you how to scrape Google Images with Python—step by step. By the end, you’ll have a repeatable, scalable way to collect high-quality visuals without breaking a sweat.

By Martin Koenig
2025-12-31

Understanding Google Image Scraping

Before diving into code, let's get real about Google Images. It's not a static gallery; it's a dynamic beast. When you search, only a few thumbnails appear. Scroll down, and more images load—but behind the scenes via JavaScript.

That means a simple requests.get() call won't cut it. To grab everything, you need tools that can handle JavaScript: think Selenium or Playwright.

How to Scrape Google Images with Python

Step 1: Prepare Your Environment

Install the tools:

pip install requests beautifulsoup4 selenium pandas

If you go the Playwright route:

pip install playwright
playwright install

And don't forget a web driver for Selenium. Using Chrome? Grab a ChromeDriver build that matches your browser version, or let Selenium 4.6+ fetch one automatically via Selenium Manager.

Step 2: Get Basic Image Search Results

Even without JavaScript, you can grab thumbnails. Start small.

import requests
from bs4 import BeautifulSoup
from urllib.parse import quote_plus

query = "golden retriever puppy"
url = f"https://www.google.com/search?q={quote_plus(query)}&tbm=isch"

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, "html.parser")

# Not every <img> tag has a src attribute, so filter those out
images = [img for img in soup.find_all("img") if img.get("src")]

for i, img in enumerate(images[:5]):
    print(f"{i+1}: {img['src']}")

You'll mostly get thumbnails or base64 images—but it's a starting point.
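Those base64 entries are data: URIs, so if thumbnails are all you need, you can decode them directly. A minimal sketch, reusing the images list from above (the thumbs folder name is just an example):

import base64
import os

os.makedirs("thumbs", exist_ok=True)

for i, img in enumerate(images[:5]):
    src = img["src"]
    if src.startswith("data:image"):
        # "data:image/jpeg;base64,<payload>" -- keep only the payload
        header, payload = src.split(",", 1)
        with open(os.path.join("thumbs", f"thumb_{i}.jpg"), "wb") as f:
            f.write(base64.b64decode(payload))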

Step 3: Dynamic Loading with Selenium

For higher-quality images, you need to mimic human scrolling.

from selenium import webdriver
from selenium.webdriver.common.by import By
from urllib.parse import quote_plus
import time

query = "golden retriever puppy"
url = f"https://www.google.com/search?q={quote_plus(query)}&tbm=isch"

driver = webdriver.Chrome()
driver.get(url)

# Scroll a few times so Google's JavaScript loads more results
for _ in range(3):
    driver.execute_script("window.scrollBy(0, document.body.scrollHeight);")
    time.sleep(2)

# Collect every src before closing the browser; skip images without one
images = driver.find_elements(By.TAG_NAME, "img")
image_urls = [img.get_attribute("src") for img in images if img.get_attribute("src")]

for i, src in enumerate(image_urls[:10]):
    print(f"{i+1}: {src}")

driver.quit()

Now you're capturing the real visuals as they load dynamically.
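If you installed Playwright instead, the same scroll-and-collect flow looks roughly like this (a sketch, not a drop-in replacement; the scroll distance and timeouts are illustrative):

from playwright.sync_api import sync_playwright
from urllib.parse import quote_plus

query = "golden retriever puppy"
url = f"https://www.google.com/search?q={quote_plus(query)}&tbm=isch"

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(url)
    # Wheel-scroll a few times to trigger lazy loading
    for _ in range(3):
        page.mouse.wheel(0, 5000)
        page.wait_for_timeout(2000)
    srcs = page.eval_on_selector_all("img", "imgs => imgs.map(i => i.src)")
    print(srcs[:10])
    browser.close()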

Step 4: Save Images Locally

Once you have the image_urls list from Step 3, saving the files is straightforward:

import os
import requests

save_dir = "images"
os.makedirs(save_dir, exist_ok=True)

for i, img_url in enumerate(image_urls[:10]):
    if not img_url.startswith("http"):
        continue  # skip base64 data: URIs; requests can't fetch those
    try:
        img_data = requests.get(img_url, timeout=10).content
        with open(os.path.join(save_dir, f"img_{i}.jpg"), "wb") as f:
            f.write(img_data)
        print(f"Saved img_{i}.jpg")
    except Exception as e:
        print(f"Could not save image {i}: {e}")

Boom. Images stored locally and ready to use.

Step 5: Utilize Proxies to Prevent Blocking

If you scrape too aggressively, Google notices. IP blocks and CAPTCHAs appear fast. Stay safe:

Add random delays between requests.

Rotate headers and user agents.

Use proxy servers for IP rotation.

Example with requests:

proxies = {
    "http": "http://username:password@proxy_host:proxy_port",
    "https": "http://username:password@proxy_host:proxy_port"
}

response = requests.get(url, headers=headers, proxies=proxies)
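
The first two tactics pair naturally with the proxy setup. A minimal sketch of random delays plus user-agent rotation, reusing the proxies dict and the image_urls list from earlier (the user-agent strings are illustrative placeholders):

import random
import time
import requests

user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
    "Mozilla/5.0 (X11; Linux x86_64)",
]

for img_url in image_urls[:10]:
    headers = {"User-Agent": random.choice(user_agents)}  # rotate per request
    response = requests.get(img_url, headers=headers, proxies=proxies, timeout=10)
    time.sleep(random.uniform(1, 4))  # random delay between requests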

Services like Swiftproxy handle proxy rotation automatically. No headache, no downtime.

Common Roadblocks and How to Solve Them

1. Captchas

Google detects bots quickly. Manual solving kills automation. Mitigation? Slow your requests, rotate headers, use headless browsers, and rotate IPs.

2. Low-quality or incomplete images

Thumbnails aren't enough. Scrolling with Selenium, clicking thumbnails, and waiting for the full-size images to load solve this, as sketched below.
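
A rough sketch of the click-through approach (Google's markup changes frequently, so the element indexing here is illustrative and may need adjusting):

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

driver = webdriver.Chrome()
driver.get("https://www.google.com/search?q=golden+retriever+puppy&tbm=isch")

# Click a thumbnail; Google then loads a larger preview image
thumbnails = driver.find_elements(By.TAG_NAME, "img")
thumbnails[1].click()
time.sleep(2)

# After the click, look for <img> tags whose src is a real http(s) URL
# rather than a base64 thumbnail
candidates = driver.find_elements(By.TAG_NAME, "img")
full_res = [c.get_attribute("src") for c in candidates
            if (c.get_attribute("src") or "").startswith("http")]
print(full_res[:3])

driver.quit()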

3. Handling thousands of images

Automation is key. Retry failed requests, save metadata to avoid duplicates, and use residential proxies for large datasets.
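
To make the retry idea concrete, here is a minimal sketch with exponential backoff (the function name and parameter values are just illustrative):

import time
import requests

def fetch_with_retries(url, headers, retries=3, backoff=2):
    # Retry transient failures, doubling the wait after each attempt
    for attempt in range(retries):
        try:
            resp = requests.get(url, headers=headers, timeout=10)
            resp.raise_for_status()
            return resp.content
        except requests.RequestException as e:
            wait = backoff ** attempt
            print(f"Attempt {attempt + 1} failed ({e}); retrying in {wait}s")
            time.sleep(wait)
    return None  # caller decides what to do with permanent failures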

Organizing and Using Your Scraped Images

1. Local Storage

Organize by query to simplify workflows, especially for ML:

import os

def save_image(content, folder, filename):
    # Create the folder on first use, then write the raw image bytes
    os.makedirs(folder, exist_ok=True)
    with open(os.path.join(folder, filename), "wb") as f:
        f.write(content)
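
For instance, keying folders by search query keeps classes separated for training (the path and filename here are illustrative):

save_image(img_data, os.path.join("images", "golden_retriever_puppy"), "img_0.jpg")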

2. Metadata Tracking

Keep URLs, file paths, timestamps in a CSV or database:

import time
import pandas as pd

data = {
    "url": image_urls,
    "filename": [f"img_{i}.jpg" for i in range(len(image_urls))],
    "scraped_at": [time.strftime("%Y-%m-%d %H:%M:%S")] * len(image_urls),
}
df = pd.DataFrame(data)
df.to_csv("images_metadata.csv", index=False)

3. Cloud Storage

For massive datasets, think AWS S3 or Google Cloud Storage. Combine with DVC to version-control updates efficiently.
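
As a sketch of the S3 route, assuming boto3 is installed and AWS credentials are configured (the bucket name and key prefix are placeholders):

import os
import boto3  # pip install boto3

s3 = boto3.client("s3")
bucket = "my-image-dataset"  # placeholder; use your own bucket

for filename in os.listdir("images"):
    local_path = os.path.join("images", filename)
    # Mirror the local layout with a per-query prefix in the bucket
    s3.upload_file(local_path, bucket, f"golden_retriever_puppy/{filename}")
    print(f"Uploaded {filename}")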

Wrapping Up

Scraping Google Images is simple for small projects but tricky at scale. Request throttling, user-agent rotation, headless browsers, and proxies all become necessary. Master these, and you'll have a reliable, automated pipeline for building datasets like a pro.

About the author

Martin Koenig
Head of Commerce
Martin Koenig is an accomplished commercial strategist with over a decade of experience in the technology, telecommunications, and consulting industries. As Head of Commerce, he combines cross-sector expertise with a data-driven mindset to unlock growth opportunities and deliver measurable business impact.
The content provided on the Swiftproxy Blog is intended solely for informational purposes and is presented without warranty of any kind. Swiftproxy does not guarantee the accuracy, completeness, or legal compliance of the information contained herein, nor does it assume any responsibility for content on third-party websites referenced in the blog. Prior to engaging in any web scraping or automated data collection activities, readers are strongly advised to consult with qualified legal counsel and to review the applicable terms of service of the target website. In certain cases, explicit authorization or a scraping permit may be required.