
Finding every URL on a website by clicking through page after page? That's yesterday's approach. When you want to grab a full list fast, sitemaps are your shortcut. These neat files map out exactly which pages a site wants indexed. Instead of slow, clunky crawling, sitemaps give you a direct route to all the URLs you need.
However, parsing sitemaps manually isn't always smooth sailing. Many sites use index sitemaps — big files pointing to smaller sitemaps, nested deep. That's extra work, and some contain thousands of URLs. Without the right tools, it quickly becomes a slog.
Enter ultimate-sitemap-parser (usp) — a Python library built to take that headache away. It fetches sitemaps, handles complex nested structures, and pulls out every URL with just a simple call. No fuss. No heavy lifting.
Today, we'll walk you through using usp to crawl the ASOS sitemap. By the end, you'll know exactly how to extract every URL quickly and efficiently.
Don't have Python installed yet? Grab the latest version from python.org. Check your install by running this command in your terminal:
python3 --version
Then install the ultimate-sitemap-parser library with pip:
pip install ultimate-sitemap-parser
Let's jump in. Here's how to grab all URLs from the ASOS homepage sitemap in a snap:
from usp.tree import sitemap_tree_for_homepage
url = "https://www.asos.com/"
# Fetch and parse the site's sitemaps, following nested sitemaps automatically
tree = sitemap_tree_for_homepage(url)
# Iterate over every page found across all discovered sitemaps
for page in tree.all_pages():
    print(page.url)
That's it. The library does the heavy lifting, fetching the sitemap, parsing XML, and listing every URL.
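Each page object yielded by all_pages() also carries more than just the URL. The short sketch below prints a few pages together with their last-modified date and priority; the attribute names (last_modified, priority) reflect usp's SitemapPage model as best I know it, so double-check them against your installed version:
from itertools import islice
from usp.tree import sitemap_tree_for_homepage
tree = sitemap_tree_for_homepage("https://www.asos.com/")
# Peek at the first few pages and their sitemap metadata
for page in islice(tree.all_pages(), 5):
    # last_modified and priority are read from the sitemap XML;
    # these attribute names assume the current usp SitemapPage model
    print(page.url, page.last_modified, page.priority)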
Many sites don't keep all their URLs in one file. They split them across several smaller sitemaps tied together by an index sitemap: product pages in one, category pages or blog posts in another. Without the right tool, you'd have to write extra code to dig through each one.
But usp? It just works. It finds those nested sitemaps, fetches them all recursively, and extracts every single URL — no extra work from you.
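Want to see that nesting for yourself? The object returned by sitemap_tree_for_homepage is itself a sitemap node, and index nodes expose their children. Here's a small sketch that walks the tree and prints each sitemap URL, indented by depth; it assumes index nodes carry a sub_sitemaps attribute (true in recent usp releases) and falls back gracefully for leaf sitemaps that have no children:
from usp.tree import sitemap_tree_for_homepage
tree = sitemap_tree_for_homepage("https://www.asos.com/")
def list_sitemaps(sitemap, depth=0):
    # Print this sitemap's URL, then recurse into any nested sitemaps.
    # sub_sitemaps is assumed to exist on index-type nodes; plain page
    # sitemaps simply have no children to descend into.
    print("  " * depth + sitemap.url)
    for child in getattr(sitemap, "sub_sitemaps", []):
        list_sitemaps(child, depth + 1)
list_sitemaps(tree)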
Want only product pages? Easy. If product URLs contain /product/, just filter them:
product_urls = [page.url for page in tree.all_pages() if "/product/" in page.url]
for url in product_urls:
    print(url)
Instantly narrow your crawl to what matters.
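The same trick works for any pattern. If simple substring checks feel too rigid, a compiled regular expression from the standard library gives you more flexibility; the path segments in this sketch are purely illustrative, so swap in whatever the site you're crawling actually uses:
import re
from usp.tree import sitemap_tree_for_homepage
tree = sitemap_tree_for_homepage("https://www.asos.com/")
# Illustrative patterns only; adjust to the site's real URL structure
pattern = re.compile(r"/(product|prd)/", re.IGNORECASE)
matching_urls = [page.url for page in tree.all_pages() if pattern.search(page.url)]
print(f"Found {len(matching_urls)} matching URLs")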
Printing URLs to your screen is great for quick checks, but storing them for analysis? Even better.
Here's how to save those URLs into a CSV file:
import csv
from usp.tree import sitemap_tree_for_homepage
url = "https://www.asos.com/"
tree = sitemap_tree_for_homepage(url)
# Collect every URL discovered in the sitemap tree
urls = [page.url for page in tree.all_pages()]
csv_filename = "asos_sitemap_urls.csv"
with open(csv_filename, "w", newline="", encoding="utf-8") as file:
    writer = csv.writer(file)
    writer.writerow(["URL"])  # header row
    for page_url in urls:  # renamed to avoid shadowing the homepage url above
        writer.writerow([page_url])
print(f"Extracted {len(urls)} URLs and saved to {csv_filename}")
Now you have a neat CSV ready for your next steps.
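And reading it back later is just as painless. Here's a minimal sketch using the standard csv module, assuming the same filename as above:
import csv
# Load the saved URLs back for further analysis or re-filtering
with open("asos_sitemap_urls.csv", newline="", encoding="utf-8") as file:
    reader = csv.DictReader(file)
    saved_urls = [row["URL"] for row in reader]
print(f"Loaded {len(saved_urls)} URLs from the CSV")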
Parsing sitemaps doesn't have to be complicated. With ultimate-sitemap-parser, the entire process — from fetching nested sitemaps to filtering and saving URLs — is streamlined and straightforward. No more XML headaches or manual digging.
Whether you're building a scraper, conducting SEO analysis, or auditing a website, usp is a powerhouse tool to add to your Python arsenal.