How to Build a Simple Crypto Price Tracker with Python

SwiftProxy
By Emily Chan
2025-01-24 15:14:29

Cryptocurrency prices are like roller coasters—constantly changing, fast-moving, and often unpredictable. But that's the beauty of it, right? The key to capitalizing on these fluctuations is having a reliable, real-time data source at your fingertips. Here's where an automated crypto price tracker comes in. Building one might seem like a large task, but with Python, it's entirely doable. I will show you how to scrape the latest prices of the top 150 cryptocurrencies from Crypto.com, rotate proxies to avoid detection, and export the data into a CSV file. This tracker will update every five minutes, ensuring you always have the most up-to-date information.

Step 1: Setting Up the Tools

First, we need to import a few essential libraries:

import requests
from bs4 import BeautifulSoup
import csv
import time
import random

The requests library fetches the website content, BeautifulSoup parses the HTML and extracts the data, csv saves the results, and time and random control the update interval and proxy rotation.

Step 2: Proxy Setup

Websites like Crypto.com are no fans of scrapers. If you don't use proxies, your requests might get blocked. Let's set up some proxies to ensure that doesn't happen. Here's a simple proxy setup for non-authenticated proxies:

proxy = {
    "http": "http://Your_proxy_IP_Address:Your_proxy_port",
    "https": "http://Your_proxy_IP_Address:Your_proxy_port",  # include both schemes so HTTPS pages are proxied too
}
html = requests.get(url, proxies=proxy)

For authenticated proxies, here's how you'd set it up:

proxy = {
    "http": "http://username:password@Your_proxy_IP_Address:Your_proxy_port",
    "https": "http://username:password@Your_proxy_IP_Address:Your_proxy_port",  # include both schemes so HTTPS pages are proxied too
}
html = requests.get(url, proxies=proxy)

You'll need to replace "Your_proxy_IP_Address" and "Your_proxy_port" with your actual proxy details.
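Since both dict shapes differ only in whether credentials are present, it can help to build them with a small helper. The function below is a hypothetical convenience, not part of the requests API; it simply assembles the proxies dict requests expects:

```python
def build_proxy(host, port, username=None, password=None):
    """Assemble a requests-style proxies dict; credentials are optional."""
    auth = f"{username}:{password}@" if username and password else ""
    proxy_url = f"http://{auth}{host}:{port}"
    # Route both HTTP and HTTPS traffic through the same proxy endpoint
    return {"http": proxy_url, "https": proxy_url}
```

You'd then call `requests.get(url, proxies=build_proxy("1.2.3.4", "8080"))`, adding the username and password arguments for authenticated proxies.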

Step 3: Rotating Proxy Servers

Using the same proxy for too many requests is a red flag. The solution? Proxy rotation. Here's a simple function that picks a random proxy from a list to keep things fresh:

# List of proxies
proxies = [ 
    "username:password@Your_proxy_IP_Address:Your_proxy_port1",
    "username:password@Your_proxy_IP_Address:Your_proxy_port2",
    "username:password@Your_proxy_IP_Address:Your_proxy_port3",
]

# Method to rotate proxies
def get_proxy(): 
    proxy = random.choice(proxies)  # Randomly pick one
    return {"http": f'http://{proxy}', "https": f'http://{proxy}'}

Now, every time we send a request, it'll come from a different proxy.
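Rotation also pairs naturally with retries: if one proxy fails, just pick another and try again. Here's a small sketch of that idea; `fetch_with_retry` and `fake_fetch` are hypothetical names, and the failing request is simulated so the example runs without a network:

```python
import random

proxies = ["10.0.0.1:8000", "10.0.0.2:8000", "10.0.0.3:8000"]

def get_proxy():
    proxy = random.choice(proxies)
    return {"http": f"http://{proxy}", "https": f"http://{proxy}"}

def fetch_with_retry(fetch, retries=3):
    """Try up to `retries` times, rotating to a fresh proxy after each failure."""
    last_error = None
    for _ in range(retries):
        try:
            return fetch(get_proxy())
        except Exception as exc:  # in practice, catch requests.RequestException
            last_error = exc
    raise last_error

# Simulated fetch that fails on the first attempt, then succeeds
attempts = []
def fake_fetch(proxy):
    attempts.append(proxy)
    if len(attempts) < 2:
        raise ConnectionError("proxy refused")
    return "ok"

result = fetch_with_retry(fake_fetch)
```

In the real tracker, `fetch` would be a lambda wrapping `requests.get(url, proxies=...)`.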

Step 4: Scraping Data from Crypto.com

We're after the top 150 cryptocurrencies. Using BeautifulSoup, we'll parse the HTML to grab the coin name, ticker symbol, price, and 24-hour price change. One caveat: the CSS class names below are auto-generated and can change when the site is redeployed, so update them if the scraper suddenly returns no rows. Here's the function that pulls everything together:

def get_crypto_prices():
    url = "https://crypto.com/price"
    html = requests.get(url, proxies=get_proxy(), timeout=10)  # timeout so a dead proxy can't hang the loop
    soup = BeautifulSoup(html.text, "html.parser")

    price_rows = soup.find_all('tr', class_='css-1cxc880')  # Locate price rows

    prices = []
    for row in price_rows:
        coin_name_tag = row.find('p', class_='css-rkws3')
        name = coin_name_tag.get_text() if coin_name_tag else "no name entry"

        coin_ticker_tag = row.find('span', class_='css-1jj7b1a')
        ticker = coin_ticker_tag.get_text() if coin_ticker_tag else "no ticker entry"
        
        coin_price_tag = row.find('div', class_='css-b1ilzc')
        price = coin_price_tag.text.strip() if coin_price_tag else "no price entry"

        coin_percentage_tag = row.find('p', class_='css-yyku61')
        percentage = coin_percentage_tag.text.strip() if coin_percentage_tag else "no percentage entry"
        
        prices.append({
            "Coin": name,
            "Ticker": ticker,
            "Price": price,
            "24hr-Percentage": percentage
        })

    return prices
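To see how those selectors behave, here's the same extraction logic run against a tiny static HTML snippet that mimics one table row (the markup and values are made up for illustration; only the class names match the scraper above):

```python
from bs4 import BeautifulSoup

sample_html = """
<table><tbody>
<tr class="css-1cxc880">
  <td><p class="css-rkws3">Bitcoin</p><span class="css-1jj7b1a">BTC</span></td>
  <td><div class="css-b1ilzc"> $97,000.12 </div></td>
  <td><p class="css-yyku61"> +1.25% </p></td>
</tr>
</tbody></table>
"""

soup = BeautifulSoup(sample_html, "html.parser")
row = soup.find("tr", class_="css-1cxc880")

# Same lookups as get_crypto_prices(), one field at a time
name = row.find("p", class_="css-rkws3").get_text()
ticker = row.find("span", class_="css-1jj7b1a").get_text()
price = row.find("div", class_="css-b1ilzc").text.strip()
percentage = row.find("p", class_="css-yyku61").text.strip()
```

Note how `.strip()` removes the stray whitespace around the price and percentage.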

Step 5: Exporting the Data

Once we've scraped the data, we need to store it in a CSV file. Here's how we can write the data to a CSV:

def export_to_csv(prices, filename="crypto_prices.csv"):
    with open(filename, "w", newline="") as file:
        fieldnames = ["Coin", "Ticker", "Price", "24hr-Percentage"]
        writer = csv.DictWriter(file, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(prices)
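To check what csv.DictWriter actually produces, you can run the same logic against an in-memory buffer instead of a file and read it straight back (the sample row is made up for illustration):

```python
import csv
import io

rows = [{"Coin": "Bitcoin", "Ticker": "BTC",
         "Price": "$97,000", "24hr-Percentage": "+1.2%"}]
fieldnames = ["Coin", "Ticker", "Price", "24hr-Percentage"]

# io.StringIO behaves like an open file, so the write path is identical
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)

# Read it back to confirm the round trip
buf.seek(0)
back = list(csv.DictReader(buf))
```

Because the export opens the file in "w" mode, each update overwrites the previous snapshot; switch to append mode if you want a running history instead.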

Step 6: Putting It All Together

Now, let's create the loop that keeps our tracker running. It will grab the data, export it, and then wait for five minutes before the next update.

if __name__ == "__main__":
    while True:
        try:
            prices = get_crypto_prices()
            export_to_csv(prices)
            print("Prices updated. Waiting for the next update...")
        except requests.RequestException as err:
            # Don't let one bad request or proxy kill the tracker
            print(f"Update failed, will retry: {err}")
        time.sleep(300)  # Update every 5 minutes

The Complete Script

Here's the complete script that integrates all the steps above:

import requests
from bs4 import BeautifulSoup
import csv
import time
import random

# List of proxies
proxies = [
    "username:password@Your_proxy_IP_Address:Your_proxy_port1",
    "username:password@Your_proxy_IP_Address:Your_proxy_port2",
    "username:password@Your_proxy_IP_Address:Your_proxy_port3",
]

# Proxy rotation function
def get_proxy(): 
    proxy = random.choice(proxies)
    return {"http": f'http://{proxy}', "https": f'http://{proxy}'}

def get_crypto_prices():
    url = "https://crypto.com/price"
    html = requests.get(url, proxies=get_proxy(), timeout=10)  # timeout so a dead proxy can't hang the loop
    soup = BeautifulSoup(html.content, "html.parser")

    price_rows = soup.find_all('tr', class_='css-1cxc880')
    prices = []

    for row in price_rows:
        coin_name_tag = row.find('p', class_='css-rkws3')
        name = coin_name_tag.get_text() if coin_name_tag else "no name entry"

        coin_ticker_tag = row.find('span', class_='css-1jj7b1a')
        ticker = coin_ticker_tag.get_text() if coin_ticker_tag else "no ticker entry"
        
        coin_price_tag = row.find('div', class_='css-b1ilzc')
        price = coin_price_tag.text.strip() if coin_price_tag else "no price entry"

        coin_percentage_tag = row.find('p', class_='css-yyku61')
        percentage = coin_percentage_tag.text.strip() if coin_percentage_tag else "no percentage entry"
        
        prices.append({
            "Coin": name,
            "Ticker": ticker,
            "Price": price,
            "24hr-Percentage": percentage
        })

    return prices

def export_to_csv(prices, filename="crypto_prices.csv"):
    with open(filename, "w", newline="") as file:
        fieldnames = ["Coin", "Ticker", "Price", "24hr-Percentage"]
        writer = csv.DictWriter(file, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(prices)

if __name__ == "__main__":
    while True:
        try:
            prices = get_crypto_prices()
            export_to_csv(prices)
            print("Prices updated. Waiting for the next update...")
        except requests.RequestException as err:
            # Don't let one bad request or proxy kill the tracker
            print(f"Update failed, will retry: {err}")
        time.sleep(300)  # Update every 5 minutes

Conclusion

This Python-based tracker is lightweight, flexible, and easy to tweak. You can add more coins, modify the interval, or change the output format as needed. By rotating proxies and controlling request frequency, we keep our activity low-profile, preventing site blocks. If you want to dive deeper into cryptocurrency tracking, consider adding features like price alerts or incorporating additional data points such as market cap or trading volume. The possibilities are endless.
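A price alert, for instance, only needs the scraped price string converted to a number first. Here's a minimal sketch; `parse_price` and `check_alert` are hypothetical helpers, and the dollar-sign format mirrors the display strings the scraper collects:

```python
def parse_price(price_text):
    """Convert a display string like '$97,000.12' to a float."""
    return float(price_text.replace("$", "").replace(",", "").strip())

def check_alert(price_text, threshold):
    """Return True when the scraped price meets or exceeds the threshold."""
    return parse_price(price_text) >= threshold
```

You could call `check_alert` inside the main loop for each row and print or email a notification when it fires.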

About the Author

SwiftProxy
Emily Chan
Editor-in-Chief at Swiftproxy
Emily Chan is the Editor-in-Chief at Swiftproxy, with over a decade of experience in technology, digital infrastructure, and strategic communication. Based in Hong Kong, she combines deep regional knowledge with a clear, practical voice to help businesses navigate the evolving world of proxy solutions and data-driven growth.
The content provided on the Swiftproxy blog is intended for informational purposes only and is presented without any warranty. Swiftproxy does not guarantee the accuracy, completeness, or legal compliance of the information it contains, nor does it assume responsibility for the content of third-party sites referenced in the blog. Before engaging in any web scraping or automated data collection, readers are strongly advised to consult qualified legal counsel and review the target site's applicable terms of service. In some cases, explicit authorization or a scraping permit may be required.