How to Build a Simple Crypto Price Tracker with Python

SwiftProxy
By - Emily Chan
2025-01-24 15:14:29

Cryptocurrency prices move like roller coasters: fast, constant, and often unpredictable. But that's the beauty of it, right? The key to capitalizing on these fluctuations is having a reliable, real-time data source at your fingertips. That's where an automated crypto price tracker comes in. Building one might seem like a daunting task, but with Python it's entirely doable. In this guide, I'll show you how to scrape the latest prices of the top 150 cryptocurrencies from Crypto.com, rotate proxies to avoid detection, and export the data to a CSV file. The tracker will refresh every five minutes, so you always have up-to-date information.

Step 1: Setting Up the Tools

First, we need to import a few essential libraries:

import requests
from bs4 import BeautifulSoup
import csv
import time
import random

requests fetches the website content, BeautifulSoup parses the HTML so we can extract the data we need, csv saves the results to disk, and time and random control when we fetch new data and which proxy we rotate to.

Step 2: Proxy Setup

Websites like Crypto.com are no fans of scrapers, and requests sent without proxies may get blocked. Let's set up some proxies to ensure that doesn't happen. Here's a simple setup for non-authenticated proxies. Note that the dictionary needs both an "http" and an "https" entry; since Crypto.com is served over HTTPS, requests would bypass the proxy entirely if only the "http" key were present:

proxy = {
    "http": "http://Your_proxy_IP_Address:Your_proxy_port",
    "https": "http://Your_proxy_IP_Address:Your_proxy_port",
}
html = requests.get(url, proxies=proxy)

For authenticated proxies, embed the credentials in the proxy URL:

proxy = {
    "http": "http://username:password@Your_proxy_IP_Address:Your_proxy_port",
    "https": "http://username:password@Your_proxy_IP_Address:Your_proxy_port",
}
html = requests.get(url, proxies=proxy)

You'll need to replace "Your_proxy_IP_Address" and "Your_proxy_port" with your actual proxy details.
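If you manage several proxies, it can be tidier to build these dictionaries programmatically instead of by hand. Here's a small sketch; the make_proxy helper and the 203.0.113.x address are illustrative, not part of the tracker itself:

```python
def make_proxy(host, port, username=None, password=None):
    """Build a requests-style proxies dict for one proxy server."""
    auth = f"{username}:{password}@" if username and password else ""
    proxy_url = f"http://{auth}{host}:{port}"
    # The same HTTP proxy endpoint can tunnel both plain and TLS traffic,
    # so we map both URL schemes to it.
    return {"http": proxy_url, "https": proxy_url}

print(make_proxy("203.0.113.5", 8080))
print(make_proxy("203.0.113.5", 8080, "alice", "secret"))
```

This also keeps the credential formatting in one place, so a typo in one proxy entry can't silently break a single request path.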

Step 3: Rotating Proxy Servers

Using the same proxy for too many requests is a red flag. The solution? Proxy rotation. Here's a simple function that picks a random proxy from a list to keep things fresh:

# List of proxies
proxies = [ 
    "username:password@Your_proxy_IP_Address:Your_proxy_port1",
    "username:password@Your_proxy_IP_Address:Your_proxy_port2",
    "username:password@Your_proxy_IP_Address:Your_proxy_port3",
]

# Method to rotate proxies
def get_proxy(): 
    proxy = random.choice(proxies)  # Randomly pick one
    return {"http": f'http://{proxy}', "https": f'http://{proxy}'}

Now, every time we send a request, it'll come from a different proxy.
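A quick sanity check, using placeholder addresses, confirms that the rotation really spreads requests across the whole pool rather than sticking to one entry:

```python
import random

# Placeholder pool; substitute your real proxy credentials and addresses.
proxies = [
    "user:pass@198.51.100.1:8000",
    "user:pass@198.51.100.2:8000",
    "user:pass@198.51.100.3:8000",
]

def get_proxy():
    proxy = random.choice(proxies)
    return {"http": f"http://{proxy}", "https": f"http://{proxy}"}

# Over many draws, every proxy in the pool should appear.
seen = {get_proxy()["http"] for _ in range(200)}
print(seen)
```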

Step 4: Scraping Data from Crypto.com

We're after the top 150 cryptocurrencies. Using BeautifulSoup, we'll parse the HTML and grab each coin's name, ticker symbol, price, and 24-hour price change. One caveat: the CSS class names below (such as css-1cxc880) are auto-generated and tend to change whenever the site ships a new build, so if the scraper suddenly returns no rows, inspect the page and update them. Here's the function that pulls everything together:

def get_crypto_prices():
    url = "https://crypto.com/price"
    html = requests.get(url, proxies=get_proxy(), timeout=30)
    soup = BeautifulSoup(html.text, "html.parser")

    price_rows = soup.find_all('tr', class_='css-1cxc880')  # Locate price rows

    prices = []
    for row in price_rows:
        coin_name_tag = row.find('p', class_='css-rkws3')
        name = coin_name_tag.get_text() if coin_name_tag else "no name entry"

        coin_ticker_tag = row.find('span', class_='css-1jj7b1a')
        ticker = coin_ticker_tag.get_text() if coin_ticker_tag else "no ticker entry"
        
        coin_price_tag = row.find('div', class_='css-b1ilzc')
        price = coin_price_tag.text.strip() if coin_price_tag else "no price entry"

        coin_percentage_tag = row.find('p', class_='css-yyku61')
        percentage = coin_percentage_tag.text.strip() if coin_percentage_tag else "no percentage entry"
        
        prices.append({
            "Coin": name,
            "Ticker": ticker,
            "Price": price,
            "24hr-Percentage": percentage
        })

    return prices

Step 5: Exporting the Data

Once we've scraped the data, we need to store it in a CSV file. Here's how we can write the data to a CSV:

def export_to_csv(prices, filename="crypto_prices.csv"):
    with open(filename, "w", newline="") as file:
        fieldnames = ["Coin", "Ticker", "Price", "24hr-Percentage"]
        writer = csv.DictWriter(file, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(prices)
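One thing to note: opening the file in "w" mode overwrites the previous snapshot on every update. If you'd rather build up a price history across runs, a variant like the one below appends each snapshot with a timestamp column. The append_to_csv helper is just one possible sketch, not part of the tracker above:

```python
import csv
import os
import time

def append_to_csv(prices, filename="crypto_history.csv"):
    """Append each snapshot instead of overwriting, adding a timestamp column."""
    fieldnames = ["Timestamp", "Coin", "Ticker", "Price", "24hr-Percentage"]
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    write_header = not os.path.exists(filename)  # only write the header once
    with open(filename, "a", newline="") as file:
        writer = csv.DictWriter(file, fieldnames=fieldnames)
        if write_header:
            writer.writeheader()
        for row in prices:
            writer.writerow({"Timestamp": stamp, **row})

# Demo with a single made-up row; start from a clean file.
if os.path.exists("demo_history.csv"):
    os.remove("demo_history.csv")
append_to_csv([{"Coin": "Bitcoin", "Ticker": "BTC",
                "Price": "$42,000", "24hr-Percentage": "+1.2%"}],
              filename="demo_history.csv")
```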

Step 6: Putting It All Together

Now, let's create the loop that keeps our tracker running. It will grab the data, export it, and then wait for five minutes before the next update.

if __name__ == "__main__":
    while True:
        prices = get_crypto_prices()
        export_to_csv(prices)
        print("Prices updated. Waiting for the next update...")
        time.sleep(300)  # Update every 5 minutes

The Complete Script

Here's the complete script that integrates all the steps above:

import requests
from bs4 import BeautifulSoup
import csv
import time
import random

# List of proxies
proxies = [
    "username:password@Your_proxy_IP_Address:Your_proxy_port1",
    "username:password@Your_proxy_IP_Address:Your_proxy_port2",
    "username:password@Your_proxy_IP_Address:Your_proxy_port3",
]

# Proxy rotation function
def get_proxy(): 
    proxy = random.choice(proxies)
    return {"http": f'http://{proxy}', "https": f'http://{proxy}'}

def get_crypto_prices():
    url = "https://crypto.com/price"
    html = requests.get(url, proxies=get_proxy(), timeout=30)
    soup = BeautifulSoup(html.text, "html.parser")

    price_rows = soup.find_all('tr', class_='css-1cxc880')
    prices = []

    for row in price_rows:
        coin_name_tag = row.find('p', class_='css-rkws3')
        name = coin_name_tag.get_text() if coin_name_tag else "no name entry"

        coin_ticker_tag = row.find('span', class_='css-1jj7b1a')
        ticker = coin_ticker_tag.get_text() if coin_ticker_tag else "no ticker entry"
        
        coin_price_tag = row.find('div', class_='css-b1ilzc')
        price = coin_price_tag.text.strip() if coin_price_tag else "no price entry"

        coin_percentage_tag = row.find('p', class_='css-yyku61')
        percentage = coin_percentage_tag.text.strip() if coin_percentage_tag else "no percentage entry"
        
        prices.append({
            "Coin": name,
            "Ticker": ticker,
            "Price": price,
            "24hr-Percentage": percentage
        })

    return prices

def export_to_csv(prices, filename="crypto_prices.csv"):
    with open(filename, "w", newline="") as file:
        fieldnames = ["Coin", "Ticker", "Price", "24hr-Percentage"]
        writer = csv.DictWriter(file, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(prices)

if __name__ == "__main__":
    while True:
        prices = get_crypto_prices()
        export_to_csv(prices)
        print("Prices updated. Waiting for the next update...")
        time.sleep(300)  # Update every 5 minutes

Conclusion

This Python-based tracker is lightweight, flexible, and easy to tweak. You can add more coins, modify the interval, or change the output format as needed. By rotating proxies and controlling request frequency, we keep our activity low-profile, preventing site blocks. If you want to dive deeper into cryptocurrency tracking, consider adding features like price alerts or incorporating additional data points such as market cap or trading volume. The possibilities are endless.
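As a starting point for the price-alert idea, a basic check can be layered on top of the scraped rows with a couple of helper functions. parse_price and check_alerts below are illustrative sketches, and the thresholds are made up:

```python
def parse_price(price_text):
    """Convert a display string like '$97,431.20' to a float."""
    return float(price_text.replace("$", "").replace(",", ""))

def check_alerts(prices, thresholds):
    """Return a message for each coin whose price reaches its threshold."""
    alerts = []
    for entry in prices:
        limit = thresholds.get(entry["Ticker"])
        if limit is not None and parse_price(entry["Price"]) >= limit:
            alerts.append(f"{entry['Coin']} hit {entry['Price']} (alert at {limit})")
    return alerts

# Made-up snapshot in the same shape get_crypto_prices() returns.
snapshot = [
    {"Coin": "Bitcoin", "Ticker": "BTC", "Price": "$97,500.00", "24hr-Percentage": "+2.1%"},
    {"Coin": "Ethereum", "Ticker": "ETH", "Price": "$3,100.00", "24hr-Percentage": "-0.4%"},
]
print(check_alerts(snapshot, {"BTC": 95000, "ETH": 4000}))
```

You could call check_alerts right after export_to_csv in the main loop and send the messages wherever you like.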

About the Author

SwiftProxy
Emily Chan
Lead Writer at Swiftproxy
Emily Chan is the lead writer at Swiftproxy, with over a decade of experience in technology, digital infrastructure, and strategic communications. Based in Hong Kong, she combines regional insight with clear, practical writing to help businesses navigate the evolving landscape of proxy IP solutions and data-driven growth.
The content on the Swiftproxy blog is provided for informational purposes only, with no warranty of any kind. Swiftproxy does not guarantee the accuracy, completeness, or legal compliance of the information it contains, and accepts no responsibility for the content of third-party websites referenced in the blog. Before undertaking any web scraping or automated data collection, readers are strongly advised to consult qualified legal counsel and to review the target website's terms of service carefully. In some cases, explicit authorization or a scraping license may be required.