Running Large Scale Web Scraping Aggregator System In Production


Ever considered scraping data from various top-tier sources to power your own solution?

Does this seem straightforward and like a great business idea to dive into?

Think again. I’m here to share the real challenges and sophisticated solutions involved in making it work at scale, based on real project experiences.


Context and Motivation

In recent years, I’ve come across many ideas and projects, ranging from small to large scale, that involve scraping data from various sources to create chatbots, websites, and platforms in industries such as automotive, real estate, marketing, and e-commerce. While many technical blogs offer general recommendations for sources of varying complexity, they often lack concrete solutions and long-term approaches that show how to deal with these challenges day to day in production. In this series, I aim to fill that gap by presenting real-world examples with concrete techniques and practices.

Drawing from my experience with well-known titans in the automotive industry, I’ll discuss large-scale production challenges in projects reliant on these sources. This includes:

  • Handling page structure changes
  • Avoiding IP bans
  • Overcoming anti-spam measures
  • Addressing fingerprinting
  • Staying undetected / Hiding scraping behavior
  • Maximizing data coverage
  • Mapping reference data across sources
  • Implementing monitoring and alerting systems

Additionally, I will cover the legal challenges and considerations related to data scraping.

About the project

The project is a distributed, web-based microservice aggregator designed to gather car offers from the most popular sources across CIS and European countries. The system is built for advanced analytics to address critical questions in the automotive market, including:

  • Determining the most profitable way and path to buy a car at the current moment, considering currency exchange rates, global market conditions, and other relevant factors.
  • Assessing whether it is more advantageous to purchase a car from another country or within the internal market.
  • Estimating the average time it takes to sell a specific car model in a particular country.
  • Identifying trends in car prices across different regions.
  • Understanding how economic and political changes impact car sales and prices.

The system maintains and updates a database of around 1 million active car listings and stores historical data going back to 2022. In total, it holds over 10 million car listings, enabling comprehensive data collection and detailed analysis. This extensive dataset helps users make informed decisions in the automotive market by providing valuable insights and trends.

High-level architecture overview

1.1 Architecture Overview


Microservices: The system is composed of multiple microservices, each responsible for specific tasks such as data collection, storage, and analytics. This modular approach ensures that each service can be developed, deployed, and scaled independently. The key microservices include:

  • Cars Microservice: Handles the collection, storage, and updating of car listings from various sources.
  • Subscribers Microservice: Manages user subscriptions and notifications, ensuring users are informed of updates and relevant analytics.
  • Analytics Microservice: Processes the collected data to generate insights and answer key questions about the automotive market.
  • Gateway Microservice: Acts as the entry point for all incoming requests, routing them to the appropriate microservices while managing authentication, authorization, and rate limiting.

Data Scrapers: Distributed scrapers are deployed to gather car listings from various sources. These scrapers are designed to handle page structure changes, avoid IP bans, and overcome anti-spam measures such as fingerprinting.

Data Processing Pipeline: The collected data is processed through a pipeline that includes data cleaning, normalization, and enrichment. This ensures that the data is consistent and ready for analysis.

Storage: The system uses a combination of relational and non-relational databases to store current and historical data. This allows for efficient querying and retrieval of large datasets.

Analytics Engine: An advanced analytics engine processes the data to generate insights and answer key questions about the automotive market. This engine uses machine learning algorithms and statistical models.

API Gateway: The API gateway handles all incoming requests and routes them to the appropriate microservices. It also manages authentication, authorization, and rate limiting.

Monitoring and Alerting: A comprehensive monitoring and alerting system tracks the performance of each microservice and the overall system health. This system is configured with numerous notifications to monitor and track scraping behavior, ensuring that any issues or anomalies are detected and addressed promptly. This includes alerts for changes in page structure and potential anti-scraping measures.

Challenges and Practical Recommendations

Below are the challenges we faced in our web scraping platform and the practical recommendations we implemented to overcome them. These insights are based on real-world experiences and are aimed at providing you with actionable strategies to handle similar issues.

Challenge: Handling page structure changes

Overview

One of the most significant challenges in web scraping is handling changes in the structure of web pages. Websites often update their layouts, either for aesthetic reasons or to improve user experience. These changes can break scrapers that rely on specific HTML structures to extract data.

Impact

When a website changes its structure, scrapers can fail to find the data they need, leading to incomplete or incorrect data collection. This can severely impact the quality of the data and the insights derived from it, rendering the analysis ineffective.

Recommendation 1: Leverage API Endpoints

To handle the challenge of frequent page structure changes, we shifted from scraping HTML to leveraging the underlying API endpoints used by the web applications themselves (yes, it’s not always possible). By inspecting network traffic and identifying and testing API endpoints, we achieved more stable and consistent data extraction. In practice, finding the right API endpoint and parameters can take anywhere from an hour to a week. In some cases, we logically deduced endpoint paths, while in the best scenarios, we discovered GraphQL documentation by appending /docs to the base URL. If you’re interested in an in-depth guide on how to find and use these APIs, let me know, and I’ll provide a detailed description in the following parts.
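As a rough illustration, here is a minimal sketch of calling such an endpoint directly instead of parsing HTML. The base URL, query parameters, and response shape are hypothetical placeholders for whatever you discover in the browser's network tab.

using System;
using System.Net.Http;
using System.Net.Http.Json; // System.Net.Http.Json package
using System.Threading.Tasks;

// Hypothetical response shape of the listing endpoint.
public record ListingDto(string Id, string Title, decimal Price, string City);

public class ApiListingClient
{
    private readonly HttpClient _http = new() { BaseAddress = new Uri("https://example-cars.com") };

    public async Task<ListingDto[]> GetLatestAsync(int page)
    {
        // The same endpoint the site's own frontend calls, found via the browser dev tools.
        var response = await _http.GetAsync($"/api/v1/search?sort=createdDesc&page={page}");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<ListingDto[]>() ?? Array.Empty<ListingDto>();
    }
}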

Recommendation 2: Utilize Embedded Data Structures

Some modern web applications embed structured data within their HTML, such as the __NEXT_DATA__ script tag used by Next.js. These embedded structures can also be leveraged to handle page structure changes effectively.
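A minimal sketch of this approach, assuming a Next.js source: fetch the page once, cut out the __NEXT_DATA__ JSON, and read the page data from it. The property path used below is hypothetical and depends on the site.

using System.Net.Http;
using System.Text.Json;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

public static class EmbeddedDataExtractor
{
    // Matches the JSON payload that Next.js embeds in every server-rendered page.
    private static readonly Regex NextDataRegex = new(
        "<script id=\"__NEXT_DATA__\" type=\"application/json\">(.*?)</script>",
        RegexOptions.Singleline);

    public static async Task<JsonElement?> GetPagePropsAsync(HttpClient http, string url)
    {
        var html = await http.GetStringAsync(url);
        var match = NextDataRegex.Match(html);
        if (!match.Success) return null;

        using var doc = JsonDocument.Parse(match.Groups[1].Value);
        // Clone so the element stays valid after the JsonDocument is disposed;
        // "props.pageProps" is where page data usually sits, but the exact path varies per site.
        return doc.RootElement.GetProperty("props").GetProperty("pageProps").Clone();
    }
}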

Recommendation 3: Define Required Properties

To control data quality, define the required properties that must be fetched in order to save and use the data for further analytics. Attributes from different sources can vary, so it’s critical to define what is required based on your domain model and future usage. Utilize the Template Method Pattern to dictate how and which attributes should be collected during parsing, ensuring consistency across all sources and all parser types (HTML and JSON), as in the base class below.

using System;
using System.Collections.Generic;
using System.Linq;
using Serilog; // static Log used below

namespace Example
{
    public abstract class CarParserBase<TElement, TSource>
    {
        protected ParseContext ParseContext;

        // Number of pinned/promoted ads at the top of a listing page to skip.
        protected virtual int PinnedAdsCount => 0;

        // Each concrete parser (HTML or JSON) implements how to locate the ads
        // and how to read every required attribute from a single ad element.
        protected abstract IEnumerable<TElement> GetCarsAds(TSource document);
        protected abstract string GetAdId(TElement element);
        protected abstract string GetFullName(TElement element);
        protected abstract string GetDescription(TElement element);
        protected abstract string GetMakeName(TElement element);
        protected abstract string GetModelName(TElement element);
        // Types here are illustrative; align them with your ParseContext/ParsedCar definitions.
        protected abstract int GetYearOfIssue(TElement element);
        protected abstract DateTime? GetFirstRegistration(TElement element);
        protected abstract decimal GetPrice(TElement element);
        protected abstract string GetRegion(TElement element);
        protected abstract string GetCity(TElement element);
        protected abstract string GetSourceUrl(TElement element);

        // more attributes here

        private protected List<ParsedCar> ParseInternal(TSource document, ExecutionContext executionContext)
        {
            try
            {
                var cars = GetCarsAds(document)
                    .Skip(PinnedAdsCount)
                    .Select(element =>
                    {
                        ParseContext = new ParseContext();
                        ParseContext.City = GetCity(element);
                        ParseContext.Description = GetDescription(element);
                        ParseContext.FullName = GetFullName(element);
                        ParseContext.Make = GetMakeName(element);
                        ParseContext.Model = GetModelName(element);
                        ParseContext.YearOfIssue = GetYearOfIssue(element);
                        ParseContext.FirstRegistration = GetFirstRegistration(element);
                        ParseContext.Price = GetPrice(element);
                        ParseContext.Region = GetRegion(element);
                        ParseContext.SourceUrl = GetSourceUrl(element);

                        return new ParsedCar(
                            fullName: ParseContext.FullName,
                            makeName: ParseContext.Make,
                            modelName: ParseContext.Model,
                            yearOfIssue: ParseContext.YearOfIssue,
                            firstRegistration: ParseContext.FirstRegistration,
                            price: ParseContext.Price,
                            region: ParseContext.Region,
                            city: ParseContext.City,
                            sourceUrl: ParseContext.SourceUrl
                        );
                    })
                    .ToList();

                return cars;
            }
            catch (Exception e)
            {
                Log.Error(e, "Unexpected parsing error...");
                throw;
            }
        }
    }
}

Recommendation 4: Dual Parsers Approach

If possible, cover the parsed source with both parser types: HTML and JSON (via direct access to the API). Place them in priority order and implement something like the chain-of-responsibility pattern to provide a fallback mechanism if the HTML or JSON structure changes due to updates. This buys a window of time to update the parsers but requires double the effort to maintain both. Additionally, implement rotating priorities and the ability to dynamically remove parsers or change their priority in the chain via metadata in storage. This allows for dynamic adjustments without redeploying the entire system.
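A minimal sketch of such a fallback chain, under the assumption that both parsers expose a common interface; in practice the priority order would come from metadata in storage so it can be reordered or disabled without a redeploy.

using System;
using System.Collections.Generic;

public interface ISourceParser
{
    string Name { get; }
    bool TryParse(string rawPayload, out IReadOnlyList<ParsedCar> cars);
}

public class ParserChain
{
    private readonly List<ISourceParser> _parsers;

    // Ordered by priority, e.g. the JSON/API parser first, the HTML parser as fallback.
    public ParserChain(IEnumerable<ISourceParser> parsersInPriorityOrder)
        => _parsers = new List<ISourceParser>(parsersInPriorityOrder);

    public IReadOnlyList<ParsedCar> Parse(string rawPayload)
    {
        foreach (var parser in _parsers)
        {
            // The first parser that succeeds wins; the rest are never touched.
            if (parser.TryParse(rawPayload, out var cars) && cars.Count > 0)
                return cars;
        }
        throw new InvalidOperationException("All parsers in the chain failed for this payload.");
    }
}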

Recommendation 5: Integration Tests

Integration tests are crucial, even just for local debugging and quick issue identification and resolution. Especially if something breaks in the live environment and logs are not enough to understand the issue, these tests will be invaluable for debugging. Ideally, these tests can be placed inside the CI/CD pipeline, but if the source requires a proxy or advanced techniques to fetch data, maintaining and supporting these tests inside CI/CD can become overly complicated.
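For illustration, a minimal xUnit sketch of such a test: fetch one live listing page and assert that the parser still extracts the required properties. ExampleSourceHtmlParser, the URL, and the ParsedCar property names are assumptions standing in for your own types.

using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class ExampleSourceParserTests
{
    [Fact(Skip = "Run locally; the source may require a proxy or cookies to be reachable.")]
    public async Task Parser_extracts_required_properties_from_live_page()
    {
        using var http = new HttpClient();
        var html = await http.GetStringAsync("https://example-cars.com/listings?page=1");

        var parser = new ExampleSourceHtmlParser(); // hypothetical concrete parser
        var cars = parser.Parse(html);

        Assert.NotEmpty(cars);
        Assert.All(cars, car =>
        {
            Assert.False(string.IsNullOrWhiteSpace(car.MakeName));
            Assert.False(string.IsNullOrWhiteSpace(car.ModelName));
            Assert.True(car.Price > 0);
        });
    }
}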


Challenge: Avoiding IP bans

Overview

Avoiding IP bans is a critical challenge in web scraping, especially when scraping large volumes of data from multiple sources. Websites implement various anti-scraping measures to detect and block IP addresses that exhibit suspicious behavior, such as making too many requests in a short period.

Impact

When an IP address is banned, the scraper cannot access the target website, resulting in incomplete data collection. Frequent IP bans can significantly disrupt the scraping process, leading to data gaps and potentially causing the entire scraping operation to halt. This can affect the quality and reliability of the data being collected, which is crucial for accurate analysis and decision-making.

Common Causes of IP Bans

  1. High Request Frequency: Sending too many requests in a short period.
  2. Identical Request Patterns: Making repetitive or identical requests that deviate from normal user behavior.
  3. Suspicious User-Agent Strings: Using outdated or uncommon user-agent strings that raise suspicion.
  4. Lack of Session Management: Failing to manage cookies and sessions appropriately.
  5. Geographic Restrictions: Accessing the website from regions that are restricted or flagged by the target website.

Recommendation 1: Utilize Cloud Services for Distribution

Utilizing cloud services like AWS Lambda, Azure Functions, or Google Cloud Functions can help avoid IP bans. These services have native time triggers, can scale out well, run on a range of IP addresses, and can be located in different regions close to the real users of the source. This approach distributes the load and mimics genuine user behavior, reducing the likelihood of IP bans.
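As a sketch of what this can look like, here is an Azure Functions timer trigger (in-process model); AWS Lambda and Google Cloud Functions offer equivalent scheduled triggers. The function name, the cron expression, and the Scrapers.ScrapeSourceAsync helper are placeholders.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class ScrapeExampleSourceFunction
{
    // Fires every 10 minutes; deploying the same function to several regions means
    // requests originate from different IP ranges close to the source's real users.
    [FunctionName("ScrapeExampleSource")]
    public async Task Run([TimerTrigger("0 */10 * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation("Scrape run triggered at {Time}", DateTime.UtcNow);
        await Scrapers.ScrapeSourceAsync("example-source", log); // placeholder for the actual scraping job
    }
}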

Recommendation 2: Leverage Different Types of Proxies

Employing a variety of proxies can help distribute requests and reduce the risk of IP bans. There are three main types of proxies to consider:

Datacenter Proxies

  • Pros: Fast, affordable, and widely available.
  • Cons: Easily detected and blocked by websites due to their non-residential nature.

Residential Proxies

  • Pros: Use IP addresses from real residential users, making them harder to detect and block.
  • Cons: More expensive and slower than datacenter proxies.

Mobile Proxies

  • Pros: Use IP addresses from mobile carriers, offering high anonymity and low detection rates.
  • Cons: The most expensive type of proxy and potentially slower due to mobile network speeds.

By leveraging a mix of these proxy types, you can better distribute your requests and reduce the likelihood of detection and banning.
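A minimal sketch of the rotation itself, assuming the proxy URLs come from your datacenter, residential, and mobile providers; a weighted or cost-aware selection strategy can replace the simple round-robin shown here.

using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading;

public class RotatingProxyPool
{
    private readonly List<WebProxy> _proxies = new();
    private int _index = -1;

    public RotatingProxyPool(IEnumerable<string> proxyUrls)
    {
        foreach (var url in proxyUrls)
            _proxies.Add(new WebProxy(url)); // e.g. "http://user:pass@proxy-host:port"
    }

    // Each call returns a client bound to the next proxy in the pool.
    public HttpClient CreateClient()
    {
        var i = Interlocked.Increment(ref _index);
        var proxy = _proxies[(int)((uint)i % (uint)_proxies.Count)];
        var handler = new HttpClientHandler { Proxy = proxy, UseProxy = true };
        return new HttpClient(handler, disposeHandler: true);
    }
}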

Recommendation 3: Use Scraping Services

Services like ScraperAPI, ScrapingBee, Brightdata and similar platforms handle much of the heavy lifting regarding scraping and avoiding IP bans. They provide built-in solutions for rotating IP addresses, managing user agents, and avoiding detection. However, these services can be quite expensive. In our experience, we often exhausted a whole month’s plan in a single day due to high data demands. Therefore, these services are best used if budget allows and the data requirements are manageable within the service limits. Additionally, we found that the most complex sources with advanced anti-scraping mechanisms often did not work well with such services.

Recommendation 4: Combine approaches

It makes sense to utilize all the mechanisms mentioned above sequentially, starting with the lowest-cost solutions and moving up to the highest, using something like the chain-of-responsibility pattern already described for the different parser types. This approach allows for a flexible and dynamic combination of strategies. All these strategies can be stored and updated dynamically as metadata in storage, enabling efficient and adaptive scraping operations.


1.2 Parser chain attempts.

Recommendation 5: Mimic User Traffic Patterns

Scrapers should be hidden within typical user traffic patterns based on time zones. This means making more requests during the day and almost zero traffic during the night, mimicking genuine user behavior. The idea is to split the parsing schedule frequency into 4–5 parts:

  • Peak Load
  • High Load
  • Medium Load
  • Low Load
  • No Load

This approach reduces the chances of detection and banning. Here’s an example parsing frequency pattern for a typical day:


1.3 Parsing frequency during the day
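To make the schedule concrete, here is a minimal sketch of how such a load profile could be encoded; the hour ranges and polling intervals are illustrative, not the production values, and the hours are taken in the source's local time zone.

using System;
using System.Threading;

public enum LoadTier { Peak, High, Medium, Low, None }

public static class ScrapeSchedule
{
    public static LoadTier TierFor(DateTime sourceLocalTime) => sourceLocalTime.Hour switch
    {
        >= 18 and < 22 => LoadTier.Peak,   // evening browsing peak
        >= 9 and < 18  => LoadTier.High,   // working hours
        >= 7 and < 9   => LoadTier.Medium, // morning ramp-up
        >= 22 or < 1   => LoadTier.Low,    // late evening tail
        _              => LoadTier.None    // night: almost zero traffic
    };

    public static TimeSpan PollingInterval(LoadTier tier) => tier switch
    {
        LoadTier.Peak   => TimeSpan.FromMinutes(1),
        LoadTier.High   => TimeSpan.FromMinutes(3),
        LoadTier.Medium => TimeSpan.FromMinutes(10),
        LoadTier.Low    => TimeSpan.FromMinutes(30),
        _               => Timeout.InfiniteTimeSpan // effectively no load
    };
}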

Challenge: Overcoming anti-spam measures

Overview

Anti-spam measures are employed by websites to prevent automated systems, like scrapers, from overwhelming their servers or collecting data without permission. These measures can be quite sophisticated, including techniques like user-agent analysis, cookie management, and fingerprinting.

Impact

Anti-spam measures can block or slow down scraping activities, resulting in incomplete data collection and increased time to acquire data. This affects the efficiency and effectiveness of the scraping process.

Common Anti-Spam Measures

  • User-Agent Strings: Websites inspect user-agent strings to determine if a request is coming from a legitimate browser or a known scraping tool. Repeated requests with the same user-agent string can be flagged as suspicious.
  • Cookies and Session Management: Websites use cookies to track user sessions and behavior. If a session appears to be automated, it can be terminated or flagged for further scrutiny.
  • TLS Fingerprinting: This involves capturing details from the SSL/TLS handshake to create a unique fingerprint. Differences in these fingerprints can indicate automated tools.
  • TLS Version Detection: Automated tools might use outdated or less common TLS versions, which can be used to identify and block them.

Complex Real-World Reactions

  • Misleading IP Ban Messages: One challenge we faced was receiving messages indicating that our IP was banned (too many requests from your IP). However, the actual issue was related to missing cookies for fingerprinting. We spent considerable time troubleshooting proxies, only to realize the problem wasn’t with the IP addresses.
  • Fake Data Return: Some websites counter scrapers by returning slightly altered data. For instance, the mileage of a car might be listed as 40,000 km when the actual value is 80,000 km. This type of defense makes it difficult to determine if the scraper is functioning correctly.
  • Incorrect Error Message Reasons: Servers sometimes return incorrect error messages, which can mislead the scraper about the actual issue, making troubleshooting more challenging.

Recommendation 1: Rotate User-Agent Strings

To overcome detection based on user-agent strings, rotate user-agent strings regularly. Use a variety of legitimate user-agent strings to simulate requests from different browsers and devices. This makes it harder for the target website to detect and block scraping activities based on user-agent patterns.
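A minimal sketch of a rotating user-agent pool; the strings below are examples of real browser user agents, and in production the pool is larger and refreshed as new browser versions ship.

using System;
using System.Net.Http;

public static class UserAgentPool
{
    private static readonly string[] UserAgents =
    {
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15",
        "Mozilla/5.0 (X11; Linux x86_64; rv:125.0) Gecko/20100101 Firefox/125.0"
    };

    // Attach a randomly chosen user agent to an outgoing request.
    public static HttpRequestMessage WithRandomUserAgent(this HttpRequestMessage request)
    {
        request.Headers.UserAgent.Clear();
        request.Headers.UserAgent.ParseAdd(UserAgents[Random.Shared.Next(UserAgents.Length)]);
        return request;
    }
}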

Recommendation 2: Manage Cookies and Sessions

Properly manage cookies and sessions to maintain continuous browsing sessions. Implement techniques to handle cookies as a real browser would, ensuring that your scraper maintains session continuity. This includes storing and reusing cookies across requests and managing session expiration appropriately.
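A minimal sketch of session handling with a per-session CookieContainer, so cookies set by the site (session ids, consent flags, fingerprint cookies) are stored and replayed on subsequent requests the way a real browser tab would.

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class ScrapingSession : IDisposable
{
    private readonly CookieContainer _cookies = new();
    private readonly HttpClient _client;

    public ScrapingSession(Uri baseAddress)
    {
        var handler = new HttpClientHandler { CookieContainer = _cookies, UseCookies = true };
        _client = new HttpClient(handler) { BaseAddress = baseAddress };
    }

    public Task<string> GetAsync(string relativeUrl) => _client.GetStringAsync(relativeUrl);

    // Cookies can also be seeded from outside, e.g. harvested by a headless browser
    // as described in the real-world solution below.
    public void SeedCookie(Uri domain, string name, string value)
        => _cookies.Add(domain, new Cookie(name, value));

    public void Dispose() => _client.Dispose();
}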

Real-world solution

In one of the sources we encountered, fingerprint information was embedded within the cookies. Without this specific cookie, it was impossible to make more than 5 requests in a short period without being banned. We discovered that these cookies could only be generated by visiting the main page of the website with a real/headless browser and waiting 8–10 seconds for the page to fully load. Due to the complexity, performance concerns, and high volume of requests, using Selenium and headless browsers for every request was impractical. Therefore, we implemented the following solution:

We ran multiple Docker instances with Selenium installed. These instances continuously visited the main page, mimicking user authentication, and collected fingerprint cookies. These cookies were then used in subsequent high-volume scraping activities via HTTP requests to the web server API, rotating them together with other headers and proxies to avoid detection. This allowed us to make up to 500,000 requests per day while bypassing the protection.

Recommendation 3: Implement TLS Fingerprinting Evasion

To avoid detection via TLS fingerprinting, mimic the SSL/TLS handshake of a legitimate browser. This involves configuring your scraping tool to use common cipher suites, TLS extensions, and versions that match those of real browsers. Tools and libraries that offer configurable SSL/TLS settings can help in achieving this. There is a great article on this topic.

Real-world solution:

One of the sources we scraped started returning fake data due to issues related to TLS fingerprinting. To resolve this, we had to create a custom proxy in Go to modify parameters such as cipher suites and TLS versions, making our scraper appear as a legitimate browser. This approach required deep customization to handle the SSL/TLS handshake properly and avoid detection. There is a good example of this in Go.

Recommendation 4: Rotate TLS Versions

Ensure that your scraper supports multiple TLS versions and rotates between them to avoid detection. Using the latest TLS versions commonly used by modern browsers can help in blending in with legitimate traffic.
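To illustrate what configurable SSL/TLS settings can look like, here is a minimal .NET sketch that pins the protocol versions and prefers browser-typical cipher suites. Note that CipherSuitesPolicy is only honoured on platforms where .NET uses OpenSSL (e.g. Linux), and fully matching a browser's TLS fingerprint (extension order, GREASE values, and so on) usually requires lower-level tooling such as the custom Go proxy described above.

using System.Net.Http;
using System.Net.Security;
using System.Security.Authentication;

public static class TlsTunedClient
{
    public static HttpClient Create()
    {
        var handler = new SocketsHttpHandler
        {
            SslOptions = new SslClientAuthenticationOptions
            {
                // Stick to the TLS versions modern browsers negotiate.
                EnabledSslProtocols = SslProtocols.Tls12 | SslProtocols.Tls13,
                // Prefer cipher suites commonly offered by browsers (illustrative subset).
                CipherSuitesPolicy = new CipherSuitesPolicy(new[]
                {
                    TlsCipherSuite.TLS_AES_128_GCM_SHA256,
                    TlsCipherSuite.TLS_AES_256_GCM_SHA384,
                    TlsCipherSuite.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
                })
            }
        };
        return new HttpClient(handler);
    }
}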


Challenge: Maximizing Data Coverage

Overview

Maximizing data coverage is essential for ensuring that the scraped data represents the most current and comprehensive information available. One common approach is to fetch listing pages ordered by the creation date from the source system. However, during peak times, new data offers can be created so quickly that not all offers/ads can be parsed from these pages, leading to gaps in the dataset.

Impact

Failing to capture all new offers can result in incomplete datasets, which affect the accuracy and reliability of subsequent data analysis. This can lead to missed opportunities for insights and reduced effectiveness of the application relying on this data.

Problem Details

  • High Volume of New Offers: During peak times, the number of new offers created can exceed the capacity of the scraper to parse all of them in real-time.
  • Pagination Limitations: Listing pages often have pagination limits, making it difficult to retrieve all new offers if the volume is high.
  • Time Sensitivity: New offers need to be captured as soon as they are created to ensure data freshness and relevance.

Recommendation: Utilize Additional Filters

Use additional filters to split data by categories, locations, or parameters such as engine types, transmission types, etc. By segmenting the data, you can increase the frequency of parsing for each filter category. This targeted approach allows for more efficient scraping and ensures comprehensive data coverage.
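A minimal sketch of such segmentation: one broad "latest listings" crawl becomes many narrow, filtered crawls, each small enough to be polled frequently even during peaks. The filter names and values are illustrative and map onto the source's own query parameters.

using System.Collections.Generic;

public record ScrapeTask(string SourceName, IReadOnlyDictionary<string, string> Filters);

public static class CoveragePlanner
{
    private static readonly string[] FuelTypes = { "petrol", "diesel", "hybrid", "electric" };
    private static readonly string[] Transmissions = { "manual", "automatic" };

    public static IEnumerable<ScrapeTask> BuildTasks(string sourceName)
    {
        foreach (var fuel in FuelTypes)
        foreach (var transmission in Transmissions)
        {
            yield return new ScrapeTask(sourceName, new Dictionary<string, string>
            {
                ["fuel"] = fuel,
                ["transmission"] = transmission,
                ["sort"] = "createdDesc" // newest first within each narrow segment
            });
        }
    }
}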


Challenge: Mapping reference data across sources

Overview

Mapping reference data is crucial for ensuring consistency and accuracy when integrating data from multiple sources. This challenge is common in various domains, such as automotive and e-commerce, where different sources may use varying nomenclature for similar entities.

Impact

Without proper mapping, the data collected from different sources can be fragmented and inconsistent. This affects the quality and reliability of the analytics derived from this data, leading to potential misinterpretations and inaccuracies in insights.

Automotive Domain

Inconsistent Naming Conventions: Different sources might use different names for the same make, model, or generation. For example, one source might refer to a car model as “Mercedes-benz v-class,” while another might call it “Mercedes v classe.”

Variations in Attribute Definitions: Attributes such as engine types, transmission types, and trim levels may also have varying names and descriptions across sources.

E-commerce Domain

Inconsistent Category Names: Different e-commerce platforms might categorize products differently. For instance, one platform might use “Electronics > Mobile Phones,” while another might use “Electronics > Smartphones.”

Variations in Product Attributes: Attributes such as brand names, product specifications, and tags can differ across sources, leading to challenges in data integration and analysis.

Recommendation 1: Create a Reference Data Dictionary

Develop a comprehensive reference data dictionary that includes all possible names and variations. This dictionary will serve as the central repository for mapping different names to a standardized set of terms. Use fuzzy matching techniques during the data collection stage to ensure that similar terms from different sources are accurately matched to the standardized terms.
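A minimal sketch of the idea, assuming a small alias table and a plain Levenshtein distance as the fuzzy fallback; in production the dictionary is much larger and lives in storage so it can be updated without a redeploy.

using System;
using System.Collections.Generic;
using System.Linq;

public class MakeModelDictionary
{
    // Alias -> canonical name; entries here are illustrative.
    private readonly Dictionary<string, string> _aliases = new(StringComparer.OrdinalIgnoreCase)
    {
        ["mercedes-benz v-class"] = "Mercedes-Benz V-Class",
        ["mercedes v classe"] = "Mercedes-Benz V-Class",
        ["vw golf"] = "Volkswagen Golf"
    };

    public string? Resolve(string rawName)
    {
        var normalized = rawName.Trim().ToLowerInvariant();
        if (_aliases.TryGetValue(normalized, out var canonical))
            return canonical;

        // Fuzzy fallback: pick the closest known alias within a small edit distance.
        var best = _aliases.Keys
            .Select(alias => (alias, distance: Levenshtein(normalized, alias)))
            .OrderBy(x => x.distance)
            .First();
        return best.distance <= 3 ? _aliases[best.alias] : null;
    }

    private static int Levenshtein(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];
        for (var i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (var j = 0; j <= b.Length; j++) d[0, j] = j;
        for (var i = 1; i <= a.Length; i++)
            for (var j = 1; j <= b.Length; j++)
            {
                var cost = a[i - 1] == b[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1, d[i, j - 1] + 1), d[i - 1, j - 1] + cost);
            }
        return d[a.Length, b.Length];
    }
}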

Recommendation 2: Use Image Detection and Classification Techniques

In cases where certain critical attributes, such as the generation of a car model, are not always available from the sources, image detection and classification techniques can be employed to identify these characteristics. For instance, using machine learning models trained to recognize different car makes, models, and generations from images can help fill in the gaps when textual data is incomplete or inconsistent. This approach can dramatically reduce the amount of manual work and the need for constant updates to mappings, but it introduces complexity in the architecture, increases infrastructure costs, and can decrease throughput, impacting the real-time nature of the data.


Challenge: Implementing Monitoring and Alerting Systems

Overview

Implementing effective monitoring and alerting systems is crucial for maintaining the health and performance of a web scraping system. These systems help detect issues early, reduce downtime, and ensure that the data collection process runs smoothly. In the context of web scraping, monitoring and alerting systems need to address specific challenges such as detecting changes in source websites, handling anti-scraping measures, and maintaining data quality.

Impact

Without proper monitoring and alerting, issues can go unnoticed, leading to incomplete data collection, increased downtime, and potentially significant impacts on data-dependent applications. Effective monitoring ensures timely detection and resolution of problems, maintaining the integrity and reliability of the scraping system.

Recommendation: Real-Time Monitoring of Scraping Activities

Implement real-time monitoring to track the performance and status of your scraping system. Use tools and dashboards to visualize key metrics such as the number of successful requests, error rates, and data volume. This helps in quickly identifying issues as they occur.
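A minimal sketch of the counters behind dashboards like the ones below, using the prometheus-net package (an assumption; any metrics library works similarly). Labeling by source makes a broken or banned source stand out immediately.

using Prometheus;

public static class ScraperMetrics
{
    private static readonly Counter SuccessfulRequests = Metrics.CreateCounter(
        "scraper_requests_success_total", "Successful HTTP calls per source.", "source");

    private static readonly Counter FailedRequests = Metrics.CreateCounter(
        "scraper_requests_failed_total", "Failed HTTP calls per source.", "source");

    // Called after every outgoing request; an alert fires when the failure ratio spikes.
    public static void RecordRequest(string source, bool success)
    {
        if (success) SuccessfulRequests.WithLabels(source).Inc();
        else FailedRequests.WithLabels(source).Inc();
    }
}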


1.4 Daily notification reports per source.


1.5 Number of cookie generations over time.


1.6 Number of successful & failed HTTP calls per source per instance over time

Funny Stories at the End

Our system scraped data continuously from different sources, making it highly sensitive to any downtime or changes in website accessibility. There were numerous instances where our scraping system detected that a website was down or not accessible from certain regions. Several times, our team contacted the support teams of these websites, informing them that “User X from Country Y” couldn’t access their site.

In one memorable case, our automated alerts picked up an issue at 6 AM. The website of a popular car listing service was inaccessible from several European countries. We reached out to their support team, providing details of the downtime. The next morning, they thanked us for the heads-up and informed us that they had resolved the issue. It turned out we had notified them before any of their users did!

Final Thoughts

Building and maintaining a web scraping system is not an easy task. It requires dealing with dynamic content, overcoming sophisticated anti-scraping measures, and ensuring high data quality. While it may seem naive to think that parsing data from various sources is straightforward, the reality involves constant vigilance and adaptation. Additionally, maintaining such a system can be costly, both in terms of infrastructure and the continuous effort needed to address the ever-evolving challenges. By following the steps and recommendations outlined above, you can create a robust and efficient web scraping system capable of handling the challenges that come your way.

Get in Touch

If you would like to dive into any of these challenges in detail, please let me know in the comments — I will describe them in more depth. If you have any questions or would like to share your use cases, feel free to let me know. Thanks to everyone who read until this point!
