Categories
Development SaaS

Sequentum Cloud: Fix Agent Failures in 3 clicks

We recently applied Sequentum Cloud to a highly protected scrape target and got quite successful results. An initial Agent run had some errors, failing to capture data for some of the URLs. After a support consultation, we were pointed to a built-in Sequentum Cloud feature for rerunning an Agent to gather the missing data.

Categories
Development SaaS

Sequentum Cloud to bypass strict scrape blocking

On the modern web, sites that hold valuable data (e.g. business directories, data aggregators, social networks) implement aggressive blocking measures, which can cause major extraction difficulties. How can modern scraping tools (e.g. Sequentum Cloud) still fetch data from actively protected sites?

Sequentum is a closed-source, point-and-click scraping platform that integrates everything needed to bypass anti-bot services, including management of browsers, device fingerprints, TLS fingerprints, IP rotation, user agents, and more. Sequentum has maintained its own custom scraping browser for more than a decade and, as one of the most mature solutions on the market, supports atomic-level customization of each request and workflow step. As such, Sequentum Cloud is an out-of-the-box advanced scraping platform with no upfront requirement to stand up infrastructure, software, or proxies. It also has a very responsive support team, which can be useful in coming up to speed on one's approach and is quite rare in the scraping industry. In this test, we configured a site with very aggressive blocking and, with some refinement of error detection and retry logic, were able to retrieve even the most protected data consistently over time.

For this test, we pointed their tool at a major brand on Zoro.com, a site with aggressive blocking. Initial attempts yielded 32K records, about 94% of the estimated 34K entries. We worked with support to learn how to tune the advanced error detection and retry logic included in the Sequentum platform to the behavior of the Zoro.com site, and were then able to get 100% of the data. In this article we share what we learned.

The overall test results for Sequentum Cloud and the Oxylabs API (shared in a separate post) are summarized in the following comparison table.

| Tool | Success rate | Avg. seconds per page | Estimated cost | Rating |
|---|---|---|---|---|
| Sequentum Cloud Agent | 100% | 0.4 (with 10 browsers) | $12 ($3.75 per 1 GB of residential proxy traffic) | |
| Oxylabs' API | 90% | 11 | ~$60 ($2 per 1000 requests) | |

Oxylabs' preconfigured API is already built and maintained for the end user. The Sequentum Cloud platform is rather an open tool, and agents can be customized in myriad ways. Hence it can take longer to build a working agent compared to a ready-made API, but for the most part a custom agent is the better approach at industrial scale for one's own use case.

Categories
Challenge Development

Modern Challenges in Web Scraping & Solutions


Web scraping has emerged as a powerful tool for data extraction, enabling businesses, researchers, and individuals to gather insights from the vast amounts of information available online. However, as the web evolves, so do the challenges associated with scraping. This post delves into the modern challenges of web scraping and explores effective strategies to overcome them. Below we've selected the critical ones that encompass most of web scraping today.

  1. Anti-Scraping Measures
  2. Dynamic Content
  3. Legal and Ethical Considerations
  4. Data Quality and Consistency
  5. Website Structure Changes
Categories
Development

5 Best Free Tools for Web Scraping in 2025

In the ever-evolving world of data-driven decision-making, web scraping remains a critical skill for businesses, researchers, and developers. Whether you’re gathering market insights, monitoring competitors, or building datasets for machine learning, having the right tools can make all the difference.

As we step into 2025, the landscape of web scraping tools has continued to evolve, with many free options offering powerful features. Here we share the 5 best free tools for web scraping in 2025 that you should consider.

Categories
Challenge Development

What is better than residential proxies for web scraping?

Proxies vary significantly in their types and features, serving different purposes in data scraping and web access. They function as intermediaries between data scraping tools and target websites, offering anonymity and helping distribute requests to evade detection by anti-bot systems.

In this post we'll share what can be used when residential proxies are blocked by a target server.
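The "intermediary" role described above often comes down to rotating requests across a proxy pool so no single IP draws attention. A minimal sketch, assuming a requests-style proxies dict and purely illustrative proxy addresses:

```python
import itertools

def make_proxy_cycler(proxy_addresses):
    """Round-robin over a proxy pool. Each call returns the next
    proxies dict in the format the requests library expects."""
    pool = itertools.cycle(proxy_addresses)
    def next_proxy():
        addr = next(pool)
        return {"http": addr, "https": addr}
    return next_proxy
```

In a real scraper the returned dict would be passed as `requests.get(url, proxies=next_proxy())`; swapping the pool contents (datacenter, residential, mobile) is then a one-line change.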

Categories
Challenge Development

Playwright Scraper Undetected: Strategies for Seamless Web Data Extraction

Web scraping has become an essential tool for many businesses seeking to gather data and insights from the web. As companies increasingly rely on this method for analytics and pricing strategies, the techniques used in scraping are evolving. It is crucial for scrapers to simulate human-like behaviors to avoid detection by sophisticated anti-bot measures implemented by various websites.

Understanding the importance of configuring scraping tools effectively can make a significant difference in acquiring the necessary data without interruptions. The growth in demand for such data has led to innovations in strategies and technology that assist scrapers in navigating these challenges. This article will explore recent developments in tools and libraries that help enhance the functionality of web scraping procedures.
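One building block of the human-like behavior mentioned above is randomized timing between actions, so requests never arrive at robotic, fixed intervals. A framework-agnostic sketch (the jitter bounds are illustrative assumptions, not values any tool prescribes):

```python
import random

def human_delay(base=1.0, jitter=0.5):
    """Return a pause length (seconds) drawn uniformly around `base`,
    so successive actions are never evenly spaced."""
    return max(0.0, random.uniform(base - jitter, base + jitter))
```

In a Playwright script, the result would typically feed a `page.wait_for_timeout(...)` call between navigation and click steps.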

Categories
Development Monetize

Web Scraping: contemporary business models

In the evolving world of web data, understanding different business models can greatly benefit you. Since the 2010s, web scraping has grown from a niche interest into a widely used practice. As the demand for public data increases, you may find new opportunities in various approaches to data collection and distribution.

In this post we'll look at 4 business models in the data extraction business:

  • Conventional data providers
  • SaaS providers
  • Data / market intelligence tools
  • Data marketplace (multiple buyers & multiple sellers)
Categories
Development

AI Usage in Web Scraping: Optimizing Data Collection and Analysis

The rise of artificial intelligence has transformed various industries, and web scraping is no exception. AI enhances web scraping by increasing efficiency, accuracy, and adaptability in data extraction processes. As businesses increasingly rely on data to drive their decisions, understanding how AI-powered techniques can optimize these scraping efforts becomes crucial for success.

Our exploration of AI in web scraping will cover various techniques and algorithms that enhance traditional methods. We’ll also delve into the challenges organizations face, from ethical concerns to technical limitations, and discuss innovative solutions that can overcome these hurdles. Real-world applications showcase how companies leverage AI to gather insights quickly and effectively, providing a practical lens through which we can view this technology.

By the end of the post, we’ll have a clearer understanding of not only the fundamentals of AI in web scraping but also its potential implications for the future of data collection and usage.

Categories
Development SaaS

My experience with Zyte AI spiders, part 1

Recently I was given a bunch of sites to scrape, most of them simple e-commerce. I decided to try Zyte AI-powered spiders. To use them, I had to apply for a Zyte API subscription and access to Scrapy Cloud. Zyte AI proved to be a good choice for fast data extraction and delivery through spiders built on Scrapy. Below you can see my experience and the results.
I have also written another "experience" post on Zyte platform usage.

Categories
Challenge SaaS

My experience with Zyte AI spiders, part 2

I've described my initial experience with Zyte AI spiders leveraging the Zyte API and Scrapy Cloud Units; you might find it here. Now I'll share a more sobering report of what happened with the data aggregator scrape.