Categories
Challenge Development

Oxylabs’ Web Scraper API – Experience & more

Experience

We’ve successfully tested Oxylabs’ Web Scraper API. It performed well at getting data off highly protected sites. One example is Zoro.com, protected by Akamai, DataDome, Cloudflare and reCAPTCHA! See the numerical results here.
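
Below is a minimal sketch of how such a test request can be issued against the Web Scraper API’s realtime endpoint using Python’s requests library. The endpoint URL, the "universal" source value and the payload fields are assumptions drawn from Oxylabs’ public documentation as I recall it, so verify them against the current docs; the credentials and target URL are placeholders.

    import requests

    # Assumption: Oxylabs' realtime Web Scraper API endpoint and payload format;
    # verify against the current documentation before use.
    payload = {
        "source": "universal",           # generic target type (assumption)
        "url": "https://www.zoro.com/",  # placeholder target URL
        "render": "html",                # ask the API to render JavaScript (assumption)
    }

    response = requests.post(
        "https://realtime.oxylabs.io/v1/queries",
        auth=("YOUR_USERNAME", "YOUR_PASSWORD"),  # placeholder credentials
        json=payload,
        timeout=180,
    )
    response.raise_for_status()

    # The API returns scraped pages wrapped in a JSON envelope with a "results" list.
    for result in response.json().get("results", []):
        print(result.get("status_code"), len(result.get("content", "")))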

Categories
Uncategorized

Free Proxy lists

  1. Proxy-sale.com
  2. Geonode.com, incl. Elite proxies
  3. Free-proxy-list.net
  4. Proxy-list.download
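
As a quick illustration of how entries from such lists are usually validated before use, here is a small Python sketch that checks whether a proxy from a downloaded list actually answers. The proxy address is a placeholder and httpbin.org is just a convenient public echo service; neither comes from the lists above.

    import requests

    def proxy_works(proxy: str, timeout: float = 10.0) -> bool:
        """Return True if the given host:port proxy answers a simple HTTPS request."""
        proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
        try:
            resp = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=timeout)
            return resp.ok
        except requests.RequestException:
            return False

    # Placeholder entry, formatted as it would appear in a downloaded proxy list.
    candidate = "203.0.113.10:8080"
    print(candidate, "alive" if proxy_works(candidate) else "dead")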

Categories
Guest posting SaaS

The Importance of Transparency and Trust in Data and Generative AI

Sharing an informative article by Sarah McKenna (CEO of Sequentum and Forbes Technology Council member), The Importance of Transparency and Trust in Data and Generative AI. It covers factors for responsible data collection (i.e. scraping) and the usefulness of web data for AI post-processing. She touches on security, adherence to regulatory requirements, bias prevention, governance, auditability, vendor evaluation and more.


In the age of data-driven decision-making, the quality of your outcomes depends on the quality of the underlying data. Companies of all sizes seek to harness the power of data, tailored to their specific needs, to understand the market, pricing, opportunities, etc. In this data-rich environment, using generic or unreliable data not only has the intangible costs that prevent companies from achieving their full potential, it has real tangible costs as well.

Categories
Development Monetize

Web Scraping: contemporary business models

In the evolving world of web data, understanding different business models can greatly benefit you. Since the 2010s, web scraping has grown from a niche interest into a widely used practice. As the demand for public data increases, you may find new opportunities in various approaches to data collection and distribution.

In this post we’ll take a look at four business models in the data extraction business:

  • Conventional data providers
  • SaaS providers
  • Data / market intelligence tools
  • Data marketplace (multiple buyers & multiple sellers)

Categories
Development

Importance of using proxies for web scraping

Categories
Development

AI Usage in Web Scraping: Optimizing Data Collection and Analysis

The rise of artificial intelligence has transformed various industries, and web scraping is no exception. AI enhances web scraping by increasing efficiency, accuracy, and adaptability in data extraction processes. As businesses increasingly rely on data to drive their decisions, understanding how AI-powered techniques can optimize these scraping efforts becomes crucial for success.

Our exploration of AI in web scraping will cover various techniques and algorithms that enhance traditional methods. We’ll also delve into the challenges organizations face, from ethical concerns to technical limitations, and discuss innovative solutions that can overcome these hurdles. Real-world applications showcase how companies leverage AI to gather insights quickly and effectively, providing a practical lens through which we can view this technology.

By the end of the post, we’ll have a clearer understanding of not only the fundamentals of AI in web scraping but also its potential implications for the future of data collection and usage.
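
As one concrete illustration of the AI-assisted extraction the post discusses, the sketch below asks a language model to turn raw product-page text into structured fields instead of relying on hand-written selectors. The complete() helper is a hypothetical stand-in for whatever LLM client you use, and the prompt, field names and sample text are illustrative only.

    import json

    def complete(prompt: str) -> str:
        """Hypothetical stand-in for a call to an LLM completion API."""
        raise NotImplementedError("wire this up to your LLM client of choice")

    def extract_product_fields(page_text: str) -> dict:
        """Ask the model to return product data as strict JSON."""
        prompt = (
            "Extract the product name, price and availability from the page text below. "
            "Respond with a JSON object with keys: name, price, availability.\n\n"
            + page_text
        )
        return json.loads(complete(prompt))

    # Illustrative page text; in practice it would come from a fetched, cleaned page.
    sample = "Acme cordless drill - $129.99 - In stock, ships in 2 days."
    # print(extract_product_fields(sample))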

Categories
Challenge SaaS

My experience with Zyte AI spiders, part 2

I’ve described my initial experience with Zyte AI spiders leveraging the Zyte API and Scrapy Cloud units. You can find it here. Now I’d like to share a more sobering report of what happened with the data aggregator scrape.

Categories
Development SaaS

My experience with Zyte AI spiders, part 1

Recently I was given a bunch of sites to scrape, most of them simple e-commerce sites. I decided to try Zyte AI-powered spiders. To use them, I had to sign up for a Zyte API subscription and get access to Scrapy Cloud. Zyte AI proved to be a good choice for fast data extraction and delivery through spiders that are regular Scrapy spiders. Below you can see my experience and the results.
I have written another “experience” post on using the Zyte platform.
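
For context, a Zyte AI spider is deployed as an ordinary Scrapy spider, with the Zyte API typically plugged in through the scrapy-zyte-api add-on configured in the project settings. The sketch below shows the general shape of such a spider for a simple e-commerce listing; the start URL, selectors and settings key are assumptions for illustration, not the exact spiders I ran.

    import scrapy

    class ProductSpider(scrapy.Spider):
        name = "products"
        # Placeholder target; each real site got its own spider.
        start_urls = ["https://example.com/catalog"]

        # Assumption: requests are routed through Zyte API via the scrapy-zyte-api
        # plugin; check the plugin docs for the exact settings your version needs.
        custom_settings = {
            "ZYTE_API_TRANSPARENT_MODE": True,
        }

        def parse(self, response):
            # Illustrative selectors; real ones depend on each site's markup.
            for card in response.css("div.product-card"):
                yield {
                    "title": card.css("h2::text").get(),
                    "price": card.css("span.price::text").get(),
                    "url": response.urljoin(card.css("a::attr(href)").get()),
                }
            next_page = response.css("a.next::attr(href)").get()
            if next_page:
                yield response.follow(next_page, callback=self.parse)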

Categories
Challenge Development

Protected: .NET Code Guard

This content is password protected.

Categories
Challenge

My experience of manual, no-code scrape of a bot-protected site

Recently we discovered a highly protected site, govets.com. Since the number of target brand items on the site was not large (under 3K), I decided to collect the target data with handy tools in a fast manual scrape.