Challenge Development

Protected: .NET Code Guard



My experience with a manual, no-code scrape of a bot-protected site

Recently we encountered a highly protected site. Since the number of target brand items on the site was not big (under 3K), I decided to get the target data using handy tools for a fast manual scrape.

Challenge Development

Bypass Akamai protection

We share how we bypassed an Akamai-protected site.

An anti-bot check at a Discord channel.

BrowserForge [Python] library to generate scraper headers & fingerprints

Recently I found a Python library that generates fake headers and consistent fingerprints for custom scrapers. Such generated headers and fingerprints help bypass anti-bot systems.
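A minimal sketch of how such a generator is used in a scraper. The class names follow BrowserForge's documented generators (`HeaderGenerator`, `FingerprintGenerator`); exact constructor options may differ between versions, so verify against your installed release:

```python
# Sketch: produce a mutually consistent set of fake headers and a browser
# fingerprint with BrowserForge. The point is consistency -- anti-bots
# flag sessions whose User-Agent, Accept-* headers, and JS fingerprint
# disagree about the browser family or OS.

def fake_session_identity():
    # Imported lazily so this sketch only needs BrowserForge when called.
    from browserforge.headers import HeaderGenerator
    from browserforge.fingerprints import FingerprintGenerator

    # Constrain both generators to the same browser family so the
    # headers and the fingerprint tell the same story.
    headers = HeaderGenerator(browser="chrome").generate()
    fingerprint = FingerprintGenerator().generate()
    return headers, fingerprint
```

The returned headers dict can be passed to a requests/httpx session, and the fingerprint injected into a browser-automation context.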

Intelligent browser header & fingerprint generator

Development SaaS

My experience with Zyte AI spiders, part 1

Recently I was given a bunch of sites to scrape, most of them being simple e-commerce. I decided to try Zyte AI-powered spiders. To utilize them, I had to apply for a Zyte API subscription and access to Scrapy Cloud. Zyte AI proved to be a good choice for fast data extraction & delivery through its spiders, which are Scrapy spiders. Below you can see my experience and the results.
I am going to make another “experience” post on the Zyte platform usage.


Free proxy for simple tasks

1st provider:

Go to the provider's site and then choose Free from the main menu:


  1. The free proxy pool is updated regularly.
  2. Includes anonymous proxies.
  3. Includes SOCKS 4/5 proxies as well as HTTP/HTTPS ones.
Challenge Development

Experience with CloudFlare bypass

Presently (March 2024), anti-bots are actively applied for web data protection. Some of them, with their characteristics & bypass methods, might be seen here. If you are interested, take a look at the bot-protected websites table. In this post we'll share our real-case experience with fighting the CloudFlare protection.


BrowserContext + Persistent Context in Playwright

In Microsoft Playwright, a BrowserContext is an abstraction that represents an independent session of browser activity, similar to an incognito session in a traditional web browser. Each BrowserContext can have its own set of cookies, local storage data, and session storage data, which means that activities performed in one context do not affect or interfere with those in another, providing a clean slate for each test or automation task.


The primary advantage of using BrowserContexts is their ability to simulate multiple users interacting with a web application simultaneously, without the need for multiple browsers to be opened and managed. Additionally, BrowserContexts allow for custom configurations, such as viewport size, geolocation, language, and permissions, enabling us to configure our scrapers differently from one context to another.
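The two paragraphs above can be sketched in code. A minimal example of two differently configured, isolated contexts sharing one browser; the viewport, locale, and geolocation values are illustrative assumptions, not required settings:

```python
# Sketch: isolated BrowserContexts in Playwright (Python sync API).
# Each context behaves like a fresh incognito session -- its own cookies,
# localStorage, and sessionStorage -- so the two "users" never interfere.

CONTEXT_OPTIONS = {
    "mobile_user": {
        "viewport": {"width": 390, "height": 844},
        "locale": "de-DE",
        "geolocation": {"latitude": 52.52, "longitude": 13.40},
        "permissions": ["geolocation"],
    },
    "desktop_user": {
        "viewport": {"width": 1920, "height": 1080},
        "locale": "en-US",
    },
}

def run(url: str) -> None:
    # Imported lazily so the sketch only needs Playwright when actually run.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        for name, opts in CONTEXT_OPTIONS.items():
            context = browser.new_context(**opts)  # independent session
            page = context.new_page()
            page.goto(url)
            print(name, page.title())
            context.close()  # discards this context's session state
        browser.close()
```

For a context whose cookies and storage should survive between runs, Playwright offers `launch_persistent_context(user_data_dir, ...)` instead, which writes the profile to disk.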


Crawling web pages with Netpeak Spider in conjunction with MarsProxies, NetNut and IPRoyal proxies


Agree, it's hard to overestimate the importance of information: “Master of information, master of situation.” Nowadays we have everything needed to become a “master of situation”: tools like spiders and parsers that can scrape various data from websites. Today we will consider scraping Amazon with a web spider equipped with proxy services.


Merge files in Windows cmd & PowerShell

Windows cmd

copy /y /b *.json output.json

Windows PowerShell

Two options are available:

Get-Content *.json | Set-Content result.json
Copy-Item *.json output.json

How to wrap joined lines into valid JSON

After merging the JSON files, we append a comma to the end of each line and wrap the whole content in brackets [ ], using e.g. Notepad++ for that. Thus we get valid JSON:

{ "Full Year Tax":"145,26$"… },
{ "Full Year Tax":"139,00$"… },
{ "Full Year Tax":"100,00$"… }
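The manual merge-and-wrap steps above can also be scripted. A Python sketch that reads every file matching a glob pattern, treats each non-empty line as one JSON object, and writes a single valid JSON array (the `*.json` pattern and file names are examples):

```python
# Merge files of line-delimited JSON objects into one valid JSON array --
# a scripted alternative to cmd/PowerShell merging plus Notepad++ editing.
import glob
import json

def merge_json_lines(pattern: str, output: str) -> int:
    """Collect one JSON object per line from all matching files;
    write them as a JSON array to `output`. Returns the record count."""
    records = []
    for path in sorted(glob.glob(pattern)):
        if path == output:  # never re-read our own output file
            continue
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip().rstrip(",")  # tolerate trailing commas
                if line:
                    records.append(json.loads(line))
    with open(output, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=1)
    return len(records)
```

Unlike the Notepad++ route, this also validates every line on the way in: a malformed record raises `json.JSONDecodeError` with the offending content instead of silently producing broken output.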