
Categories
Challenge Development

Bypass Akamai protection

We share how we bypassed an Akamai-protected site.

Anti-bot check in a Discord channel.
Categories
Development

BrowserForge [Python] library to generate scraper headers & fingerprints

Recently I found a Python library that generates fake headers and consistent fingerprints for custom scrapers. Such generated headers and fingerprints help to bypass anti-bot solutions.

Intelligent browser header & fingerprint generator
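
Below is a minimal sketch of how such generated headers can plug into a scraper. It assumes BrowserForge’s HeaderGenerator class with browser/os keyword arguments (check the library docs for the exact signature); the requests call is purely illustrative.

from browserforge.headers import HeaderGenerator
import requests

# Generate a consistent set of fake browser headers
# (the 'browser' and 'os' keyword names are assumptions; verify in the docs)
headers = HeaderGenerator().generate(browser='chrome', os='windows')

# Illustrative use in a plain requests-based scraper
response = requests.get('https://example.com', headers=headers)
print(response.status_code)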

Categories
Development SaaS

My experience with Zyte AI spiders, part 1

Recently I was given a bunch of sites to scrape, most of them simple e-commerce. I decided to try Zyte AI powered spiders. To use them, I had to apply for a Zyte API subscription and access to Scrapy Cloud. Zyte AI proved to be a good choice for fast data extraction & delivery through spiders built on Scrapy. Below you can see my experience and the results, along with a sketch of the settings involved.
I am going to write another “experience” post on the Zyte platform usage.
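
For context, here is a sketch of the kind of settings.py that routes a Scrapy spider’s requests through Zyte API via the scrapy-zyte-api plugin. The setting names below are my recollection of that plugin’s configuration; treat them as assumptions and verify against the current docs.

# settings.py (sketch)
ZYTE_API_KEY = "YOUR_API_KEY"            # from the Zyte API subscription
ZYTE_API_TRANSPARENT_MODE = True         # send every request through Zyte API

DOWNLOAD_HANDLERS = {
    "http": "scrapy_zyte_api.ScrapyZyteAPIDownloadHandler",
    "https": "scrapy_zyte_api.ScrapyZyteAPIDownloadHandler",
}
REQUEST_FINGERPRINTER_CLASS = "scrapy_zyte_api.ScrapyZyteAPIRequestFingerprinter"
TWISTED_REACTOR = "twisted.internet.asyncioreactor.AsyncioSelectorReactor"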

Categories
Challenge Development

Experience with CloudFlare bypass

Presently (March 2024), anti-bot solutions are actively applied for web data protection. Some of them, with their characteristics & bypass methods, may be seen here. If you are interested, take a look at the bot-protected websites table. In this post we’ll share our real-case experience with fighting CloudFlare protection.

Categories
Development

BrowserContext + Persistent Context in Playwright

In Microsoft Playwright, a BrowserContext is an abstraction that represents an independent session of browser activity, similar to an incognito session in a traditional web browser. Each BrowserContext can have its own set of cookies, local storage data, and session storage data, which means that activities performed in one context do not affect or interfere with those in another, providing a clean slate for each test or automation task.

Advantage

The primary advantage of using BrowserContexts is their ability to simulate multiple users interacting with a web application simultaneously, without the need for multiple browsers to be opened and managed. Additionally, BrowserContexts allow for custom configurations, such as viewport size, geolocation, language, and permissions, enabling us to configure our scrapers differently from one context to another.
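
A minimal sketch of both ideas in Playwright for Python follows: two isolated BrowserContexts inside one browser, and a persistent context that keeps its profile on disk between runs. The URL and profile directory are placeholders.

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()

    # Two isolated sessions in one browser: separate cookies, storage, settings
    ctx_us = browser.new_context(locale='en-US',
                                 viewport={'width': 1366, 'height': 768})
    ctx_de = browser.new_context(locale='de-DE',
                                 geolocation={'latitude': 52.52, 'longitude': 13.40},
                                 permissions=['geolocation'])

    ctx_us.new_page().goto('https://example.com')   # placeholder URL
    ctx_de.new_page().goto('https://example.com')
    browser.close()

    # A persistent context keeps cookies and local storage on disk between runs
    ctx = p.chromium.launch_persistent_context('./profile-dir')
    ctx.new_page().goto('https://example.com')
    ctx.close()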

Categories
Development

Crawling web pages with Netpeak Spider in conjunction with MarsProxies, NetNut and IPRoyal proxies


Agreed, it’s hard to overestimate the importance of information: “master of information, master of the situation”. Nowadays we have everything needed to become a “master of the situation”: tools like spiders and parsers that can scrape various data from websites. Today we will look at scraping Amazon with a web spider equipped with proxy services.

Categories
Development

Merge files in Windows cmd & PowerShell

Windows cmd

copy /y /b *.json output.json

Windows PowerShell

Two options are available:

Get-Content *.json | Set-Content result.json
cmd /c 'copy /y /b *.json output.json'

How to wrap joined lines into valid JSON

After merging the JSON files, we append a comma to the end of each line and wrap the whole content in [ ], using e.g. Notepad++ for that. Thus we get valid JSON:

[
{ "Full Year Tax":"145,26$"… },
{ "Full Year Tax":"139,00$"… },
{ "Full Year Tax":"100,00$"… }
]
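
As an alternative to the manual Notepad++ step, here is a small Python sketch that performs the same wrapping programmatically. It assumes each input file holds one JSON object per line; the file names are placeholders matching the ones above.

import glob
import json

records = []
for path in glob.glob('*.json'):
    if path == 'output.json':                # skip the merge target itself
        continue
    with open(path, encoding='utf-8') as f:
        for line in f:
            line = line.strip().rstrip(',')  # tolerate trailing commas
            if line:
                records.append(json.loads(line))

# Write a single valid JSON array: [ {...}, {...}, ... ]
with open('output.json', 'w', encoding='utf-8') as f:
    json.dump(records, f, ensure_ascii=False, indent=2)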

Categories
Development

Amazon scrape tip

Recently we received a requirement to scrape Amazon data in large quantities. So, first of all, I tested the data aggregator for anti-bot protection. For that I used the Scraping Enthusiasts Discord server, namely its Anti-bot channel.

Since Amazon is a huge data aggregator, we recommend readers get acquainted with the post Tips & Tricks for Scraping Business Directories.

Categories
Development

How to find out which engine the website is running on

Today, not only programmers but also many other specialists take part in running websites: content managers, copywriters, designers, marketers, SEO specialists. They do not need to know programming languages or understand code, because there are systems that let them manage a project through a convenient interface. These systems are called engines, or Content Management Systems (CMS). In this article we explain what a CMS is, why you sometimes need to know the engine behind someone else’s site, and share ways to find it out: by checking manually or through online services.
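
To illustrate the manual check, here is a minimal Python sketch that fetches a page and reads the <meta name="generator"> tag that many engines (WordPress, Joomla, Drupal) emit by default. The regex only covers the common attribute order, and the URL is a placeholder.

import re
import urllib.request

def detect_generator(url):
    # Fetch the page with a browser-like User-Agent
    req = urllib.request.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    html = urllib.request.urlopen(req, timeout=10).read().decode('utf-8', errors='ignore')
    # Look for <meta name="generator" content="WordPress 6.x">-style tags
    m = re.search(r'<meta\s+name=["\']generator["\']\s+content=["\']([^"\']+)', html, re.I)
    return m.group(1) if m else 'unknown (no generator meta tag)'

print(detect_generator('https://example.com'))   # placeholder URL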