Challenge SaaS

My experience with Zyte AI spiders, part 2

I’ve described my initial experience with Zyte AI spiders leveraging the Zyte API and Scrapy Cloud units; you can find it here. Now I’d like to share a more sobering report of what happened with the data aggregator scrape.

Development SaaS

My experience with Zyte AI spiders, part 1

Recently I was given a bunch of sites to scrape, most of them simple e-commerce ones. I decided to try Zyte AI-powered spiders. To use them, I had to apply for a Zyte API subscription and access to Scrapy Cloud. Zyte AI proved to be a good choice for fast data extraction & delivery through its auto-generated Scrapy spiders. Below you can see my experience and the results.
I have written another «experience» post on using the Zyte platform.


Merge files in Windows cmd & PowerShell

Windows cmd

copy /y /b *.json output.json

(from PowerShell you can invoke the same command as cmd /c 'copy /y /b *.json output.json')

Windows PowerShell

Two options are here:

Get-Content *.json | Set-Content result.json
Copy-Item *.json output.json

Note that only the first option reliably concatenates file contents; Copy-Item with a wildcard copies the matched files one after another rather than merging them.

How to wrap joined lines into valid JSON

After merging the JSON files, we append a comma to the end of each line (except the last one) and wrap the whole content in [ ] brackets, using e.g. Notepad++ for that. Thus we get valid JSON:

[
{ "Full Year Tax":"145,26$"… },
{ "Full Year Tax":"139,00$"… },
{ "Full Year Tax":"100,00$"… }
]
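If you would rather script this wrapping step than edit the merged file by hand, a small Python sketch (the sample records below only mimic the ones above) does the same:

```python
import json

# Simulated content of the merged file: one JSON object per line (JSON Lines).
merged = '''{ "Full Year Tax": "145,26$" }
{ "Full Year Tax": "139,00$" }
{ "Full Year Tax": "100,00$" }'''

# Parse each non-empty line, then dump the whole list as one valid JSON array.
records = [json.loads(line) for line in merged.splitlines() if line.strip()]
as_array = json.dumps(records, ensure_ascii=False, indent=2)
print(as_array)
```

This avoids the manual comma-and-bracket editing entirely, and json.loads will also flag any line that got corrupted during the merge.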

Guest posting Review

MangoProxy Review

MangoProxy is a Premium Proxy provider that offers various features to keep your online activity safe and secure. They don’t require you to register with your personal data, so you can use their service without worrying about any privacy issues. The company has a number of proxy packages to choose from, and each comes with unlimited bandwidth and a no-logs policy.


Amazon scrape tip

Recently we had to scrape Amazon data in large quantities. So, first of all, I tested the data aggregator for its anti-bot protection. For that I used the Scraping Enthusiasts Discord server, namely its Anti-bot channel.

Since Amazon is a huge data aggregator, we recommend that readers get acquainted with the post Tips & Tricks for Scraping Business Directories.


How to find out which engine the website is running on

Today, not only programmers but also many other specialists take part in the work on websites: content managers, copywriters, designers, marketers, and SEO specialists. They do not need to know programming languages or understand code, thanks to systems that allow managing a project through a convenient interface. These systems are called engines, or Content Management Systems (CMS). In this article we explain what a CMS is, why you sometimes need to know the engine behind someone else’s site, and share ways to find it out: checking manually or through online services.
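As a taste of the manual check: many CMSs announce themselves in a meta «generator» tag in the page source. A minimal Python sketch (the sample HTML is a made-up assumption; real sites may omit or spoof this tag):

```python
import re

# Sample page source for illustration only; fetch the real page yourself.
html = '<html><head><meta name="generator" content="WordPress 6.4"></head></html>'

# Look for <meta name="generator" content="..."> and capture the content value.
match = re.search(
    r'<meta[^>]*name=["\']generator["\'][^>]*content=["\']([^"\']+)["\']',
    html,
    re.IGNORECASE,
)
engine = match.group(1) if match else "unknown"
print(engine)  # WordPress 6.4
```

Note the caveat: this regex assumes the name attribute comes before content, and the tag is absent on many hardened sites, which is exactly why the online services mentioned above combine several such clues.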


Node.js to automate a browser XHR (Ajax)

Lately I needed to scrape some data that are dynamically loaded by a «Load more» button. The website’s JavaScript invokes an XHR (Ajax request) to fetch the next portion of data. So, the need was to re-run those XHRs with some POST parameters as variables.

So, how do we do that in Node.js?
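Whatever the language, the recipe is the same: copy the XHR’s URL, method, headers and POST body from the browser’s DevTools «Network» tab, then re-issue the request with your own parameter values. A minimal sketch of assembling such a request (the endpoint and parameters below are made-up assumptions), shown here with Python’s standard library:

```python
from urllib import parse, request

# Hypothetical XHR endpoint and POST parameters; copy the real ones
# from the DevTools «Network» tab of the target site.
url = "https://example.com/ajax/load-more"
params = {"page": 2, "category": "books"}

# Encode the POST body exactly as the browser does for a form-style XHR.
body = parse.urlencode(params).encode("utf-8")

req = request.Request(
    url,
    data=body,
    headers={"X-Requested-With": "XMLHttpRequest"},  # marks the request as Ajax
)
print(req.get_method())  # POST (set automatically because data is present)
print(body.decode())     # page=2&category=books
```

To actually fire the request you would pass req to request.urlopen() in a loop, incrementing the page parameter until the endpoint returns no more data.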


Choosing the Best Proxies for Web Scraping

From eCommerce and market research to competitive analysis and more, web scraping has become an integral part of data collection. And for some, it’s the secret sauce for success.

But with great scraping power comes great responsibility. 

Web scraping can result in IP bans and other harsh restrictions. To avoid these issues, many turn to proxies, which act as intermediaries between your requests and the target website. In this article, we’ll explore the top 3 proxy types for web scraping and focus on the key benefits of each proxy. Let’s go!

Guest posting

Bright Data’s Business Capabilities

Bright Data offers its customers a full suite of real-time data collection tools that help them gain and maintain a competitive market edge. Bright Data prides itself on its ethical and 100% legally compliant approach.

Challenge Development

Infinite scroll for getting group members in Linkedin

Recently I was challenged with getting LinkedIn group members’ info. The challenge made me seek ways to automate the gathering.

Here is a video where I try to capture all LinkedIn group members through endless scroll**, me having to be a member of the group.
The real speed is ~1 person/second, though.

Sometimes you need to click the «Scroll more results» button, or even just hover the mouse over it.

Automate scroll

**The post on how to start infinite scroll in the browser… But the JS code from that post does not keep replenishing the page: the script continues loading data only while the browser tab/page has focus.