Categories
Development

Crawling web pages with Netpeak Spider in conjunction with MarsProxies, NetNut and IPRoyal proxies


It is hard to overestimate the importance of information: "master of information, master of the situation." Nowadays, we have everything we need to become that master, including tools such as spiders and parsers that can scrape all kinds of data from websites. In this post we look at scraping Amazon with a web spider equipped with proxy services.

Categories
Development

Simple Apify Puppeteer crawler

The crawler gathers names, addresses, and emails from the given web URLs.

const Apify = require('apify');
const total_data = [];
// Regular expressions used to pull names, postal addresses and email addresses out of the page HTML
const regex_name = /[A-Z][a-z]+\s[A-Z][a-z]+(?=\.|,|\s|\!|\?)/gm;
const regex_address = /stand:(<\/strong>)?\s+(\w+\s+\w+),?\s+(\w+\s+\w+)?/gm;
const regex_email = /(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))/i;


Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue('123');
    await requestQueue.addRequest(new Apify.Request({ url: 'https://www.freeletics.com/de/pages/imprint/' }));
    await requestQueue.addRequest(new Apify.Request({ url: 'https://di1ara.com/pages/impressum' }));
	console.log('\nStart PuppeteerCrawler\n');
    const crawler = new Apify.PuppeteerCrawler({
        requestQueue,
        handlePageFunction: async ({ request, page }) => {
            const title = await page.title();
            console.log(`Title of ${request.url}: ${title}`);
			const page_content = await page.content();
            console.log(`Page content size:`, page_content.length);
            let m;
            let obj = { url: request.url, names: '' };
	 
			console.log('Names:');
			while ((m = regex_name.exec(page_content)) !== null) {
				// This is necessary to avoid infinite loops with zero-width matches
				if (m.index === regex_name.lastIndex) {
					regex_name.lastIndex++;
				}
				
				// The result can be accessed through the `m`-variable.
				m.forEach((match, groupIndex) => {
					console.log(`Found match, group ${groupIndex}: ${match}`);
					if (match !== undefined) {
						obj['names'] += match + ', ';
					}
				}); 
				
				
			}	
			console.log('\nAddress:');
			while ((m = regex_address.exec(page_content)) !== null) {
				// This is necessary to avoid infinite loops with zero-width matches
				if (m.index === regex_address.lastIndex) {
					regex_address.lastIndex++;
				}
				
				// The result can be accessed through the `m`-variable.
				m.forEach((match, groupIndex) => {
					console.log(`Found match, group ${groupIndex}: ${match}`);
				});
				m[0] = m[0].includes('</strong>') ? m[0].split('</strong>')[1] : m[0];
				m[0] = m[0].replace('<', '');
				obj['address']= m[0] ?? '';
			}
			console.log('\nEmail:');
			while ((m = regex_email.exec(page_content)) !== null) {
				// This is necessary to avoid infinite loops with zero-width matches
				if (m.index === regex_email.lastIndex) {
					regex_email.lastIndex++;
				}
				
				// The result can be accessed through the `m`-variable.
				m.forEach((match, groupIndex) => {
					console.log(`Found match, group ${groupIndex}: ${match}`);
				}); 
				if (m[0]) 
				{
					obj['email'] = m[0];
					break;
				}
			}
			total_data.push(obj);
			console.log(obj);
        },
        maxRequestsPerCrawl: 2000000,
        maxConcurrency: 20,
    });

    await crawler.run();
	console.log('Total data:');
	console.log(total_data);
});
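
To try the crawler locally, save it as a plain Node.js script and run it with the Apify SDK and Puppeteer installed. The commands and file name below are only an example; the code targets the older callback-style Apify API (Apify.main, handlePageFunction), so an SDK version that still provides that interface is assumed.

npm install apify puppeteer   # assumes an Apify SDK version that still exposes Apify.main / handlePageFunction
node apify_contact_crawler.js # example file name
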
Categories
Review

What is a Web Crawler and how does it work?

Nowadays, when we have a question, it comes almost naturally to type it into a search bar and get helpful answers. But we rarely wonder how all that information became available and how it appears as soon as we start typing. Search engines provide easy access to information, but web crawling and scraping tools, far less well-known players, play a crucial role in gathering online content. Over the years, these tools have become a true game-changer in many businesses, including e-commerce. So, if you are still unfamiliar with them, keep reading to learn more.
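
At its core, a crawler repeats a short loop: take a URL from a queue, download the page, extract the links, and enqueue the ones it has not visited yet. The minimal Node.js sketch below illustrates that loop; it assumes Node 18+ for the built-in fetch, and the start URL and page limit are example values only.

// Minimal breadth-first crawler sketch: fetch pages, extract links, enqueue unseen ones.
const START_URL = 'https://example.com/'; // example value
const MAX_PAGES = 20;                     // safety limit so the loop terminates

async function crawl(startUrl, maxPages) {
    const queue = [startUrl];
    const visited = new Set();

    while (queue.length > 0 && visited.size < maxPages) {
        const url = queue.shift();
        if (visited.has(url)) continue;
        visited.add(url);

        let html;
        try {
            const response = await fetch(url);
            html = await response.text();
        } catch (err) {
            console.log(`Failed to fetch ${url}: ${err.message}`);
            continue;
        }
        console.log(`Crawled ${url} (${html.length} bytes)`);

        // Very naive link extraction; a real crawler would use an HTML parser and respect robots.txt.
        const linkRegex = /href="(https?:\/\/[^"]+)"/g;
        let match;
        while ((match = linkRegex.exec(html)) !== null) {
            if (!visited.has(match[1])) queue.push(match[1]);
        }
    }
    return visited;
}

crawl(START_URL, MAX_PAGES).then((pages) => console.log(`Visited ${pages.size} pages`));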

Categories
Review

Price2Spy Web Crawling Service for Competitor Price Scraping

The number of companies that use web crawlers is growing rapidly due to the current competitive market conditions. As a result, the number of companies that offer this service is also growing day by day. Since the purpose of a web crawler varies from case to case, here is a more detailed explanation of how Price2Spy works.
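
Stripped of the service layer, competitor price monitoring boils down to fetching a product page on a schedule and extracting the price from it. The Puppeteer sketch below shows only that generic step, not Price2Spy's actual implementation; the URL and CSS selector are placeholders that differ for every shop.

// Generic price-extraction sketch: open a product page and read the price element.
// The URL and selector are placeholders, not taken from Price2Spy.
const puppeteer = require('puppeteer');

const PRODUCT_URL = 'https://example.com/product/123';
const PRICE_SELECTOR = '.product-price';

(async () => {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto(PRODUCT_URL, { waitUntil: 'networkidle2' });

    // Read the text of the price element, if it exists on the page.
    const price = await page.$eval(PRICE_SELECTOR, (el) => el.textContent.trim())
        .catch(() => null);

    console.log(price ? `Current price: ${price}` : 'Price element not found');
    await browser.close();
})();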

Categories
Uncategorized

What is Crawlera?

I came across this tool a few weeks ago and wanted to share it with you. So far I have not tested it myself, but the concept is simple: safely download web pages without the fear of overloading websites or getting banned. You write a crawler script using Scrapinghub, and they will route it through their IP proxies and take care of the technical problems of crawling.

Crawlera is now the Smart Proxy Manager.
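
The underlying idea is that every request goes through the provider's rotating proxy endpoint instead of hitting the target site directly. The Puppeteer sketch below illustrates that pattern; the proxy host, port, and credentials are placeholders, not the real Crawlera / Smart Proxy Manager endpoint.

// Sketch of routing a headless browser through a rotating proxy service.
// Host, port, and credentials are placeholders, not real Crawlera settings.
const puppeteer = require('puppeteer');

const PROXY_HOST = 'proxy.example.com';
const PROXY_PORT = 8010;
const PROXY_USER = 'YOUR_API_KEY';
const PROXY_PASS = '';

(async () => {
    const browser = await puppeteer.launch({
        args: [`--proxy-server=http://${PROXY_HOST}:${PROXY_PORT}`],
    });
    const page = await browser.newPage();

    // Most paid proxy services authenticate with a username/password or an API key.
    await page.authenticate({ username: PROXY_USER, password: PROXY_PASS });

    await page.goto('https://example.com/', { waitUntil: 'domcontentloaded' });
    console.log(await page.title());
    await browser.close();
})();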

Categories
Development

A Simple Email Crawler in Python

I often receive requests asking about email crawling. It is evident that this topic is quite interesting to those who want to scrape contact information from the web (such as direct marketers), and we have previously mentioned GSA Email Spider as an off-the-shelf solution for email crawling. In this article I want to demonstrate how easy it is to build a simple email crawler in Python. The crawler is simple, but you can learn many things from this example (especially if you're new to scraping in Python).

Categories
Guest posting Miscellaneous

Data extraction: web crawling vs. web scraping in E-commerce

Nowadays, when we have a question, it comes almost naturally to type it into a search bar and get helpful answers. But we rarely wonder how all that information became available and how it appears as soon as we start typing. Search engines provide easy access to information, but web crawling and scraping tools, which are far less well-known players, play a crucial role in gathering online content.

Categories
Uncategorized

Crawler vs Scraper vs Parser

In this post we explain the differences between a crawler, a scraper, and a parser.

Categories
Development

Simple Java email crawler

In this post we share the code of a simple Java email crawler. It crawls the emails of a given website with unlimited crawling depth. A previous post showed a simple Python email crawler.

Categories
Development

Node.js, Puppeteer, Apify for Web Scraping (Xing scrape) – part 2

In this post we share the practical implementation (code) of the Xing companies scraping project using Node.js, Puppeteer, and the Apify library. The first post, describing the project objectives, algorithm, and results, is available here.

You can look at the scraping algorithm here.