Categories
Challenge Development

Airtable scrape challenge

I need to get data out of an Airtable table.

The problem is that the data are loaded dynamically: the HTML contains only the rows currently visible on the browser screen.


If there are a lot of records, it is difficult to collect the whole table. One possible way is to calculate the screen size and the row height, and then use browser automation to write a script that scrolls through the table bit by bit and collects the data, as sketched below.
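
Here is a minimal sketch of that scroll-and-collect approach using Selenium with Python. The shared-view URL, the row selector and the scroll step are assumptions; inspect the actual DOM of your table to find the real ones.

import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://airtable.com/shrXXXXXXXXXXXXXX")  # hypothetical shared-view URL
time.sleep(5)  # let the grid render

rows = {}  # key -> row text; deduplicates rows seen twice
for _ in range(200):  # scroll in fixed steps; tune to the table length
    for el in driver.find_elements(By.CSS_SELECTOR, "div.dataRow"):  # assumed selector
        rows[el.text] = el.text
    # If the grid scrolls internally, dispatch the scroll on its container instead.
    driver.execute_script("window.scrollBy(0, 600);")
    time.sleep(0.5)  # wait for the next batch of rows to render

driver.quit()
print(len(rows), "unique rows collected")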

Is there any other feasible way to get the table data? For example, is there an HTTP-request (API) way to fetch the dynamic data?

JS infinite scroll does not work for Airtable either.
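
As for an HTTP-request way: it only helps if you have API access to the base. In that case the official Airtable REST API returns the records as paginated JSON. A minimal sketch, where the base ID, table name and key are placeholders:

import requests

API_KEY = "keyXXXXXXXXXXXXXX"  # your Airtable API key / personal access token
URL = "https://api.airtable.com/v0/BASE_ID/TABLE_NAME"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

records, offset = [], None
while True:
    params = {"offset": offset} if offset else {}
    data = requests.get(URL, headers=HEADERS, params=params).json()
    records.extend(data["records"])
    offset = data.get("offset")  # Airtable returns an offset while pages remain
    if not offset:
        break

print(len(records), "records fetched")

For somebody else's public shared view, where no API key is available, browser automation as above remains the fallback.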

Please comment below if you have any tips or hints.

Categories
Development

Google Sheets or MS Excel to scrape business directories?

We’ve already shared some tips and tricks for scraping business directories and data-aggregator sites. Yet recently someone asked us to do the aggregators’ scraping in the context of Google Sheets and/or MS Excel.

Categories
Challenge Development

Yelp scraping for high quality B2B leads

Recently we scraped the Yelp business directory to acquire high-quality B2B leads (company + CEO info). This forced us to apply many techniques: proxying, scraping the companies’ external sites, email verification and more; a sketch of the verification step follows below.
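
As an illustration of the email-verification step, here is a minimal sketch (not our production code) that filters out addresses whose domain cannot receive mail at all, using an MX lookup via the dnspython package; the lead addresses are hypothetical.

import dns.exception
import dns.resolver

def has_mx(email: str) -> bool:
    """True if the address's domain publishes at least one MX record."""
    domain = email.rsplit("@", 1)[-1]
    try:
        dns.resolver.resolve(domain, "MX")
        return True
    except dns.exception.DNSException:  # NXDOMAIN, NoAnswer, timeout, ...
        return False

for lead in ["ceo@example.com", "owner@no-such-domain-xq7.tld"]:  # hypothetical leads
    print(lead, "->", "plausible" if has_mx(lead) else "undeliverable")

An existing MX record is only a weak "maybe"; full verification would also involve an SMTP-level check.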

Categories
Development Guest posting

Scrape ‘Ticketmaster’ using Selenium with Python

We’ve got some code provided by Akash D. that works on ticketmaster.co.uk. It automates the browser (Chrome as well as Edge) using Selenium with Python. Rotating authenticated proxies are leveraged to stay undetected, yet the site is protected by the Distil network.
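
Akash’s code itself is not reproduced here, but the rotating-authenticated-proxies idea looks roughly like the following sketch. It uses the selenium-wire package, since plain Chrome options do not accept user:pass proxy credentials; the proxy endpoints are placeholders.

import random
from seleniumwire import webdriver  # pip install selenium-wire

PROXIES = [
    "http://user:pass@proxy1.example.com:8000",  # placeholder endpoints
    "http://user:pass@proxy2.example.com:8000",
]

def make_driver() -> webdriver.Chrome:
    proxy = random.choice(PROXIES)  # rotate per browser session
    return webdriver.Chrome(seleniumwire_options={
        "proxy": {"http": proxy, "https": proxy}
    })

driver = make_driver()
driver.get("https://www.ticketmaster.co.uk/")
print(driver.title)
driver.quit()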

Categories
Challenge Development

Bypass GoDaddy Firewall thru VPN & browser automation

Recently we encountered a website that behaved as usual in the browser, yet put up blocking measures as soon as we composed and ran a scraping script/agent.

In this post we’ll take a look at how the scraping process went and the steps we took to overcome the blocking.

Categories
Development

Scrapy to get dynamic business directory data thru API

In this post I want to share how one may scrape dynamic business directory data (real estate, in this case) with the Scrapy framework by calling the site’s backend API; a minimal spider sketch follows below.
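
The core pattern: instead of parsing the rendered HTML, the spider calls the JSON endpoint that the directory’s frontend itself uses. The endpoint URL and field names below are hypothetical and should be taken from the browser’s Network tab.

import scrapy

class DirectoryApiSpider(scrapy.Spider):
    name = "directory_api"
    page_size = 50

    def start_requests(self):
        yield scrapy.Request(self.api_url(1), callback=self.parse)

    def api_url(self, page: int) -> str:
        # Hypothetical endpoint; find the real one in the Network tab
        return f"https://example.com/api/listings?page={page}&per_page={self.page_size}"

    def parse(self, response):
        data = response.json()  # Scrapy >= 2.2
        for item in data["results"]:
            yield {"name": item["name"], "address": item["address"]}
        if data.get("next_page"):  # follow the API's own pagination
            yield scrapy.Request(self.api_url(data["next_page"]), callback=self.parse)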

Categories
Development

PHP Curl POSTing JSON example

We share here an example of cURL POSTing JSON data to obtain an Octoparse API token.

<?php
$base_url  = "https://openapi.octoparse.com";
$token_url = $base_url . '/token';

// Credentials for the OAuth password grant
$post = [
    'username'   => 'igorsavinkin',
    'password'   => '<xxxxxx>',
    'grant_type' => 'password'
];

$payload = json_encode($post);

$headers = [
    'Content-Type: application/json',
    'Content-Length: ' . strlen($payload)
];

$timeout = 30;
$ch_upload = curl_init();
curl_setopt($ch_upload, CURLOPT_URL, $token_url);
curl_setopt($ch_upload, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch_upload, CURLOPT_POST, true);
curl_setopt($ch_upload, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch_upload, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch_upload, CURLOPT_CONNECTTIMEOUT, $timeout);
$response = curl_exec($ch_upload);

if (curl_errno($ch_upload)) {
    echo 'Curl Error: ' . curl_error($ch_upload); // was curl_error($ch) - an undefined handle
}

curl_close($ch_upload);

echo $response;

// Persist the token response for later API calls
$fp = fopen('octoparse-api-token.json', 'w');
fwrite($fp, $response);
fclose($fp);
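
If the call succeeds, the response body is JSON carrying the token (for a password grant, typically an access_token field together with its expiry), and the script stores it in octoparse-api-token.json for subsequent API requests.
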
Categories
Development Web Scraping Software

My experience of choosing web scraping platform for company critical data feed

Recently we engaged with an online e-commerce startup that needed gov. tenders/RFP scraping. Since the project size was immense, we had to switch from hand-made scripted extractors to an enterprise-grade scraping platform. Below I share my experience with the scraping platforms as a feature table.

| Feature | Octoparse | Dexi.io | Mozenda | Sequentum SaaS | Import.io |
| Able to set up robot/agent | 3 min | 3 failures in a row | | "For some insight, we are working with customers in managed service engagements for large scale, mission critical web integration requirements - so we no longer have a SaaS tool offering. We have a heavy focus in digital commerce and work with customers on use cases in ecomm/retail, travel/hospitality, and tickets/events." - customer service | |
| Support response | 12 hours. It does an excellent job. | 12 hours | 12 hours | 12 hours | |
| Base64 encoding | no | Using a JavaScript step: btoa() is a function that takes a string and encodes it to Base64. | | yes, one can encode the given value in the Transformation Script of any command | |
| Robot/agent development assistance | yes | | | | |
Categories
Development

Simple Apify Puppeteer crawler

The crawler gathers names, addresses and emails from the given web URLs.

const Apify = require('apify');
const total_data = []; // accumulates one result object per URL
const regex_name = /[A-Z][a-z]+\s[A-Z][a-z]+(?=\.|,|\s|\!|\?)/gm;
const regex_address = /stand:(<\/strong>)?\s+(\w+\s+\w+),?\s+(\w+\s+\w+)?/gm;
const regex_email = /(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))/i;
let m; // holds the current regex match


Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue('123');
    await requestQueue.addRequest(new Apify.Request({ url: 'https://www.freeletics.com/de/pages/imprint/' }));
    await requestQueue.addRequest(new Apify.Request({ url: 'https://di1ara.com/pages/impressum' }));
	console.log('\nStart PuppeteerCrawler\n');
    const crawler = new Apify.PuppeteerCrawler({
        requestQueue,
        handlePageFunction: async ({ request, page }) => {
            const title = await page.title();
            console.log(`Title of ${request.url}: ${title}`);
			const page_content = await page.content();
            console.log(`Page content size:`, page_content.length);
            let obj = { 'url': request.url, 'names': '' }; // 'names' starts empty so += below works

            console.log('Names:');
            while ((m = regex_name.exec(page_content)) !== null) {
                // This is necessary to avoid infinite loops with zero-width matches
                if (m.index === regex_name.lastIndex) {
                    regex_name.lastIndex++;
                }
                // The result can be accessed through the `m` variable.
                m.forEach((match, groupIndex) => {
                    console.log(`Found match, group ${groupIndex}: ${match}`);
                    if (match !== undefined) { // skip groups that did not participate
                        obj['names'] += match + ', ';
                    }
                });
            }
			console.log('\nAddress:');
			while ((m = regex_address.exec(page_content)) !== null) {
				// This is necessary to avoid infinite loops with zero-width matches
				if (m.index === regex_address.lastIndex) {
					regex_address.lastIndex++;
				}
				
				// The result can be accessed through the `m`-variable.
				m.forEach((match, groupIndex) => {
					console.log(`Found match, group ${groupIndex}: ${match}`);
				});
                // Strip the leading markup so only the address text remains
                m[0] = m[0].includes('</strong>') ? m[0].split('</strong>')[1] : m[0];
                m[0] = m[0].replace('<', '');
                obj['address'] = m[0] ?? '';
			}
            console.log('\nEmail:');
			while ((m = regex_email.exec(page_content)) !== null) {
				// This is necessary to avoid infinite loops with zero-width matches
				if (m.index === regex_email.lastIndex) {
					regex_email.lastIndex++;
				}
				
				// The result can be accessed through the `m`-variable.
				m.forEach((match, groupIndex) => {
					console.log(`Found match, group ${groupIndex}: ${match}`);
				}); 
				if (m[0]) 
				{
					obj['email'] = m[0];
					break;
				}
			}
			total_data.push(obj);
			console.log(obj);
        },
        maxRequestsPerCrawl: 2000000,
        maxConcurrency: 20,
    });

    await crawler.run();
	console.log('Total data:');
	console.log(total_data);
});
Categories
Development

Hoppscotch – API ecosystem

Hoppscotch is an open-source API development ecosystem.