Recently we performed a Yelp business directory scrape to acquire high-quality B2B leads (company + CEO info). This forced us to apply many techniques: proxying, scraping the external company sites, email verification, and more.
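One of the techniques mentioned, email verification, can be partly done before any SMTP dialogue: check that the address's domain publishes mail servers at all. Below is a minimal PHP sketch using the built-in getmxrr(); the verify_email_domain() helper and the sample address are hypothetical, and a production pipeline would follow up with an SMTP-level check.

<?php
// Minimal sketch: does the email's domain publish MX records at all?
// verify_email_domain() is a hypothetical helper name; a real verifier
// would also attempt an SMTP handshake with the listed mail servers.
function verify_email_domain($email) {
    if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
        return false; // syntactically invalid address
    }
    $domain = substr(strrchr($email, '@'), 1);
    $mx_hosts = [];
    // getmxrr() fills $mx_hosts with the domain's mail exchangers, if any.
    return getmxrr($domain, $mx_hosts) && count($mx_hosts) > 0;
}
var_dump(verify_email_domain('ceo@example.com'));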
We’ve got some code provided by Akash D. that works on ticketmaster.co.uk. It automates a browser (Chrome as well as Edge) using Selenium with Python, leveraging rotating authenticated proxies to stay undetected. Yet the site is protected by the Distil network.

Recently we encountered a website that behaved as usual in a regular browser, yet put up blocking measures as soon as we composed and ran a scraping script/agent.
In this post we’ll take a look at how the scraping process went and the measures we took to overcome the blocking.
In this post I want to share how one may scrape business directory data (e.g. real estate listings) using the Scrapy framework.
PHP cURL POSTing JSON example
<?php
$base_url  = "https://openapi.octoparse.com";
$token_url = $base_url . '/token';
$post = [
    'username'   => 'igorsavinkin',
    'password'   => '<xxxxxx>',
    'grant_type' => 'password'
];
$payload = json_encode($post);
$headers = [
    'Content-Type: application/json',
    'Content-Length: ' . strlen($payload)
];
$timeout = 30;

$ch_upload = curl_init();
curl_setopt($ch_upload, CURLOPT_URL, $token_url);
if ($headers) {
    curl_setopt($ch_upload, CURLOPT_HTTPHEADER, $headers);
}
curl_setopt($ch_upload, CURLOPT_POST, true);
// The JSON body is sent as-is; http_build_query() would form-encode it instead.
curl_setopt($ch_upload, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch_upload, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch_upload, CURLOPT_CONNECTTIMEOUT, $timeout);

$response = curl_exec($ch_upload);
if (curl_errno($ch_upload)) {
    echo 'Curl Error: ' . curl_error($ch_upload);
}
curl_close($ch_upload);

echo $response;
// Save the token response for later use.
$fp = fopen('octoparse-api-token.json', 'w');
fwrite($fp, $response);
fclose($fp);
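Once the token file is saved, the access token can be pulled out and attached to subsequent API calls. Below is a minimal sketch; the response layout (access_token, possibly nested under a data key) and the endpoint path are assumptions to verify against the Octoparse API docs.

<?php
// Minimal sketch: reuse the saved token for an authorized request.
// The JSON layout below is an assumption; inspect octoparse-api-token.json
// and adjust the array path accordingly.
$json  = json_decode(file_get_contents('octoparse-api-token.json'), true);
$token = $json['data']['access_token'] ?? $json['access_token'] ?? '';

$ch = curl_init('https://openapi.octoparse.com/...'); // hypothetical endpoint
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Authorization: Bearer ' . $token]);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
echo curl_exec($ch);
curl_close($ch);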
Service | Residential | Cost/month | Traffic/month | $ per GB | Rotating | IP whitelisting | Performance and more | Notes
---|---|---|---|---|---|---|---|---
MarsProxies | yes | N/A | N/A | 3.5 | yes | yes | 500K+ IPs, 190+ locations (test results) | SOCKS5 supported; proxy grey-zone restrictions
Oxylabs.io | yes | N/A | 25 GB | 9 - 12; "pay-as-you-go" - 15 | yes | yes | 100M+ IPs, 192 countries; 30K requests, 1.3 GB of data, 5K pages crawled | Does not allow scraping some grey-zone targets, incl. Linkedin
Smartproxy | yes | see the price page | N/A | 5.2 - 7; "pay-as-you-go" - 8.5 | yes | yes | 65M+ IPs, 195+ countries | Free trial; does not allow scraping some grey-zone targets, incl. Linkedin
Infatica.io | yes | N/A | N/A | 3 - 6.5; "pay-as-you-go" - 8 | yes | yes | Over 95% success; bans from Cloudflare are also few, less than 5% | Blacklist of sites the proxies do not work with
Mango Proxy | yes | N/A | 1-50 GB | 3 - 8 | yes | yes | 90M+ IPs, 240+ countries | 
IPRoyal | yes | N/A | N/A | 4.55 | yes | yes | 32M+ IPs, 195 countries | Does not allow scraping some grey-zone targets, incl. Facebook; list of blocked sites
Rainproxy.io | yes | $4 | from 1 GB | 4 | yes | | | 
BrightData | yes | | | 15 | | | | 
ScrapeOps Proxy Aggregator | yes | API credits per month | N/A | N/A | yes | | Allows multithreading; the service provides browsers on its own servers, so N [cloud] browsers can be run from a local machine; the number of threads depends on the subscription (min 5 threads) | An all-in-one proxy API that allows using 20+ proxy providers from a single API
Lunaproxy.com | yes | from $15 | x GB per 90 days | 0.85 - 5 | | | | Each plan allows a certain traffic amount within a 90-day limit
LiveProxies.io | yes | from $45 | 4-50 GB | 5 - 12 | yes | yes | E.g. 200 IPs with 4 GB for $70.00, 30-day limit | 
Charity Engine (docs) | yes | - | - | from 3.6; additionally, CPU computing from $0.01 per avg CPU core-hour, from $0.10 per GPU-hour (source) | | | | Failed to connect so far
proxy-sale.com | yes | from $17 | N/A | 3 - 6; "pay-as-you-go" - 7 | yes | yes | 10M+ IPs, 210+ countries | 30-day limit for a single proxy batch
Tabproxy.com | yes | from $15 | N/A | 0.8 - 3 (lowest price is for a chunk of 1000 GB) | yes | yes | 200M+ IPs, 195 countries | 30-180-day limit for a single proxy batch (e.g. 5 GB)
proxy-seller.com | yes | N/A | N/A | 4.5 - 6; "pay-as-you-go" - 7 | yes | yes | 15M+ IPs, 220 countries | Generation of up to 1000 proxy ports in each proxy list; HTTP/SOCKS5 support; an infinite number of proxies can be generated by assigning unique parameters to each list
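Whichever provider is chosen, the rotating authenticated services above are used in essentially the same way: requests are sent through the provider's gateway with username/password authentication, and the gateway assigns a fresh exit IP per request or per session. A minimal PHP cURL sketch follows; the gateway host, port, and credentials are hypothetical placeholders for whatever your provider issues.

<?php
// Minimal sketch: one request through a rotating authenticated proxy.
// Gateway address and credentials are hypothetical placeholders.
$ch = curl_init('https://httpbin.org/ip'); // echoes the IP the target sees
curl_setopt($ch, CURLOPT_PROXY, 'gate.example-provider.com:7777');
curl_setopt($ch, CURLOPT_PROXYUSERPWD, 'customer-user:secret-pass');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
echo curl_exec($ch); // run it twice: a rotating pool should show different IPs
curl_close($ch);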
Simple Apify Puppeteer crawler
const Apify = require('apify');

let total_data = [];
// Two capitalized words followed by punctuation or whitespace — a crude person-name pattern.
const regex_name = /[A-Z][a-z]+\s[A-Z][a-z]+(?=\.|,|\s|\!|\?)/gm;
const regex_address = /stand:(<\/strong>)?\s+(\w+\s+\w+),?\s+(\w+\s+\w+)?/gm;
const regex_email = /(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))/i;

Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue('123');
    await requestQueue.addRequest(new Apify.Request({ url: 'https://www.freeletics.com/de/pages/imprint/' }));
    await requestQueue.addRequest(new Apify.Request({ url: 'https://di1ara.com/pages/impressum' }));
    console.log('\nStart PuppeteerCrawler\n');

    const crawler = new Apify.PuppeteerCrawler({
        requestQueue,
        handlePageFunction: async ({ request, page }) => {
            const title = await page.title();
            console.log(`Title of ${request.url}: ${title}`);
            const page_content = await page.content();
            console.log('Page content size:', page_content.length);
            let obj = { url: request.url };
            let m;

            console.log('Names:');
            while ((m = regex_name.exec(page_content)) !== null) {
                // Avoid infinite loops with zero-width matches.
                if (m.index === regex_name.lastIndex) {
                    regex_name.lastIndex++;
                }
                m.forEach((match, groupIndex) => {
                    console.log(`Found match, group ${groupIndex}: ${match}`);
                    if (match !== undefined) {
                        obj['names'] = (obj['names'] || '') + match + ', ';
                    }
                });
            }

            console.log('\nAddress:');
            while ((m = regex_address.exec(page_content)) !== null) {
                // Avoid infinite loops with zero-width matches.
                if (m.index === regex_address.lastIndex) {
                    regex_address.lastIndex++;
                }
                m.forEach((match, groupIndex) => {
                    console.log(`Found match, group ${groupIndex}: ${match}`);
                });
                // Strip the markup remnants around the matched address.
                m[0] = m[0].includes('</strong>') ? m[0].split('</strong>')[1] : m[0];
                m[0] = m[0].replace('<', '');
                obj['address'] = m[0] ?? '';
            }

            console.log('\nEmail:');
            while ((m = regex_email.exec(page_content)) !== null) {
                m.forEach((match, groupIndex) => {
                    console.log(`Found match, group ${groupIndex}: ${match}`);
                });
                // The email regex has no `g` flag, so take the first match and stop.
                if (m[0]) {
                    obj['email'] = m[0];
                    break;
                }
            }

            total_data.push(obj);
            console.log(obj);
        },
        maxRequestsPerCrawl: 2000000,
        maxConcurrency: 20,
    });

    await crawler.run();
    console.log('Total data:');
    console.log(total_data);
});
Hoppscotch – API ecosystem
Add
Adding a constraint named NetAppToken composed of 3 columns: network, application, and token. Note that the statement will fail with a duplicate-entry error if existing rows already violate uniqueness, so deduplicate first.
Note: you are supposed to have chosen the right database beforehand: use <database_name>;
ALTER TABLE crypto ADD CONSTRAINT NetAppToken UNIQUE (network, application, token);
View
SELECT table_schema, table_name, constraint_name FROM information_schema.table_constraints WHERE table_name = 'crypto';
Result
+--------------+------------+-----------------+
| table_schema | table_name | constraint_name |
+--------------+------------+-----------------+
| admin_crypto | crypto | PRIMARY |
| admin_crypto | crypto | NetAppToken |
+--------------+------------+-----------------+
2 rows in set (0.06 sec)
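This unique key is handy when storing scraped rows: duplicates can be rejected by the database instead of by application code. A minimal PDO sketch follows; the DSN, credentials, and sample values are placeholders, while the admin_crypto database and crypto table names come from the output above.

<?php
// Minimal sketch: with the NetAppToken unique key in place, INSERT IGNORE
// silently skips a row repeating an existing (network, application, token) triple.
// DSN, credentials, and the sample values below are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=admin_crypto;charset=utf8mb4', 'user', 'password');
$stmt = $pdo->prepare('INSERT IGNORE INTO crypto (network, application, token) VALUES (?, ?, ?)');
$stmt->execute(['Ethereum', 'Uniswap', 'USDT']); // inserted
$stmt->execute(['Ethereum', 'Uniswap', 'USDT']); // skipped as a duplicate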
Converting a cURL request into PHP cURL code
Recently I needed to transform a cURL request into PHP cURL code, with binary data and the compressed option involved. See the query itself:
curl 'https://terraswap-graph.terra.dev/graphql' -H 'Accept-Encoding: gzip, deflate, br' -H 'Content-Type: application/json' -H 'Accept: application/json' -H 'Connection: keep-alive' -H 'DNT: 1' -H 'Origin: https://terraswap-graph.terra.dev' --data-binary '{"query":"{\n pairs {\n pairAddress\n latestLiquidityUST\n token0 {\n tokenAddress\n symbol\n }\n token1 {\n tokenAddress\n symbol\n }\n commissionAPR\n volume24h {\n volumeUST\n }\n }\n}\n"}' --compressed
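Below is a sketch of an equivalent PHP translation. --data-binary maps to CURLOPT_POSTFIELDS with the raw string, and --compressed maps to CURLOPT_ENCODING set to an empty string, which makes libcurl advertise every encoding it supports (gzip, deflate, br where built in) and transparently decompress the response, so the explicit Accept-Encoding header becomes unnecessary.

<?php
// Sketch: the above cURL request translated to PHP cURL.
$payload = '{"query":"{\n pairs {\n pairAddress\n latestLiquidityUST\n token0 {\n tokenAddress\n symbol\n }\n token1 {\n tokenAddress\n symbol\n }\n commissionAPR\n volume24h {\n volumeUST\n }\n }\n}\n"}';

$ch = curl_init('https://terraswap-graph.terra.dev/graphql');
curl_setopt($ch, CURLOPT_HTTPHEADER, [
    'Content-Type: application/json',
    'Accept: application/json',
    'Connection: keep-alive',
    'DNT: 1',
    'Origin: https://terraswap-graph.terra.dev',
]);
curl_setopt($ch, CURLOPT_POST, true);
// --data-binary: send the payload exactly as given.
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
// --compressed: negotiate compression and auto-decompress the response.
curl_setopt($ch, CURLOPT_ENCODING, '');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

$response = curl_exec($ch);
if (curl_errno($ch)) {
    echo 'Curl error: ' . curl_error($ch);
}
curl_close($ch);
echo $response;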