Luminati residential proxy for extracting from a data aggregator

In this post I’d like to share my experience with the residential proxies of the Luminati proxy provider.

Residential proxies’ advantages

The traditional proxies’ disadvantage is that they are provided by data centers. Web services can easily recognize that such an IP originates from a data center and block the visit as coming from a web robot rather than a regular user. Even with a decent proxy, websites can cloak or modify their data when they detect a bot visit.

A residential IP is an IP provided to a home user by an Internet Service Provider (ISP). Users [of web apps] give their consent to share their residential IPs while a device is idle: connected to the internet, not in use, and with enough power. Luminati does not collect any user data; it is interested only in the IPs. To date, Luminati connects to over 30 million residential IPs, located across the world.

Luminati Proxy Manager

What does the proxy manager do? It is open-source software for managing multiple proxies seamlessly via API and admin UI.
To see all the ways to install it, use Tools->Proxy Manager on the left-side panel of the Luminati Dashboard.

The advantages of using the proxy manager (see the sketch after this list):

  • One entry point
  • Concurrent connections
  • Auto retry rules
  • Real time statistics
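
For example, once LPM is running, the scraper only ever talks to a single local port, while LPM takes care of routing, concurrent connections and retries behind it. A minimal sketch, assuming the residential zone is mapped to local port 24001 as in the setup described below ( is the JSON echo endpoint the test code below also uses; it returns the exit node’s IP and country):

import requests

# one entry point: every request goes to the local LPM port,
# which forwards it through the configured zone
proxies = {'http': ''}

r = requests.get('', proxies=proxies)
print(r.text)  # e.g. {"ip": "...", "country": "...", ...}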

Besides, the proxy manager can be used with one’s phone for scraping (link).


We decided to test the Luminati service, particularly its residential proxies (a residential trial, limited to 7 days only).


First of all we set up the Luminati Proxy Manager (LPM) and the residential proxies zone; read more about LPM (link). Zones are the service’s custom configurations of parameters for proxied requests (see the zones’ board inside the Luminati account).

[box style=”note”]Note that residential proxies require special approval, so it took me 3 working days till I could get that zone working.[/box] The personal Luminati manager (assistant) was helpful in getting acquainted with LPM, creating zones and making them work.

We’ve set up 4 zones inside LPM:

  • data-center proxies zone, port 24000 (port number is assigned automatically or set manually)
  • residential proxies zone, port 24001
  • city-asn proxies zone, port 24002
  • gip proxies zone, port 24003

In the test code we were using port 24001, corresponding to the residential zone of my LPM, running at the address Basically the process looks like this: the scraper sends each request to the local LPM port, and LPM forwards it through an exit node of the corresponding zone.

Below you can see the proxy ports utilizing different proxy zones.
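
As an illustration (a sketch, using the port numbers assigned above), the same request can be sent out through each zone simply by switching the local port:

import requests

# each LPM port is bound to one of the zones set up above
zone_ports = {'data-center': 24000, 'residential': 24001,
              'city-asn': 24002, 'gip': 24003}

for zone, port in sorted(zone_ports.items()):
    proxies = {'http': '{}'.format(port)}
    r = requests.get('', proxies=proxies)
    print(zone + ': ' + r.text)  # exit node IP/country per zone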


[box style=”note”]Note: there is also a mobile IPs option [proxies zone]. Mobile IPs are almost unblockable. Mobile proxy use cases include (1) website performance testing and (2) retail/travel: price fetching and app promotion checks (ad verification).[/box]

So, we started the test, gathering links through a simple GET request with the ‘hotel’ keyword and a 2-letter state abbreviation (NY, CA, etc.):

{0}&geo_location_terms={1}&page={2}
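
The three placeholders take the search keyword, the state code and the page index; for example:

url = "{0}&geo_location_terms={1}&page={2}"
# page 3 of the 'hotel' search results for New York state
print(url.format('hotel', 'NY', 3))
#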

We performed consecutive requests to the YP site and extracted all US hotels, rotating through the array of 50 US state abbreviations (moving to a new state once the scraper stopped getting new hotel items for the current one).

Test code

import requests, json
import re, time, sys, random

# Direct super-proxy credentials (incomplete in the original run and
# superseded below by the local LPM ports; kept for reference):
#proxies = {
#    'http': 'http://lum-customer-scrapingpro-zone-gen:[luminati password]',
#}

# Luminati's echo endpoint: returns the exit node's IP and country as JSON
test_url = ''

def get_content(url='',
                proxies = {'http': ""},
                headers = {'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'}):
    try:
        # find out which exit node (IP, country) actually serves the request
        real_proxy = requests.get(test_url, proxies = proxies, headers = headers)
        real_proxy = json.loads(real_proxy.text)
    except:
        real_proxy = {'ip':'', 'country':''}
    try:
        r = requests.get(url, proxies = proxies, headers = headers)
        res_length = len(r.text)
        #print "Url: ", url, "\nGot html of "+str(res_length)+' bytes.'
        #if res_length<10000:
            #print 'Result\n', r.text
        captcha = '/recaptcha/' in r.text  # flag responses that hit a captcha
        return {'text': r.text,
                'params': { 'size':res_length, 'captcha':captcha,
                            'exit_node': {'ip':real_proxy['ip'], 'country':real_proxy['country']} } }
    except:
        print 'Failure to get html by url:', url
        return { 'text':'', 'params':{'size':0, 'captcha':False, 'exit_node': real_proxy } }

state_codes = ['AL','AK','AZ','AR','CA','CO','CT','DE','FL','GA','HI','ID','IL','IN','IA','KS','KY','LA',
               'ME','MD','MA','MI','MN','MS','MO','MT','NE','NV','NH','NJ','NM','NY','NC','ND','OH','OK',
               'OR','PA','RI','SC','SD','TN','TX','UT','VT','VA','WA','WV','WI','WY']
curr_state_code = 'CA'
state_codes_processed = []
# each proxy type maps to the last digit of its LPM port (2400X)
proxy_type = {'data-center':'0', 'residential':'1', 'city-asn':'2', 'gip':'3' }
p_type = 'residential'
print '****************************\n'+ p_type , 'proxies test:'
proxies = {
    'http': "" + proxy_type[p_type]
}
#url = ""
url = "{0}&geo_location_terms={1}&page={2}"
start = time.time()
total_links = set()
prev_total_links_amount = 0
page_index = 1
for i in range(1, 5000):
    res = get_content(url.format('hotel', curr_state_code, page_index), proxies,
        {'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'})
    print '**************\nRequest', str(i), 'url: ' , url.format('hotel', curr_state_code, page_index) , '.\n', res['params']
    # write the page html to file
    file_name = 'yellopages_search_{}_{}_page{}_links.html'.format('hotel', curr_state_code, page_index)
    with open(file_name, 'w') as page_file:
        page_file.write(res['text'].encode('utf-8'))
    all_sets = re.findall( r"<h2 class=\"n\">.*?<\/h2>", res['text'])
    all_links = re.findall( r'\"business-name\" href=\"(.*?)\"', res['text'])
    for link in all_links:
        total_links.add(link)
    # check if links are the same: no new links means the state is exhausted
    if prev_total_links_amount >= len(total_links):
        state_codes_processed.append(curr_state_code)
        if len(state_codes_processed) >= len(state_codes):
            break  # all 50 states done
        while curr_state_code in state_codes_processed:
            curr_state_code = random.choice(state_codes)
        print "Processed state codes:", state_codes_processed
        print 'New state code:', curr_state_code
        page_index = 1
    else:
        page_index += 1
    prev_total_links_amount = len(total_links)
    print 'Found', len(all_links), 'links.'
    print 'All links amount:', len(total_links)
    print 'Total requests:', i
    print "Process time (seconds): " , round(time.time() - start, 1)
    #print ''.join(all_sets)
    with open('total_links.txt', "w" ) as text_file:
        text_file.write('\n'.join(total_links).encode('utf-8'))


All hotel links/items amount: 7147
Total requests: 263
Process time (seconds): 1267.1

We counted the total links to make sure the aggregator was not exposing the same hotel links over and over to spoof a scrape bot. The test result showed that for each request to a web page we got on average 7147/263 ≈ 27 items per page (out of up to 32 items on a page). The extraction time through the proxy was 1267/263 ≈ 5 seconds per request.
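
The per-request averages follow directly from the totals:

total_links, total_requests, total_seconds = 7147, 263, 1267.1
print(total_links / float(total_requests))  # ~27.2 items per request
print(total_seconds / total_requests)       # ~4.8 seconds per request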

Other service figures

Luminati has a 4-second timeout for DNS lookup.
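
A client-side timeout should therefore leave headroom for that lookup. A hedged example with the requests library (the 10/30-second values are our arbitrary choice):

import requests

proxies = {'http': ''}
# (connect timeout, read timeout): leave room for Luminati's
# 4-second DNS lookup plus the exit node's response time
r = requests.get('', proxies=proxies, timeout=(10, 30))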

Network uptime is 99.9%, and it can be viewed live (link).



The Luminati proxy provider proved to be reliable for scraping a challenging data aggregator. Its residential proxies delivered high output, and the scraper ran seamlessly using them.


