Yes, I’m aware that using regex to parse HTML is not the best idea. Still, when I need to quickly extract some small portion of a web page, I find myself reaching for a regex more often than writing an XPath query, and its lookahead and lookbehind constructs can be quite helpful.
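As a quick illustration of those lookaround constructs, here is a minimal sketch (the HTML snippet and the price format are just made-up examples):

```python
import re

html = '<div class="price">Price: $19.99</div><div class="price">Price: $5.00</div>'

# Lookbehind (?<=\$) anchors the match right after a "$" sign, and
# lookahead (?=<) requires a "<" right after it -- neither character
# is included in the extracted text.
prices = re.findall(r'(?<=\$)\d+\.\d{2}(?=<)', html)
print(prices)  # ['19.99', '5.00']
```

The point is that lookarounds let you match only the fragment you want while still constraining what surrounds it.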
Author: admin
About Proxy Servers
It’s frequently required to have your actual IP address hidden when doing web scraping or, alternatively, to access a website from different countries. That’s why we have anonymizers, also called anonymous proxies. These days, it is possible to find an abundance of proxy software and services. Following is a general summary of the fundamentals of proxies:
As you scrape information from websites, it’s often necessary to keep your real IP hidden, quickly change your IP, or simply access a website from a country other than your own. All these tasks are achieved by means of proxies, mediators between you and the target website. Though there are plenty of companies offering such services on the market today, in this post I’ll introduce you to CyberGhost, an affordable and nice-looking proxy.
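To make the “mediator” idea concrete, here is a minimal sketch of routing requests through a proxy using only the Python standard library; the proxy address is a placeholder, not a real service:

```python
import urllib.request

# Hypothetical proxy address -- substitute the host:port your
# proxy provider gives you.
proxy = urllib.request.ProxyHandler({
    'http': 'http://127.0.0.1:8080',
    'https': 'http://127.0.0.1:8080',
})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)

# From here on, urlopen() calls go through the proxy, so the target
# site sees the proxy's IP instead of yours:
# urllib.request.urlopen('http://example.com/')
```

The target server then only ever talks to the proxy, which is exactly what hides your real IP.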
I always love a good cheat sheet hanging on my corkboard when I’m working, and XPath is one of the fields where I often refer to it. If you’re looking for a good XPath cheat sheet you will probably find something useful in this post.
Personally, I prefer using online tools for performing quick manipulations on different data formats like JSON, XML, CSV and so on. They’re platform independent and always within reach (since I mainly work in a browser). After we published an article about 7 best JSON viewers, I was told about Knowledge Walls, a similar service containing many tools for text data manipulation.
Recently, while surfing the web, I stumbled upon a simple web scraping service named Web Scrape Master. It is a kind of RESTful web service that extracts data from a specified website and returns it to you in JSON format.
As we are talking about web scraping, it would be a pity not to mention Yahoo Pipes, an exciting service provided by Yahoo!. This tool provides users with an intuitive graphical interface to assist them in organizing their favorite feeds and webpages into a single stream of content.
There is a question I’ve wanted to shed some light upon for a long time already: “What if I need to scrape several URLs based on data in some external database?”
Sometimes it is necessary to use external data sources to provide parameters for the scraping process. For example, you have a database with a bunch of ASINs and you need to scrape all product information for each one of them. As far as Visual Web Ripper is concerned, an input data source can be used to provide a list of input values to a data extraction project.
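The general pattern behind such an input data source can be sketched in a few lines; this is a generic illustration, not Visual Web Ripper itself, and the table name, column name, and sample ASINs are all hypothetical:

```python
import sqlite3

# A stand-in for the external database holding the input values.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE products (asin TEXT)')
conn.executemany('INSERT INTO products VALUES (?)',
                 [('B00EXAMPLE1',), ('B00EXAMPLE2',)])

# Build one target URL per ASIN for the scraper to visit.
urls = ['https://www.amazon.com/dp/' + asin
        for (asin,) in conn.execute('SELECT asin FROM products')]
for url in urls:
    print(url)
```

Whatever the tool, the idea is the same: the database supplies the list of input values, and the extraction project is run once per value.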
XPath in Examples
Here we’ll show how XPath works. Let’s take the following XML as a lab rat.
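The article’s sample XML follows in the full post; as a self-contained teaser of what an XPath query looks like in practice, here is a sketch using Python’s standard library (the catalog document below is my own stand-in, not the article’s):

```python
import xml.etree.ElementTree as ET

# Hypothetical sample document standing in for the article's "lab rat" XML.
xml = '''<catalog>
  <book id="b1"><title>XML Basics</title><price>25</price></book>
  <book id="b2"><title>XPath in Depth</title><price>30</price></book>
</catalog>'''

root = ET.fromstring(xml)

# ElementTree supports a useful subset of XPath; select all book titles:
titles = [t.text for t in root.findall('./book/title')]
print(titles)  # ['XML Basics', 'XPath in Depth']

# A predicate on an attribute: the title of the book with id="b2".
second = root.find('./book[@id="b2"]/title').text
print(second)  # XPath in Depth
```

For full XPath 1.0 support (axes, functions, etc.) you would reach for a library like lxml, but the querying idea is the same.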