Have you ever considered the difference between the terms “data”, “information” and “knowledge”? People often mix them up and misuse them, and in daily life that is not a problem, but when it comes to Data Mining it pays to distinguish them. Here I’ll try to show the difference in a comprehensible way.
Selenium IDE and Web Scraping
Selenium is a web application testing framework that supports a wide variety of browsers and platforms, with bindings for Java, .Net, Ruby, Python and other languages. In this post we touch on the basic structure of the framework and how it can be applied to Web Scraping.
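As a minimal sketch of what driving a browser through the framework's Python bindings looks like (this assumes `pip install selenium` and a ChromeDriver available on PATH; the URL and function name are only illustrative, not from the post):

```python
def get_page_title(url):
    """Open `url` in headless Chrome and return the page's <title> text."""
    # Imports are kept inside the function so the sketch can be read
    # (and the function defined) even where Selenium is not installed.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options

    opts = Options()
    opts.add_argument("--headless=new")  # run without opening a window
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)          # navigate like a real browser would
        return driver.title      # the rendered page's title
    finally:
        driver.quit()            # always release the browser process


if __name__ == "__main__":
    print(get_page_title("https://example.com"))
```

Because Selenium drives a real browser, JavaScript-rendered content is available to the scraper, which is the main advantage over plain HTTP fetching.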
As I was searching for data mining and data visualization tools, I came across Gapminder, the data visualization website by Hans Rosling, professor of Global Health at the Karolinska Institute, Sweden. The website presents over a century of statistical data in graph form, sourced from the UN and other world organizations.
The professor has done extensive work with plenty of data sources for this visualizer, and his efforts are notable.
If you need to quickly extract some data from a website and you lack the tech skills, TheWebMiner’s Get By Sample web tool is a solution for you. Get By Sample works as a cloud web scraper, so it runs anywhere: on many devices, even tablets and smartphones.
Data Journalism Handbook Poster
The poster, composed by Liliana Bounegru and Lulu Pinney, briefly summarizes what is in the Data Journalism Handbook. This reference book shows how journalists can produce interesting news out of data gathered from the web.
In this video, Dale Stokdyk explains how to scrape Search Engine Results using OutWit Hub with a custom scraper.
80legs Review – Crawler for rent in the sky
80legs offers a crawling service that lets users (1) easily compose crawl jobs and (2) run those jobs in the cloud over a distributed computing network.
Mining the modern web for information takes a huge amount of processing power. How could a start-up or a small business do comprehensive data crawling without building the giant server farms that major search engines rely on?
Scraping in PHP with cURL
In this post, I’ll explain how to do a simple web page extraction in PHP using cURL, the ‘Client URL library’.
PHP’s cURL extension is built on top of libcurl, a library that allows you to connect to servers using many different protocols, including HTTP and HTTPS. This way of getting data from the web is more robust than a simple file_get_contents() call, since it gives you proper handling of headers, cookies and errors. If cURL is not installed, you can read here for Windows or here for Linux.
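The post’s actual PHP snippet is not reproduced here; as a rough stdlib-Python analogue of the same idea, this is what a header-, cookie- and error-aware fetch looks like compared with a bare one-liner (the function name and default header are illustrative assumptions, not from the post):

```python
import urllib.request
import urllib.error
from http.cookiejar import CookieJar


def fetch(url, headers=None):
    """Fetch `url` with custom headers, cookie persistence, and explicit
    error handling -- the things a bare file_get_contents()-style call lacks."""
    jar = CookieJar()  # cookies set by the server survive across requests
    opener = urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(jar)
    )
    req = urllib.request.Request(
        url,
        headers=headers or {"User-Agent": "Mozilla/5.0"},  # many sites reject blank agents
    )
    try:
        with opener.open(req, timeout=10) as resp:
            return resp.status, resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError as e:
        return e.code, ""  # the server answered, but with an error status
    except urllib.error.URLError as e:
        raise RuntimeError(f"connection failed: {e.reason}")
```

The PHP cURL version follows the same shape: set options on a handle (URL, headers, a cookie jar), execute the request, then inspect the status code and error state instead of silently getting back `false`.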
Eppie Vojt, at the SEOmoz Meetup, on leveraging scraping for site SEO. Techniques: XPath and Regex in Google Docs to fetch links and more. The link to the sample Twitter Scraper developed by Eppie Vojt.
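In Google Docs spreadsheets, the XPath and Regex techniques mentioned boil down to built-in formulas like these (the URL, XPath query and pattern below are placeholder examples, not taken from the talk):

```
=IMPORTXML("https://example.com", "//a/@href")
=REGEXEXTRACT(A1, "https?://[^/]+")
```

The first formula pulls every link target from the page into the sheet; the second extracts the domain part from a URL sitting in cell A1.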