In this article I’d like to review a few well-known methods of protecting website content from automatic scraping. Each one has its advantages and disadvantages, so you need to make your choice based on your particular situation. None of these methods is foolproof, and each can be circumvented in ways I will mention further on.
Distil: Scrape Bot Protection Test
Testing an anti-scrape-bot service has been my focus for some time now. How well can the Distil service protect a real website from scraping? The only way to answer that is with an actual, active scrape. Here I will share the log results and conclusions of the test. In the previous post we briefly reviewed the service’s features; now I will do the live test-drive analysis.
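For context, a minimal sketch of what such an active scrape test might look like in Python. The target URL, page count, and delay are assumptions for illustration only; the actual test in the post used its own target and parameters.

```python
import time
import requests

# Hypothetical target protected by an anti-scrape-bot service;
# the real test targeted a different site with different settings.
TARGET = "https://example.com/products?page={page}"

session = requests.Session()
session.headers["User-Agent"] = "Mozilla/5.0 (compatible; scrape-test/0.1)"

for page in range(1, 51):
    resp = session.get(TARGET.format(page=page), timeout=10)
    # Log the responses: a switch from 200 to 403/429, or a sudden drop
    # in response size (a CAPTCHA or block page), marks the point where
    # the bot protection kicked in.
    print(f"page {page}: HTTP {resp.status_code}, {len(resp.content)} bytes")
    time.sleep(1)  # fixed delay; a real bot might randomize this
```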
Distil Review: Anti-Scrape-Bot Service
Are you thinking of protecting your website content from theft and illegal scraping? Do you suspect that some ‘innocent bots’ are continually visiting your web pages to retrieve data? This brings us to anti-scraping software and services. In this post we briefly review a new anti-scrape-bot service called Distil.
While we have mostly considered web scraping for positive uses, scraping can also be used negatively, to steal other bloggers’ proprietary content. Let’s consider some anti-web-scraping WordPress plugins.
When it comes to web content ownership, the main indicator is indexing, done mainly by Google. If content is scraped and immediately reposted, Google can be fooled into indexing the copy as the original, while the genuine source is treated as content farming. Higher-ranking sites may get indexed earlier than the site with the original content, and the latter can even be flagged as spam. This is not the general rule, but such cases have occurred in the past. It may seem absurd, but through a published feed offenders can detect and quickly scrape the original content for repost.
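To illustrate the mechanism, here is a minimal sketch of how a published RSS/Atom feed can be polled to catch new posts the moment they appear. The feed URL and polling interval are hypothetical, and the script only prints what it finds; an offender would instead grab the full text for reposting. Publishing only excerpts in the feed limits what such a script can take.

```python
import time
import feedparser  # third-party: pip install feedparser

# Hypothetical feed URL; any blog that publishes a full-content feed
# exposes new posts to this kind of polling.
FEED_URL = "https://example-blog.com/feed/"

seen_links = set()

while True:
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        if entry.link not in seen_links:
            seen_links.add(entry.link)
            # At this point a scraper would fetch and repost the content,
            # sometimes before Google has indexed the original.
            print("new post detected:", entry.title, entry.link)
    time.sleep(300)  # poll every 5 minutes
```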