Experience
We’ve successfully tested Oxylabs’ Web Scraper API. It did well at getting data off highly protected sites. One example is Zoro.com, which is protected with Akamai, DataDome, Cloudflare and reCAPTCHA! See the numerical results here.
Our brand-new version, Octoparse 8 (OP 8), came out just a few weeks ago. To help you better understand the differences between OP 8 and OP 7, we have gathered all the updates in this article.
The web is becoming increasingly difficult to scrape. More and more websites are built with single-page application frameworks such as Vue.js, Angular.js and React.js, and you need headless browsers to extract data from them.
Using headless Chrome on your local computer is easy. But scaling to dozens of Chrome instances in production is a difficult task: you need powerful servers with plenty of RAM, and you will run into random crashes, zombie processes…
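To make this concrete, here is a minimal sketch of driving a single headless Chrome instance from Python with Selenium; the URL and CSS selectors are placeholders, not taken from any particular site:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

opts = Options()
opts.add_argument("--headless")  # run Chrome without a visible window
driver = webdriver.Chrome(options=opts)
try:
    driver.get("https://example.com")  # placeholder URL
    # Wait until the JavaScript framework has rendered the content.
    WebDriverWait(driver, 10).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, ".product"))
    )
    titles = [e.text for e in driver.find_elements(By.CSS_SELECTOR, ".product .title")]
    print(titles)
finally:
    driver.quit()  # avoid leaking zombie Chrome processes
```

One instance like this is trivial; the pain described above starts when you run dozens of them side by side.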
Hi, if you were an Amazon seller, would you want to know the listing price of a product across all competitors? Since you don’t have direct access to the Amazon database, you are out of luck and have to browse and click through every listing to construct a table of sellers and prices. This is where a web scraping tool comes in handy: it automatically downloads the information you want, such as product name, seller’s name, price, etc. However, web scraping that requires coding skills can be painful for professionals in IT, SEO, marketing, e-commerce, real estate, hospitality and other fields.
It seems beyond one’s job description to have to learn how to code just to obtain useful data from the web. For example, I have a friend who graduated in Mass Communication and works as a content marketer. She wanted to scrape some data from the web, so she decided to teach herself Python. It took her two weeks to come up with a page of messy code. Not only did she waste time learning Python, but she also lost the time she could have spent on her real work.
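For a sense of what that hand-written code boils down to, here is a rough sketch of a seller-and-price scraper in Python with requests and BeautifulSoup; the URL and selectors are hypothetical, and real Amazon pages are dynamic and actively block plain HTTP scrapers:

```python
import requests
from bs4 import BeautifulSoup

# Hypothetical product page; a real ASIN URL would go here.
url = "https://www.amazon.com/dp/EXAMPLE-ASIN"
html = requests.get(url, headers={"User-Agent": "Mozilla/5.0"}, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

rows = []
for offer in soup.select(".offer-row"):        # placeholder selector
    seller = offer.select_one(".seller-name")  # placeholder selector
    price = offer.select_one(".offer-price")   # placeholder selector
    if seller and price:
        rows.append((seller.get_text(strip=True), price.get_text(strip=True)))

print(rows)  # the table of sellers and prices described above
```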
Question: “How do I set up a daily automatic scraping of www.pollen.com data into a Google Sheet?” (link)
Answer: Originally I doubted whether SVG HTML elements are scrapable at all. After some trial and error I realized that SVG elements are indeed scrapable: one can get their XPath and child nodes. However, IMPORTXML() can only reach them when they are static HTML.
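For comparison, here is a rough Python equivalent of what IMPORTXML() does, using requests and lxml; the XPath is a placeholder, and like IMPORTXML() this only sees static markup, not values generated by JavaScript:

```python
import requests
from lxml import html

resp = requests.get("https://www.pollen.com", timeout=30,
                    headers={"User-Agent": "Mozilla/5.0"})
tree = html.fromstring(resp.content)
# Placeholder XPath -- point it at the static element you need.
values = tree.xpath("//div[@class='forecast']//text()")
print(values)
```

Scheduled daily (with cron, or a time-driven trigger in Google Apps Script), this covers the “daily automatic” part of the question.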
The Dexi.io web scraping service has reworked its functionality by adding [paid plan] add-ons. Through add-ons, more features are made available to customers, e.g. more step types/pipe actions. These add-ons also allow the integration of scrape results into data stores and endpoints like PostgreSQL, MySQL, Amazon S3 and others.
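As an illustration of what such a data-store integration amounts to, here is a minimal Python sketch that pushes scraped rows into PostgreSQL with psycopg2; the table, connection string and sample rows are hypothetical:

```python
import psycopg2

# Hypothetical rows, as an extractor robot might deliver them.
rows = [("Widget A", "9.99"), ("Widget B", "12.50")]

conn = psycopg2.connect("dbname=scrapes user=scraper host=localhost")
with conn, conn.cursor() as cur:  # commits on success, rolls back on error
    cur.execute("CREATE TABLE IF NOT EXISTS results (name text, price text)")
    cur.executemany("INSERT INTO results (name, price) VALUES (%s, %s)", rows)
conn.close()
```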
Octoparse has recently launched a brand-new version 7.0, which has turned out to be the most revolutionary upgrade in the past two years, with not only a more user-friendly UI but also advanced features that make web scraping even easier. In this post, I will walk through some of the new features/changes made available in this version, with respect to how a beginner, even one without any coding background, can approach this web scraping tool.
I want to extract the hotel name and the current room price of some hotels daily from https://www.expedia.ca/Hotel-
I am a small hotel owner and want this information quite often, and I hope I can get it automatically with code somehow. You are an expert in this field: what is the easiest way to get this information? Can you give me some example code?
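The Expedia URL above is truncated, and Expedia renders room prices with JavaScript, so any plain-HTTP answer is only a starting point. Still, here is a rough Python sketch of a daily price logger with a placeholder URL and selectors; for the JavaScript-rendered parts, a headless browser like the Selenium sketch earlier would be needed:

```python
import csv
import datetime

import requests
from bs4 import BeautifulSoup

URL = "https://www.expedia.ca/Hotel-EXAMPLE"  # placeholder for the truncated URL

def fetch_price():
    html = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"}, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    name = soup.select_one("h1")            # placeholder selector
    price = soup.select_one(".room-price")  # placeholder selector
    return (datetime.date.today().isoformat(),
            name.get_text(strip=True) if name else "",
            price.get_text(strip=True) if price else "")

# Append one row per run; schedule the script daily with cron or Task Scheduler.
with open("hotel_prices.csv", "a", newline="") as f:
    csv.writer(f).writerow(fetch_price())
```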
Some may argue that extracting 3 records per minute is not fast enough for an automated scraper (see my last post on Dexi multi-threaded jobs). However, you should bear in mind that Dexi extractor robots behave like full-blown modern browsers and fetch all the resources that crawled pages load (CSS, JS, fonts, etc.).
In terms of performance, an extractor robot might not be as fast as a pure HTTP scraping script, but its advantage is the ability to extract data from dynamic websites that must run JavaScript to generate user-facing content. It is also harder for anti-bot mechanisms to detect and block.
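To illustrate the other side of that trade-off, here is a minimal sketch of a pure HTTP scraper in Python: it fetches nothing but the HTML, so it is fast, but it never executes JavaScript; the URLs are placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

import requests

# Placeholder URLs -- a pure HTTP scraper downloads only the raw HTML,
# none of the CSS/JS/font resources a browser-based robot would load.
urls = [f"https://example.com/page/{i}" for i in range(1, 21)]

def fetch(url):
    return url, requests.get(url, timeout=30).status_code

with ThreadPoolExecutor(max_workers=10) as pool:
    for url, status in pool.map(fetch, urls):
        print(url, status)
```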