Hi,
Answer:
- Scrapers are moving to the cloud. They are becoming cloud-scraping services that provide multi-threaded crawling, storage options, and more, so one may run many scraper instances at once (e.g. dexi.io, contentgrabber.com) on cloud infrastructure.
- Scraping frameworks are being wrapped in convenience suites. They are still aimed at developers but are extended with all kinds of features: cloud execution of scripts, cloud storage of results, scaling on demand, and others. For example, the Scrapy framework is extended into the Scrapinghub platform, and webrobots.io provides a scraping IDE for JavaScript scraping robots (a minimal Scrapy sketch follows this list).
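To make the framework approach concrete, here is a minimal sketch of a Scrapy spider. It crawls quotes.toscrape.com, a public practice site used here purely for illustration; the selectors and the concurrency settings are example values, not a recommendation.

```python
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["http://quotes.toscrape.com/"]

    # Scrapy schedules requests concurrently; these settings tune that.
    custom_settings = {
        "CONCURRENT_REQUESTS": 16,  # parallel in-flight requests
        "DOWNLOAD_DELAY": 0.25,     # politeness delay between requests
    }

    def parse(self, response):
        # Extract each quote's text and author with CSS selectors.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow the pagination link so the crawl continues automatically.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

You can run this locally with `scrapy runspider quotes_spider.py -o quotes.json`; the point of platforms like Scrapinghub is that the same spider can be deployed and scaled in the cloud instead.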
The GUI, or visual point-and-click, scrapers have mostly hit a ceiling in their functionality (open a page, click an item, find similar ones, turn that into a pattern, etc.).
Node.js, .NET, Python, and other ecosystems are still good for regular scraper development, yet Python has the largest number of web scraping libraries.
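As an illustration of how little code a basic Python scraper takes, here is a sketch using two popular libraries, requests and BeautifulSoup. The URL and selectors target the same practice site as above and are placeholders for whatever site you actually scrape.

```python
import requests
from bs4 import BeautifulSoup

url = "http://quotes.toscrape.com/"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fail loudly on HTTP errors

# Parse the HTML and pull out each quote's text and author.
soup = BeautifulSoup(resp.text, "html.parser")
for quote in soup.select("div.quote"):
    text = quote.select_one("span.text").get_text()
    author = quote.select_one("small.author").get_text()
    print(f"{author}: {text}")
```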
I recommend the WEB SCRAPING TOOLS AND SERVICES LANDSCAPE, where you can filter scraping tools by features (not all features, but the most relevant ones).