
When should you consider scraping by using a list of URLs? Any time the pages you want to scrape share the same page layout. For example, when you scrape listings from Yelp, you may need to paginate through the search results, but every result page is structured the same way. Similarly, if you are scraping news articles from a particular website, the article pages will most likely share the same page structure.

To scrape using a list of URLs, we simply set up a loop over all the URLs we need to scrape, then add a data extraction action right after it to get the data we need. By creating a "List of URLs" loop mode, Octoparse does not need extra steps such as "Click to paginate" or "Click Item" to enter each item page. Octoparse loads the URLs one by one and scrapes the data from each page, so no page in the list is omitted. You can add any particular web pages to the list; they do not need to be consecutive pages, as long as they share the same page layout.

When a task built with "List of URLs" is set to run in the Cloud, it is split into sub-tasks that run on multiple cloud servers simultaneously, so extraction is much faster with Cloud Extraction.

Want systematic guidance? Download the Octoparse handbook for step-by-step learning.
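To make the loop-and-extract idea concrete outside Octoparse's point-and-click interface, here is a minimal Python sketch of the same pattern: loop over a fixed list of URLs that share one layout and pull the same fields from each page. The URLs and CSS selectors are hypothetical placeholders, and requests plus BeautifulSoup stand in for Octoparse's built-in extraction; this is an illustration of the concept, not how Octoparse works internally.

```python
# Sketch of the "List of URLs" approach: loop over known URLs that share
# one page layout and extract the same fields from each page.
# The URLs and CSS selectors below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

urls = [
    "https://example.com/articles/1",
    "https://example.com/articles/2",
    "https://example.com/articles/5",  # pages need not be consecutive
]

results = []
for url in urls:
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    # Because every page shares the same layout, the same selectors
    # work for all of them.
    title = soup.select_one("h1.article-title")
    body = soup.select_one("div.article-body")
    results.append({
        "url": url,
        "title": title.get_text(strip=True) if title else None,
        "body": body.get_text(strip=True) if body else None,
    })

for row in results:
    print(row["url"], "-", row["title"])
```

The same structure also explains why Cloud runs can be faster: because each URL is an independent unit of work, the list can be split across several workers and fetched in parallel, which is essentially what splitting the task into sub-tasks on multiple cloud servers does.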
