I'm a newbie with ScrapeBox, and this might sound stupid, but I don't understand how to scrape a big list of URLs in one run.
For example, if I use Drupal footprints like the ones here:
http://scrapebox-footprints.blogspot.fi/...rints.html
The harvester finishes in under a minute, reports that it's completed, and the list contains fewer than 300 URLs (after removing duplicate domains). To harvest all the footprints, I have to export the not-completed keywords back to the keyword list and start harvesting again, repeating the same process over and over.
I don't understand how I could scrape all those keywords (footprints) in a single run by clicking the Start Harvesting button once.
I use 30 semi-private proxies and 100 connections for harvesting; the other settings are at their defaults, I guess.
Thank you