A web crawler (also known as a web spider) is a program that browses the World Wide Web in a methodical, automated manner. Web crawlers not only keep a copy of every visited page for later processing (for example, by a search engine) but also index those pages so they can be searched quickly and precisely.
In general, a web crawler starts with a list of URLs to visit. As it visits each of these URLs, it identifies all the links on the page and adds them to the list of URLs still to visit. The process ends either when it is stopped manually or after a certain number of links have been followed. A minimal sketch of this loop appears below.
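To make the loop concrete, here is a minimal sketch in Python using only the standard library. The names (crawl, frontier, max_pages) and the breadth-first ordering are illustrative assumptions, not the design of any particular crawler:

```python
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects the href targets of <a> tags found on a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

def crawl(seed_urls, max_pages=100):
    """Breadth-first crawl from seed_urls, stopping after max_pages pages."""
    frontier = deque(seed_urls)   # URLs still to visit
    visited = set()               # URLs already fetched
    pages = {}                    # url -> raw HTML, kept for later processing

    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                html = response.read().decode("utf-8", errors="replace")
        except Exception:
            continue              # skip pages that fail to load
        pages[url] = html
        extractor = LinkExtractor(url)
        extractor.feed(html)
        for link in extractor.links:
            if urlparse(link).scheme in ("http", "https") and link not in visited:
                frontier.append(link)
    return pages
```

Using a queue for the frontier gives breadth-first order; swapping in a stack would give depth-first order instead, and the `max_pages` cap corresponds to ending the crawl after a fixed number of links have been followed.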
Web crawlers typically take great care to spread their visits to a particular site over a period of time, because they access many more pages than a normal (human) user and can therefore make the site appear slow to other users if they request its pages in rapid succession.
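One simple way to implement this politeness is a fixed minimum delay between requests to the same host. The following sketch assumes a five-second delay; the constant, the `wait_politely` helper, and the module-level state are illustrative choices, not a standard:

```python
import time
from urllib.parse import urlparse

CRAWL_DELAY = 5.0      # assumed minimum seconds between requests to one host
last_access = {}       # host -> time of the most recent request to it

def wait_politely(url):
    """Sleep until at least CRAWL_DELAY seconds have passed for this host."""
    host = urlparse(url).netloc
    elapsed = time.monotonic() - last_access.get(host, float("-inf"))
    if elapsed < CRAWL_DELAY:
        time.sleep(CRAWL_DELAY - elapsed)
    last_access[host] = time.monotonic()
```

Calling `wait_politely(url)` just before each fetch in the crawl loop spreads requests to any single site over time while leaving requests to different hosts unthrottled.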
For similar reasons, web crawlers are expected to obey the robots.txt protocol, through which web site owners can indicate which pages should not be crawled.
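Python's standard library ships a robots.txt parser, so honoring the protocol takes only a few lines. In this sketch, the user-agent string "ExampleCrawler" is a placeholder you would replace with your crawler's own name:

```python
from urllib.parse import urljoin, urlparse
from urllib.robotparser import RobotFileParser

def allowed_by_robots(url, user_agent="ExampleCrawler"):
    """Check a site's robots.txt before fetching one of its pages."""
    parts = urlparse(url)
    robots_url = urljoin(f"{parts.scheme}://{parts.netloc}", "/robots.txt")
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()      # fetches and parses the site's robots.txt
    return parser.can_fetch(user_agent, url)
```

A real crawler would cache the parsed robots.txt per host rather than re-fetching it for every URL, but the check itself is exactly this `can_fetch` call.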