Spidering

A search engine robot’s activity is called spidering, because it resembles the way a many-legged spider moves across a web. The spider’s job is to visit a web page, read its contents, follow the links to other pages on that site, and bring the information back. From one page it travels to several others, and this proliferation follows many parallel and nested paths simultaneously.
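The crawl described above can be sketched as a breadth-first traversal. This is a minimal illustration, not any search engine's actual implementation: the site here is a hypothetical in-memory dictionary of page paths to HTML, standing in for pages a real spider would fetch over HTTP.

```python
from html.parser import HTMLParser

# Hypothetical in-memory "web site": page path -> HTML body.
# A real spider would fetch these pages over HTTP instead.
SITE = {
    "/": '<a href="/a">A</a> <a href="/b">B</a>',
    "/a": '<a href="/b">B</a>',
    "/b": '<a href="/">home</a>',
}

class LinkParser(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def spider(start):
    """Breadth-first crawl from `start`; returns pages in visit order."""
    visited, queue, order = set(), [start], []
    while queue:
        url = queue.pop(0)
        if url in visited or url not in SITE:
            continue                      # skip repeats and unknown links
        visited.add(url)
        order.append(url)
        parser = LinkParser()
        parser.feed(SITE[url])            # "read the contents"
        queue.extend(parser.links)        # "follow links to other pages"
    return order

print(spider("/"))  # visits every reachable page exactly once
```

The `visited` set is what keeps the parallel, nested link paths from looping forever when pages link back to each other.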
Spiders revisit a site at some interval, perhaps every month to a few months, and re-index its pages. This way, any changes you have made to your pages are eventually reflected in the index.
The spiders visit your web pages automatically and create their listings. An important aspect to study is which factors promote a “deep crawl” – the depth to which the spider will go into your website from the page it first visited.
Listing your site with a search engine (submitting or registering it) is a step that can accelerate and increase the chances of that engine spidering your pages.
As the spider moves across web pages it stores copies of them, but the key step is indexing. The index is a huge database containing all the information the spider brings back, and it is constantly updated as the spider collects more. Not every part of a page makes it into the index, and the search and page-ranking algorithms are applied only to the index, not to the pages themselves.
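The indexing step can be sketched as a toy inverted index: a mapping from each word to the set of pages that contain it, which is what lets a search run against the index instead of re-reading every page. The page texts below are hypothetical stand-ins for content a spider brought back.

```python
# Hypothetical crawled content: page name -> text the spider retrieved.
pages = {
    "page1": "search engines index web pages",
    "page2": "the spider crawls web links",
}

def build_index(pages):
    """Builds an inverted index: word -> set of pages containing it."""
    index = {}
    for url, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(url)
    return index

index = build_index(pages)
print(sorted(index["web"]))    # both pages contain "web"
print(sorted(index["spider"])) # only page2 contains "spider"
```

A query is then just a dictionary lookup, which is why updating the index as the spider returns with new pages, rather than searching pages directly, is the workable design at web scale.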

Copyright © 2011 SEO Plan | Design by Kenga Ads-template
