Crawling
Crawling is the process by which search engines discover new and updated content on the web, such as new sites or pages, changes to existing sites, and dead links.
To do this, a search engine uses an automated program, commonly referred to as a 'crawler', 'bot', or 'spider', which follows an algorithmic process to determine which sites to crawl and how often. As a search engine's crawler moves through your site, it also detects and records any links it finds on those pages and adds them to a list to be crawled later. This is how new content is discovered.
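To make the crawl loop concrete, here is a minimal sketch in Python of the idea described above: fetch a page, extract its links, and queue any unseen links for a later visit. The seed URL, page limit, and helper names are purely illustrative assumptions, not how any real search engine is implemented.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href values from anchor tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl: fetch a page, record its links, queue them for later."""
    frontier = deque([seed_url])   # pages waiting to be crawled
    seen = set(frontier)           # avoid re-crawling the same URL
    while frontier and len(seen) <= max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue               # dead or unreachable link: skip it
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)   # resolve relative links
            if absolute not in seen:
                seen.add(absolute)          # newly discovered content
                frontier.append(absolute)   # schedule it for a later crawl
    return seen


# Example (hypothetical seed): crawl("https://example.com") returns the URLs discovered.
```

Real crawlers add politeness rules (robots.txt, rate limits) and revisit schedules on top of this basic discover-and-queue loop.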
Indexing
Once a search engine processes each of the pages it crawls, it compiles a massive index of every word it sees and its location on each page. This index is essentially a database of billions of web pages.
The extracted content is stored, and the information is then organized and interpreted by the search engine's algorithm to measure its importance relative to similar pages. These are the major functions of a search engine.
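The index described above is essentially an inverted index: a mapping from each word to the pages (and positions) where it appears. The sketch below illustrates the idea with made-up page contents; the URLs and text are assumptions for illustration only.

```python
from collections import defaultdict


def build_index(pages):
    """Build an inverted index: word -> list of (page_url, position) entries."""
    index = defaultdict(list)
    for url, text in pages.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((url, position))
    return index


# Hypothetical crawled pages, used only to show the structure of the index.
pages = {
    "https://example.com/a": "search engines crawl the web",
    "https://example.com/b": "engines index the words they see",
}

index = build_index(pages)
print(index["engines"])   # [('https://example.com/a', 1), ('https://example.com/b', 0)]
```

Looking a word up in this structure immediately returns every page containing it, which is what lets a search engine answer queries without re-reading the web each time.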