How search engines work
A search engine operates in the following order: 1) crawling, comprising
deep crawling (depth-first search, DFS) and fresh crawling
(breadth-first search, BFS); 2) indexing; 3) searching.
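The two crawl orders can be sketched over a toy link graph. This is a minimal illustration, not a real crawler: the `LINKS` dictionary is an invented stand-in for pages fetched over HTTP, and the page names are hypothetical.

```python
from collections import deque

# Hypothetical in-memory link graph standing in for the live web;
# a real crawler would fetch each page over HTTP and parse out its links.
LINKS = {
    "/": ["/a", "/b"],
    "/a": ["/c"],
    "/b": ["/c", "/d"],
    "/c": [],
    "/d": ["/"],
}

def crawl_bfs(start):
    """Fresh crawl: breadth-first, visiting pages level by level."""
    seen, order = {start}, []
    queue = deque([start])
    while queue:
        page = queue.popleft()
        order.append(page)
        for link in LINKS.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

def crawl_dfs(start):
    """Deep crawl: depth-first, following each branch to its end."""
    seen, order = set(), []
    stack = [start]
    while stack:
        page = stack.pop()
        if page in seen:
            continue
        seen.add(page)
        order.append(page)
        # Push links in reverse so the first link is explored first.
        for link in reversed(LINKS.get(page, [])):
            stack.append(link)
    return order

print(crawl_bfs("/"))  # ['/', '/a', '/b', '/c', '/d']
print(crawl_dfs("/"))  # ['/', '/a', '/c', '/b', '/d']
```

A fresh crawl (BFS) revisits pages close to known seeds quickly, while a deep crawl (DFS) pushes far down each branch before backtracking.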
Web search engines work by storing information about a large
number of web pages, which they retrieve from the WWW itself.
These pages are retrieved by a web crawler (also known as a
spider) -- an automated web browser which follows every link it
sees; exclusions can be specified with a robots.txt file. The
contents of each page are then analyzed to determine how it
should be indexed. Data about web pages is stored in an index
database for use in later queries. Some search engines, such as
Google, store all or part of the source page (referred to as a
cache) as well as information about the web pages, whereas
others, such as AltaVista, store every word of every page they
find. This
cached page always holds the actual search text since it is the
one that was actually indexed, so it can be very useful when the
content of the current page has been updated and the search
terms are no longer in it. This problem might be considered a
mild form of linkrot, and Google's handling of it increases
usability by satisfying the principle of least astonishment: the
user normally expects the search terms to appear on the returned
pages. This increased relevance makes cached pages very useful,
beyond the fact that they may contain data that is no longer
available elsewhere.
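The core data structure behind the index database described above is commonly an inverted index, mapping each word to the pages that contain it. Here is a minimal sketch under simplifying assumptions (whitespace tokenization, no stemming or positions); the page IDs and texts are invented for illustration.

```python
def build_index(pages):
    """Build an inverted index: word -> set of page IDs containing it.
    Real engines also store term positions, frequencies, and cached
    copies of the source pages."""
    index = {}
    for page_id, text in pages.items():
        for word in text.lower().split():
            index.setdefault(word, set()).add(page_id)
    return index

# Hypothetical mini-corpus for illustration.
pages = {
    "p1": "web search engines crawl the web",
    "p2": "crawlers follow every link",
}
index = build_index(pages)
print(sorted(index["web"]))  # ['p1']
```

A query for a keyword then becomes a single dictionary lookup rather than a scan over every stored page.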
When a user comes to the search engine and makes a query,
typically by giving keywords, the engine looks up the index and
provides a listing of best-matching web pages according to its
criteria, usually with a short summary containing the document's
title and sometimes parts of the text. Most search engines
support the use of the boolean terms AND, OR and NOT to further
specify the search query. An advanced feature is proximity
search, which allows users to define the distance between keywords.
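The boolean operators mentioned above can be evaluated as set operations over an inverted index. This is a sketch assuming the index is a plain dict from word to a set of page IDs; the index contents and page names are invented for illustration.

```python
# Toy inverted index: each word maps to the set of pages containing it.
INDEX = {
    "web":    {"p1", "p2", "p3"},
    "search": {"p1", "p3"},
    "cats":   {"p2"},
}

def boolean_query(index, must=(), should=(), must_not=()):
    """Apply AND (must), OR (should), and NOT (must_not) terms."""
    result = set().union(*index.values())  # start from every known page
    for term in must:                      # AND: page must contain each term
        result &= index.get(term, set())
    if should:                             # OR: page needs at least one term
        result &= set().union(*(index.get(t, set()) for t in should))
    for term in must_not:                  # NOT: page must not contain term
        result -= index.get(term, set())
    return result

print(boolean_query(INDEX, must=["web"], must_not=["cats"]))
```

For example, the query above corresponds to "web AND NOT cats" and returns only the pages containing "web" that do not contain "cats".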
The usefulness of a search engine depends on the relevance of
the results it gives back. While there may be millions of Web
pages that include a particular word or phrase, some pages may
be more relevant, popular, or authoritative than others. Most
search engines employ methods to rank the results to provide the
"best" results first. How a search engine decides which pages
are the best matches, and what order the results should be shown
in, varies widely from one engine to another. The methods also
change over time as Internet usage changes and new techniques
evolve.
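As one deliberately simple example of ranking, pages can be scored by how often the query terms appear in them. Real engines combine many signals (link analysis, freshness, authority); this sketch, with invented page texts, only illustrates the idea of ordering results by a relevance score.

```python
from collections import Counter

def rank_pages(pages, query):
    """Score each page by raw counts of the query terms and return
    page IDs ordered from best match to worst."""
    terms = query.lower().split()
    scores = {pid: sum(Counter(text.lower().split())[t] for t in terms)
              for pid, text in pages.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical pages for illustration.
pages = {
    "p1": "search engines rank search results",
    "p2": "a page about web browsers",
}
print(rank_pages(pages, "search"))  # ['p1', 'p2']
```

Term frequency alone is easy to game, which is one reason production ranking methods keep changing as the text notes.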
Most web search engines are commercial ventures supported by
advertising revenue and, as a result, some employ the
controversial practice of allowing advertisers to pay money to
have their listings ranked higher in search results.
The vast majority of search engines are run by private companies
using proprietary algorithms and closed databases, the most
popular currently being Google, MSN Search, and Yahoo! Search.
However, open-source search engine technology does exist, such
as ht://Dig, Nutch, Senas, Egothor, OpenFTS, DataparkSearch, and
many others.