Search Engines Agree on Robots Standards

How I would like to be a fly on the wall during Microsoft, Yahoo, and Google meetings like the one where they agreed upon the Robots Exclusion Protocol (REP). Do they trade barbs, quips, and underhanded comments? In my imagination, the gathering is much like a Three Stooges episode.

However these meetings actually go, web developers reap the benefits when the three competitors agree, as evidenced by the announcement of a standard robots.txt protocol.

Microsoft, Yahoo, and Google each announced their involvement over the past week, along with documentation describing the protocol.

Search engines gather their information with small automated programs, or robots, that crawl the web. When a robot reaches a web server, it copies the pages it finds into a local cache, scans the data, and categorizes it for inclusion in search results. Robots.txt is a plain text file placed at the root of a web server that tells search engines which directories they may or may not access. If no robots.txt file is present, robots assume you are allowing the entire site to be accessed by search engines.
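
For example, a minimal robots.txt might look like the following (the directory paths and the Googlebot section are just illustrations, not part of any announcement):

    # Rules for all robots
    User-agent: *
    Disallow: /private/
    Allow: /public/

    # Rules for one specific crawler
    User-agent: Googlebot
    Disallow: /drafts/

A robot that honors the protocol reads this file before crawling anything else and skips the disallowed paths.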

The REP standardizes how the robots.txt file is interpreted by search engines. It gives web developers more control over privacy and over how their data appears in search results.
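
To see things from the robot's side, here is a small sketch using Python's standard urllib.robotparser module; the example.com URL and the MyBot user-agent string are placeholders, not anything the search engines specified:

    from urllib.robotparser import RobotFileParser

    # Fetch and parse the site's robots.txt (placeholder URL)
    rp = RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()

    # A well-behaved crawler checks each URL before fetching it
    if rp.can_fetch("MyBot", "https://example.com/private/page.html"):
        print("OK to crawl")
    else:
        print("Disallowed by robots.txt")

With a standardized REP, a check like this should give the same answer no matter whose crawler is asking.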

All parties benefit from the newly agreed-upon protocol because inconsistencies in how search engines interpret the file are erased. Now robots.txt files will be honored equally among the biggest search engines, and presumably by the rest of the web-crawling robot community.