
Review Webserver Metafiles for Information Leakage (OTG-INFO-003)

 



This article is part of the OWASP Testing Guide v3. The entire OWASP Testing Guide v3 can be downloaded from the OWASP website.

OWASP is currently working on the OWASP Testing Guide v4, which can also be browsed on the OWASP website.

This is a draft of a section of the new Testing Guide v3.

Brief Summary


This section describes how to test the robots.txt file.

Description of the Issue


Web spiders/robots/crawlers retrieve a web page and then recursively traverse its hyperlinks to retrieve further web content. Their accepted behavior is specified by the Robots Exclusion Protocol in the robots.txt file located in the web root directory [1].

As an example, the robots.txt file from http://www.google.com/robots.txt taken on 24 August 2008 is quoted below:

User-agent: *
Allow: /searchhistory/
Disallow: /news?output=xhtml&
Allow: /news?output=xhtml
Disallow: /search
Disallow: /groups
Disallow: /images
...

The User-Agent directive refers to a specific web spider/robot/crawler. For example, User-Agent: Googlebot refers to the GoogleBot crawler, while User-Agent: * in the example above applies to all web spiders/robots/crawlers [2], as quoted below:

User-agent: *
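
As an illustration (a hypothetical file, not part of the Google example above), a robots.txt file may combine a crawler-specific section with a wildcard section; a compliant crawler obeys the most specific User-Agent group that matches it and ignores the rest:

User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow: /

Under these hypothetical rules, GoogleBot would avoid only /private/, while all other compliant crawlers would avoid the entire site.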

The Disallow directive specifies which resources spiders/robots/crawlers are not permitted to retrieve. In the Google example quoted above, the following directories are disallowed:

... 
Disallow: /search
Disallow: /groups
Disallow: /images
...
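
A quick way to check how a compliant crawler would interpret such rules is Python's standard urllib.robotparser module. The sketch below is a minimal example (assuming Python 3; the URL and paths are simply those from the example above, and the live file may have changed since 2008) that downloads the robots.txt file and tests whether given URLs may be fetched:

# Minimal sketch: check URLs against robots.txt rules with the standard library.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://www.google.com/robots.txt")  # example target from above
rp.read()                                       # fetch and parse robots.txt

# can_fetch(useragent, url) returns True if the agent may retrieve the URL.
# Based on the 2008 excerpt above, /search would be disallowed for all agents
# and /searchhistory/ would be allowed; the live file may differ today.
print(rp.can_fetch("*", "http://www.google.com/search"))
print(rp.can_fetch("*", "http://www.google.com/searchhistory/"))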

Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.

Black Box testing and example

The robots.txt file is retrieved from the web root directory of the web server. For example, the URL "http://www.google.com/robots.txt" points to the robots.txt file of www.google.com.

To retrieve the robots.txt from www.google.com using wget:

$ wget http://www.google.com/robots.txt
--23:59:24-- http://www.google.com/robots.txt
           => 'robots.txt'
Resolving www.google.com... 74.125.19.103, 74.125.19.104, 74.125.19.147, ...
Connecting to www.google.com|74.125.19.103|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]

    [ <=>                                 ] 3,425        --.--K/s

23:59:26 (13.67MB/s) - 'robots.txt' saved [3425]
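
As an alternative to wget, the file can be retrieved and its Disallow entries listed with a short script. The sketch below is a minimal example (assuming Python 3 and its standard library only; the target URL is simply the one used above) that prints every Disallow line for manual review:

# Minimal sketch: fetch robots.txt and list its Disallow entries for review.
import urllib.request

url = "http://www.google.com/robots.txt"   # example target from above
with urllib.request.urlopen(url) as response:
    body = response.read().decode("utf-8", errors="replace")

for line in body.splitlines():
    if line.lower().startswith("disallow:"):
        print(line.strip())

Each listed path is a candidate for further inspection, since administrators sometimes use Disallow entries to hide directories they do not want indexed, which is exactly the kind of information leakage this test looks for.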

Google provides an "Analyze robots.txt" function as part of its "Google Webmaster Tools", which can assist with testing [4].

Gray Box testing and example

The process is the same as Black Box testing above.

References

Whitepapers