
Review Webserver Metafiles for Information Leakage (OTG-INFO-003)


This article is part of the new OWASP Testing Guide v4.

Summary

This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s). Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009 [1].

Test Objectives

1. Identify information leakage of the web application's directory/folder path(s).

2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers.

How to Test

Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the Robots Exclusion Protocol of the robots.txt file in the web root directory [1].

As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:

User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
Disallow: /catalogs
...

The User-Agent directive refers to the specific web spider/robot/crawler. For example, User-Agent: Googlebot refers to the GoogleBot crawler, while User-Agent: * in the example above applies to all web spiders/robots/crawlers [2], as quoted below:

User-agent: *
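
For illustration, a hypothetical robots.txt fragment (not taken from google.com) that combines a crawler-specific group with a wildcard group could look like the following:

User-agent: Googlebot
Disallow: /private/

User-agent: *
Disallow: /admin/
Disallow: /backup/

Under the Robots Exclusion Protocol, a crawler follows the group whose User-agent line matches its own name and falls back to the * group otherwise, so in this fragment Googlebot would observe only the first group while all other crawlers would observe the second.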

The Disallow directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:

... 
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
Disallow: /catalogs
...

Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.

Black Box testing and example

wget/curl
The robots.txt file is retrieved from the web root directory of the web server.

For example, to retrieve the robots.txt from www.google.com using wget or curl:

cmlh$ wget http://www.google.com/robots.txt
--2013-08-11 14:40:36--  http://www.google.com/robots.txt
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...
Connecting to www.google.com|74.125.237.17|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘robots.txt.1’

    [ <=>                                   ] 7,074       --.-K/s   in 0s      

2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]

cmlh$ head -n5 robots.txt
User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
cmlh$ 

cmlh$ curl -O http://www.google.com/robots.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312

cmlh$ head -n5 robots.txt
User-agent: *
Disallow: /search
Disallow: /sdch
Disallow: /groups
Disallow: /images
cmlh$ 
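
Once robots.txt has been retrieved, its Disallow entries can be extracted to build the list of directories to be avoided by Spiders/Robots/Crawlers (see the Test Objectives above). A minimal Python 3 sketch of this step is shown below; it uses only the standard library, the target base URL passed on the command line is an assumption of the example, and the google.com default is illustrative only:

#!/usr/bin/env python3
# Minimal sketch: fetch a site's robots.txt and print the paths that
# spiders/robots/crawlers are asked to avoid (Disallow directives).
# Assumes Python 3 standard library only; the target base URL is taken
# from the command line and defaults to http://www.google.com.
import sys
import urllib.request
from urllib.parse import urljoin

def disallowed_paths(base_url):
    robots_url = urljoin(base_url, "/robots.txt")
    with urllib.request.urlopen(robots_url) as response:
        body = response.read().decode("utf-8", errors="replace")
    paths = []
    for line in body.splitlines():
        # Strip comments and whitespace, then keep only Disallow lines.
        line = line.split("#", 1)[0].strip()
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:  # an empty Disallow value disallows nothing
                paths.append(path)
    return paths

if __name__ == "__main__":
    base = sys.argv[1] if len(sys.argv) > 1 else "http://www.google.com"
    for path in disallowed_paths(base):
        print(path)

The resulting list of paths can then be used as input for the spidering/crawling test (OWASP-IG-009) referenced in the Summary above.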

Analyze robots.txt using Google Webmaster Tools
Google provides an "Analyze robots.txt" function as part of its "Google Webmaster Tools", which can assist with testing [4]. The procedure is as follows:

1. Sign in to Google Webmaster Tools with your Google Account.
2. On the Dashboard, click the URL for the site you want.
3. Click Tools, and then click Analyze robots.txt.

Gray Box testing and example

The process is the same as Black Box testing above.

Tools

  • Browser (View Source function)
  • curl
  • wget

References

Whitepapers