Review Webserver Metafiles for Information Leakage (OTG-INFO-003)
This article is part of the OWASP Testing Guide v3.
Brief Summary
This section describes how to test the robots.txt file.
Description of the Issue
Web spiders, robots, and crawlers retrieve a web page and then recursively traverse its hyperlinks to retrieve further web content. Their accepted behavior is specified by the Robots Exclusion Protocol of the robots.txt file in the web root directory; nevertheless, they may accidentally or intentionally retrieve web content that was not intended to be stored or published.
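As a minimal starting point (using the same <target> placeholder as in the tests below), the robots.txt file can be retrieved directly from the web root with wget and then reviewed for Disallow entries that reveal hidden or sensitive paths:
wget http://<target>/robots.txt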
Black Box testing and example
Description and goal
Our goal is to create a map of the application with all of its points of access (gates). This map will be useful for the second, active phase of penetration testing. You can use a tool such as wget (powerful and very easy to use) to retrieve all the information published by the application.
Test:
The -S option is used to collect the HTTP headers of the web requests. The --spider option is used to avoid downloading anything, since we only want the HTTP headers.
wget -S --spider <target>
Result:
http://www.<target>/
  => `index.html'
Resolving www.<target>... 64.xxx.xxx.23, 64.xxx.xxx.24, 64.xxx.xxx.20, ...
Connecting to www.<target>|64.xxx.xxx.23|:80... connected.
HTTP request sent, awaiting response...
  HTTP/1.1 200 OK
  Date: Mon, 10 Sep 2007 00:43:04 GMT
  Server: Apache
  Accept-Ranges: bytes
  Cache-Control: max-age=60, private
  Expires: Mon, 10 Sep 2007 00:44:01 GMT
  Vary: Accept-Encoding,User-Agent
  Content-Type: text/html
  X-Pad: avoid browser bug
  Content-Length: 135750
  Keep-Alive: timeout=5, max=64
  Connection: Keep-Alive
Length: 135,750 (133K) [text/html]
200 OK
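The same options can also be used to check whether the target publishes a robots.txt file without downloading it (a sketch; the path is assumed to exist on the target, and a 404 response means no robots.txt is published):
wget -S --spider http://<target>/robots.txt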
Test:
The -r option is used to retrieve the web site's content recursively, and the -D option restricts the requests to the specified domain only.
wget -r -D <domain> <target>
Result:
22:13:55 (15.73 KB/s) - `www.******.org/indice/13' saved [8379]

--22:13:55--  http://www.******.org/*****/********
  => `www.******.org/*****/********'
Connecting to www.******.org[xx.xxx.xxx.xx]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

    [ <=>                                ] 11,308        17.72K/s

...
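If a full recursive retrieval is too large for the target site, the crawl can be bounded. The example below is a sketch using wget's -l option to limit the recursion depth and -P to store the downloaded content in a dedicated directory (<output_dir> is a placeholder of your choosing):
wget -r -l 2 -D <domain> -P <output_dir> <target>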
Gray Box testing and example
The process is the same as the Black Box testing described above.
References
Whitepapers
- [1] "The Web Robots Pages" - http://www.robotstxt.org/
- [2] "How Google crawls my site" - http://www.google.com/support/webmasters/bin/topic.py?topic=8843
- [3] "Preventing content from appearing in Google search results " - http://www.google.com/support/webmasters/bin/topic.py?topic=8459
Tools
- wget - http://www.gnu.org/software/wget/
- Burp Spider - http://portswigger.net/spider/