Testing: Spidering and googling


This article is part of the OWASP Testing Guide v3 (see the OWASP Testing Guide v3 Table of Contents; the entire guide can be downloaded from the OWASP site). OWASP is currently working on the OWASP Testing Guide v4.

Brief Summary

This section describes how to retrieve information about the application being tested using spidering and googling techniques.

Description of the Issue

Web spiders are among the most powerful and useful tools developed for both good and bad intentions on the Internet. A spider serves one major function: data mining. A typical spider (such as Google's) works by crawling a web site one page at a time, gathering and storing the relevant information such as email addresses, meta tags, hidden form data, URL information, and links. The spider then crawls all the links on that page, collecting relevant information on each following page, and so on. Before you know it, the spider has crawled thousands of links and pages, gathering bits of information and storing them in a database. This web of paths is where the term 'spider' comes from.
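
As a toy illustration of a single crawl step (a sketch assuming curl and GNU grep are installed; www.example.com is a placeholder), the following command fetches one page and lists the links a spider would visit next:

# fetch one page and extract the href targets a spider would follow
curl -s http://www.example.com/ | grep -o 'href="[^"]*"' | cut -d '"' -f 2

A real spider simply repeats this step for every new URL it discovers, storing what it finds along the way.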

The Google search engine found at http://www.google.com offers many features, including language and document translation; web, image, newsgroup, catalog, and news searches; and more. These features offer obvious benefits to even the most uninitiated web surfer, but the same features offer far more nefarious possibilities to the most malicious Internet users, including hackers, computer criminals, identity thieves, and even terrorists. This article outlines the more harmful applications of the Google search engine, techniques that have collectively been termed "Google Hacking."

In 1992, there were about 15,000 web sites; by 2006 the number had exceeded 100 million. What if a simple search engine query such as "Hackable Websites w/ Credit Card Information" produced a list of sites holding the credit card data of thousands of customers per company? If an attacker is aware of a web application that stores a clear text password file in a directory and wants to gather such targets, he could search on "intitle:"Index of" .mysql_history", and the search engine would provide a list of systems (out of a possible 100 million web sites) that may divulge database usernames and passwords. Or perhaps the attacker has a new method to attack a Lotus Notes web server and simply wants to see how many targets are on the Internet; he could search on "inurl:domcfg.nsf". Apply the same logic to a worm looking for its next victim.

Black Box testing and example

Spidering

Description and goal

Our goal is to create a map of the application with all of its points of access (gates). This will be useful for the second, active phase of penetration testing. You can use a tool such as wget (powerful and very easy to use) to retrieve all the information published by the application.

Test:

The -S option is used to print the HTTP headers of the server responses. The --spider option is used so that nothing is downloaded, since we only want the HTTP headers.

wget -S --spider <target>

Result:

http://www.<target>/
           => `index.html'
Resolving www.<target>... 64.xxx.xxx.23, 64.xxx.xxx.24, 64.xxx.xxx.20, ...
Connecting to www.<target>|64.xxx.xxx.23|:80... connected.
HTTP request sent, awaiting response... 
  HTTP/1.1 200 OK
  Date: Mon, 10 Sep 2007 00:43:04 GMT
  Server: Apache
  Accept-Ranges: bytes
  Cache-Control: max-age=60, private
  Expires: Mon, 10 Sep 2007 00:44:01 GMT
  Vary: Accept-Encoding,User-Agent
  Content-Type: text/html
  X-Pad: avoid browser bug
  Content-Length: 135750
  Keep-Alive: timeout=5, max=64
  Connection: Keep-Alive
Length: 135,750 (133K) [text/html]
200 OK
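
If you only want the header lines from a run like the one above, you can filter wget's log (a sketch that relies on the two-space indentation of the headers shown above, which may vary between wget versions; wget writes its log to stderr, hence the redirect):

wget -S --spider <target> 2>&1 | grep '^  '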

Test:

The -r option is used to retrieve the web site's content recursively, while the -D option restricts the requests to the specified domain only.

wget -r -D <domain> <target>

Result:

22:13:55 (15.73 KB/s) - `www.******.org/indice/13' saved [8379]

--22:13:55--  http://www.******.org/*****/********
           => `www.******.org/*****/********'
Connecting to www.******.org[xx.xxx.xxx.xx]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]

    [   <=>                                                                                                                                                                ] 11,308        17.72K/s                     

...
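
To turn a recursive run like this into the map of access points described above, one option is to collect the URLs from wget's log (a sketch that assumes request lines starting with '--', as in the output above; site-map.txt is an arbitrary file name):

wget -r --spider -D <domain> <target> 2>&1 | grep '^--' | awk '{print $NF}' | sort -u > site-map.txt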

Googling

Description and goal

The scope of this activity is to find information about a single web site published on the internet, or to find a specific kind of application such as Webmin or VNC. There are tools that can assist with this technique, for example googlegath, but it is also possible to perform this operation manually using Google's own search facilities. This operation does not require specialist technical skills and is a good way to collect information about a web target.
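
For instance, a query along the following lines has long been used to locate the web interface of VNC's Java viewer, which listens on port 5800 by default (an illustrative dork, not one of the examples tested below):

"VNC Desktop" inurl:5800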


Useful Google Advanced Search techniques

  • Use the plus sign (+) to force a search for an overly common word. Use the minus sign (-) to exclude a term from a search. No spaces follow these signs.
  • To search for a phrase, supply the phrase surrounded by double quotes (" ").
  • A period (.) serves as a single-character wildcard.
  • An asterisk (*) represents any whole word, not the completion of a word as wildcards traditionally work.
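
For example, the following illustrative queries combine these techniques (the search terms are arbitrary):

+password -sample "admin login"     (forces "password", excludes "sample", exact phrase)
"login * here"                      (the asterisk stands for any single word)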

Google advanced operators help refine searches. Advanced operators use the following syntax: operator:search_term (notice that there is no space between the operator, the colon, and the search term). A list of operators and search terms follows:

  • The site operator instructs Google to restrict a search to a specific web site or domain. The web site to search must be supplied after the colon.
  • The filetype operator instructs Google to search only within the text of a particular type of file. The file type to search must be supplied after the colon. Don't include a period before the file extension.
  • The link operator instructs Google to search within hyperlinks for a search term.
  • The cache operator displays the version of a web page as it appeared when Google crawled the site. The URL of the site must be supplied after the colon.
  • The intitle and allintitle operators instruct Google to search for a term within the title of a document.
  • The inurl and allinurl operators instruct Google to search only within the URL (web address) of a document. The search term must follow the colon.
  • The info operator instructs Google to search only within the summary information of a site.
  • The phonebook operator instructs Google to search business or residential phone listings.
  • The stocks operator instructs Google to search for stock market information about a company.
  • The bphonebook operator instructs Google to search business phone listings only.

The following are a set of googling examples (for a complete list, look at [1]):

Test:

site:www.xxxxx.ca AND intitle:"index.of" "backup"

Result:

The site: operator restricts the search to a specific domain, while the intitle: operator makes it possible to find pages that contain "index of backup" in their title (shown as the link title in Google's output).
The AND boolean operator is used to combine several conditions in the same query.

Index of /backup/

 Name                    Last modified       Size  Description

 Parent Directory        21-Jul-2004 17:48      -  

Test:

"Login to Webmin" inurl:10000

Result:

The query produces a list of every Webmin authentication interface collected by Google during its crawling.

Test:

site:www.xxxx.org AND filetype:wsdl wsdl

Result:

The filetype operator is used to find specific kinds of files on the web site.

How can you prevent Google hacking?

Make sure you are comfortable with sharing everything in your public web folder with the whole world, because Google will share it, whether you like it or not. Also, in order to prevent attackers from easily figuring out what server software you are running, change the default error messages and other identifiers. Often, when a "404 Not Found" error is detected, servers return a page that says something like:

Not Found 
The requested URL /cgi-bin/xxxxxx was not found on this server.
Apache/1.3.27 Server at your web site Port 80

The only information a legitimate user really needs is a message that says "Page Not Found". Restricting the other information will prevent your page from turning up in an attacker's search for a specific flavor of server.

Google periodically purges its cache, but until then your sensitive files are still being offered to the public. If you realize that the search engine has cached files that you do not want to be publicly viewable, you can go to http://www.google.com/remove.html and follow the instructions on how to remove your page, or parts of your page, from Google's database.
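
As a sketch of how the server identifiers mentioned above can be trimmed on Apache (directive availability varies by version, and the error page path is only an example):

# httpd.conf excerpt
ServerTokens Prod                         # report only "Apache" in the Server header, without version or module details
ServerSignature Off                       # omit the server/version footer from generated error pages
ErrorDocument 404 /errors/notfound.html   # serve a generic "Page Not Found" page instead of the default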

Using a search engine to discover virtual hosts

Live.com, another well-known search engine (see the link at the bottom of the page), provides the "ip" operator, which returns all the pages that are known to belong to a certain IP address. This is a very useful technique to find out which virtual hosts are configured on the tested server. For instance, the following query will return all indexed pages hosted at the IP address used by owasp.org:

ip:216.48.3.18

References

Whitepapers

Tools

  • Google - http://www.google.com
  • Live Search - http://www.live.com
  • wget - http://www.gnu.org/software/wget/
  • Foundstone SiteDigger - http://www.foundstone.com/index.htm?subnav=resources/navigation.htm&subcontent=/resources/proddesc/sitedigger.htm
  • Burp Spider - http://portswigger.net/spider/
  • Wikto - http://www.sensepost.com/research/wikto/
  • Googlegath - http://www.nothink.org/perl/googlegath/
  • Advanced Dork (Firefox Add-on) - https://addons.mozilla.org/firefox/2144/