<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cmlh</id>
		<title>OWASP - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Cmlh"/>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php/Special:Contributions/Cmlh"/>
		<updated>2026-05-17T04:52:15Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.2</generator>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=179788</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=179788"/>
				<updated>2014-08-01T22:54:21Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Replaced forth reference&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for information leakage of the web application's directory or folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders, Robots, or Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information leakage of the web application's directory or folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders, Robots, or Crawlers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== How to Test ==&lt;br /&gt;
Web Spiders, Robots, or Crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''robots.txt in webroot'''&amp;lt;br&amp;gt;&lt;br /&gt;
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to the spider from Google while &amp;quot;User-Agent: bingbot&amp;quot;[http://www.bing.com/blogs/site_blogs/b/webmaster/archive/2010/06/28/bing-crawler-bingbot-on-the-horizon.aspx] refers to the crawler from Microsoft/Yahoo!.  ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
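&lt;br /&gt;
&lt;br /&gt;
To compile the list of directories that are to be avoided by Spiders, Robots, or Crawlers (the second test objective above), the ''Disallow'' entries can be extracted with standard command line tools. A minimal sketch against the sampled file above, assuming &amp;quot;curl&amp;quot;, &amp;quot;grep&amp;quot; and &amp;quot;awk&amp;quot; are available:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -s http://www.google.com/robots.txt | grep '^Disallow:' | awk '{print $2}'&lt;br /&gt;
/search&lt;br /&gt;
/sdch&lt;br /&gt;
/groups&lt;br /&gt;
/images&lt;br /&gt;
/catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;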
&lt;br /&gt;
&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3], such as those from Social Networks[https://www.htbridge.com/news/social_networks_can_robots_violate_user_privacy.html], to ensure that shared links are still valid.  Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tag'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;META&amp;gt; tags are located within the HEAD section of each HTML document and should be consistent across a web site in the likely event that the robot/spider/crawler start point is a document other than webroot, i.e. a &amp;quot;deep link&amp;quot;[http://en.wikipedia.org/wiki/Deep_linking].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If there is no &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot; ... &amp;gt;&amp;quot; entry then the &amp;quot;Robots Exclusion Protocol&amp;quot; defaults to &amp;quot;INDEX,FOLLOW&amp;quot;.  The other two valid entries defined by the &amp;quot;Robots Exclusion Protocol&amp;quot; are the negated forms prefixed with &amp;quot;NO...&amp;quot;, i.e. &amp;quot;NOINDEX&amp;quot; and &amp;quot;NOFOLLOW&amp;quot;.&lt;br /&gt;
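&lt;br /&gt;
&lt;br /&gt;
For example, a page that is not to be indexed and whose hyperlinks are not to be followed would declare both &amp;quot;NO...&amp;quot; entries within its HEAD section, as sketched below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot; CONTENT=&amp;quot;NOINDEX,NOFOLLOW&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;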
&lt;br /&gt;
&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; tag as the robots.txt file convention is preferred.  Hence, &amp;lt;b&amp;gt;&amp;lt;META&amp;gt; Tags should not be considered the primary mechanism, rather a complementary control to robots.txt&amp;lt;/b&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Black Box Testing ===&lt;br /&gt;
'''robots.txt in webroot - with &amp;quot;wget&amp;quot; or &amp;quot;curl&amp;quot;'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server. For example, to retrieve the robots.txt from www.google.com using &amp;quot;wget&amp;quot; or &amp;quot;curl&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ wget http://www.google.com/robots.txt&lt;br /&gt;
--2013-08-11 14:40:36--  http://www.google.com/robots.txt&lt;br /&gt;
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...&lt;br /&gt;
Connecting to www.google.com|74.125.237.17|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
Saving to: ‘robots.txt.1’&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                   ] 7,074       --.-K/s   in 0s      &lt;br /&gt;
&lt;br /&gt;
2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -O http://www.google.com/robots.txt&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''robots.txt in webroot - with rockspider'''&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases] automates the creation of the initial scope of files and directories/folders of a web site for Spiders/Robots/Crawlers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, to create the initial scope based on the Allowed: directive from www.google.com using &amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases]:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ ./rockspider.pl -www www.google.com&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Rockspider&amp;quot; Alpha v0.1_2&lt;br /&gt;
&lt;br /&gt;
Copyright 2013 Christian Heinrich&lt;br /&gt;
Licensed under the Apache License, Version 2.0&lt;br /&gt;
&lt;br /&gt;
1. Downloading http://www.google.com/robots.txt&lt;br /&gt;
2. &amp;quot;robots.txt&amp;quot; saved as &amp;quot;www.google.com-robots.txt&amp;quot;&lt;br /&gt;
3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080&lt;br /&gt;
	 /catalogs/about sent&lt;br /&gt;
	 /catalogs/p? sent&lt;br /&gt;
	 /news/directory sent&lt;br /&gt;
	...&lt;br /&gt;
4. Done.&lt;br /&gt;
&lt;br /&gt;
cmlh$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tags - with Burp'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Based on the Disallow directive(s) listed within the robots.txt file in webroot, a regular expression search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; within each web page is undertaken and the result compared to the robots.txt file in webroot.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, the robots.txt file from facebook.com has a &amp;quot;Disallow: /ac.php&amp;quot; entry[http://facebook.com/robots.txt] and the resulting search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; shown below:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CMLH-Meta Tag Example-Facebook-Aug 2013.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above might be considered a fail since &amp;quot;INDEX,FOLLOW&amp;quot; is the default &amp;lt;META&amp;gt; Tag specified by the &amp;quot;Robots Exclusion Protocol&amp;quot; yet &amp;quot;Disallow: /ac.php&amp;quot; is listed in robots.txt.&lt;br /&gt;
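&lt;br /&gt;
&lt;br /&gt;
The same check can be approximated for a single page from the command line. A minimal sketch, assuming &amp;quot;curl&amp;quot; and a case-insensitive &amp;quot;grep&amp;quot; (this inspects one disallowed path only, rather than every page proxied through Burp):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -s http://facebook.com/ac.php | grep -i 'meta name=&amp;quot;robots&amp;quot;'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;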
&lt;br /&gt;
&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Web site owners can use the Google &amp;quot;Analyze robots.txt&amp;quot; function, part of &amp;quot;Google Webmaster Tools&amp;quot; (https://www.google.com/webmasters/tools), to analyse the website. This tool can assist with testing, and the procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with a Google account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the dashboard, enter the URL for the site to be analyzed.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Choose between the available methods and follow the on-screen instructions.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Gray Box Testing === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
* rockspider[https://github.com/cmlh/rockspider/releases]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - https://support.google.com/webmasters/answer/156449&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Telstra customer database exposed&amp;quot; - http://www.smh.com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=175193</id>
		<title>Talk:Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=175193"/>
				<updated>2014-05-17T10:04:35Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Add TODO for v5&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
It could be added that, from an attacker point of view, the robots.txt file can provide some useful information on the structure of the web server, e.g., directories that are supposed to be &amp;quot;private&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[User:Marco|Marco]] 18:11, 17 August 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
The intent of robots.txt is *not* to specify access control for directories. Hence to quote the wiki page &amp;quot;''Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.''&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
If you believe this is not communicated clearly or could be reworded then please amend the wiki page.&lt;br /&gt;
&lt;br /&gt;
[[User:cmlh|cmlh]] 12:34, 24 August 2008 (GMT +10)&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
&lt;br /&gt;
I don't see anything here about actually testing robots.txt or using Spiders/Robots/Crawlers to do anything to the web app. It's nice that we can DL the file and that it contains some interesting information and that there's a google tool that can do some analysis of it (though we haven't explained what google webmaster tools gives you or provided an example of the output), but where would that lead a tester or attacker?&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Rick.mitchell|Rick.mitchell]] 09:39, 3 September 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
== Reply from @cmlh ==&lt;br /&gt;
&lt;br /&gt;
Rick may have overlooked the quote &amp;quot;Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties. &amp;quot; from the &amp;quot;How to Test&amp;quot; section of v3. &lt;br /&gt;
&lt;br /&gt;
The lack of the &amp;quot;Google Webmaster Tools&amp;quot; example is due to me not being the webmaster of owasp.org.  This can be resolved in v4 once the webmaster is known.&lt;br /&gt;
&lt;br /&gt;
: For the first part, either that sentence wasn't part of the content I reviewed during v3 draft or it didn't seem significant enough given the lead-in. &lt;br /&gt;
&lt;br /&gt;
: As for the webmaster tools stuff that sounds good, however like INFO-001 seems awfully google-centric.&lt;br /&gt;
&lt;br /&gt;
:: CH - Bing/Yahoo! have been included in the roadmap for v4.&lt;br /&gt;
&lt;br /&gt;
::: Perfect [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 19:20, 15 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
: This content also only covers robots.txt though the heading suggests much broader coverage. So either the content should be expanded or the heading made more specific (IMHO). [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 15:07, 15 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
:: CH - I included http://www.robotstxt.org/meta.html in the OWASP Testing Guide v3 (since I also presented on this content in 2009 and 2010).  I am not sure if it was removed by subsequent edits by others (I haven't checked this) but I will include it for v4 again. &lt;br /&gt;
&lt;br /&gt;
::: Sounds good. Though I'd still argue that this covers an alternative to robots.txt not any actually different or other mechanisms as implied by &amp;quot;web server metafiles&amp;quot; in the heading; which to me reads like there are other configuration, instruction or interaction governors to be discussed in this section. Just my 2 cents [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 19:20, 15 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
== TODO for v4 ==&lt;br /&gt;
&lt;br /&gt;
1. Insert the &amp;quot;Analyze robots.txt using Google Webmaster Tools&amp;quot; i.e. https://support.google.com/webmasters/answer/156449?hl=en&amp;amp;from=35237&amp;amp;rd=1 with owasp.org (not applicable, since webroot doesn't contain robots.txt) as the example.&lt;br /&gt;
&lt;br /&gt;
2. May need to update the reference to OWASP-IG-009 within the &amp;quot;Summary&amp;quot; section depending on the finalisation of the spidering thread (To be created).&lt;br /&gt;
&lt;br /&gt;
3. Add Microsoft/Yahoo! related content. - DONE&lt;br /&gt;
&lt;br /&gt;
== TODO for v5 ==&lt;br /&gt;
&lt;br /&gt;
http://blog.erratasec.com/2014/05/no-mcafee-didnt-violate-ethics-scraping.html&lt;br /&gt;
http://blog.osvdb.org/2014/05/07/the-scraping-problem-and-ethics/&lt;br /&gt;
https://github.com/behindthefirewalls/Parsero&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157706</id>
		<title>Talk:Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157706"/>
				<updated>2013-09-03T07:01:38Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Refactor 3 as done/completed&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
It could be added that, from an attacker point of view, the robots.txt file can provide some useful information on the structure of the web server, e.g., directories that are supposed to be &amp;quot;private&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[User:Marco|Marco]] 18:11, 17 August 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
The intent of robots.txt is *not* to specify access control for directories. Hence to quote the wiki page &amp;quot;''Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.''&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
If you believe this is not communicated clearly or could be reworded then please amend the wiki page.&lt;br /&gt;
&lt;br /&gt;
[[User:cmlh|cmlh]] 12:34, 24 August 2008 (GMT +10)&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
&lt;br /&gt;
I don't see anything here about actually testing robots.txt or using Spiders/Robots/Crawlers to do anything to the web app. It's nice that we can DL the file and that it contains some interesting information and that there's a google tool that can do some analysis of it (though we haven't explained what google webmaster tools gives you or provided an example of the output), but where would that lead a tester or attacker?&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Rick.mitchell|Rick.mitchell]] 09:39, 3 September 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
== Reply from @cmlh ==&lt;br /&gt;
&lt;br /&gt;
Rick may have overlooked the quote &amp;quot;Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties. &amp;quot; from the &amp;quot;How to Test&amp;quot; section of v3. &lt;br /&gt;
&lt;br /&gt;
The lack of the &amp;quot;Google Webmaster Tools&amp;quot; example is due to me not being the webmaster of owasp.org.  This can be resolved in v4 once the webmaster is known.&lt;br /&gt;
&lt;br /&gt;
: For the first part, either that sentence wasn't part of the content I reviewed during v3 draft or it didn't seem significant enough given the lead-in. &lt;br /&gt;
&lt;br /&gt;
: As for the webmaster tools stuff that sounds good, however like INFO-001 seems awfully google-centric.&lt;br /&gt;
&lt;br /&gt;
:: CH - Bing/Yahoo! have been included in the roadmap for v4.&lt;br /&gt;
&lt;br /&gt;
::: Perfect [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 19:20, 15 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
: This content also only covers robots.txt though the heading suggests much broader coverage. So either the content should be expanded or the heading made more specific (IMHO). [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 15:07, 15 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
:: CH - I included http://www.robotstxt.org/meta.html in the OWASP Testing Guide v3 (since I also presented on this content in 2009 and 2010).  I am not sure if it was removed by subsequent edits by others (I haven't checked this) but I will include it for v4 again. &lt;br /&gt;
&lt;br /&gt;
::: Sounds good. Though I'd still argue that this covers an alternative to robots.txt not any actually different or other mechanisms as implied by &amp;quot;web server metafiles&amp;quot; in the heading; which to me reads like there are other configuration, instruction or interaction governors to be discussed in this section. Just my 2 cents [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 19:20, 15 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
== TODO for v4 ==&lt;br /&gt;
&lt;br /&gt;
1. Insert the &amp;quot;Analyze robots.txt using Google Webmaster Tools&amp;quot; i.e. https://support.google.com/webmasters/answer/156449?hl=en&amp;amp;from=35237&amp;amp;rd=1 with owasp.org (not applicable, since webroot doesn't contain robots.txt) as the example.&lt;br /&gt;
&lt;br /&gt;
2. May need to update the reference to OWASP-IG-009 within the &amp;quot;Summary&amp;quot; section depending on the finalisation of the spidering thread (To be created).&lt;br /&gt;
&lt;br /&gt;
3. Add Microsoft/Yahoo! related content. - DONE&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157705</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157705"/>
				<updated>2013-09-03T06:59:35Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Add BingBot&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information Leakage of the web application's directory/folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers&lt;br /&gt;
&lt;br /&gt;
== How to Test ==&lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''robots.txt in webroot'''&amp;lt;br&amp;gt;&lt;br /&gt;
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to the spider from Google while &amp;quot;User-Agent: bingbot&amp;quot;[http://www.bing.com/blogs/site_blogs/b/webmaster/archive/2010/06/28/bing-crawler-bingbot-on-the-horizon.aspx] refers to the crawler from Microsoft/Yahoo!.  ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3], such as those from Social Networks[https://www.htbridge.com/news/social_networks_can_robots_violate_user_privacy.html], to ensure that shared links are still valid.  Hence, &amp;lt;b&amp;gt;robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties&amp;lt;/b&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tag'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;META&amp;gt; tags are located within the HEAD section of each HTML document and should be consistent across a web site in the likely event that the robot/spider/crawler start point is a document other than webroot, i.e. a &amp;quot;deep link&amp;quot;[http://en.wikipedia.org/wiki/Deep_linking].&lt;br /&gt;
&lt;br /&gt;
If there is no &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot; ... &amp;gt;&amp;quot; entry then the &amp;quot;Robots Exclusion Protocol&amp;quot; defaults to &amp;quot;INDEX,FOLLOW&amp;quot;.  The other two valid entries defined by the &amp;quot;Robots Exclusion Protocol&amp;quot; are the negated forms prefixed with &amp;quot;NO...&amp;quot;, i.e. &amp;quot;NOINDEX&amp;quot; and &amp;quot;NOFOLLOW&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; tag as the robots.txt file convention is preferred.  Hence, &amp;lt;b&amp;gt;&amp;lt;META&amp;gt; Tags should not be considered the primary mechanism, rather a complementary control to robots.txt&amp;lt;/b&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''robots.txt in webroot - with &amp;quot;wget&amp;quot; or &amp;quot;curl&amp;quot;'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using &amp;quot;wget&amp;quot; or &amp;quot;curl&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ wget http://www.google.com/robots.txt&lt;br /&gt;
--2013-08-11 14:40:36--  http://www.google.com/robots.txt&lt;br /&gt;
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...&lt;br /&gt;
Connecting to www.google.com|74.125.237.17|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
Saving to: ‘robots.txt.1’&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                   ] 7,074       --.-K/s   in 0s      &lt;br /&gt;
&lt;br /&gt;
2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -O http://www.google.com/robots.txt&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''robots.txt in webroot - with rockspider'''&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases] automates the creation of the initial scope of files and directories/folders of a web site for Spiders/Robots/Crawlers.&lt;br /&gt;
&lt;br /&gt;
For example, to create the initial scope based on the Allowed: directive from www.google.com using &amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases]:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ ./rockspider.pl -www www.google.com&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Rockspider&amp;quot; Alpha v0.1_2&lt;br /&gt;
&lt;br /&gt;
Copyright 2013 Christian Heinrich&lt;br /&gt;
Licensed under the Apache License, Version 2.0&lt;br /&gt;
&lt;br /&gt;
1. Downloading http://www.google.com/robots.txt&lt;br /&gt;
2. &amp;quot;robots.txt&amp;quot; saved as &amp;quot;www.google.com-robots.txt&amp;quot;&lt;br /&gt;
3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080&lt;br /&gt;
	 /catalogs/about sent&lt;br /&gt;
	 /catalogs/p? sent&lt;br /&gt;
	 /news/directory sent&lt;br /&gt;
	...&lt;br /&gt;
4. Done.&lt;br /&gt;
&lt;br /&gt;
cmlh$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tags - with Burp'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Based on the Disallow directive(s) listed within the robots.txt file in webroot, a regular expression search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; within each web page is undertaken and the result compared to the robots.txt file in webroot.&lt;br /&gt;
&lt;br /&gt;
For example, the robots.txt file from facebook.com has a &amp;quot;Disallow: /ac.php&amp;quot; entry[http://facebook.com/robots.txt] and the resulting search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; shown below:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CMLH-Meta Tag Example-Facebook-Aug 2013.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The above might be considered a fail since &amp;quot;INDEX,FOLLOW&amp;quot; is the default &amp;lt;META&amp;gt; Tag specified by the &amp;quot;Robots Exclusion Protocol&amp;quot; yet &amp;quot;Disallow: /ac.php&amp;quot; is listed in robots.txt.&lt;br /&gt;
&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4]. The procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
* rockspider[https://github.com/cmlh/rockspider/releases]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
* [5] &amp;quot;Telstra customer database exposed&amp;quot; - http://www.smh.com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157704</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157704"/>
				<updated>2013-09-03T06:53:49Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Add Social Network Reference from Bridge&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information Leakage of the web application's directory/folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers&lt;br /&gt;
&lt;br /&gt;
== How to Test ==&lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''robots.txt in webroot'''&amp;lt;br&amp;gt;&lt;br /&gt;
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to the ''Googlebot'' crawler while ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3], such as those from Social Networks[https://www.htbridge.com/news/social_networks_can_robots_violate_user_privacy.html], to ensure that shared links are still valid.  Hence, &amp;lt;b&amp;gt;robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties&amp;lt;/b&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tag'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;META&amp;gt; tags are located within the HEAD section of each HTML document and should be consistent across a web site in the likely event that the robot/spider/crawler start point is a document other than webroot, i.e. a &amp;quot;deep link&amp;quot;[http://en.wikipedia.org/wiki/Deep_linking].&lt;br /&gt;
&lt;br /&gt;
If there is no &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot; ... &amp;gt;&amp;quot; entry then the &amp;quot;Robots Exclusion Protocol&amp;quot; defaults to &amp;quot;INDEX,FOLLOW&amp;quot;.  The other two valid entries defined by the &amp;quot;Robots Exclusion Protocol&amp;quot; are the negated forms prefixed with &amp;quot;NO...&amp;quot;, i.e. &amp;quot;NOINDEX&amp;quot; and &amp;quot;NOFOLLOW&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; tag as the robots.txt file convention is preferred.  Hence, &amp;lt;b&amp;gt;&amp;lt;META&amp;gt; Tags should not be considered the primary mechanism, rather a complementary control to robots.txt&amp;lt;/b&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''robots.txt in webroot - with &amp;quot;wget&amp;quot; or &amp;quot;curl&amp;quot;'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using &amp;quot;wget&amp;quot; or &amp;quot;curl&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ wget http://www.google.com/robots.txt&lt;br /&gt;
--2013-08-11 14:40:36--  http://www.google.com/robots.txt&lt;br /&gt;
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...&lt;br /&gt;
Connecting to www.google.com|74.125.237.17|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
Saving to: ‘robots.txt.1’&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                   ] 7,074       --.-K/s   in 0s      &lt;br /&gt;
&lt;br /&gt;
2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -O http://www.google.com/robots.txt&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''robots.txt in webroot - with rockspider'''&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases] automates the creation of the initial scope of files and directories/folders of a web site for Spiders/Robots/Crawlers.&lt;br /&gt;
&lt;br /&gt;
For example, to create the initial scope based on the Allowed: directive from www.google.com using &amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases]:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ ./rockspider.pl -www www.google.com&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Rockspider&amp;quot; Alpha v0.1_2&lt;br /&gt;
&lt;br /&gt;
Copyright 2013 Christian Heinrich&lt;br /&gt;
Licensed under the Apache License, Version 2.0&lt;br /&gt;
&lt;br /&gt;
1. Downloading http://www.google.com/robots.txt&lt;br /&gt;
2. &amp;quot;robots.txt&amp;quot; saved as &amp;quot;www.google.com-robots.txt&amp;quot;&lt;br /&gt;
3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080&lt;br /&gt;
	 /catalogs/about sent&lt;br /&gt;
	 /catalogs/p? sent&lt;br /&gt;
	 /news/directory sent&lt;br /&gt;
	...&lt;br /&gt;
4. Done.&lt;br /&gt;
&lt;br /&gt;
cmlh$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tags - with Burp'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Based on the Disallow directive(s) listed within the robots.txt file in webroot, a regular expression search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; within each web page is undertaken and the result compared to the robots.txt file in webroot.&lt;br /&gt;
&lt;br /&gt;
For example, the robots.txt file from facebook.com has a &amp;quot;Disallow: /ac.php&amp;quot; entry[http://facebook.com/robots.txt] and the resulting search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; shown below:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CMLH-Meta Tag Example-Facebook-Aug 2013.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The above might be considered a fail since &amp;quot;INDEX,FOLLOW&amp;quot; is the default &amp;lt;META&amp;gt; Tag specified by the &amp;quot;Robots Exclusion Protocol&amp;quot; yet &amp;quot;Disallow: /ac.php&amp;quot; is listed in robots.txt.&lt;br /&gt;
&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4]. The procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
* rockspider[https://github.com/cmlh/rockspider/releases]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
* [5] &amp;quot;Telstra customer database exposed&amp;quot; - http://www.smh.com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157293</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157293"/>
				<updated>2013-08-25T10:03:36Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Refactor rockspider item&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information Leakage of the web application's directory/folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers&lt;br /&gt;
&lt;br /&gt;
== How to Test ==&lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''robots.txt in webroot'''&amp;lt;br&amp;gt;&lt;br /&gt;
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to the ''Googlebot'' crawler while ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3].  Hence, &amp;lt;b&amp;gt;robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties&amp;lt;/b&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tag'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;META&amp;gt; tags are located within the HEAD section of each HTML document and should be consistent across a web site in the likely event that the robot/spider/crawler start point is a document other than webroot, i.e. a &amp;quot;deep link&amp;quot;[http://en.wikipedia.org/wiki/Deep_linking].&lt;br /&gt;
&lt;br /&gt;
If there is no &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot; ... &amp;gt;&amp;quot; entry then the &amp;quot;Robots Exclusion Protocol&amp;quot; defaults to &amp;quot;INDEX,FOLLOW&amp;quot;.  The other two valid entries defined by the &amp;quot;Robots Exclusion Protocol&amp;quot; are the negated forms prefixed with &amp;quot;NO...&amp;quot;, i.e. &amp;quot;NOINDEX&amp;quot; and &amp;quot;NOFOLLOW&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; tag as the robots.txt file convention is preferred.  Hence, &amp;lt;b&amp;gt;&amp;lt;META&amp;gt; Tags should not be considered the primary mechanism, rather a complementary control to robots.txt&amp;lt;/b&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''robots.txt in webroot - with &amp;quot;wget&amp;quot; or &amp;quot;curl&amp;quot;'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using &amp;quot;wget&amp;quot; or &amp;quot;curl&amp;quot;:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ wget http://www.google.com/robots.txt&lt;br /&gt;
--2013-08-11 14:40:36--  http://www.google.com/robots.txt&lt;br /&gt;
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...&lt;br /&gt;
Connecting to www.google.com|74.125.237.17|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
Saving to: ‘robots.txt.1’&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                   ] 7,074       --.-K/s   in 0s      &lt;br /&gt;
&lt;br /&gt;
2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -O http://www.google.com/robots.txt&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''robots.txt in webroot - with rockspider'''&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases] automates the creation of the initial scope of files and directories/folders of a web site for Spiders/Robots/Crawlers.&lt;br /&gt;
&lt;br /&gt;
For example, to create the initial scope based on the Allowed: directive from www.google.com using &amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases]:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ ./rockspider.pl -www www.google.com&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Rockspider&amp;quot; Alpha v0.1_2&lt;br /&gt;
&lt;br /&gt;
Copyright 2013 Christian Heinrich&lt;br /&gt;
Licensed under the Apache License, Version 2.0&lt;br /&gt;
&lt;br /&gt;
1. Downloading http://www.google.com/robots.txt&lt;br /&gt;
2. &amp;quot;robots.txt&amp;quot; saved as &amp;quot;www.google.com-robots.txt&amp;quot;&lt;br /&gt;
3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080&lt;br /&gt;
	 /catalogs/about sent&lt;br /&gt;
	 /catalogs/p? sent&lt;br /&gt;
	 /news/directory sent&lt;br /&gt;
	...&lt;br /&gt;
4. Done.&lt;br /&gt;
&lt;br /&gt;
cmlh$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tags - with Burp'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Based on the Disallow directive(s) listed within the robots.txt file in webroot, a regular expression search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; within each web page is undertaken and the result compared to the robots.txt file in webroot.&lt;br /&gt;
&lt;br /&gt;
For example, the robots.txt file from facebook.com has a &amp;quot;Disallow: /ac.php&amp;quot; entry[http://facebook.com/robots.txt] and the resulting search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; shown below:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CMLH-Meta Tag Example-Facebook-Aug 2013.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The above might be considered a fail since &amp;quot;INDEX,FOLLOW&amp;quot; is the default &amp;lt;META&amp;gt; Tag specified by the &amp;quot;Robots Exclusion Protocol&amp;quot; yet &amp;quot;Disallow: /ac.php&amp;quot; is listed in robots.txt.&lt;br /&gt;
&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4]. The procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
* rockspider[https://github.com/cmlh/rockspider/releases]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
* [5] &amp;quot;Telstra customer database exposed&amp;quot; - http://www.smh.com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157292</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=157292"/>
				<updated>2013-08-25T10:02:17Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Add &amp;lt;META&amp;gt; Tag Section - some markup incorrect&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information Leakage of the web application's directory/folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers&lt;br /&gt;
&lt;br /&gt;
== How to Test ==&lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''robots.txt in webroot'''&amp;lt;br&amp;gt;&lt;br /&gt;
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to the ''Googlebot'' crawler while ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3].  Hence, &amp;lt;b&amp;gt;robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties&amp;lt;/b&amp;gt;. &lt;br /&gt;
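&lt;br /&gt;
As a minimal illustration of this (a sketch only; the exact status code returned will vary by site and transport), a client that ignores robots.txt can simply request a &amp;quot;disallowed&amp;quot; path directly and will typically still be served the resource:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -s -o /dev/null -w &amp;quot;%{http_code}\n&amp;quot; http://www.google.com/search?q=OWASP&lt;br /&gt;
200&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;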
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tag'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;META&amp;gt; tags are located within the HEAD section of each HTML document and should be consistent across a web site in the likely event that the robot/spider/crawler start point is a document link other than webroot, i.e. a &amp;quot;deep link&amp;quot;[http://en.wikipedia.org/wiki/Deep_linking].&lt;br /&gt;
&lt;br /&gt;
If there is no &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot; ... &amp;gt;&amp;quot; entry, then the &amp;quot;Robots Exclusion Protocol&amp;quot; defaults to &amp;quot;INDEX,FOLLOW&amp;quot;.  The other two valid entries defined by the &amp;quot;Robots Exclusion Protocol&amp;quot; are prefixed with &amp;quot;NO...&amp;quot; i.e. &amp;quot;NOINDEX&amp;quot; and &amp;quot;NOFOLLOW&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
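For example, a page opting out of both indexing and link traversal would include the following within its HEAD section:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot; CONTENT=&amp;quot;NOINDEX,NOFOLLOW&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;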
Web spiders/robots/crawlers can intentionally ignore the &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; tag as the robots.txt file convention is preferred.  Hence, &amp;lt;b&amp;gt;&amp;lt;META&amp;gt; Tags should not be considered the primary mechanism, but rather a complementary control to robots.txt&amp;lt;/b&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''robots.txt in webroot - with &amp;quot;wget&amp;quot; or &amp;quot;curl&amp;quot;'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using ''wget'' or ''curl'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ wget http://www.google.com/robots.txt&lt;br /&gt;
--2013-08-11 14:40:36--  http://www.google.com/robots.txt&lt;br /&gt;
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...&lt;br /&gt;
Connecting to www.google.com|74.125.237.17|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
Saving to: ‘robots.txt.1’&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                   ] 7,074       --.-K/s   in 0s      &lt;br /&gt;
&lt;br /&gt;
2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -O http://www.google.com/robots.txt&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
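To derive the list of directories to be avoided by Spiders/Robots/Crawlers (Test Objective 2), the ''Disallow'' entries can then be extracted from the retrieved file with standard Unix tools, e.g. (a sketch based on the sample above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ grep &amp;quot;^Disallow:&amp;quot; robots.txt | awk '{print $2}' | head -n5&lt;br /&gt;
/search&lt;br /&gt;
/sdch&lt;br /&gt;
/groups&lt;br /&gt;
/images&lt;br /&gt;
/catalogs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;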
'''robots.txt in webroot - with rockspider'''&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases] automates the creation of the initial Spider/Robot/Crawler scope of the files and directories/folders of a web site.&lt;br /&gt;
&lt;br /&gt;
For example, to create the initial scope based on the Allow: directive from www.google.com using &amp;quot;rockspider&amp;quot;[https://github.com/cmlh/rockspider/releases]:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ ./rockspider.pl -www www.google.com&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Rockspider&amp;quot; Alpha v0.1_2&lt;br /&gt;
&lt;br /&gt;
Copyright 2013 Christian Heinrich&lt;br /&gt;
Licensed under the Apache License, Version 2.0&lt;br /&gt;
&lt;br /&gt;
1. Downloading http://www.google.com/robots.txt&lt;br /&gt;
2. &amp;quot;robots.txt&amp;quot; saved as &amp;quot;www.google.com-robots.txt&amp;quot;&lt;br /&gt;
3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080&lt;br /&gt;
	 /catalogs/about sent&lt;br /&gt;
	 /catalogs/p? sent&lt;br /&gt;
	 /news/directory sent&lt;br /&gt;
	...&lt;br /&gt;
4. Done.&lt;br /&gt;
&lt;br /&gt;
cmlh$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;META&amp;gt; Tags - with Burp'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Based on the Disallow directive(s) listed within the robots.txt file in webroot, a regular expression search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; within each web page is undertaken, and the result is compared to the robots.txt file in webroot.&lt;br /&gt;
&lt;br /&gt;
For example, the robots.txt file from facebook.com has a &amp;quot;Disallow: /ac.php&amp;quot; entry[http://facebook.com/robots.txt], and the resulting search for &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; is shown below:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[File:CMLH-Meta Tag Example-Facebook-Aug 2013.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The above might be considered a fail since &amp;quot;INDEX,FOLLOW&amp;quot; is the default &amp;lt;META&amp;gt; Tag specified by the &amp;quot;Robots Exclusion Protocol&amp;quot; yet &amp;quot;Disallow: /ac.php&amp;quot; is listed in robots.txt.&lt;br /&gt;
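&lt;br /&gt;
The same check can be approximated outside of Burp (a sketch only; the markup returned by facebook.com will vary) by fetching the disallowed resource and searching it directly. An empty result would indicate that no &amp;quot;&amp;lt;META NAME=&amp;quot;ROBOTS&amp;quot;&amp;quot; tag is present:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -s http://facebook.com/ac.php | grep -i '&amp;lt;meta name=&amp;quot;robots&amp;quot;'&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;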
&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4].  The procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
* Speculum[https://github.com/cmlh/Speculum/releases]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
* [5] &amp;quot;Telstra customer database exposed&amp;quot; - http://www.smh.com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=File:CMLH-Meta_Tag_Example-Facebook-Aug_2013.png&amp;diff=157291</id>
		<title>File:CMLH-Meta Tag Example-Facebook-Aug 2013.png</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=File:CMLH-Meta_Tag_Example-Facebook-Aug_2013.png&amp;diff=157291"/>
				<updated>2013-08-25T09:16:22Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Image for &amp;lt;META&amp;gt; Tag example of https://www.owasp.org/index.php?title=Testing:_Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Image for &amp;lt;META&amp;gt; Tag example of https://www.owasp.org/index.php?title=Testing:_Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=157029</id>
		<title>Talk:Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=157029"/>
				<updated>2013-08-20T02:33:37Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Reply from @cmlh i.e. ::::: CMLH&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
This section does not cover the items stated in the &amp;quot;brief summary&amp;quot;.&lt;br /&gt;
For v3, if the section is to remain completely google'centric I suggest we rename &amp;quot;Search engine discovery&amp;quot; to &amp;quot;Google searching your web application and accessing google's cache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Reply to &amp;quot;v3 Review Comments&amp;quot; from @cmlh ==&lt;br /&gt;
The roadmap was to add Yahoo! and Bing to the next release of the OWASP Testing Guide (i.e. v3 -&amp;gt; v4) and to not appear to promote Google over Yahoo! and Bing.  It should be noted that Yahoo! and Bing might refer to the same &amp;quot;entity&amp;quot; as further research is undertaken i.e. the &amp;quot;Yahoo! and Microsoft Search Alliance&amp;quot;/&amp;quot;Yahoo! Bing Network&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the intent is *not* to promote the inferior http://www.hackersforcharity.org/ghdb/, rather a more scientific and innovative approach.&lt;br /&gt;
&lt;br /&gt;
: Hi cmlh, thanks for the follow-up. That comment was really old and seems to have been migrated for the v3 &amp;gt; v4 draft. I think the new heading/title is more appropriate than previously, however, the content still seems awfully google'centric.&lt;br /&gt;
&lt;br /&gt;
: Should we also be including some Shodan stuff? (http://www.shodanhq.com/) [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]])&lt;br /&gt;
&lt;br /&gt;
:: Actually, now that I'm looking at this, I'm not sure how the heading has changed since v3 was a draft (when the comment was originally made). However, looking at this now, there are a number of goals, etc. stated in the summary that don't seem to be covered by the content. Also, the summary seems to be written from the perspective of an app/system owner, not a tester.&lt;br /&gt;
&lt;br /&gt;
:: I also wonder if we should be including examples such as xssed.com and their ilk, web.archive.org, etc [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]])&lt;br /&gt;
&lt;br /&gt;
Adding web services, such as xssed.com or web.archive.org, would depend on whether they have an API available to the public (I believe archive.org has an API) and if there is a product available (possibly released under FOSS licenses) to provide an example.&lt;br /&gt;
&lt;br /&gt;
: IMHO that only applies from a purely automated point of view. There is no reason we shouldn't be referencing such services as a manual step (or steps).&lt;br /&gt;
&lt;br /&gt;
:: CMLH - I am aware of archive.org; I am not sure about the xssed example that you are referring to?&lt;br /&gt;
&lt;br /&gt;
:::: xssed.com is a community site that catalogs (by submission) vulnerabilities found on public sites. If I were doing a Web App test for a client I'd look to see if anyone else had reported an issue for their site. Having vulnerabilities in your web app publicly outed seems like a serious information leak to me.&lt;br /&gt;
&lt;br /&gt;
::::: CMLH - I know what xssed is already :) Instead I meant did they offer a Public API (which is not the case as far as I am aware).  That stated, a loud minority of OWASP members may complain that we are promoting an unethical web site but I have no issue with including xssed (perhaps in another related section of the testing guide).&lt;br /&gt;
&lt;br /&gt;
: I'm not sure how the majority of the industry ends up getting involved in Web App VA but from my perspective and experience there is usually limited targets so doing a few manual lookups isn't a major stumbling block.&lt;br /&gt;
&lt;br /&gt;
:: CMLH - I believe some of these other services might be out of scope of OTG-INFO-001.&lt;br /&gt;
&lt;br /&gt;
::: Shodan is a search engine specifically designed to catalog systems and version/configuration information. Finding listings for your target client/app with Shodan can provide further information about your target.&lt;br /&gt;
&lt;br /&gt;
:::: CMLH - I believe some of these other services might still be out of scope of OTG-INFO-001 in light of the clarification above but could be addressed in a related section of v4.&lt;br /&gt;
&lt;br /&gt;
::: Further the summary for 001 talks about &amp;quot; Indirect methods relate to gleaning sensitive design and configuration information by searching forums, newsgroups and tendering websites.&amp;quot; none of which is actually covered. There is a huge open source intelligence gathering activity which should be covered based on that statement. (Finding related employees on linkedin, searching google groups for SW and HW questions, etc) [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 07:23, 19 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
:::: CMLH - That section was added *after* I had contributed to v3 i.e. http://lists.owasp.org/pipermail/owasp-testing/2013-August/002160.html&lt;br /&gt;
&lt;br /&gt;
:::: Oh, and if it matters somehow, Shodan does have an API. Though I stick by the statement that not everything has to be automated just because it's automatable :) Here's one small intro for it: http://raidersec.blogspot.ca/2012/02/searching-for-devices-using-shodan.html [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 07:27, 19 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
::::: CMLH - Yes, I am aware of their API i.e. http://cmlh.id.au/tagged/shodan&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=156866</id>
		<title>Talk:Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=156866"/>
				<updated>2013-08-16T00:38:16Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Response to Rick&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
This section does not cover the items stated in the &amp;quot;brief summary&amp;quot;.&lt;br /&gt;
For v3, if the section is to remain completely google'centric I suggest we rename &amp;quot;Search engine discovery&amp;quot; to &amp;quot;Google searching your web application and accessing google's cache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Reply to &amp;quot;v3 Review Comments&amp;quot; from @cmlh ==&lt;br /&gt;
The roadmap was to add Yahoo! and Bing to the next release of the OWASP Testing Guide (i.e. v3 -&amp;gt; v4) and to not appear to promote Google over Yahoo! and Bing.  It should be noted that Yahoo! and Bing might refer to the same &amp;quot;entity&amp;quot; as further research is undertaken i.e. the &amp;quot;Yahoo! and Microsoft Search Alliance&amp;quot;/&amp;quot;Yahoo! Bing Network&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the intent is *not* to promote the inferior http://www.hackersforcharity.org/ghdb/, rather a more scientific and innovative approach.&lt;br /&gt;
&lt;br /&gt;
: Hi cmlh, thanks for the follow-up. That comment was really old and seems to have been migrated for the v3 &amp;gt; v4 draft. I think the new heading/title is more appropriate than previously, however, the content still seems awfully google'centric.&lt;br /&gt;
&lt;br /&gt;
: Should we also be including some Shodan stuff? (http://www.shodanhq.com/) [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]])&lt;br /&gt;
&lt;br /&gt;
:: Actually, now that I'm looking at this, I'm not sure how the heading has changed since v3 was a draft (when the comment was originally made). However, looking at this now, there are a number of goals, etc. stated in the summary that don't seem to be covered by the content. Also, the summary seems to be written from the perspective of an app/system owner, not a tester.&lt;br /&gt;
&lt;br /&gt;
:: I also wonder if we should be including examples such as xssed.com and their ilk, web.archive.org, etc [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]])&lt;br /&gt;
&lt;br /&gt;
Adding web services, such as xssed.com or web.archive.org, would depend on whether they have an API available to the public (I believe archive.org has an API) and if there is a product available (possibly released under FOSS licenses) to provide an example.&lt;br /&gt;
&lt;br /&gt;
: IMHO that only applies from a purely automated point of view. There is no reason we shouldn't be referencing such services as a manual step (or steps).&lt;br /&gt;
&lt;br /&gt;
::: CMLH - I am aware of archive.org; I am not sure about the xssed example that you are referring to?&lt;br /&gt;
&lt;br /&gt;
: I'm not sure how the majority of the industry ends up getting involved in Web App VA but from my perspective and experience there is usually limited targets so doing a few manual lookups isn't a major stumbling block.&lt;br /&gt;
&lt;br /&gt;
::: CMLH - I believe some of these other services might be out of scope of OTG-INFO-001.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=156863</id>
		<title>Talk:Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=156863"/>
				<updated>2013-08-15T23:17:21Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Draft Reply to Rick Mitchell&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
This section does not cover the items stated in the &amp;quot;brief summary&amp;quot;.&lt;br /&gt;
For v3, if the section is to remain completely google'centric I suggest we rename &amp;quot;Search engine discovery&amp;quot; to &amp;quot;Google searching your web application and accessing google's cache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Reply to &amp;quot;v3 Review Comments&amp;quot; from @cmlh ==&lt;br /&gt;
The roadmap was to add Yahoo! and Bing to the next release of the OWASP Testing Guide (i.e. v3 -&amp;gt; v4) and to not appear to promote Google over Yahoo! and Bing.  It should be noted that Yahoo! and Bing might refer to the same &amp;quot;entity&amp;quot; as further research is undertaken i.e. the &amp;quot;Yahoo! and Microsoft Search Alliance&amp;quot;/&amp;quot;Yahoo! Bing Network&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the intent is *not* to promote the inferior http://www.hackersforcharity.org/ghdb/, rather a more scientific and innovative approach.&lt;br /&gt;
&lt;br /&gt;
: Hi cmlh, thanks for the follow-up. That comment was really old and seems to have been migrated for the v3 &amp;gt; v4 draft. I think the new heading/title is more appropriate than previously, however, the content still seems awfully google'centric.&lt;br /&gt;
&lt;br /&gt;
: Should we also be including some Shodan stuff? (http://www.shodanhq.com/) [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]])&lt;br /&gt;
&lt;br /&gt;
:: Actually, now that I'm looking at this, I'm not sure how the heading has changed since v3 was a draft (when the comment was originally made). However, looking at this now, there are a number of goals, etc. stated in the summary that don't seem to be covered by the content. Also, the summary seems to be written from the perspective of an app/system owner, not a tester.&lt;br /&gt;
&lt;br /&gt;
:: I also wonder if we should be including examples such as xssed.com and their ilk, web.archive.org, etc [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]])&lt;br /&gt;
&lt;br /&gt;
Adding web services, such as xssed.com or web.archive.org, would depend on whether they have an API available to the public (I believe archive.org has an API) and if there is a product available (possibly released under FOSS licenses) to provide an example.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156862</id>
		<title>Talk:Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156862"/>
				<updated>2013-08-15T23:12:57Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Draft response to Rick Mitchell&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
It could be added that, from an attacker point of view, the robots.txt file can provide some useful information on the structure of the web server, e.g., directories that are supposed to be &amp;quot;private&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[User:Marco|Marco]] 18:11, 17 August 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
The intent of robots.txt is *not* to specify access control for directories. Hence to quote the wiki page &amp;quot;''Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.''&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
If you believe this is not communicated clearly or could be reworded then please amend the wiki page.&lt;br /&gt;
&lt;br /&gt;
[[User:cmlh|cmlh]] 12:34, 24 August 2008 (GMT +10)&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
&lt;br /&gt;
I don't see anything here about actually testing robots.txt or using Spiders/Robots/Crawlers to do anything to the web app. It's nice that we can DL the file and that it contains some interesting information and that there's a google tool that can do some analysis of it (though we haven't explained what google webmaster tools gives you or provided an example of the output), but where would that lead a tester or attacker?&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Rick.mitchell|Rick.mitchell]] 09:39, 3 September 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
== Reply from @cmlh ==&lt;br /&gt;
&lt;br /&gt;
Rick may have overlooked the quote &amp;quot;Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.&amp;quot; from the &amp;quot;How to Test&amp;quot; section of v3.&lt;br /&gt;
&lt;br /&gt;
The lack of the &amp;quot;Google Webmaster Tools&amp;quot; example is due to me not being the webmaster of owasp.org.  This can be resolved in v4 once the webmaster is known.&lt;br /&gt;
&lt;br /&gt;
: For the first part, either that sentence wasn't part of the content I reviewed during v3 draft or it didn't seem significant enough given the lead-in. &lt;br /&gt;
&lt;br /&gt;
: As for the webmaster tools stuff that sounds good, however like INFO-001 seems awfully google-centric.&lt;br /&gt;
&lt;br /&gt;
CH - Bing/Yahoo! have been included in the roadmap for v4.&lt;br /&gt;
&lt;br /&gt;
: This content also only covers robots.txt though the heading suggests much broader coverage. So either the content should be expanded or the heading made more specific (IMHO). &lt;br /&gt;
&lt;br /&gt;
CH - I included http://www.robotstxt.org/meta.html in the OWASP Testing Guide v3 (since I also presented on this content in 2009 and 2010).  I am not sure if it was removed by subsequent edits by others (I haven't checked this) but I will include it for v4 again. &lt;br /&gt;
[[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 15:07, 15 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
== TODO for v4 ==&lt;br /&gt;
&lt;br /&gt;
1. Insert the &amp;quot;Analyze robots.txt using Google Webmaster Tools&amp;quot; i.e. https://support.google.com/webmasters/answer/156449?hl=en&amp;amp;from=35237&amp;amp;rd=1 with owasp.org (not applicable, since webroot doesn't contain robots.txt) as the example.&lt;br /&gt;
&lt;br /&gt;
2. May need to update the reference to OWASP-IG-009 within the &amp;quot;Summary&amp;quot; section depending on the finalisation of the spidering thread (To be created).&lt;br /&gt;
&lt;br /&gt;
3. Add Bing/Yahoo! related content.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156861</id>
		<title>Talk:Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156861"/>
				<updated>2013-08-15T21:53:22Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Update status of robots.txt within webroot of owasp.org&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
It could be added that, from an attacker point of view, the robots.txt file can provide some useful information on the structure of the web server, e.g., directories that are supposed to be &amp;quot;private&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[User:Marco|Marco]] 18:11, 17 August 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
The intent of robots.txt is *not* to specify access control for directories. Hence to quote the wiki page &amp;quot;''Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.''&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
If you believe this is not communicated clearly or could be reworded then please amend the wiki page.&lt;br /&gt;
&lt;br /&gt;
[[User:cmlh|cmlh]] 12:34, 24 August 2008 (GMT +10)&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
&lt;br /&gt;
I don't see anything here about actually testing robots.txt or using Spiders/Robots/Crawlers to do anything to the web app. It's nice that we can DL the file and that it contains some interesting information and that there's a google tool that can do some analysis of it (though we haven't explained what google webmaster tools gives you or provided an example of the output), but where would that lead a tester or attacker?&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Rick.mitchell|Rick.mitchell]] 09:39, 3 September 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
== Reply from @cmlh ==&lt;br /&gt;
&lt;br /&gt;
Rick may have overlooked the quote &amp;quot;Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.&amp;quot; from the &amp;quot;How to Test&amp;quot; section of v3.&lt;br /&gt;
&lt;br /&gt;
The lack of the &amp;quot;Google Webmaster Tools&amp;quot; example is due to me not being the webmaster of owasp.org.  This can be resolved in v4 once the webmaster is known.&lt;br /&gt;
&lt;br /&gt;
: For the first part, either that sentence wasn't part of the content I reviewed during v3 draft or it didn't seem significant enough given the lead-in. &lt;br /&gt;
&lt;br /&gt;
: As for the webmaster tools stuff that sounds good, however like INFO-001 seems awfully google-centric.&lt;br /&gt;
&lt;br /&gt;
: This content also only covers robots.txt though the heading suggests much broader coverage. So either the content should be expanded or the heading made more specific (IMHO). [[User:Rick.mitchell|Rick.mitchell]] ([[User talk:Rick.mitchell|talk]]) 15:07, 15 August 2013 (CDT)&lt;br /&gt;
&lt;br /&gt;
== TODO for v4 ==&lt;br /&gt;
&lt;br /&gt;
1. Insert the &amp;quot;Analyze robots.txt using Google Webmaster Tools&amp;quot; i.e. https://support.google.com/webmasters/answer/156449?hl=en&amp;amp;from=35237&amp;amp;rd=1 with owasp.org (not applicable, since webroot doesn't contain robots.txt) as the example.&lt;br /&gt;
&lt;br /&gt;
2. May need to update the reference to OWASP-IG-009 within the &amp;quot;Summary&amp;quot; section depending on the finalisation of the spidering thread (To be created).&lt;br /&gt;
&lt;br /&gt;
3. Add Bing/Yahoo! related content.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156836</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156836"/>
				<updated>2013-08-15T03:04:53Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added Speculum example&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information Leakage of the web application's directory/folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers&lt;br /&gt;
&lt;br /&gt;
== How to Test == &lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&lt;br /&gt;
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to the ''Googlebot'' crawler, while ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3].  Hence, &amp;lt;b&amp;gt;robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties&amp;lt;/b&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''wget'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using ''wget'' or ''curl'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ wget http://www.google.com/robots.txt&lt;br /&gt;
--2013-08-11 14:40:36--  http://www.google.com/robots.txt&lt;br /&gt;
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...&lt;br /&gt;
Connecting to www.google.com|74.125.237.17|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
Saving to: ‘robots.txt.1’&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                   ] 7,074       --.-K/s   in 0s      &lt;br /&gt;
&lt;br /&gt;
2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -O http://www.google.com/robots.txt&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Speculum'''&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;quot;Speculum&amp;quot;[https://github.com/cmlh/Speculum/releases] automates the creation of the initial Spider/Robot/Crawler scope of the files and directories/folders of a web site.&lt;br /&gt;
&lt;br /&gt;
For example, to create the initial scope based on the Allow: directive from www.google.com using &amp;quot;Speculum&amp;quot;[https://github.com/cmlh/Speculum/releases]:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ ./speculum.pl -www www.google.com&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Speculum&amp;quot; Alpha v0.0_2&lt;br /&gt;
&lt;br /&gt;
Copyright 2013 Christian Heinrich&lt;br /&gt;
Licensed under the Apache License, Version 2.0&lt;br /&gt;
&lt;br /&gt;
1. Downloading http://www.google.com/robots.txt&lt;br /&gt;
2. &amp;quot;robots.txt&amp;quot; saved as &amp;quot;www.google.com-robots.txt&amp;quot;&lt;br /&gt;
3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080&lt;br /&gt;
	 /catalogs/about sent&lt;br /&gt;
	 /catalogs/p? sent&lt;br /&gt;
	 /news/directory sent&lt;br /&gt;
	...&lt;br /&gt;
4. Done.&lt;br /&gt;
&lt;br /&gt;
cmlh$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4] and the procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
* Speculum[https://github.com/cmlh/Speculum/releases]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
* [5] &amp;quot;Telstra customer database exposed&amp;quot; - http://www.smh.com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156835</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156835"/>
				<updated>2013-08-15T03:04:05Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Inserted example of &amp;quot;Speculum&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information Leakage of the web application's directory/folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers&lt;br /&gt;
&lt;br /&gt;
== How to Test == &lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&lt;br /&gt;
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to the ''Googlebot'' crawler, while ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3].  Hence, &amp;lt;b&amp;gt;robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties&amp;lt;/b&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''wget'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using ''wget'' or ''curl'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ wget http://www.google.com/robots.txt&lt;br /&gt;
--2013-08-11 14:40:36--  http://www.google.com/robots.txt&lt;br /&gt;
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...&lt;br /&gt;
Connecting to www.google.com|74.125.237.17|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
Saving to: ‘robots.txt.1’&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                   ] 7,074       --.-K/s   in 0s      &lt;br /&gt;
&lt;br /&gt;
2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -O http://www.google.com/robots.txt&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Speculum'''&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;quot;Speculum&amp;quot;[https://github.com/cmlh/Speculum/releases] automates the creation of the initial Spider/Robot/Crawler scope of the files and directories/folders of a web site.&lt;br /&gt;
&lt;br /&gt;
For example, to create the initial scope based on the Allow: directive from www.google.com using &amp;quot;Speculum&amp;quot;[https://github.com/cmlh/Speculum/releases]:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ ./speculum.pl -www www.google.com&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Speculum&amp;quot; Alpha v0.0_2&lt;br /&gt;
&lt;br /&gt;
Copyright 2013 Christian Heinrich&lt;br /&gt;
Licensed under the Apache License, Version 2.0&lt;br /&gt;
&lt;br /&gt;
1. Downloading http://www.google.com/robots.txt&lt;br /&gt;
2. &amp;quot;robots.txt&amp;quot; saved as &amp;quot;www.google.com-robots.txt&amp;quot;&lt;br /&gt;
3. Sending Allow: URIs of www.google.com to web proxy i.e. 127.0.0.1:8080&lt;br /&gt;
	 /catalogs/about sent&lt;br /&gt;
	 /catalogs/p? sent&lt;br /&gt;
	 /news/directory sent&lt;br /&gt;
	...&lt;br /&gt;
4. Done.&lt;br /&gt;
&lt;br /&gt;
cmlh$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4] and the procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
* [5] &amp;quot;Telstra customer database exposed&amp;quot; - http://www.smh.com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156654</id>
		<title>Talk:Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156654"/>
				<updated>2013-08-11T05:19:28Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Add &amp;quot;3. Add Bing/Yahoo! related content&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
It could be added that, from an attacker point of view, the robots.txt file can provide some useful information on the structure of the web server, e.g., directories that are supposed to be &amp;quot;private&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[User:Marco|Marco]] 18:11, 17 August 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
The intent of robots.txt is *not* to specify access control for directories. Hence to quote the wiki page &amp;quot;''Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.''&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
If you believe this is not communicated clearly or could be reworded then please amend the wiki page.&lt;br /&gt;
&lt;br /&gt;
[[User:cmlh|cmlh]] 12:34, 24 August 2008 (GMT +10)&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
&lt;br /&gt;
I don't see anything here about actually testing robots.txt or using Spiders/Robots/Crawlers to do anything to the web app. It's nice that we can DL the file and that it contains some interesting information and that there's a google tool that can do some analysis of it (though we haven't explained what google webmaster tools gives you or provided an example of the output), but where would that lead a tester or attacker?&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Rick.mitchell|Rick.mitchell]] 09:39, 3 September 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
== Reply from @cmlh ==&lt;br /&gt;
&lt;br /&gt;
Rick may have overlooked the quote &amp;quot;Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.&amp;quot; from the &amp;quot;How to Test&amp;quot; section of v3.&lt;br /&gt;
&lt;br /&gt;
The lack of the &amp;quot;Google Webmaster Tools&amp;quot; example is due to me not being the webmaster of owasp.org.  This can be resolved in v4 once the webmaster is known.&lt;br /&gt;
&lt;br /&gt;
== TODO for v4 ==&lt;br /&gt;
&lt;br /&gt;
1. Insert the &amp;quot;Analyze robots.txt using Google Webmaster Tools&amp;quot; i.e. https://support.google.com/webmasters/answer/156449?hl=en&amp;amp;from=35237&amp;amp;rd=1 with owasp.org as the example.&lt;br /&gt;
&lt;br /&gt;
2. May need to update the reference to OWASP-IG-009 within the &amp;quot;Summary&amp;quot; section depending on the finalisation of the spidering thread (To be created).&lt;br /&gt;
&lt;br /&gt;
3. Add Bing/Yahoo! related content.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156653</id>
		<title>Talk:Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156653"/>
				<updated>2013-08-11T05:16:53Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: /* TODO for v4 */ new section&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
It could be added that, from an attacker point of view, the robots.txt file can provide some useful information on the structure of the web server, e.g., directories that are supposed to be &amp;quot;private&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[User:Marco|Marco]] 18:11, 17 August 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
The intent of robots.txt is *not* to specify access control for directories. Hence to quote the wiki page &amp;quot;''Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.''&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
If you believe this is not communicated clearly or could be reworded then please amend the wiki page.&lt;br /&gt;
&lt;br /&gt;
[[User:cmlh|cmlh]] 12:34, 24 August 2008 (GMT +10)&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
&lt;br /&gt;
I don't see anything here about actually testing robots.txt or using Spiders/Robots/Crawlers to do anything to the web app. It's nice that we can DL the file and that it contains some interesting information and that there's a google tool that can do some analysis of it (though we haven't explained what google webmaster tools gives you or provided an example of the output), but where would that lead a tester or attacker?&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Rick.mitchell|Rick.mitchell]] 09:39, 3 September 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
== Reply from @cmlh ==&lt;br /&gt;
&lt;br /&gt;
Rick may have overlooked the quote &amp;quot;Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.&amp;quot; from the &amp;quot;How to Test&amp;quot; section of v3.&lt;br /&gt;
&lt;br /&gt;
The lack of the &amp;quot;Google Webmaster Tools&amp;quot; example is due to me not being the webmaster of owasp.org.  This can be resolved in v4 once the webmaster is known.&lt;br /&gt;
&lt;br /&gt;
== TODO for v4 ==&lt;br /&gt;
&lt;br /&gt;
1. Insert the &amp;quot;Analyze robots.txt using Google Webmaster Tools&amp;quot; i.e. https://support.google.com/webmasters/answer/156449?hl=en&amp;amp;from=35237&amp;amp;rd=1 with owasp.org as the example.&lt;br /&gt;
&lt;br /&gt;
2. May need to update the reference to OWASP-IG-009 within the &amp;quot;Summary&amp;quot; section depending on the finalisation of the spidering thread (To be created).&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156652</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156652"/>
				<updated>2013-08-11T04:49:46Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Add [5]&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information Leakage of the web application's directory/folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers&lt;br /&gt;
&lt;br /&gt;
== How to Test == &lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&lt;br /&gt;
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to the ''Googlebot'' crawler, while ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3].  Hence, &amp;lt;b&amp;gt;robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties&amp;lt;/b&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''wget'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using ''wget'' or ''curl'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ wget http://www.google.com/robots.txt&lt;br /&gt;
--2013-08-11 14:40:36--  http://www.google.com/robots.txt&lt;br /&gt;
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...&lt;br /&gt;
Connecting to www.google.com|74.125.237.17|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
Saving to: ‘robots.txt.1’&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                   ] 7,074       --.-K/s   in 0s      &lt;br /&gt;
&lt;br /&gt;
2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -O http://www.google.com/robots.txt&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
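Beyond manual retrieval with ''wget'' or ''curl'', the rules can also be evaluated programmatically. As a minimal sketch (assuming Python 3 and direct access to the target), the standard library module ''urllib.robotparser'' reports whether a given path is permitted for a given User-Agent:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env python3&lt;br /&gt;
# Fetch robots.txt and test sample paths against its rules.&lt;br /&gt;
from urllib.robotparser import RobotFileParser&lt;br /&gt;
&lt;br /&gt;
rp = RobotFileParser()&lt;br /&gt;
rp.set_url('http://www.google.com/robots.txt')&lt;br /&gt;
rp.read()  # retrieves and parses the file&lt;br /&gt;
&lt;br /&gt;
for path in ('/search', '/maps'):&lt;br /&gt;
    # can_fetch() returns False when the rules disallow the path&lt;br /&gt;
    print(path, rp.can_fetch('*', 'http://www.google.com' + path))&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;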
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4]. The procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
* [5] &amp;quot;Telstra customer database exposed&amp;quot; - http://www.smh.com.au/it-pro/security-it/telstra-customer-database-exposed-20111209-1on60.html&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156651</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156651"/>
				<updated>2013-08-11T04:43:50Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Initial DRAFT for v4&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information Leakage of the web application's directory/folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers.&lt;br /&gt;
&lt;br /&gt;
== How to Test == &lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&lt;br /&gt;
As an example, the beginning of the robots.txt file from http://www.google.com/robots.txt sampled on 11 August 2013 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to Google's ''Googlebot'' crawler, while ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
Disallow: /catalogs&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3].  Hence, &amp;lt;b&amp;gt;robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties&amp;lt;/b&amp;gt;. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''wget'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using ''wget'' or ''curl'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ wget http://www.google.com/robots.txt&lt;br /&gt;
--2013-08-11 14:40:36--  http://www.google.com/robots.txt&lt;br /&gt;
Resolving www.google.com... 74.125.237.17, 74.125.237.18, 74.125.237.19, ...&lt;br /&gt;
Connecting to www.google.com|74.125.237.17|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
Saving to: ‘robots.txt’&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                   ] 7,074       --.-K/s   in 0s      &lt;br /&gt;
&lt;br /&gt;
2013-08-11 14:40:37 (59.7 MB/s) - ‘robots.txt’ saved [7074]&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cmlh$ curl -O http://www.google.com/robots.txt&lt;br /&gt;
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current&lt;br /&gt;
                                 Dload  Upload   Total   Spent    Left  Speed&lt;br /&gt;
101  7074    0  7074    0     0   9410      0 --:--:-- --:--:-- --:--:-- 27312&lt;br /&gt;
&lt;br /&gt;
cmlh$ head -n5 robots.txt&lt;br /&gt;
User-agent: *&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /sdch&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
cmlh$ &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4]. The procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156650</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156650"/>
				<updated>2013-08-11T04:31:39Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: 1st DRAFT for v4&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
1. Information Leakage of the web application's directory/folder path(s).&lt;br /&gt;
&lt;br /&gt;
2. Create the list of directories that are to be avoided by Spiders/Robots/Crawlers.&lt;br /&gt;
&lt;br /&gt;
== How to Test == &lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&lt;br /&gt;
As an example, the robots.txt file from http://www.google.com/robots.txt taken on 24 August 2008 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Allow: /searchhistory/&lt;br /&gt;
Disallow: /news?output=xhtml&amp;amp;&lt;br /&gt;
Allow: /news?output=xhtml&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to Google's ''Googlebot'' crawler, while ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3].  Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''wget'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using ''wget'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ wget http://www.google.com/robots.txt&lt;br /&gt;
--23:59:24-- http://www.google.com/robots.txt&lt;br /&gt;
           =&amp;gt; 'robots.txt'&lt;br /&gt;
Resolving www.google.com... 74.125.19.103, 74.125.19.104, 74.125.19.147, ...&lt;br /&gt;
Connecting to www.google.com|74.125.19.103|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                 ] 3,425        --.--K/s&lt;br /&gt;
&lt;br /&gt;
23:59:26 (13.67MB/s) - 'robots.txt' saved [3425]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4]. The procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156649</id>
		<title>Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156649"/>
				<updated>2013-08-11T04:27:15Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: 1st DRAFT for v4, might need to update OWASP-IG-009 reference in FINAL&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v4}}&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
This section describes how to test the robots.txt file for Information Leakage of the web application's directory/folder path(s).  Furthermore, the list of directories that are to be avoided by Spiders/Robots/Crawlers can also be created as a dependency for OWASP-IG-009[https://www.owasp.org/index.php/Testing_Map_execution_paths_through_application_(OWASP-IG-009)]&lt;br /&gt;
&lt;br /&gt;
== Test Objectives ==&lt;br /&gt;
&lt;br /&gt;
== How to Test == &lt;br /&gt;
Web spiders/robots/crawlers retrieve a web page and then recursively traverse hyperlinks to retrieve further web content. Their accepted behavior is specified by the ''Robots Exclusion Protocol'' of the robots.txt file in the web root directory [1].&lt;br /&gt;
&lt;br /&gt;
As an example, the robots.txt file from http://www.google.com/robots.txt taken on 24 August 2008 is quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
Allow: /searchhistory/&lt;br /&gt;
Disallow: /news?output=xhtml&amp;amp;&lt;br /&gt;
Allow: /news?output=xhtml&lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''User-Agent'' directive refers to the specific web spider/robot/crawler.  For example, ''User-Agent: Googlebot'' refers to Google's ''Googlebot'' crawler, while ''User-Agent: *'' in the example above applies to all web spiders/robots/crawlers [2] as quoted below:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User-agent: *&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The ''Disallow'' directive specifies which resources are prohibited by spiders/robots/crawlers. In the example above, directories such as the following are prohibited:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
... &lt;br /&gt;
Disallow: /search&lt;br /&gt;
Disallow: /groups&lt;br /&gt;
Disallow: /images&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Web spiders/robots/crawlers can intentionally ignore the ''Disallow'' directives specified in a robots.txt file [3].  Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Black Box testing and example ===&lt;br /&gt;
'''wget'''&amp;lt;br&amp;gt;&lt;br /&gt;
The robots.txt file is retrieved from the web root directory of the web server.&lt;br /&gt;
&lt;br /&gt;
For example, to retrieve the robots.txt from www.google.com using ''wget'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ wget http://www.google.com/robots.txt&lt;br /&gt;
--23:59:24-- http://www.google.com/robots.txt&lt;br /&gt;
           =&amp;gt; 'robots.txt'&lt;br /&gt;
Resolving www.google.com... 74.125.19.103, 74.125.19.104, 74.125.19.147, ...&lt;br /&gt;
Connecting to www.google.com|74.125.19.103|:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/plain]&lt;br /&gt;
&lt;br /&gt;
    [ &amp;lt;=&amp;gt;                                 ] 3,425        --.--K/s&lt;br /&gt;
&lt;br /&gt;
23:59:26 (13.67MB/s) - 'robots.txt' saved [3425]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Analyze robots.txt using Google Webmaster Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Google provides an &amp;quot;Analyze robots.txt&amp;quot; function as part of its &amp;quot;Google Webmaster Tools&amp;quot;, which can assist with testing [4]. The procedure is as follows:&lt;br /&gt;
&lt;br /&gt;
1. Sign into Google Webmaster Tools with your Google Account.&amp;lt;br&amp;gt;&lt;br /&gt;
2. On the Dashboard, click the URL for the site you want.&amp;lt;br&amp;gt;&lt;br /&gt;
3. Click Tools, and then click Analyze robots.txt.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Gray Box testing and example === &lt;br /&gt;
The process is the same as Black Box testing above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Tools ==&lt;br /&gt;
&lt;br /&gt;
* Browser (View Source function)&lt;br /&gt;
* curl&lt;br /&gt;
* wget&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] &amp;quot;The Web Robots Pages&amp;quot; - http://www.robotstxt.org/&lt;br /&gt;
* [2] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=40364&amp;amp;rd=1&lt;br /&gt;
* [3] &amp;quot;(ISC)2 Blog: The Attack of the Spiders from the Clouds&amp;quot; - http://blog.isc2.org/isc2_blog/2008/07/the-attack-of-t.html&lt;br /&gt;
* [4] &amp;quot;Block and Remove Pages Using a robots.txt File&amp;quot; - http://support.google.com/webmasters/bin/answer.py?hl=en&amp;amp;answer=156449&amp;amp;from=35237&amp;amp;rd=1&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156521</id>
		<title>Talk:Review Webserver Metafiles for Information Leakage (OTG-INFO-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Review_Webserver_Metafiles_for_Information_Leakage_(OTG-INFO-003)&amp;diff=156521"/>
				<updated>2013-08-08T04:45:10Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Draft reply from @cmlh&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== Discussion ==&lt;br /&gt;
It could be added that, from an attacker's point of view, the robots.txt file can provide some useful information on the structure of the web server, e.g., directories that are supposed to be &amp;quot;private&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
[[User:Marco|Marco]] 18:11, 17 August 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
The intent of robots.txt is *not* to specify access control for directories. Hence to quote the wiki page &amp;quot;''Web spiders/robots/crawlers can intentionally ignore the Disallow directives specified in a robots.txt file [3]. Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.''&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
If you believe this is not communicated clearly or could be reworded, then please amend the wiki page.&lt;br /&gt;
&lt;br /&gt;
[[User:cmlh|cmlh]] 12:34, 24 August 2008 (GMT +10)&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
&lt;br /&gt;
I don't see anything here about actually testing robots.txt or using Spiders/Robots/Crawlers to do anything to the web app. It's nice that we can DL the file and that it contains some interesting information and that there's a google tool that can do some analysis of it (though we haven't explained what google webmaster tools gives you or provided an example of the output), but where would that lead a tester or attacker?&amp;lt;br&amp;gt;&lt;br /&gt;
[[User:Rick.mitchell|Rick.mitchell]] 09:39, 3 September 2008 (EDT)&lt;br /&gt;
&lt;br /&gt;
== Reply from @cmlh ==&lt;br /&gt;
&lt;br /&gt;
Rick may have overlooked the quote &amp;quot;Hence, robots.txt should not be considered as a mechanism to enforce restrictions on how web content is accessed, stored, or republished by third parties.&amp;quot; in the &amp;quot;How to Test&amp;quot; section of v3.&lt;br /&gt;
&lt;br /&gt;
The lack of the &amp;quot;Google Webmaster Tools&amp;quot; example is due to me not being the webmaster of owasp.org.  This can be resolved in v4 once the webmaster is known.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=156520</id>
		<title>Talk:Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=156520"/>
				<updated>2013-08-08T03:58:25Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Draft reply from @cmlh&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
&lt;br /&gt;
== v3 Review Comments ==&lt;br /&gt;
This section does not cover the items stated in the &amp;quot;brief summary&amp;quot;.&lt;br /&gt;
For v3, if the section is to remain completely Google-centric, I suggest we rename &amp;quot;Search engine discovery&amp;quot; to &amp;quot;Google searching your web application and accessing Google's cache&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
== Reply to &amp;quot;v3 Review Comments&amp;quot; from @cmlh ==&lt;br /&gt;
The roadmap was to add Yahoo! and Bing to the next release of the OWASP Testing Guide (i.e. v3 -&amp;gt; v4) and to not appear to promote Google over Yahoo! and Bing.  It should be noted that Yahoo! and Bing might refer to the same &amp;quot;entity&amp;quot; as further research is undertaken i.e. the &amp;quot;Yahoo! and Microsoft Search Alliance&amp;quot;/&amp;quot;Yahoo! Bing Network&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Furthermore, the intent is *not* to promote the inferior http://www.hackersforcharity.org/ghdb/, rather a more scientific and innovative approach.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OpenSAMM_Adopters&amp;diff=155202</id>
		<title>OpenSAMM Adopters</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OpenSAMM_Adopters&amp;diff=155202"/>
				<updated>2013-07-07T05:43:23Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added ISG&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Software Assurance Maturity Model]]&lt;br /&gt;
===List of Organizations Using OpenSAMM===&lt;br /&gt;
&lt;br /&gt;
{|class=&amp;quot;wikitable sortable&amp;quot; style=&amp;quot;text-align: top;&amp;quot; border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
!width=&amp;quot;10%&amp;quot; |Organization Name&lt;br /&gt;
!width=&amp;quot;10%&amp;quot; |Contact&lt;br /&gt;
!width=&amp;quot;10%&amp;quot; |Role&lt;br /&gt;
!width=&amp;quot;10%&amp;quot; |Organization Type ([http://en.wikipedia.org/wiki/Vertical_market *])&lt;br /&gt;
!width=&amp;quot;10%&amp;quot; |Region&lt;br /&gt;
!width=&amp;quot;40%&amp;quot; |Testimonial&lt;br /&gt;
|-&lt;br /&gt;
| Dell, Inc. || Michael J. Craigue || Information Security &amp;amp; Compliance || Technology || US || ''OWASP.org is a valuable resource for any company involved with online payment card transactions. Dell uses OWASP’s Software Assurance Maturity Model (OpenSAMM) to help focus our resources and determine which components of our secure application development program to prioritize. Participation in OWASP’s local chapter meetings and conferences around the globe helps us build stronger networks with our colleagues.''&lt;br /&gt;
|-&lt;br /&gt;
| KBC || Johan Jacobs || ICT Department Head || Banking || Europe || -&lt;br /&gt;
|-&lt;br /&gt;
| Gotham Digital Science || Matt Bartoldus || Co-Founder &amp;amp; Director || Security services || Global || ''SAMM has defined the building blocks for effective software security assurance… Our clients can use the model to see what needs to be done and what skills and resources are needed to do the job. Best of all, businesses can use SAMM to quantify results and improvements by assessing practices against SAMM activities.''&lt;br /&gt;
|-&lt;br /&gt;
| Fortify Software || Brian Chess || Founder &amp;amp; Chief Scientist || Security services || Global || ''These days people understand that security has to be built in–it can’t be bolted on.  But for many a big question remains: what does it take to build secure software?  SAMM tackles that question head on with a framework for creating and growing a software security initiative.  SAMM has focused the way I think about the human side of the software security problem.''&lt;br /&gt;
|-&lt;br /&gt;
| ING Insurance International || Rob Moes || IT Security Manager || Insurance || Europe || ''Within ING Insurance International we adopted SAMM as it is a practical standard which provides guidance to build an Secure Application Development organization in clear and distinctive steps.''&lt;br /&gt;
|-&lt;br /&gt;
| ISG || Christian Heinrich || Application Security Manager || Health || Australia || ''ISG has integrated both OpenSAMM and BSIMM to measure security improvement over time, in addition to our overall measurement of the &amp;quot;Capability Maturity Model for Software Development&amp;quot; published by Carnegie Mellon University.''&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;Fill in Organisation Name&amp;gt; || &amp;lt;Fill in Contact First Name, Family Name&amp;gt; || &amp;lt;Fill in Contact role in the organisation&amp;gt;|| &amp;lt;Fill in Organisation Type: Government, Finance, Healthcare, ...&amp;gt;|| &amp;lt;Fill in Region: Continent, Country&amp;gt;|| ''&amp;lt;Fill in Contact Testimonial - OPTIONAL&amp;gt;''&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Issues_Concerning_The_OWASP_Top_Ten_2013&amp;diff=153296</id>
		<title>Issues Concerning The OWASP Top Ten 2013</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Issues_Concerning_The_OWASP_Top_Ten_2013&amp;diff=153296"/>
				<updated>2013-06-10T01:00:24Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: http://lists.owasp.org/pipermail/owasp-topten/2013-June/001108.html&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;INTRODUCTION&lt;br /&gt;
&lt;br /&gt;
The Terms of Reference for the &amp;quot;OWASP Top Ten Code of Ethics violations and project handbook.&amp;quot; agenda item i.e. https://www.owasp.org/index.php/June_10,_2013 are specified below.&lt;br /&gt;
&lt;br /&gt;
There are several complaints made against Aspect Security by OWASP Members, including http://lists.owasp.org/pipermail/owasp-leaders/2013-June/009432.html, http://lists.owasp.org/pipermail/owasp-topten/2013-May/date.html, etc&lt;br /&gt;
&lt;br /&gt;
Each numbered item has been grouped into themes based on headings below:&lt;br /&gt;
&lt;br /&gt;
A9 AND SONATYPE&lt;br /&gt;
&lt;br /&gt;
1. There are several external complaints from OWASP members stating that the Sonatype/Aspect Security statistics are unscientific and biased; some examples are:&lt;br /&gt;
* GWT i.e. https://groups.google.com/forum/?fromgroups#!topic/google-web-toolkit/Ezr6acdyZv0&lt;br /&gt;
* SpringSource i.e. http://www.infosecurity-magazine.com/view/30282/remote-code-vulnerability-in-spring-framework-for-java/.  Furthermore, as the disclosure by Aspect Security occurred in January 2013 this conflicts with their statement that the statistics were sampled well before 2013.&lt;br /&gt;
&lt;br /&gt;
2. Aspect Security have promoted both AntiSamy and ESAPI in A1 or A3, of which they also hold the Project Leadership. However, their paid research for Sonatype states that their insecure releases are still being downloaded.  Therefore, OWASP is placed in, until recently, unknown catastrophic residual risk as it appears that OWASP is hypocritical in not following their own recommendation i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-June/001095.html&lt;br /&gt;
&lt;br /&gt;
3. The residual risk of A9 will be accepted by the developer due to the significant cost of change i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-February/000844.html&lt;br /&gt;
&lt;br /&gt;
4. A9 does not direct the reader to other related open source projects, such as https://github.com/gcmurphy/enforce-victims-rule, https://github.com/jeremylong/DependencyCheck, etc&lt;br /&gt;
&lt;br /&gt;
5. The Press Release from Sonatype quotes Jeff Williams and was not approved under the OWASP Quotes process which he also championed as an OWASP Board Member.  Furthermore, Aspect Security did not attempt to inform the OWASP Foundation once they were alerted to the publication of the Press Release i.e.  http://lists.owasp.org/pipermail/owasp-topten/2013-May/001017.html&lt;br /&gt;
&lt;br /&gt;
6. Aspect Security hosted a Chapter Meeting on 6 June to promote Sonatype and A9 before the actual 2013 release was accepted by the webappsec community i.e. http://www.meetup.com/OWASP-Baltimore-Chapter/events/119389612/&lt;br /&gt;
&lt;br /&gt;
OTHER SOURCES OF STATISTICS&lt;br /&gt;
&lt;br /&gt;
7. The statistics from both WhiteHat and HP (i.e. Fortify and WebInspect) require registration.  &amp;lt;del&amp;gt;Dave Wichers of Aspect Security has *not* published the promised alternate links i.e.  http://lists.owasp.org/pipermail/owasp-topten/2013-May/001041.html&amp;lt;/del&amp;gt; &lt;br /&gt;
&lt;br /&gt;
8. Statistics from either Trustwave&amp;lt;del&amp;gt;, Softek or&amp;lt;/del&amp;gt; Minded Security were *not* analysed as this would have resulted in a second release of the RC or at least notification of the result i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-May/001054.html and http://lists.owasp.org/pipermail/owasp-topten/2013-May/001080.html&lt;br /&gt;
&lt;br /&gt;
9. Aspect Security have not published their statistical analysis i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-June/001096.html  For comparison purposes, Minded Security were able to publish their effort within less than a month (28 January to 19 February).&lt;br /&gt;
&lt;br /&gt;
2010 RELEASE&lt;br /&gt;
&lt;br /&gt;
10. Softek are *not* listed as a sponsor within the pages of the 2010 deliverable as Aspect Security have taken this space for their own enlarged company logo i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-May/001039.html&lt;br /&gt;
&lt;br /&gt;
ABUSE FROM ARSHAN DABIRSIAGHI OF ASPECT SECURITY&lt;br /&gt;
&lt;br /&gt;
11. The formal complaint is available from http://lists.owasp.org/pipermail/owasp-topten/2013-June/001099.html&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Issues_Concerning_The_OWASP_Top_Ten_2013&amp;diff=153268</id>
		<title>Issues Concerning The OWASP Top Ten 2013</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Issues_Concerning_The_OWASP_Top_Ten_2013&amp;diff=153268"/>
				<updated>2013-06-09T13:57:39Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Initial Draft&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;INTRODUCTION&lt;br /&gt;
&lt;br /&gt;
The Terms of Reference for the &amp;quot;OWASP Top Ten Code of Ethics violations and project handbook.&amp;quot; agenda item i.e. https://www.owasp.org/index.php/June_10,_2013 are specified below.&lt;br /&gt;
&lt;br /&gt;
There are several complaints made against Aspect Security, including http://lists.owasp.org/pipermail/owasp-leaders/2013-June/009432.html, http://lists.owasp.org/pipermail/owasp-topten/2013-May/date.html, etc&lt;br /&gt;
&lt;br /&gt;
Each numbered item has been grouped into themes based on headings below:&lt;br /&gt;
&lt;br /&gt;
A9 AND SONATYPE&lt;br /&gt;
&lt;br /&gt;
1. There are several external complaints stating that the Sonatype/Aspect Security statistics are unscientific and biased; some examples are:&lt;br /&gt;
* GWT i.e. https://groups.google.com/forum/?fromgroups#!topic/google-web-toolkit/Ezr6acdyZv0&lt;br /&gt;
* SpringSource i.e. http://www.infosecurity-magazine.com/view/30282/remote-code-vulnerability-in-spring-framework-for-java/.  Furthermore, as the disclosure by Aspect Security occurred in January 2013 this conflicts with their statement that the statistics were sampled well before 2013.&lt;br /&gt;
&lt;br /&gt;
2. Aspect Security have promoted both AntiSamy and ESAPI in A1 or A3, of which they also hold the Project Leadership. However, their paid research for Sonatype states that their insecure releases are still being downloaded.  Therefore, OWASP is placed in, until recently, unknown catastrophic residual risk as it appears that OWASP is hypocritical in not following their own recommendation i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-June/001095.html&lt;br /&gt;
&lt;br /&gt;
3. The residual risk of A9 will be accepted by the developer due to the significant cost of change i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-February/000844.html&lt;br /&gt;
&lt;br /&gt;
4. A9 does not direct the reader to other related open source projects, such as https://github.com/gcmurphy/enforce-victims-rule, https://github.com/jeremylong/DependencyCheck, etc&lt;br /&gt;
&lt;br /&gt;
5. The Press Release from Sonatype quotes Jeff Williams and was not approved under the OWASP Quotes process which he also championed as an OWASP Board Member.  Furthermore, Aspect Security did not attempt to inform the OWASP Foundation once they were alerted to the publication of the Press Release i.e.  http://lists.owasp.org/pipermail/owasp-topten/2013-May/001017.html&lt;br /&gt;
&lt;br /&gt;
6. Aspect Security hosted a Chapter Meeting on 6 June to promote Sonatype and A9 before the actual 2013 release was accepted by the webappsec community i.e. http://www.meetup.com/OWASP-Baltimore-Chapter/events/119389612/&lt;br /&gt;
&lt;br /&gt;
OTHER SOURCES OF STATISTICS&lt;br /&gt;
&lt;br /&gt;
7. The statistics from both WhiteHat and HP (i.e. Fortify and WebInspect) require registration.  Dave Wichers of Aspect Security has *not* published the promised alternate links i.e.  http://lists.owasp.org/pipermail/owasp-topten/2013-May/001041.html &lt;br /&gt;
&lt;br /&gt;
8. Statistics from either Trustwave, Softek or Minded Security were *not* analysed as this would have resulted in a second release of the RC or at least notification of the result i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-May/001054.html and http://lists.owasp.org/pipermail/owasp-topten/2013-May/001080.html&lt;br /&gt;
&lt;br /&gt;
9. Aspect Security have not published their statistical analysis i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-June/001096.html  For comparison purposes, Minded Security were able to publish their effort within less than a month (28 January to 19 February).&lt;br /&gt;
&lt;br /&gt;
2010 RELEASE&lt;br /&gt;
&lt;br /&gt;
10. Softek are *not* listed as a sponsor within the pages of the deliverable as Aspect Security have taken this space for their own enlarged company logo i.e. http://lists.owasp.org/pipermail/owasp-topten/2013-May/001039.html&lt;br /&gt;
&lt;br /&gt;
ABUSE FROM ARSHAN DABIRSIAGHI OF ASPECT SECURITY&lt;br /&gt;
&lt;br /&gt;
11. The formal complaint is available from http://lists.owasp.org/pipermail/owasp-topten/2013-June/001099.html&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=113251</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=113251"/>
				<updated>2011-06-30T22:56:32Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added SlideShare/cmlh URL&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Contact=&lt;br /&gt;
&lt;br /&gt;
In relation to OWASP matters, [mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org].&lt;br /&gt;
&lt;br /&gt;
For matters not related to OWASP or as an Out of Band Communications Channel to his @owasp.org e-mail address, Christian Heinrich has listed multiple points of contact  at [http://cmlh.id.au/contact http://cmlh.id.au/contact].&lt;br /&gt;
&lt;br /&gt;
=Biography= &lt;br /&gt;
&lt;br /&gt;
Christian Heinrich has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich]&lt;br /&gt;
&lt;br /&gt;
=Contributions to OWASP=&lt;br /&gt;
&lt;br /&gt;
Christian Heinrich's edits to the OWASP wiki are listed at: [[:Special:Contributions/Cmlh|Special:Contributions/Cmlh]]. &lt;br /&gt;
&lt;br /&gt;
==OWASP Projects==&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Leader of the [http://www.owasp.org/index.php/Category:OWASP_PCI_Project OWASP PCI Project] having previously led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project] i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;] and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and more recently contributed to the development of the OWASP ESAPI Java WAF, [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
==OWASP Presentations==&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in USA, Australia and Europe and OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands and;&lt;br /&gt;
*London, UK and;&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
Videos of these presentations are available from [http://www.google.com.au/search?tbm=vid&amp;amp;q=%22Christian+Heinrich%22+OWASP Google] and associated slides are available from [http://www.slideshare.net/cmlh/tag/owasp slideshare.net/cmlh]&lt;br /&gt;
&lt;br /&gt;
=OWASP Board Candidate=&lt;br /&gt;
&lt;br /&gt;
==Global==&lt;br /&gt;
&lt;br /&gt;
While the candidates are either from the USA or Europe and have contributed significantly to OWASP, I would like to highlight the many contributions made by Canada, EMEA and Asia Pacific, Central (America) and South America.&lt;br /&gt;
&lt;br /&gt;
==Governance==&lt;br /&gt;
&lt;br /&gt;
===Board===&lt;br /&gt;
&lt;br /&gt;
I believe that during the term of a Board Member they should disassociate themselves from leadership positions of their Chapters and Projects of OWASP, with the option to contribute during their term but not in a leadership capacity.&lt;br /&gt;
&lt;br /&gt;
I also believe that funding for Board Members to travel should not be approved.&lt;br /&gt;
&lt;br /&gt;
===Projects===&lt;br /&gt;
&lt;br /&gt;
I believe that Project Leaders should be able to determine their own level of quality which the consumer can measure based on published peer review.  As expected, those who require funding from OWASP to market their project or increase its quality should be subject to project management.&lt;br /&gt;
&lt;br /&gt;
I believe that those who contribute to an OWASP Project should be credited as such irrespective of their employer. &lt;br /&gt;
&lt;br /&gt;
==Significant Experience==&lt;br /&gt;
&lt;br /&gt;
I have founded a number of groups in Australia, including the Snort User Group and the Australian Information Security Association, with over 1000 members within Australia.&lt;br /&gt;
&lt;br /&gt;
I also initiated the OWASP relationship with Mozilla during Hack in the Box Amsterdam in 2010. &lt;br /&gt;
&lt;br /&gt;
== Commercial Independence==&lt;br /&gt;
&lt;br /&gt;
I am not associated with any vendor and/or consultancy and therefore my agenda is *not* to exploit OWASP for  commercial gain.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=113163</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=113163"/>
				<updated>2011-06-28T22:25:52Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added Central America to Global&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Contact=&lt;br /&gt;
&lt;br /&gt;
In relation to OWASP matters, [mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org].&lt;br /&gt;
&lt;br /&gt;
For matters not related to OWASP or as an Out of Band Communications Channel to his @owasp.org e-mail address, Christian Heinrich has listed multiple points of contact  at [http://cmlh.id.au/contact http://cmlh.id.au/contact].&lt;br /&gt;
&lt;br /&gt;
=Biography= &lt;br /&gt;
&lt;br /&gt;
Christian Heinrich has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich]&lt;br /&gt;
&lt;br /&gt;
=Contributions to OWASP=&lt;br /&gt;
&lt;br /&gt;
Christian Heinrich's edits to the OWASP wiki are listed at: [[:Special:Contributions/Cmlh|Special:Contributions/Cmlh]]. &lt;br /&gt;
&lt;br /&gt;
==OWASP Projects==&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Leader of the [http://www.owasp.org/index.php/Category:OWASP_PCI_Project OWASP PCI Project] having previously led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project] i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;] and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and more recently contributed to the development of the OWASP ESAPI Java WAF, [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
==OWASP Presentations==&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in USA, Australia and Europe and OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands and;&lt;br /&gt;
*London, UK and;&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
Videos of these presentations are available from [http://www.google.com.au/search?tbm=vid&amp;amp;q=%22Christian+Heinrich%22+OWASP Google]&lt;br /&gt;
&lt;br /&gt;
=OWASP Board Candidate=&lt;br /&gt;
&lt;br /&gt;
==Global==&lt;br /&gt;
&lt;br /&gt;
While the candidates are either from the USA or Europe and have contributed significantly to OWASP, I would like to highlight the many contributions made by Canada, EMEA and Asia Pacific, Central (America) and South America.&lt;br /&gt;
&lt;br /&gt;
==Governance==&lt;br /&gt;
&lt;br /&gt;
===Board===&lt;br /&gt;
&lt;br /&gt;
I believe that during the term of a Board Member they should disassociate themselves from leadership positions of their Chapters and Projects of OWASP, with the option to contribute during their term but not in a leadership capacity.&lt;br /&gt;
&lt;br /&gt;
I also believe that funding for Board Members to travel should not be approved.&lt;br /&gt;
&lt;br /&gt;
===Projects===&lt;br /&gt;
&lt;br /&gt;
I believe that Project Leaders should be able to determine their own level of quality which the consumer can measure based on published peer review.  As expected, those who require funding from OWASP to market their project or increase its quality should be subject to project management.&lt;br /&gt;
&lt;br /&gt;
I believe that those who contribute to an OWASP Project should be credited as such irrespective of their employer. &lt;br /&gt;
&lt;br /&gt;
==Significant Experience==&lt;br /&gt;
&lt;br /&gt;
I have founded a number of groups in Australia, including the Snort User Group and the Australian Information Security Association, with over 1000 members within Australia.&lt;br /&gt;
&lt;br /&gt;
I also initiated the OWASP relationship with Mozilla during Hack in the Box Amsterdam in 2010. &lt;br /&gt;
&lt;br /&gt;
== Commercial Independence==&lt;br /&gt;
&lt;br /&gt;
I am not associated with any vendor and/or consultancy and therefore my agenda is *not* to exploit OWASP for  commercial gain.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=113101</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=113101"/>
				<updated>2011-06-28T05:04:22Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Campaign Announcement&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Contact=&lt;br /&gt;
&lt;br /&gt;
In relation to OWASP matters, [mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org].&lt;br /&gt;
&lt;br /&gt;
For matters not related to OWASP or as an Out of Band Communications Channel to his @owasp.org e-mail address, Christian Heinrich has listed multiple points of contact  at [http://cmlh.id.au/contact http://cmlh.id.au/contact].&lt;br /&gt;
&lt;br /&gt;
=Biography= &lt;br /&gt;
&lt;br /&gt;
Christian Heinrich has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich]&lt;br /&gt;
&lt;br /&gt;
=Contributions to OWASP=&lt;br /&gt;
&lt;br /&gt;
Christian Heinrich's edits to the OWASP wiki are listed at: [[:Special:Contributions/Cmlh|Special:Contributions/Cmlh]]. &lt;br /&gt;
&lt;br /&gt;
==OWASP Projects==&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Leader of the [http://www.owasp.org/index.php/Category:OWASP_PCI_Project OWASP PCI Project] having previously led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project] i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;] and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and more recently contributed to the development of the OWASP ESAPI Java WAF, [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
==OWASP Presentations==&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in USA, Australia and Europe and OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands and;&lt;br /&gt;
*London, UK and;&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
Videos of these presentations are available from [http://www.google.com.au/search?tbm=vid&amp;amp;q=%22Christian+Heinrich%22+OWASP Google]&lt;br /&gt;
&lt;br /&gt;
=OWASP Board Candidate=&lt;br /&gt;
&lt;br /&gt;
==Global==&lt;br /&gt;
&lt;br /&gt;
While the candidates are either from North America or Europe and have contributed significantly to OWASP, I would like to highlight the many contributions made by South America, EMEA and Asia Pacific. &lt;br /&gt;
&lt;br /&gt;
==Governance==&lt;br /&gt;
&lt;br /&gt;
===Board===&lt;br /&gt;
&lt;br /&gt;
I believe that during the term of a Board Member they should disassociate themselves from leadership positions of their Chapters and Projects of OWASP, with the option to contribute during their term but not in a leadership capacity.&lt;br /&gt;
&lt;br /&gt;
I also believe that funding for Board Members to travel should not be approved.&lt;br /&gt;
&lt;br /&gt;
===Projects===&lt;br /&gt;
&lt;br /&gt;
I believe that Project Leaders should be able to determine their own level of quality which the consumer can measure based on published peer review.  As expected, those who require funding from OWASP to market their project or increase its quality should be subject to project management.&lt;br /&gt;
&lt;br /&gt;
I believe that those who contribute to an OWASP Project should be credited as such irrespective of their employer. &lt;br /&gt;
&lt;br /&gt;
==Significant Experience==&lt;br /&gt;
&lt;br /&gt;
I have founded a number of groups in Australia, including the Snort User Group and the Australian Information Security Association, with over 1000 members.&lt;br /&gt;
&lt;br /&gt;
I also initiated the OWASP relationship with Mozilla during Hack in the Box Amsterdam in 2010. &lt;br /&gt;
&lt;br /&gt;
== Commercial Independence==&lt;br /&gt;
&lt;br /&gt;
I am not associated with any vendor and/or consultancy and therefore my agenda is *not* to exploit OWASP for  commercial gain.&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=112244</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=112244"/>
				<updated>2011-06-16T07:15:01Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Amended Headings i.e. Biography and Contributions to OWASP.  Consider listing mailing list contributions at a later date.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Contact=&lt;br /&gt;
&lt;br /&gt;
In relation to OWASP matters, [mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org].&lt;br /&gt;
&lt;br /&gt;
For matters not related to OWASP or as an Out of Band Communications Channel to his @owasp.org e-mail address, Christian Heinrich has listed multiple points of contact  at [http://cmlh.id.au/contact http://cmlh.id.au/contact].&lt;br /&gt;
&lt;br /&gt;
=Biography= &lt;br /&gt;
&lt;br /&gt;
Christian Heinrich has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich]&lt;br /&gt;
&lt;br /&gt;
=Contributions to OWASP=&lt;br /&gt;
&lt;br /&gt;
Christian Heinrich's edits to the OWASP wiki are listed at: [[:Special:Contributions/Cmlh|Special:Contributions/Cmlh]]. &lt;br /&gt;
&lt;br /&gt;
==OWASP Projects==&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Leader of the [http://www.owasp.org/index.php/Category:OWASP_PCI_Project OWASP PCI Project] having previously led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project] i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;] and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and more recently contributed to the development of the OWASP ESAPI Java WAF, [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
==OWASP Presentations==&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in USA, Australia and Europe and OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands and;&lt;br /&gt;
*London, UK and;&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
Videos of these presentations are available from [http://www.google.com.au/search?tbm=vid&amp;amp;q=%22Christian+Heinrich%22+OWASP Google]&lt;br /&gt;
&lt;br /&gt;
=OWASP Board Candidate=&lt;br /&gt;
&lt;br /&gt;
TBC&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=112132</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=112132"/>
				<updated>2011-06-15T05:01:08Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added Headings&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Contact=&lt;br /&gt;
&lt;br /&gt;
In relation to OWASP matters, [mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org].&lt;br /&gt;
&lt;br /&gt;
For matters not related to OWASP or as an Out of Band Communications Channel to his @owasp.org e-mail address, Christian Heinrich has listed multiple points of contact  at [http://cmlh.id.au/contact http://cmlh.id.au/contact] and a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich]&lt;br /&gt;
&lt;br /&gt;
=OWASP Projects=&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Leader of the [http://www.owasp.org/index.php/Category:OWASP_PCI_Project OWASP PCI Project], having previously led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project], i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;]. He has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and, more recently, to the development of the OWASP ESAPI Java WAF, [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
Christian Heinrich's edits to the OWASP wiki are listed at: [[:Special:Contributions/Cmlh|Special:Contributions/Cmlh]]. &lt;br /&gt;
&lt;br /&gt;
=OWASP Presentations=&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands;&lt;br /&gt;
*London, UK; and&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
=OWASP Board Candidate=&lt;br /&gt;
&lt;br /&gt;
TBC&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=112131</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=112131"/>
				<updated>2011-06-15T04:56:27Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Relocated &amp;quot;Christian Heinrich's edits to the OWASP wiki are listed at: Special:Contributions/Cmlh&amp;quot; from Top of Page by Tom Brennan&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Leader of the [http://www.owasp.org/index.php/Category:OWASP_PCI_Project OWASP PCI Project], having previously led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project], i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;]. He has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and, more recently, to the development of the OWASP ESAPI Java WAF, [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
Christian Heinrich's edits to the OWASP wiki are listed at: [[:Special:Contributions/Cmlh|Special:Contributions/Cmlh]]. &lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands;&lt;br /&gt;
*London, UK; and&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
In relation to OWASP matters, [mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org].&lt;br /&gt;
&lt;br /&gt;
For matters not related to OWASP, or as an out-of-band communications channel for his @owasp.org e-mail address, Christian Heinrich has listed multiple points of contact at [http://cmlh.id.au/contact http://cmlh.id.au/contact] and a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich].&lt;br /&gt;
&lt;br /&gt;
=OWASP Board Candidate 2011=&lt;br /&gt;
&lt;br /&gt;
TBC&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=111081</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=111081"/>
				<updated>2011-05-26T06:14:17Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Separated owasp.org and cmlh.id.au contact information into two separate paragraphs&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Leader of the [http://www.owasp.org/index.php/Category:OWASP_PCI_Project OWASP PCI Project], having previously led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project], i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;]. He has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and, more recently, to the development of the OWASP ESAPI Java WAF, [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands;&lt;br /&gt;
*London, UK; and&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
In relation to OWASP matters, [mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org].&lt;br /&gt;
&lt;br /&gt;
For matters not related to OWASP, or as an out-of-band communications channel for his @owasp.org e-mail address, Christian Heinrich has listed multiple points of contact at [http://cmlh.id.au/contact http://cmlh.id.au/contact] and a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich].&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=110985</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=110985"/>
				<updated>2011-05-24T02:52:04Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Corrected reference to OWASP ESAPI Java WAF&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Leader of the [http://www.owasp.org/index.php/Category:OWASP_PCI_Project OWASP PCI Project], having previously led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project], i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;]. He has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and, more recently, to the development of the OWASP ESAPI Java WAF, [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands;&lt;br /&gt;
*London, UK; and&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org] and has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich].&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=110984</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=110984"/>
				<updated>2011-05-24T02:50:53Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Amended as Leader of OWASP PCI Project and added contribution to ESAPI Java WAF.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Leader of the [http://www.owasp.org/index.php/Category:OWASP_PCI_Project OWASP PCI Project] having previously led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project] i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;] and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and more recently contributed to the development of the OWASP, EASPI Java WAF, [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands;&lt;br /&gt;
*London, UK; and&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org] and has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich].&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Membership/2011Election&amp;diff=110983</id>
		<title>Membership/2011Election</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Membership/2011Election&amp;diff=110983"/>
				<updated>2011-05-24T02:47:29Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Indented cmlh entry for list format&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== About OWASP Foundation - [http://www.owasp.org/index.php/About_OWASP Click Here] ===&lt;br /&gt;
=== 2011 Candidates ===&lt;br /&gt;
Within the [http://waybackmachine.org/jsp/Interstitial.jsp?seconds=5&amp;amp;date=1009278073000&amp;amp;url=http%3A%2F%2Fwww.owasp.org%2Fabout_owasp%2Forgchart.shtml&amp;amp;target=http%3A%2F%2Freplay.waybackmachine.org%2F20011225110113%2Fhttp%3A%2F%2Fwww.owasp.org%2Fabout_owasp%2Forgchart.shtml OWASP Foundation, est. 2001], the Board of Directors consists of six board seats tasked with keeping the organization on [https://www.owasp.org/index.php/About_OWASP#Core_Values mission] and helping with all global committee and project efforts for the benefit of the professional association. Every two years, three of the board seats rotate. The following members are &amp;lt;b&amp;gt;NOT&amp;lt;/b&amp;gt; up for election this term: [http://www.owasp.org/index.php/Eoin_Keary Eoin Keary], [http://www.owasp.org/index.php/User:Brennan Tom Brennan] and [http://www.owasp.org/index.php/Matt_Tesauro Matt Tesauro]. &amp;lt;b&amp;gt;There will be (3) seats up for election in 2011&amp;lt;/b&amp;gt;; each voter will cast three votes for the available seats.&lt;br /&gt;
&lt;br /&gt;
=== Eligible Voters ===&lt;br /&gt;
Current [http://www.owasp.org/index.php/Membership/members OWASP Individual Members] have (1) vote; [http://www.owasp.org/index.php/Template:OWASP_Members_Horizontal OWASP Supporters] have (1) vote via their primary point of contact.&lt;br /&gt;
&lt;br /&gt;
=== 2011 OWASP Global Board of Directors Election Timeline ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Milestone #1&amp;lt;/b&amp;gt; Confirmed Candidate Official Announcement Jun 9th - [http://www.owasp.org/index.php/Category:OWASP_AppSec_Conference Global AppSec Europe] Dublin, Ireland. The individuals listed below should have a link to their campaign platform describing how they plan to help the OWASP Foundation and what they have done in OWASP to be considered a proven leader in the community.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Milestone #2&amp;lt;/b&amp;gt; An Electronic Vote (eMail) managed by the [http://www.owasp.org/index.php/Global_Membership_Committee Global Membership Committee] will take place before the OWASP AppSec 2011 USA Event, allowing candidates ample time to campaign on why they should be elected. An announcement will be sent to all registered OWASP members by email. This ballot will be sent to your primary @OWASP.ORG email address or the email address on file with the [http://www.regonline.com/builder/site/Default.aspx?EventID=919827 regonline] membership system.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Below is a list of nominations to date.&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== HOW TO RUN ===&lt;br /&gt;
&amp;lt;b&amp;gt;If you want to be considered, log in to the wiki, edit this page and link your profile in the same format.&amp;lt;/b&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
==== Candidates ====&lt;br /&gt;
 * '''Christian Heinrich''' - [https://www.owasp.org/index.php/User:Cmlh BIO &amp;amp; why vote for me?]  - New Candidate&lt;br /&gt;
&lt;br /&gt;
 * '''Michael Coates''' - [https://www.owasp.org/index.php/User:MichaelCoates BIO] - [https://www.owasp.org/index.php/User:MichaelCoates#OWASP_Board_Candidate_2011 Why Vote For Me?] - New Candidate&lt;br /&gt;
&lt;br /&gt;
 * '''Jim Manico''' - [http://www.owasp.org/index.php/User:jmanico BIO &amp;amp; why vote for me?]  - New Candidate &lt;br /&gt;
&lt;br /&gt;
 * '''Dave Wichers''' - [http://www.owasp.org/index.php/User:Wichers BIO] - Current Board Member - Reelection&lt;br /&gt;
&lt;br /&gt;
 * '''Sebastien Deleersnyder'''  - [http://www.owasp.org/index.php/User:Sdeleersnyder BIO &amp;amp; why vote for me?]  - Current Board Member - Reelection&lt;br /&gt;
&lt;br /&gt;
 * &amp;lt;insert candidate name link to owasp wiki page, list bio and campaign platform views with why elect you?&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 * &amp;lt;insert candidate name link to owasp wiki page, list bio and campaign platform views with why elect you?&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2009 Election Results (History of how this process worked last time) ===&lt;br /&gt;
[http://www.owasp.org/index.php/Board_member Click Here for Election Information from 2009 / Results]&lt;br /&gt;
&lt;br /&gt;
=== OWASP Foundation Bylaws ===&lt;br /&gt;
Board members are asked to adhere to the [http://www.owasp.org/images/0/0d/OWASP_ByLaws.pdf current bylaws] that govern the OWASP Foundation and board member conduct, and to revise them annually. *NOTE* - The bylaws are currently under revision as a result of the [http://www.owasp.org/index.php/Summit_2011_Outcomes OWASP 2011 Summit]; revised bylaws are expected by the [http://www.owasp.org/index.php/OWASP_Board_Meetings June Board Meeting] - [https://docs.google.com/document/d/1r_hS2ioEBcNOKqmEjSJmlLUOdQEb5qPb_0GU_VU1Arw/edit?hl=en&amp;amp;authkey=CLe5nZwD Click here for &amp;lt;b&amp;gt;DRAFT BYLAWS&amp;lt;/b&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
=== Primary Responsibilities === &lt;br /&gt;
1. Determine mission and purpose. It is the board's responsibility to create and review a statement of mission and purpose that articulates the organization's goals, means, and primary constituents served globally.&lt;br /&gt;
&lt;br /&gt;
2. Select the Director(s) and employees. Boards must reach consensus on the Director responsibilities and undertake a careful search to find the most qualified individual for the position.&lt;br /&gt;
&lt;br /&gt;
3. Support and evaluate all employees. The board should ensure that employees have the moral and professional support they need to further the goals of the organization.&lt;br /&gt;
&lt;br /&gt;
4. Ensure effective planning. Boards must actively participate in an overall planning process and assist in implementing and monitoring the plan's goals.&lt;br /&gt;
&lt;br /&gt;
5. Monitor and strengthen programs and services. The board's responsibility is to determine which programs are consistent with the organization's mission and monitor their effectiveness.&lt;br /&gt;
&lt;br /&gt;
6. Ensure adequate financial resources. One of the board's foremost responsibilities is to secure adequate resources for the organization to fulfill its mission.&lt;br /&gt;
&lt;br /&gt;
7. Protect assets and provide proper financial oversight. The board must assist in developing the annual budget and ensuring that proper financial controls are in place.&lt;br /&gt;
&lt;br /&gt;
8. Build a competent board. All boards have a responsibility to articulate prerequisites for candidates, orient new members, and periodically and comprehensively evaluate their own performance.&lt;br /&gt;
&lt;br /&gt;
9. Ensure legal and ethical integrity. The board is ultimately responsible for adherence to legal standards and ethical norms.&lt;br /&gt;
&lt;br /&gt;
10. Enhance the organization's public standing. The board should clearly articulate the organization's mission, accomplishments, and goals to the public and garner support from the community.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
Please direct your questions to members of the [https://www.owasp.org/index.php/Global_Membership_Committee OWASP Global Membership Committee]&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Membership/2011Election&amp;diff=110982</id>
		<title>Membership/2011Election</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Membership/2011Election&amp;diff=110982"/>
				<updated>2011-05-24T02:45:52Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added cmlh to list of candidates&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== About OWASP Foundation - [http://www.owasp.org/index.php/About_OWASP Click Here] ===&lt;br /&gt;
=== 2011 Candidates ===&lt;br /&gt;
Within the [http://waybackmachine.org/jsp/Interstitial.jsp?seconds=5&amp;amp;date=1009278073000&amp;amp;url=http%3A%2F%2Fwww.owasp.org%2Fabout_owasp%2Forgchart.shtml&amp;amp;target=http%3A%2F%2Freplay.waybackmachine.org%2F20011225110113%2Fhttp%3A%2F%2Fwww.owasp.org%2Fabout_owasp%2Forgchart.shtml OWASP Foundation, est. 2001], the Board of Directors consists of six board seats tasked with keeping the organization on [https://www.owasp.org/index.php/About_OWASP#Core_Values mission] and helping with all global committee and project efforts for the benefit of the professional association. Every two years, three of the board seats rotate. The following members are &amp;lt;b&amp;gt;NOT&amp;lt;/b&amp;gt; up for election this term: [http://www.owasp.org/index.php/Eoin_Keary Eoin Keary], [http://www.owasp.org/index.php/User:Brennan Tom Brennan] and [http://www.owasp.org/index.php/Matt_Tesauro Matt Tesauro]. &amp;lt;b&amp;gt;There will be (3) seats up for election in 2011&amp;lt;/b&amp;gt;; each voter will cast three votes for the available seats.&lt;br /&gt;
&lt;br /&gt;
=== Eligible Voters ===&lt;br /&gt;
Current [http://www.owasp.org/index.php/Membership/members OWASP Individual Members] have (1) vote; [http://www.owasp.org/index.php/Template:OWASP_Members_Horizontal OWASP Supporters] have (1) vote via their primary point of contact.&lt;br /&gt;
&lt;br /&gt;
=== 2011 OWASP Global Board of Directors Election Timeline ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Milestone #1&amp;lt;/b&amp;gt; Confirmed Candidate Official Announcement Jun 9th - [http://www.owasp.org/index.php/Category:OWASP_AppSec_Conference Global AppSec Europe] Dublin, Ireland. The individuals listed below should have a link to their campaign platform describing how they plan to help the OWASP Foundation and what they have done in OWASP to be considered a proven leader in the community.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;Milestone #2&amp;lt;/b&amp;gt; An Electronic Vote (eMail) managed by the [http://www.owasp.org/index.php/Global_Membership_Committee Global Membership Committee] will take place before the OWASP AppSec 2011 USA Event, allowing candidates ample time to campaign on why they should be elected. An announcement will be sent to all registered OWASP members by email. This ballot will be sent to your primary @OWASP.ORG email address or the email address on file with the [http://www.regonline.com/builder/site/Default.aspx?EventID=919827 regonline] membership system.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Below is a list of nominations to date.&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== HOW TO RUN ===&lt;br /&gt;
&amp;lt;b&amp;gt;If you want to be considered, log in to the wiki, edit this page and link your profile in the same format.&amp;lt;/b&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
==== Candidates ====&lt;br /&gt;
* '''Christian Heinrich''' - [https://www.owasp.org/index.php/User:Cmlh BIO &amp;amp; why vote for me?]  - New Candidate&lt;br /&gt;
&lt;br /&gt;
 * '''Michael Coates''' - [https://www.owasp.org/index.php/User:MichaelCoates BIO] - [https://www.owasp.org/index.php/User:MichaelCoates#OWASP_Board_Candidate_2011 Why Vote For Me?] - New Candidate&lt;br /&gt;
&lt;br /&gt;
 * '''Jim Manico''' - [http://www.owasp.org/index.php/User:jmanico BIO &amp;amp; why vote for me?]  - New Candidate &lt;br /&gt;
&lt;br /&gt;
 * '''Dave Wichers''' - [http://www.owasp.org/index.php/User:Wichers BIO] - Current Board Member - Reelection&lt;br /&gt;
&lt;br /&gt;
 * '''Sebastien Deleersnyder'''  - [http://www.owasp.org/index.php/User:Sdeleersnyder BIO &amp;amp; why vote for me?]  - Current Board Member - Reelection&lt;br /&gt;
&lt;br /&gt;
 * &amp;lt;insert candidate name link to owasp wiki page, list bio and campaign platform views with why elect you?&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 * &amp;lt;insert candidate name link to owasp wiki page, list bio and campaign platform views with why elect you?&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== 2009 Election Results (History of how this process worked last time) ===&lt;br /&gt;
[http://www.owasp.org/index.php/Board_member Click Here for Election Information from 2009 / Results]&lt;br /&gt;
&lt;br /&gt;
=== OWASP Foundation Bylaws ===&lt;br /&gt;
Board members are asked to adhere to the [http://www.owasp.org/images/0/0d/OWASP_ByLaws.pdf current bylaws] that govern the OWASP Foundation and board member conduct, and to revise them annually. *NOTE* - The bylaws are currently under revision as a result of the [http://www.owasp.org/index.php/Summit_2011_Outcomes OWASP 2011 Summit]; revised bylaws are expected by the [http://www.owasp.org/index.php/OWASP_Board_Meetings June Board Meeting] - [https://docs.google.com/document/d/1r_hS2ioEBcNOKqmEjSJmlLUOdQEb5qPb_0GU_VU1Arw/edit?hl=en&amp;amp;authkey=CLe5nZwD Click here for &amp;lt;b&amp;gt;DRAFT BYLAWS&amp;lt;/b&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
=== Primary Responsibilities === &lt;br /&gt;
1. Determine mission and purpose. It is the board's responsibility to create and review a statement of mission and purpose that articulates the organization's goals, means, and primary constituents served globally.&lt;br /&gt;
&lt;br /&gt;
2. Select the Director(s) and employees. Boards must reach consensus on the Director responsibilities and undertake a careful search to find the most qualified individual for the position.&lt;br /&gt;
&lt;br /&gt;
3. Support and evaluate all employees. The board should ensure that employees have the moral and professional support they need to further the goals of the organization.&lt;br /&gt;
&lt;br /&gt;
4. Ensure effective planning. Boards must actively participate in an overall planning process and assist in implementing and monitoring the plan's goals.&lt;br /&gt;
&lt;br /&gt;
5. Monitor and strengthen programs and services. The board's responsibility is to determine which programs are consistent with the organization's mission and monitor their effectiveness.&lt;br /&gt;
&lt;br /&gt;
6. Ensure adequate financial resources. One of the board's foremost responsibilities is to secure adequate resources for the organization to fulfill its mission.&lt;br /&gt;
&lt;br /&gt;
7. Protect assets and provide proper financial oversight. The board must assist in developing the annual budget and ensuring that proper financial controls are in place.&lt;br /&gt;
&lt;br /&gt;
8. Build a competent board. All boards have a responsibility to articulate prerequisites for candidates, orient new members, and periodically and comprehensively evaluate their own performance.&lt;br /&gt;
&lt;br /&gt;
9. Ensure legal and ethical integrity. The board is ultimately responsible for adherence to legal standards and ethical norms.&lt;br /&gt;
&lt;br /&gt;
10. Enhance the organization's public standing. The board should clearly articulate the organization's mission, accomplishments, and goals to the public and garner support from the community.&lt;br /&gt;
&lt;br /&gt;
=== Questions ===&lt;br /&gt;
Please direct your questions to members of the [https://www.owasp.org/index.php/Global_Membership_Committee OWASP Global Membership Committee]&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=105449</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=105449"/>
				<updated>2011-02-19T01:44:57Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added OWASP Netherlands&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:christian.heinrich@owasp.org Christian Heinrich] has led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project], i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;], and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and, more recently, to the development of the OWASP [http://www.owasp.org/index.php/Category:OWASP_PCI_Project PCI], [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in:&lt;br /&gt;
&lt;br /&gt;
*the Netherlands;&lt;br /&gt;
*London, UK; and&lt;br /&gt;
*Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org] and has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich].&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=90341</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=90341"/>
				<updated>2010-09-29T07:06:09Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added OpenSAMM contribution&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:christian.heinrich@owasp.org Christian Heinrich] has led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project], i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;], and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and, more recently, to the development of the OWASP [http://www.owasp.org/index.php/Category:OWASP_PCI_Project PCI], [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten], [http://www.opensamm.org OpenSAMM] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in London, UK, and in Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org] and has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich].&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=90289</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=90289"/>
				<updated>2010-09-28T11:53:50Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:christian.heinrich@owasp.org Christian Heinrich] has led the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project], i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;], and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and, more recently, to the development of the OWASP [http://www.owasp.org/index.php/Category:OWASP_PCI_Project PCI], [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in London, UK, and in Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org] and has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich].&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=90288</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=90288"/>
				<updated>2010-09-28T11:43:32Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:christian.heinrich@owasp.org Christian Heinrich] is the former Project Leader of the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project], i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;], and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and, more recently, to the development of the OWASP [http://www.owasp.org/index.php/Category:OWASP_PCI_Project PCI], [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in London, UK, and in Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org] and has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich].&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Request_for_Proposals/New_Project_Leader/ASVS/Application_5&amp;diff=88716</id>
		<title>OWASP Request for Proposals/New Project Leader/ASVS/Application 5</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Request_for_Proposals/New_Project_Leader/ASVS/Application_5&amp;diff=88716"/>
				<updated>2010-09-06T06:53:04Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added Project Roadmap&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:&amp;lt;includeonly&amp;gt;{{{1}}}&amp;lt;/includeonly&amp;gt;&amp;lt;noinclude&amp;gt;New Project Leader Applicants&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| Applicant_Name = Christian Heinrich &amp;lt;!--Please replace 'Applicant 5' by your name (REQUIRED field)--&amp;gt;&lt;br /&gt;
| Applicant_Email = christian.heinrich@owasp.org&amp;lt;!--Please replace all this text by your email address (REQUIRED field)--&amp;gt;&lt;br /&gt;
| Applicant_Wiki_Username = cmlh&amp;lt;!--Please replace this text by your wiki username (REQUIRED field)--&amp;gt;&lt;br /&gt;
| Curriculum_Vitae_url = http://www.linkedin.com/in/ChristianHeinrich&amp;lt;!--Please replace all this text by your CV's web link  (REQUIRED field)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| Proposed_Roadmap_url =  http://cmlh.id.au/post/1073836740/owasp-asvs-project-leader&amp;lt;!--Please replace all this text by your Roadmap's web link  (OPTIONAL field - choose between this field and the following one)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| Proposed_Roadmap_Text = Specified at http://cmlh.id.au/post/1073836740/owasp-asvs-project-leader&amp;lt;!--Please replace all this text with a Roadmap (OPTIONAL field - choose between this field and the previous one)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--##### Please replace/edit these variables ##### --&amp;gt; &lt;br /&gt;
| Applicant_name_mask = Application_5 &lt;br /&gt;
| Applicant_home_page = :OWASP_Request_for_Proposals/New_Project_Leader/ASVS/Application_5 &lt;br /&gt;
| Applications_home_page = Seeking_New_Project_Leader_For/ASVS&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Request_for_Proposals/New_Project_Leader/ASVS/Application_5&amp;diff=88715</id>
		<title>OWASP Request for Proposals/New Project Leader/ASVS/Application 5</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Request_for_Proposals/New_Project_Leader/ASVS/Application_5&amp;diff=88715"/>
				<updated>2010-09-06T03:04:42Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Commenced Application - saving prior to adding Proposed Roadmap&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:&amp;lt;includeonly&amp;gt;{{{1}}}&amp;lt;/includeonly&amp;gt;&amp;lt;noinclude&amp;gt;New Project Leader Applicants&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| Applicant_Name = Christian Heinrich &amp;lt;!--Please replace 'Applicant 5' by your name (REQUIRED field)--&amp;gt;&lt;br /&gt;
| Applicant_Email = christian.heinrich@owasp.org&amp;lt;!--Please replace all this text by your email address (REQUIRED field)--&amp;gt;&lt;br /&gt;
| Applicant_Wiki_Username = cmlh&amp;lt;!--Please replace this text by your wiki username (REQUIRED field)--&amp;gt;&lt;br /&gt;
| Curriculum_Vitae_url = http://www.linkedin.com/in/ChristianHeinrich&amp;lt;!--Please replace all this text by your CV's web link  (REQUIRED field)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| Proposed_Roadmap_url =  &amp;lt;!--Please replace all this text by your Roadmap's web link  (OPTIONAL field - choose between this field and the following one)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
| Proposed_Roadmap_Text = &amp;lt;!--Please replace all this text with a Roadmap (OPTIONAL field - choose between this field and the previous one)--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--##### Please replace/edit these variables ##### --&amp;gt; &lt;br /&gt;
| Applicant_name_mask = Application_5 &lt;br /&gt;
| Applicant_home_page = :OWASP_Request_for_Proposals/New_Project_Leader/ASVS/Application_5 &lt;br /&gt;
| Applications_home_page = Seeking_New_Project_Leader_For/ASVS&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=88149</id>
		<title>User:Cmlh</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Cmlh&amp;diff=88149"/>
				<updated>2010-08-29T07:36:06Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added OWASP PCI Project&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[mailto:christian.heinrich@owasp.org Christian Heinrich] is the Project Leader of the [http://www.owasp.org/index.php/Category:OWASP_Google_Hacking_Project OWASP &amp;quot;Google Hacking&amp;quot; Project], i.e. [http://code.google.com/p/dic &amp;quot;Download Indexed Cache&amp;quot;], and has contributed to the [http://www.owasp.org/index.php/Testing:_Spiders,_Robots,_and_Crawlers_(OWASP-IG-001) &amp;quot;Spiders/Robots/Crawlers&amp;quot;] and [http://www.owasp.org/index.php/Testing:_Search_engine_discovery/reconnaissance_(OWASP-IG-002) &amp;quot;Search Engine Reconnaissance&amp;quot;] sections of the OWASP Testing Guide v3 and, more recently, to the development of the OWASP [http://www.owasp.org/index.php/Category:OWASP_PCI_Project PCI], [http://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project Top Ten] and [http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project Application Security Verification Standard (ASVS)] Projects.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] has presented at OWASP Conferences in the USA, Australia and Europe, and at OWASP Chapters in London, UK, and in Sydney and Melbourne, Australia.&lt;br /&gt;
&lt;br /&gt;
[mailto:christian.heinrich@owasp.org Christian Heinrich] can be reached at [mailto:christian.heinrich@owasp.org christian.heinrich@owasp.org] and has a Public Profile on LinkedIn at [http://www.linkedin.com/in/ChristianHeinrich http://www.linkedin.com/in/ChristianHeinrich].&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:OWASP_Inquiries/Google_Hacking_Project&amp;diff=86664</id>
		<title>Talk:OWASP Inquiries/Google Hacking Project</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:OWASP_Inquiries/Google_Hacking_Project&amp;diff=86664"/>
				<updated>2010-07-19T01:27:14Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;@Paulo Coimbra&lt;br /&gt;
&lt;br /&gt;
The methodology, which I assume is generic to all inquiries, should be listed on a separate page but linked from all other inquiries.&lt;br /&gt;
&lt;br /&gt;
The initial series of tabs should be:&lt;br /&gt;
6. &amp;quot;Record of Complaint(s)&amp;quot; - complaints made by anonymous and troll plaintiffs must be labeled as such and listed at the bottom of the wiki page, while plaintiffs who do exist (including their contact information) and their complaints are listed at the top of the wiki page.&lt;br /&gt;
5. &amp;quot;Response from Project Leader&amp;quot;&lt;br /&gt;
4. &amp;quot;OWASP Board Responses [Rejected/Accepted]&amp;quot; i.e. three parts: 1. whether an inquiry is required; and/or 2. other action items to avoid these types of complaints in the future; and 3. whether these are accepted by the plaintiff (the attempts to contact them would be listed if they do not respond).&lt;br /&gt;
&lt;br /&gt;
Obviously, if the inquiry is launched, then the following could be recorded within one or multiple tabs:&lt;br /&gt;
3. &amp;quot;Record of Legitimate Complaints&amp;quot;&lt;br /&gt;
2. &amp;quot;Acceptance by Project Leader&amp;quot;&lt;br /&gt;
1. &amp;quot;Concluding Statement by OWASP Board&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The numbering of 1-6 is based on this ordering because the public will read the tabs left to right, and hence more rumor, hearsay, etc. may be created if the concluding tab isn't the left-most tab (i.e. 4 or 1).&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Talk:OWASP_Inquiries/Google_Hacking_Project&amp;diff=86663</id>
		<title>Talk:OWASP Inquiries/Google Hacking Project</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Talk:OWASP_Inquiries/Google_Hacking_Project&amp;diff=86663"/>
				<updated>2010-07-19T01:25:35Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Suggested revised methodology&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;@Paulo Coimbra&lt;br /&gt;
&lt;br /&gt;
The methodology, which I assume is generic to all inquiries, should be listed on a separate page but linked from all inquiries.&lt;br /&gt;
&lt;br /&gt;
The initial series of tabs should be:&lt;br /&gt;
6. &amp;quot;Record of Complaint(s)&amp;quot; - complaints made by anonymous and troll plaintiffs must be labeled and listed at the bottom of the page, while the complaints of plaintiffs who do exist (including their contact information) are listed at the top.&lt;br /&gt;
5. &amp;quot;Response from Project Leader&amp;quot;&lt;br /&gt;
4. &amp;quot;OWASP Board Responses [Rejected/Accepted]&amp;quot; i.e. three parts: 1. whether an inquiry is required; and/or 2. other action items to avoid these types of complaints in the future; and 3. whether these are accepted by the plaintiff (the attempts to contact them would be listed if they do not respond).&lt;br /&gt;
&lt;br /&gt;
Obviously, if the inquiry is launched, then the following could be recorded within one or multiple tabs:&lt;br /&gt;
3. &amp;quot;Record of Legitimate Complaints&amp;quot;&lt;br /&gt;
2. &amp;quot;Acceptance by Project Leader&amp;quot;&lt;br /&gt;
1. &amp;quot;Concluding Statement by OWASP Board&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The numbering of 1-6 is based on this ordering because the public will read the tabs left to right, and hence more rumor, hearsay, etc. may be created if the concluding tab isn't the left-most tab (i.e. 4 or 1).&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Google_Hacking_Project_-_PoC_v0.2_-_Post_Google_SOAP_Search_API_Deprecation&amp;diff=86157</id>
		<title>OWASP Google Hacking Project - PoC v0.2 - Post Google SOAP Search API Deprecation</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Google_Hacking_Project_-_PoC_v0.2_-_Post_Google_SOAP_Search_API_Deprecation&amp;diff=86157"/>
				<updated>2010-07-11T04:35:25Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: 1st DRAFT&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Please note that executing &amp;quot;Download Indexed Cache&amp;quot; after September 2009 is in violation of Google's Terms of Service, even with a valid Google SOAP Search API Key.&lt;br /&gt;
&lt;br /&gt;
Information on how to download, build and install this release is available from http://code.google.com/p/dic/wiki/DownloadBuildInstall&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=GPC_Project_Details/OWASP_Google_Hacking_Project&amp;diff=86156</id>
		<title>GPC Project Details/OWASP Google Hacking Project</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=GPC_Project_Details/OWASP_Google_Hacking_Project&amp;diff=86156"/>
				<updated>2010-07-11T04:33:03Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Added RUXCON 2K8 Release as old_release&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:OWASP Project|Google Hacking Project]]&lt;br /&gt;
[[Category:OWASP Tool]]&lt;br /&gt;
[[Category:OWASP Alpha Quality Tool]]&lt;br /&gt;
&lt;br /&gt;
{{Template:&amp;lt;includeonly&amp;gt;{{{1}}}&amp;lt;/includeonly&amp;gt;&amp;lt;noinclude&amp;gt;OWASP Project Identification Tab&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
| project_name = OWASP Google Hacking Project&lt;br /&gt;
| project_description = &amp;quot;Download Indexed Cache&amp;quot; is a Proof of Concept (PoC) which implements the Google SOAP Search API to retrieve content indexed within the Google Cache and supports the &amp;quot;Search Engine Reconnaissance&amp;quot; section of the OWASP Testing Guide v3. &lt;br /&gt;
| project_license = [http://www.apache.org/licenses/LICENSE-2.0 Apache License 2.0]&lt;br /&gt;
| leader_name = Christian Heinrich&lt;br /&gt;
| leader_email = christian.heinrich@owasp.org&lt;br /&gt;
| leader_username = cmlh&lt;br /&gt;
| past_leaders_special_contributions = &lt;br /&gt;
| maintainer_name = &lt;br /&gt;
| maintainer_email = &lt;br /&gt;
| maintainer_username =  &lt;br /&gt;
| contributor_name1 =&lt;br /&gt;
| contributor_email1 = &lt;br /&gt;
| contributor_username1 =  &lt;br /&gt;
| contributor_name2 =&lt;br /&gt;
| contributor_email2 = &lt;br /&gt;
| contributor_username2 = &lt;br /&gt;
| contributor_name3 =&lt;br /&gt;
| contributor_email3 = &lt;br /&gt;
| contributor_username3 = &lt;br /&gt;
| contributor_name4 = &lt;br /&gt;
| contributor_email4 = &lt;br /&gt;
| contributor_username4 = &lt;br /&gt;
| contributor_name5 = &lt;br /&gt;
| contributor_email5 = &lt;br /&gt;
| contributor_username5 = &lt;br /&gt;
| contributor_name6 = &lt;br /&gt;
| contributor_email6 = &lt;br /&gt;
| contributor_username6 = &lt;br /&gt;
| contributor_name7 = &lt;br /&gt;
| contributor_email7 = &lt;br /&gt;
| contributor_username7 = &lt;br /&gt;
| contributor_name8 = &lt;br /&gt;
| contributor_email8 = &lt;br /&gt;
| contributor_username8 = &lt;br /&gt;
| contributor_name9 = &lt;br /&gt;
| contributor_email9 = &lt;br /&gt;
| contributor_username9 = &lt;br /&gt;
| contributor_name10 = &lt;br /&gt;
| contributor_email10 = &lt;br /&gt;
| contributor_username10 =  &lt;br /&gt;
| pamphlet_link = &lt;br /&gt;
| presentation_link = http://www.slideshare.net/cmlh/download-indexed-cache&lt;br /&gt;
| mailing_list_name = owasp-google-hacking&lt;br /&gt;
| links_url1 = http://code.google.com/p/dic/&lt;br /&gt;
| links_name1 = Google Code&lt;br /&gt;
| links_url2 = &lt;br /&gt;
| links_name2 = &lt;br /&gt;
| links_url3 = &lt;br /&gt;
| links_name3 = &lt;br /&gt;
| links_url4 = &lt;br /&gt;
| links_name4 = &lt;br /&gt;
| links_url5 = &lt;br /&gt;
| links_name5 = &lt;br /&gt;
| links_url6 = &lt;br /&gt;
| links_name6 = &lt;br /&gt;
| links_url7 = &lt;br /&gt;
| links_name7 = &lt;br /&gt;
| links_url8 = &lt;br /&gt;
| links_name8 = &lt;br /&gt;
| links_url9 = &lt;br /&gt;
| links_name9 = &lt;br /&gt;
| links_url10 = &lt;br /&gt;
| links_name10 = &lt;br /&gt;
| project_road_map = Category:OWASP_Google_Hacking_Project_RoadMap&lt;br /&gt;
| project_health_status = &lt;br /&gt;
| current_release_name = PoC v0.2 - Post Google SOAP Search API Deprecation&lt;br /&gt;
| current_release_date = September 2009&lt;br /&gt;
| current_release_download_link = http://code.google.com/p/dic/downloads/list&lt;br /&gt;
| current_release_rating = -1&lt;br /&gt;
| current_release_leader_name = Christian Heinrich&lt;br /&gt;
| current_release_leader_email = &lt;br /&gt;
| current_release_leader_username = cmlh&lt;br /&gt;
| current_release_details = OWASP Google Hacking Project - PoC v0.2 - Post Google SOAP Search API Deprecation &lt;br /&gt;
| last_reviewed_release_name = &lt;br /&gt;
| last_reviewed_release_date = &lt;br /&gt;
| last_reviewed_release_download_link = &lt;br /&gt;
| last_reviewed_release_rating = &lt;br /&gt;
| last_reviewed_release_leader_name = &lt;br /&gt;
| last_reviewed_release_leader_email = &lt;br /&gt;
| last_reviewed_release_leader_username = &lt;br /&gt;
| old_release_name1 = RUXCON 2K8&lt;br /&gt;
| old_release_date1 = November 2008&lt;br /&gt;
| old_release_download_link1 = http://code.google.com/p/dic/source/browse/trunk/dic.pl?spec=svn6&amp;amp;r=2&lt;br /&gt;
| old_release_name2 = &lt;br /&gt;
| old_release_date2 = &lt;br /&gt;
| old_release_download_link2 = &lt;br /&gt;
| old_release_name3 = &lt;br /&gt;
| old_release_date3 = &lt;br /&gt;
| old_release_download_link3 = &lt;br /&gt;
| old_release_name4 = &lt;br /&gt;
| old_release_date4 = &lt;br /&gt;
| old_release_download_link4 = &lt;br /&gt;
| old_release_name5 = &lt;br /&gt;
| old_release_date5 = &lt;br /&gt;
| old_release_download_link5 = &lt;br /&gt;
| last_GPC_update = 06/July/2010&lt;br /&gt;
| GPC_Notes = This project has had its status changed (currently inactive) pending the outcome of an inquiry. &amp;lt;!--- This project can no longer be maintained due to the closure of the Google SOAP Search API i.e. http://googlecode.blogspot.com/2009/08/well-earned-retirement-for-soap-search.html.---&amp;gt;&lt;br /&gt;
| project_home_page = Category:OWASP_Google_Hacking_Project &lt;br /&gt;
| project_details_wiki_page = GPC_Project_Details/OWASP_Google_Hacking_Project&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=GPC_Project_Details/OWASP_Google_Hacking_Project&amp;diff=86155</id>
		<title>GPC Project Details/OWASP Google Hacking Project</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=GPC_Project_Details/OWASP_Google_Hacking_Project&amp;diff=86155"/>
				<updated>2010-07-11T04:29:21Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Updated for PoC v0.2 - Post Google SOAP Search API Deprecation Release&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:OWASP Project|Google Hacking Project]]&lt;br /&gt;
[[Category:OWASP Tool]]&lt;br /&gt;
[[Category:OWASP Alpha Quality Tool]]&lt;br /&gt;
&lt;br /&gt;
{{Template:&amp;lt;includeonly&amp;gt;{{{1}}}&amp;lt;/includeonly&amp;gt;&amp;lt;noinclude&amp;gt;OWASP Project Identification Tab&amp;lt;/noinclude&amp;gt;&lt;br /&gt;
| project_name = OWASP Google Hacking Project&lt;br /&gt;
| project_description = &amp;quot;Download Indexed Cache&amp;quot; is a Proof of Concept (PoC) which implements the Google SOAP Search API to retrieve content indexed within the Google Cache and supports the &amp;quot;Search Engine Reconnaissance&amp;quot; section of the OWASP Testing Guide v3. &lt;br /&gt;
| project_license = [http://www.apache.org/licenses/LICENSE-2.0 Apache License 2.0]&lt;br /&gt;
| leader_name = Christian Heinrich&lt;br /&gt;
| leader_email = christian.heinrich@owasp.org&lt;br /&gt;
| leader_username = cmlh&lt;br /&gt;
| past_leaders_special_contributions = &lt;br /&gt;
| maintainer_name = &lt;br /&gt;
| maintainer_email = &lt;br /&gt;
| maintainer_username =  &lt;br /&gt;
| contributor_name1 =&lt;br /&gt;
| contributor_email1 = &lt;br /&gt;
| contributor_username1 =  &lt;br /&gt;
| contributor_name2 =&lt;br /&gt;
| contributor_email2 = &lt;br /&gt;
| contributor_username2 = &lt;br /&gt;
| contributor_name3 =&lt;br /&gt;
| contributor_email3 = &lt;br /&gt;
| contributor_username3 = &lt;br /&gt;
| contributor_name4 = &lt;br /&gt;
| contributor_email4 = &lt;br /&gt;
| contributor_username4 = &lt;br /&gt;
| contributor_name5 = &lt;br /&gt;
| contributor_email5 = &lt;br /&gt;
| contributor_username5 = &lt;br /&gt;
| contributor_name6 = &lt;br /&gt;
| contributor_email6 = &lt;br /&gt;
| contributor_username6 = &lt;br /&gt;
| contributor_name7 = &lt;br /&gt;
| contributor_email7 = &lt;br /&gt;
| contributor_username7 = &lt;br /&gt;
| contributor_name8 = &lt;br /&gt;
| contributor_email8 = &lt;br /&gt;
| contributor_username8 = &lt;br /&gt;
| contributor_name9 = &lt;br /&gt;
| contributor_email9 = &lt;br /&gt;
| contributor_username9 = &lt;br /&gt;
| contributor_name10 = &lt;br /&gt;
| contributor_email10 = &lt;br /&gt;
| contributor_username10 =  &lt;br /&gt;
| pamphlet_link = &lt;br /&gt;
| presentation_link = http://www.slideshare.net/cmlh/download-indexed-cache&lt;br /&gt;
| mailing_list_name = owasp-google-hacking&lt;br /&gt;
| links_url1 = &lt;br /&gt;
| links_name1 = &lt;br /&gt;
| links_url2 = &lt;br /&gt;
| links_name2 = &lt;br /&gt;
| links_url3 = &lt;br /&gt;
| links_name3 = &lt;br /&gt;
| links_url4 = &lt;br /&gt;
| links_name4 = &lt;br /&gt;
| links_url5 = &lt;br /&gt;
| links_name5 = &lt;br /&gt;
| links_url6 = &lt;br /&gt;
| links_name6 = &lt;br /&gt;
| links_url7 = &lt;br /&gt;
| links_name7 = &lt;br /&gt;
| links_url8 = &lt;br /&gt;
| links_name8 = &lt;br /&gt;
| links_url9 = &lt;br /&gt;
| links_name9 = &lt;br /&gt;
| links_url10 = &lt;br /&gt;
| links_name10 = &lt;br /&gt;
| project_road_map = Category:OWASP_Google_Hacking_Project_RoadMap&lt;br /&gt;
| project_health_status = &lt;br /&gt;
| current_release_name = PoC v0.2 - Post Google SOAP Search API Deprecation&lt;br /&gt;
| current_release_date = September 2009&lt;br /&gt;
| current_release_download_link = http://code.google.com/p/dic/downloads/list&lt;br /&gt;
| current_release_rating = -1&lt;br /&gt;
| current_release_leader_name = Christian Heinrich&lt;br /&gt;
| current_release_leader_email = &lt;br /&gt;
| current_release_leader_username = cmlh&lt;br /&gt;
| current_release_details = OWASP Google Hacking Project - PoC v0.2 - Post Google SOAP Search API Deprecation &lt;br /&gt;
| last_reviewed_release_name = &lt;br /&gt;
| last_reviewed_release_date = &lt;br /&gt;
| last_reviewed_release_download_link = &lt;br /&gt;
| last_reviewed_release_rating = &lt;br /&gt;
| last_reviewed_release_leader_name = &lt;br /&gt;
| last_reviewed_release_leader_email = &lt;br /&gt;
| last_reviewed_release_leader_username = &lt;br /&gt;
| old_release_name1 = &lt;br /&gt;
| old_release_date1 = &lt;br /&gt;
| old_release_download_link1 = &lt;br /&gt;
| old_release_name2 = &lt;br /&gt;
| old_release_date2 = &lt;br /&gt;
| old_release_download_link2 = &lt;br /&gt;
| old_release_name3 = &lt;br /&gt;
| old_release_date3 = &lt;br /&gt;
| old_release_download_link3 = &lt;br /&gt;
| old_release_name4 = &lt;br /&gt;
| old_release_date4 = &lt;br /&gt;
| old_release_download_link4 = &lt;br /&gt;
| old_release_name5 = &lt;br /&gt;
| old_release_date5 = &lt;br /&gt;
| old_release_download_link5 = &lt;br /&gt;
| last_GPC_update = 06/July/2010&lt;br /&gt;
| GPC_Notes = This project has had its status changed (currently inactive) pending the outcome of an inquiry. &amp;lt;!--- This project can no longer be maintained due to the closure of the Google SOAP Search API i.e. http://googlecode.blogspot.com/2009/08/well-earned-retirement-for-soap-search.html.---&amp;gt;&lt;br /&gt;
| project_home_page = Category:OWASP_Google_Hacking_Project &lt;br /&gt;
| project_details_wiki_page = GPC_Project_Details/OWASP_Google_Hacking_Project&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Category:OWASP_Google_Hacking_Project_RoadMap&amp;diff=86154</id>
		<title>Category:OWASP Google Hacking Project RoadMap</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Category:OWASP_Google_Hacking_Project_RoadMap&amp;diff=86154"/>
				<updated>2010-07-11T04:27:40Z</updated>
		
		<summary type="html">&lt;p&gt;Cmlh: Updated for PoC v0.2 - Post Google SOAP Search API Deprecation Release&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Sep 2008 - Oct 2008&lt;br /&gt;
  PoC v0.1 demonstrated at OWASP USA Conference 2008, ToorCon X and SecTor 2008&amp;lt;br&amp;gt;&lt;br /&gt;
Nov 2008&lt;br /&gt;
  PoC v0.1 released at RUXCON 2K8&amp;lt;br&amp;gt;&lt;br /&gt;
Sep 2009&lt;br /&gt;
  PoC v0.2 released due to Google deprecating their SOAP Search API&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Cmlh</name></author>	</entry>

	</feed>