<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Darrellgrundy</id>
		<title>OWASP - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Darrellgrundy"/>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php/Special:Contributions/Darrellgrundy"/>
		<updated>2026-04-28T00:38:29Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.2</generator>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16363</id>
		<title>Test Network/Infrastructure Configuration (OTG-CONFIG-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16363"/>
				<updated>2007-02-09T10:47:34Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can count hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application.&lt;br /&gt;
In fact it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and (almost) unimportant problems may evolve into severe risks for another application on the same server.&lt;br /&gt;
In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or introduce new vulnerabilities that might compromise the application itself.&lt;br /&gt;
&lt;br /&gt;
For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers or application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.&lt;br /&gt;
&lt;br /&gt;
In order to test the configuration management infrastructure, the following steps need to be taken:&lt;br /&gt;
&lt;br /&gt;
* The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security.&lt;br /&gt;
* All the elements of the infrastructure need to be reviewed in order to make sure that they don’t hold any known vulnerabilities.&lt;br /&gt;
* A review needs to be made of the administrative tools used to maintain all the different elements.&lt;br /&gt;
* The authentication systems, if any, need to be reviewed in order to ensure that they serve the needs of the application and that they cannot be manipulated by external users to gain access.&lt;br /&gt;
* A list of defined ports which are required for the application should be maintained and kept under change control.&lt;br /&gt;
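The last step above, keeping the required ports under change control, amounts to comparing what is actually open against an approved baseline. A minimal sketch (the baseline and port numbers here are illustrative assumptions, not a recommendation):

```python
# Hypothetical change-control check: report open ports that are not in
# the approved baseline. The baseline below is an illustrative example.
def audit_ports(observed, approved=frozenset({80, 443})):
    """Return the observed ports that fall outside the approved list."""
    return sorted(set(observed) - set(approved))

# e.g. a scan found SSH (22) and an alternate HTTP port (8080) exposed
print(audit_ports([80, 443, 22, 8080]))  # -> [22, 8080]
```

Feeding this with the output of a periodic port scan makes unexpected exposures show up as a diff against the baseline rather than something a reviewer must spot by eye.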
&lt;br /&gt;
== Black Box Testing and Examples ==&lt;br /&gt;
&lt;br /&gt;
===Review of the application architecture===&lt;br /&gt;
&lt;br /&gt;
The application architecture needs to be reviewed through the test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might run the web server, which in turn executes the CGI applications written in C, Perl, or shell, with authentication perhaps also based on the web server's own authentication mechanisms. In more complex setups, such as an online banking system, multiple servers might be involved, including a reverse proxy, a front-end web server, an application server, and a database or LDAP server. Each of these servers is used for a different purpose and might even be divided into different networks, with firewalling devices between them creating separate DMZs. This way, access to the web server does not grant a remote user access to the authentication mechanism itself, and a compromise of one element of the architecture can be isolated so that it does not compromise the whole architecture.&lt;br /&gt;
&lt;br /&gt;
Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove to be very difficult if doing a blind penetration test.&lt;br /&gt;
&lt;br /&gt;
In the latter case, a tester will start with the assumption that the setup is simple (a single server) and will, through information retrieved from other tests, derive the different elements, questioning this assumption and extending the picture of the architecture as needed. The tester will begin by asking simple questions such as: “Is there a firewalling system protecting the web server?”, which will be answered based on the results of network scans targeted at the web server and an analysis of whether the network ports of the web server are being filtered at the network edge (no answer, or ICMP unreachables, are received) or whether the server is directly connected to the Internet (i.e. it returns RST packets for all non-listening ports). This analysis can be extended to determine the type of firewall used, based on network packet tests: is it a stateful firewall or an access-list filter on a router? How is it configured? Can it be bypassed?&lt;br /&gt;
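The filtered-versus-directly-reachable distinction described above can be approximated with a plain TCP connect probe. A minimal sketch, assuming a standard TCP/IP stack (a refused connection means an RST came back, while a timeout suggests the probe was silently dropped by a filter):

```python
import socket

def probe_port(host, port, timeout=2.0):
    """Classify a TCP port as 'open', 'closed' (an RST came back, so the
    host is directly reachable on this port), or 'filtered' (no answer,
    which suggests a firewall is dropping the probe)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed"        # RST received: no filtering on this port
    except (socket.timeout, OSError):
        return "filtered"      # dropped or unreachable: likely filtered
    finally:
        s.close()
```

A real assessment would use a full scanner (and raw-packet tests) rather than connect probes, but the same three-way classification is what drives the "is there a firewall?" reasoning.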
&lt;br /&gt;
Detecting a reverse proxy in front of the web server is done by analysing the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined by obtaining the answers the web server gives to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request targeting an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, this might be an indication of a reverse proxy (or an application-level firewall) filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors, then there is probably something in between blocking them. In some cases, even the protection system gives itself away:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GET /web-console/ServerInfo.jsp%00 HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Content-Length: 83&lt;br /&gt;
&lt;br /&gt;
&amp;lt;TITLE&amp;gt;Error&amp;lt;/TITLE&amp;gt;&lt;br /&gt;
&amp;lt;BODY&amp;gt;&lt;br /&gt;
&amp;lt;H1&amp;gt;Error&amp;lt;/H1&amp;gt;&lt;br /&gt;
FW-1 at XXXXXX: Access denied.&amp;lt;/BODY&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server'''&lt;br /&gt;
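The error-page comparison described above can be automated. A hypothetical sketch (the host and probe paths in the usage comment are placeholders, not real targets):

```python
import http.client

def error_status(host, path):
    """Fetch a path over plain HTTP and return the status code."""
    conn = http.client.HTTPConnection(host, timeout=5)
    conn.request("GET", path)
    status = conn.getresponse().status
    conn.close()
    return status

def suggests_intermediary(status_for_random_page, status_for_attack_probe):
    """If a random missing page and an attack-looking request (e.g. a
    CGI-scanner style URL) draw different error responses, something in
    between (a reverse proxy or application-level firewall) is probably
    filtering the requests."""
    return status_for_random_page != status_for_attack_probe

# Usage sketch against a hypothetical host:
#   a = error_status("www.example.com", "/no-such-page-12345")
#   b = error_status("www.example.com", "/cgi-bin/../../etc/passwd")
#   print(suggests_intermediary(a, b))
```

In practice one would compare the bodies and headers as well as the status codes, since some web-shields answer with a 200 page (as in the Firewall-1 transcript above) rather than an error status.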
&lt;br /&gt;
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done, again, based on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with that of subsequent requests.&lt;br /&gt;
&lt;br /&gt;
Another element that can be detected is a network load balancer. Typically, these systems balance a given TCP/IP port across multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, these architecture elements need to be detected by examining multiple requests and comparing the results in order to determine whether the requests are going to the same or to different web servers, for example based on the Date: header if the server clocks are not synchronised. In some cases, the load balancer might inject new information into the headers that makes it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.&lt;br /&gt;
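The Date:-header comparison can be sketched as follows (the five-second tolerance is an arbitrary assumption; the technique only works when the back-end clocks are not synchronised):

```python
from email.utils import parsedate_to_datetime

def dates_disagree(date_headers, tolerance_s=5):
    """Given Date: header values collected from repeated requests to the
    same URL, return True when their spread exceeds the tolerance, which
    hints that different back-end servers (behind a load balancer) are
    answering."""
    times = [parsedate_to_datetime(d) for d in date_headers]
    spread = (max(times) - min(times)).total_seconds()
    return spread > tolerance_s
```

The same loop that collects Date: headers can also record any unusual Set-Cookie values (such as AlteonP) that a balancer injects.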
&lt;br /&gt;
Application web servers are usually easy to detect. Requests for several resources are handled by the application server itself (not the web server) and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers), or to rewrite URLs automatically to do session tracking.&lt;br /&gt;
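A minimal sketch of the cookie-based fingerprinting just described; the cookie-name-to-technology table is a small illustrative sample, not an exhaustive list:

```python
# Illustrative mapping from well-known session cookie names to the
# technology that typically sets them.
KNOWN_APP_SERVER_COOKIES = {
    "JSESSIONID": "a J2EE servlet container",
    "PHPSESSID": "PHP",
    "ASP.NET_SessionId": "ASP.NET",
}

def fingerprint_cookies(set_cookie_headers):
    """Return technology hints for each recognised Set-Cookie header."""
    hints = []
    for header in set_cookie_headers:
        name = header.split("=", 1)[0].strip()
        if name in KNOWN_APP_SERVER_COOKIES:
            hints.append(KNOWN_APP_SERVER_COOKIES[name])
    return hints

print(fingerprint_cookies(["JSESSIONID=ABC123; Path=/"]))
```

Cookie names are easy for administrators to rename, so a missing hint proves nothing; a matching hint is merely a lead to confirm through other tests.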
&lt;br /&gt;
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers) however, are not as easy to detect from an external point of view in an immediate way, since they will be hidden by the application itself.&lt;br /&gt;
&lt;br /&gt;
The use of a database backend can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly”, it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight into the existence of a database back-end, for example an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when a vulnerability surfaces in the application, such as poor exception handling or susceptibility to SQL injection.&lt;br /&gt;
&lt;br /&gt;
===Known server vulnerabilities===&lt;br /&gt;
&lt;br /&gt;
Vulnerabilities found in the different elements that make up the application architecture, be it the web server or the database backend, can severely compromise the application itself. For example, consider a server vulnerability that allows a remote, unauthenticated user, to upload files to the web server, or even to replace files. This vulnerability could compromise the application, since a rogue user may be able to replace the application itself or introduce code that would affect the backend servers, as its application code would be run just like any other application.&lt;br /&gt;
&lt;br /&gt;
Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool; however, testing some vulnerabilities can have unpredictable effects on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test were successful. Also, some automated tools flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common with operating system vendors that backport patches for security vulnerabilities to the software they ship but do not do a full upgrade to the latest software version; this happens in most GNU/Linux distributions, such as Debian, Red Hat, or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication backends, the database backends, or reverse proxies in use.&lt;br /&gt;
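Since version-based flagging depends entirely on what the banner claims, it helps to separate retrieving the banner from interpreting it. A hedged sketch (banner formats vary, and the Server: header may be absent, truncated, or deliberately faked):

```python
import http.client
import re

def server_banner(host, port=80):
    """Retrieve the Server: header via a HEAD request. May return None:
    administrators often remove or obscure this header entirely."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    conn.request("HEAD", "/")
    banner = conn.getresponse().getheader("Server")
    conn.close()
    return banner

def parse_banner(banner):
    """Split a banner like 'Apache/2.4.57 (Debian)' into (product,
    version); version is None when it is hidden, which is exactly the
    case where version-based scanners produce false negatives."""
    if banner is None:
        return (None, None)
    m = re.match(r"([^/\s]+)(?:/(\S+))?", banner)
    return (m.group(1), m.group(2))
```

Even a parsed version is only a claim: a backported patch (the Debian/Red Hat/SuSE case above) leaves the advertised version "vulnerable" while the actual bug is fixed.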
&lt;br /&gt;
Finally, not all software vendors disclose vulnerabilities in a public way, and therefore these weaknesses never become registered in publicly known vulnerability databases[2]. This information is only disclosed to customers, or published through fixes that have no accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, the vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser-known products.&lt;br /&gt;
&lt;br /&gt;
This is why reviewing vulnerabilities is best done when the tester is provided with internal information about the software used, including the versions and releases in use and the patches applied. With this information, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application. When possible, these vulnerabilities can be tested in order to determine their real effects and to detect whether any external elements (such as intrusion detection or prevention systems) might reduce or negate the possibility of successful exploitation. Testers might even determine, through a configuration review, that the vulnerability is not present at all, since it affects a software component that is not in use.&lt;br /&gt;
&lt;br /&gt;
It is also worthwhile to note that vendors will sometimes silently fix vulnerabilities and make the fixes available only in new software releases. Different vendors have different release cycles that determine the support they provide for older releases. A tester with detailed information about the software versions used in the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it. No patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even in the event that they are aware that the vulnerability is present and the system is, indeed, vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.&lt;br /&gt;
===Administrative tools===&lt;br /&gt;
&lt;br /&gt;
Any web server infrastructure requires administrative tools to maintain and update the information used by the application: static content (web pages, graphic files), application source code, user authentication databases, etc. Depending on the site, technology, or software used, the administrative tools will differ. For example, some web servers are managed through administrative interfaces which are, themselves, web servers (such as the iPlanet web server), or are administered via plain-text configuration files (in the Apache case[3]), or through operating-system GUI tools (when using Microsoft’s IIS server or ASP.NET). In most cases, however, the server configuration will be handled using different tools from those used for maintaining the files served by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS), or other mechanisms. Obviously, the operating systems of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.).&lt;br /&gt;
&lt;br /&gt;
Review of the administrative interfaces used to manage the different parts of the architecture is very important, since if an attacker gains access to any of them he can then compromise or damage the application architecture. Thus it is important to:&lt;br /&gt;
&lt;br /&gt;
* List all the possible administrative interfaces.&lt;br /&gt;
* Determine if administrative interfaces are available from an internal network or are also available from the Internet.&lt;br /&gt;
* If available from the Internet, determine the mechanisms that control access to these interfaces and their associated susceptibilities.&lt;br /&gt;
* Change the default user &amp;amp; password.&lt;br /&gt;
&lt;br /&gt;
Some companies choose not to manage all aspects of their web server applications, but may have other parties managing the content delivered by the web application. This external company might either provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line that will connect the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test if the administrative interfaces can be vulnerable to attacks. &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse proxy from IBM which is part of the Tivoli framework.&lt;br /&gt;
* [2] Such as Symantec’s Bugtraq, ISS’ X-Force, or NIST’s National Vulnerability Database (NVD).&lt;br /&gt;
* [3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16316</id>
		<title>Test Network/Infrastructure Configuration (OTG-CONFIG-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16316"/>
				<updated>2007-02-08T15:12:39Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can count hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application.&lt;br /&gt;
In fact it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and (almost) unimportant problems may evolve into severe risks for another application on the same server.&lt;br /&gt;
In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or introduce new vulnerabilities that might compromise the application itself.&lt;br /&gt;
&lt;br /&gt;
For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers or application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.&lt;br /&gt;
&lt;br /&gt;
In order to test the configuration management infrastructure, the following steps need to be taken:&lt;br /&gt;
&lt;br /&gt;
* The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security.&lt;br /&gt;
* All the elements of the infrastructure need to be reviewed in order to make sure that they don’t hold any known vulnerabilities.&lt;br /&gt;
* A review needs to be made of the administrative tools used to maintain all the different elements.&lt;br /&gt;
* The authentication systems, if any, need to be reviewed in order to ensure that they serve the needs of the application and that they cannot be manipulated by external users to gain access.&lt;br /&gt;
* A list of defined ports which are required for the application should be maintained and kept under change control.&lt;br /&gt;
&lt;br /&gt;
== Black Box Testing and Examples ==&lt;br /&gt;
&lt;br /&gt;
===Review of the application architecture===&lt;br /&gt;
&lt;br /&gt;
The application architecture needs to be reviewed through the test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might run the web server, which in turn executes the CGI applications written in C, Perl, or shell, with authentication perhaps also based on the web server's own authentication mechanisms. In more complex setups, such as an online banking system, multiple servers might be involved, including a reverse proxy, a front-end web server, an application server, and a database or LDAP server. Each of these servers is used for a different purpose and might even be divided into different networks, with firewalling devices between them creating separate DMZs. This way, access to the web server does not grant a remote user access to the authentication mechanism itself, and a compromise of one element of the architecture can be isolated so that it does not compromise the whole architecture.&lt;br /&gt;
&lt;br /&gt;
Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove to be very difficult if doing a blind penetration test.&lt;br /&gt;
&lt;br /&gt;
In the latter case, a tester will start with the assumption that the setup is simple (a single server) and will, through information retrieved from other tests, derive the different elements, questioning this assumption and extending the picture of the architecture as needed. The tester will begin by asking simple questions such as: “Is there a firewalling system protecting the web server?”, which will be answered based on the results of network scans targeted at the web server and an analysis of whether the network ports of the web server are being filtered at the network edge (no answer, or ICMP unreachables, are received) or whether the server is directly connected to the Internet (i.e. it returns RST packets for all non-listening ports). This analysis can be extended to determine the type of firewall used, based on network packet tests: is it a stateful firewall or an access-list filter on a router? How is it configured? Can it be bypassed?&lt;br /&gt;
&lt;br /&gt;
Detecting a reverse proxy in front of the web server is done by analysing the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined by obtaining the answers the web server gives to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request targeting an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, this might be an indication of a reverse proxy (or an application-level firewall) filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors, then there is probably something in between blocking them. In some cases, even the protection system gives itself away:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GET /web-console/ServerInfo.jsp%00 HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Content-Length: 83&lt;br /&gt;
&lt;br /&gt;
&amp;lt;TITLE&amp;gt;Error&amp;lt;/TITLE&amp;gt;&lt;br /&gt;
&amp;lt;BODY&amp;gt;&lt;br /&gt;
&amp;lt;H1&amp;gt;Error&amp;lt;/H1&amp;gt;&lt;br /&gt;
FW-1 at XXXXXX: Access denied.&amp;lt;/BODY&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server'''&lt;br /&gt;
&lt;br /&gt;
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done, again, based on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with that of subsequent requests.&lt;br /&gt;
&lt;br /&gt;
Another element that can be detected is a network load balancer. Typically, these systems balance a given TCP/IP port across multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, these architecture elements need to be detected by examining multiple requests and comparing the results in order to determine whether the requests are going to the same or to different web servers, for example based on the Date: header if the server clocks are not synchronised. In some cases, the load balancer might inject new information into the headers that makes it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.&lt;br /&gt;
&lt;br /&gt;
Application web servers are usually easy to detect. Requests for several resources are handled by the application server itself (not the web server) and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers), or to rewrite URLs automatically to do session tracking.&lt;br /&gt;
&lt;br /&gt;
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers) however, are not as easy to detect from an external point of view in an immediate way, since they will be hidden by the application itself.&lt;br /&gt;
&lt;br /&gt;
The use of a database backend can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly”, it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight into the existence of a database back-end, for example an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when a vulnerability surfaces in the application, such as poor exception handling or susceptibility to SQL injection.&lt;br /&gt;
&lt;br /&gt;
===Known server vulnerabilities===&lt;br /&gt;
&lt;br /&gt;
Vulnerabilities found in the different elements that make up the application architecture, be it the web server or the database backend, can severely compromise the application itself. For example, consider a server vulnerability that allows a remote, unauthenticated user, to upload files to the web server, or even to replace files. This vulnerability could compromise the application, since a rogue user may be able to replace the application itself or introduce code that would affect the backend servers, as its application code would be run just like any other application.&lt;br /&gt;
&lt;br /&gt;
Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool; however, testing some vulnerabilities can have unpredictable effects on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test were successful. Also, some automated tools flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common with operating system vendors that backport patches for security vulnerabilities to the software they ship but do not do a full upgrade to the latest software version; this happens in most GNU/Linux distributions, such as Debian, Red Hat, or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication backends, the database backends, or reverse proxies in use.&lt;br /&gt;
&lt;br /&gt;
Finally, not all software vendors disclose vulnerability information in a public way, and information about the vulnerabilities present in their different releases is not published in vulnerability databases[2]. This information is only disclosed to customers, or published through fixes that have no accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, the vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser-known products.&lt;br /&gt;
&lt;br /&gt;
This is why reviewing vulnerabilities is best done when the tester is provided with internal information about the software used, including versions and releases used and patches applied to the software. With this information in hand, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested in order to determine their real effects and to detect if there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of exploiting them. Testers might even determine, through a configuration review, that the vulnerability is not even present, since it affects a software component that is not in use.&lt;br /&gt;
&lt;br /&gt;
It is also worthwhile to note that vendors will sometimes silently fix vulnerabilities and make the fixes available in new software releases. Different vendors will have different release cycles that determine the support they might provide for older releases. A tester with detailed information about the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it: no patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even in the event that they are aware that the vulnerability is present and the system is, indeed, vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.&lt;br /&gt;
===Administrative tools===&lt;br /&gt;
&lt;br /&gt;
Any web server infrastructure requires the existence of administrative tools to maintain and update the information used by the application: static content (web pages, graphic files), application source code, user authentication databases, etc. Depending on the site, technology, or software used, administrative tools will differ. For example, some web servers will be managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server), will be administered via plain text configuration files (in the Apache case[3]), or will use operating-system GUI tools (when using Microsoft’s IIS server or ASP.Net). In most cases, however, the server configuration will be handled using different tools than the maintenance of the files used by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS) or other mechanisms. Obviously, the operating system of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.).&lt;br /&gt;
&lt;br /&gt;
Review of the administrative interfaces used to manage the different parts of the architecture is very important, since if an attacker gains access to any of them they can then compromise or damage the application architecture. Thus it is important to:&lt;br /&gt;
&lt;br /&gt;
* list all the possible administrative interfaces.&lt;br /&gt;
* determine if administrative interfaces are available from an internal network or are also available from the Internet.&lt;br /&gt;
* if available from the Internet, determine the access control methods used to access these interfaces and their susceptibility to attack.&lt;br /&gt;
* change any default usernames and passwords.&lt;br /&gt;
&lt;br /&gt;
Some sites do not fully manage their web server applications directly; they may have other companies manage the content provided by the web server application. This external company might either provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line connecting the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test whether the administrative interfaces can be vulnerable to attacks. &lt;br /&gt;
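The enumeration step above can be sketched with a simple probe that walks a list of well-known administrative paths and reports anything that does not answer 404. The path list here is an illustrative assumption, not part of the original text; a real engagement would tailor it to the technologies in use.

```python
import http.client

# Hypothetical list of well-known administrative paths (an assumption
# for illustration); extend it to match the target's technologies.
COMMON_ADMIN_PATHS = ["/admin/", "/manager/html", "/web-console/", "/phpmyadmin/"]

def is_interesting(status):
    # 404 means "nothing here"; anything else (200, 301/302, 401, 403...)
    # suggests an interface exists, possibly behind access control.
    return status != 404

def probe_admin_interfaces(host, port=80, paths=COMMON_ADMIN_PATHS):
    """Return (path, status) pairs for candidate administrative interfaces."""
    found = []
    for path in paths:
        conn = http.client.HTTPConnection(host, port, timeout=5)
        try:
            status = None
            conn.request("GET", path)
            status = conn.getresponse().status
        except OSError:
            pass  # filtered or unreachable; skip this path
        finally:
            conn.close()
        if status is not None and is_interesting(status):
            found.append((path, status))
    return found
```

A 401 or 403 answer is still "interesting" here: it confirms the interface exists and moves the test on to reviewing its access control.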
&lt;br /&gt;
==References==&lt;br /&gt;
* [1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse proxy from IBM which is part of the Tivoli framework.&lt;br /&gt;
* [2] Such as Symantec’s Bugtraq, ISS’ X-Force, or NIST’s National Vulnerability Database (NVD).&lt;br /&gt;
* [3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.&lt;br /&gt;
* [4] It is common to see the use of database back-ends for authentication purposes with user tables that include the password that grants access to users in plain text.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16315</id>
		<title>Test Network/Infrastructure Configuration (OTG-CONFIG-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16315"/>
				<updated>2007-02-08T14:47:36Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can comprise hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application.&lt;br /&gt;
In fact, it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and (almost) unimportant problems may evolve into severe risks for another application on the same server.&lt;br /&gt;
In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or new vulnerabilities that might compromise the application itself.&lt;br /&gt;
&lt;br /&gt;
For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers and application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.&lt;br /&gt;
&lt;br /&gt;
In order to test the configuration management infrastructure, the following steps need to be taken:&lt;br /&gt;
&lt;br /&gt;
* The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security.&lt;br /&gt;
* All the elements of the infrastructure need to be reviewed in order to make sure that they don’t hold any known vulnerabilities.&lt;br /&gt;
* A review needs to be made of the administrative tools used to maintain all the different elements.&lt;br /&gt;
* The authentication systems, if any, need to be reviewed in order to assure that they serve the needs of the application and that they cannot be manipulated by external users to leverage access.&lt;br /&gt;
* A list of defined ports which are required for the application should be maintained and kept under change control.&lt;br /&gt;
&lt;br /&gt;
== Black Box Testing and examples==&lt;br /&gt;
&lt;br /&gt;
===Review of the application architecture===&lt;br /&gt;
&lt;br /&gt;
The application architecture needs to be reviewed through the test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might be used that runs the web server, which executes C, Perl, or shell CGI applications, and perhaps authentication is also based on the web server authentication mechanisms. In more complex setups, such as an online bank system, multiple servers might be involved, including: a reverse proxy, a front-end web server, an application server, and a database or LDAP server. Each of these servers will be used for different purposes and might even be divided into different networks with firewalling devices between them, creating different DMZs so that access to the web server will not grant a remote user access to the authentication mechanism itself, and so that compromises of the different elements of the architecture can be isolated in a way such that they will not compromise the whole architecture.&lt;br /&gt;
&lt;br /&gt;
Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove to be very difficult if doing a blind penetration test.&lt;br /&gt;
&lt;br /&gt;
In the latter case, a tester will first start with the assumption that there is a simple setup (a single server) and will, through the information retrieved from other tests, derive the different elements and question this assumption, extending the architecture model as needed. The tester will start by asking simple questions such as: “Is there a firewalling system protecting the web server?” which will be answered based on the results of network scans targeted at the web server and the analysis of whether the network ports of the web server are being filtered at the network edge (no answer or ICMP unreachables are received) or if the server is directly connected to the Internet (i.e. returns RST packets for all non-listening ports). This analysis can be enhanced in order to determine the type of firewall system used based on network packet tests: is it a stateful firewall or is it an access list filter on a router? How is it configured? Can it be bypassed? &lt;br /&gt;
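The closed-versus-filtered distinction described above can be sketched with a plain TCP connect probe. This is a simplification of what the text describes: packet-level tools can additionally tell ICMP unreachables apart from silent drops, which this sketch lumps together.

```python
import socket

def port_state(host, port, timeout=3.0):
    """Classify a TCP port: 'closed' (a RST came back, so the server is
    directly reachable) versus 'filtered' (no answer, suggesting a
    firewall dropping packets at the network edge)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed"      # host answered with a RST packet
    except OSError:
        return "filtered"    # timeout or ICMP unreachable
    finally:
        s.close()
```

Running this over a sample of non-listening ports on the target gives a first answer to the "is there a firewall?" question: mostly "closed" suggests direct exposure, mostly "filtered" suggests an edge device.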
&lt;br /&gt;
Detecting a reverse proxy in front of the web server needs to be done by the analysis of the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined by obtaining the answers given by the web server to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request which targets an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, it might be an indication of a reverse proxy (or an application-level firewall) which is filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors, then there is probably something in between blocking them. In some cases, even the protection system gives itself away:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GET / web-console/ServerInfo.jsp%00 HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Content-Length: 83&lt;br /&gt;
&lt;br /&gt;
&amp;lt;TITLE&amp;gt;Error&amp;lt;/TITLE&amp;gt;&lt;br /&gt;
&amp;lt;BODY&amp;gt;&lt;br /&gt;
&amp;lt;H1&amp;gt;Error&amp;lt;/H1&amp;gt;&lt;br /&gt;
FW-1 at XXXXXX: Access denied.&amp;lt;/BODY&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server'''&lt;br /&gt;
&lt;br /&gt;
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done, again, based on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with subsequent requests.&lt;br /&gt;
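The timing comparison can be sketched as follows. The decision step is a heuristic only, since network jitter alone can make a second request faster; the 0.5 threshold is an assumption for illustration, and in practice one would repeat the measurement and average.

```python
import time
import urllib.request

def is_cache_hit(first, second, threshold=0.5):
    # Pure decision step: the second answer arriving in well under
    # half the time of the first is consistent with a cache in front.
    return second < first * threshold

def probe_cache(url):
    """Time two identical requests against the same URL and compare
    (network access assumed; repeat to smooth out jitter)."""
    def timed_get():
        start = time.monotonic()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return time.monotonic() - start
    first, second = timed_get(), timed_get()
    return is_cache_hit(first, second), first, second
```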
&lt;br /&gt;
Another element that can be detected: network load balancers. Typically, these systems will balance a given TCP/IP port to multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of these architecture elements needs to be done by examining multiple requests and comparing results in order to determine if the requests are going to the same or different web servers, for example, based on the Date: header if the server clocks are not synchronised. In some cases, the network load balancer might inject new information in the headers that will make it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.&lt;br /&gt;
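A sketch of this comparison, collecting the Date header and cookie names over several requests. The balancer cookie set holds only the AlteonP example named in the text; real deployments use many other names, so treat the set as a starting point.

```python
import urllib.request

# Cookie names known to be injected by load balancers; 'AlteonP' is the
# example from the text, and the set should be extended as needed.
BALANCER_COOKIES = {"AlteonP"}

def analyse_samples(dates, cookie_names):
    """Pure decision step: several distinct Date values (assuming
    unsynchronised backend clocks) or a known balancer cookie both
    point at multiple servers behind one address."""
    return {
        "distinct_dates": len(set(dates)) > 1,
        "balancer_cookie": bool(BALANCER_COOKIES & set(cookie_names)),
    }

def detect_load_balancing(url, samples=5):
    dates, names = [], []
    for _ in range(samples):
        with urllib.request.urlopen(url, timeout=10) as resp:
            dates.append(resp.headers.get("Date"))
            for cookie in resp.headers.get_all("Set-Cookie") or []:
                names.append(cookie.split("=", 1)[0])
    return analyse_samples(dates, names)
```

Note the converse does not hold: identical Date values only mean the clocks agree (or a single server answered every sampled request), not that no balancer is present.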
&lt;br /&gt;
Application web servers are usually easy to detect. The request for several resources is handled by the application server itself (not the web server) and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers) or to rewrite URLs automatically to do session tracking.&lt;br /&gt;
&lt;br /&gt;
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers) however, are not as easy to detect from an external point of view in an immediate way, since they will be hidden by the application itself.&lt;br /&gt;
&lt;br /&gt;
The use of a database backend can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly,&amp;quot; it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight into the existence of a database back-end, for example, an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when a vulnerability surfaces in the application, such as poor exception handling or susceptibility to SQL injection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Known server vulnerabilities===&lt;br /&gt;
&lt;br /&gt;
Vulnerabilities found in the different elements that make up the application architecture, be it the web server itself or the database backend, can compromise the application just as severely as a vulnerability in the application itself. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server, or even to replace files. This vulnerability would compromise the application, since a rogue user would be able to replace the application itself or introduce code that would affect the backend servers, as this rogue code would be run just like the application itself.&lt;br /&gt;
&lt;br /&gt;
Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool; however, testing some vulnerabilities can have unpredictable results on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test were successful. Also, some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed in it, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common with operating system vendors that backport security patches to the software they ship in the operating system but do not do a full upgrade to the latest software version. This happens in most GNU/Linux distributions such as Debian, Red Hat or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication backends, the database backends, or reverse proxies in use.&lt;br /&gt;
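A minimal banner grab illustrates why version-based scanning is fragile: whatever the Server header says is only as trustworthy as the administrator who may have obscured it and the vendor who may have frozen it while backporting fixes. Host and port are placeholders for the tester's target.

```python
import http.client

def server_banner(host, port=80):
    """Fetch the Server response header. Per the caveats above, the
    banner may be removed, obscured, or frozen by vendors that backport
    fixes, so version-based conclusions yield both false positives and
    false negatives."""
    conn = http.client.HTTPConnection(host, port, timeout=5)
    try:
        conn.request("HEAD", "/")
        return conn.getresponse().getheader("Server")
    finally:
        conn.close()
```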
&lt;br /&gt;
Finally, not all software vendors disclose vulnerability information publicly, and information about the vulnerabilities present in their different releases is not published in vulnerability databases[2]. This information is only disclosed to customers, or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser-known products.&lt;br /&gt;
&lt;br /&gt;
This is why reviewing vulnerabilities is best done when the tester is provided with internal information about the software used, including versions and releases used and patches applied to the software. With this information in hand, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested in order to determine their real effects and to detect if there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of exploiting them. Testers might even determine, through a configuration review, that the vulnerability is not even present, since it affects a software component that is not in use.&lt;br /&gt;
&lt;br /&gt;
It is also worthwhile to note that vendors will sometimes silently fix vulnerabilities and make the fixes available in new software releases. Different vendors will have different release cycles that determine the support they might provide for older releases. A tester with detailed information about the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it: no patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even in the event that they are aware that the vulnerability is present and the system is, indeed, vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.&lt;br /&gt;
==References==&lt;br /&gt;
* [1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse proxy from IBM which is part of the Tivoli framework.&lt;br /&gt;
* [2] Such as Symantec’s Bugtraq, ISS’ X-Force, or NIST’s National Vulnerability Database (NVD).&lt;br /&gt;
* [3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.&lt;br /&gt;
* [4] It is common to see the use of database back-ends for authentication purposes with user tables that include the password that grants access to users in plain text.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16314</id>
		<title>Test Network/Infrastructure Configuration (OTG-CONFIG-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16314"/>
				<updated>2007-02-08T14:22:12Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Review of the application architecture */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can comprise hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application.&lt;br /&gt;
In fact, it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and (almost) unimportant problems may evolve into severe risks for another application on the same server.&lt;br /&gt;
In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or new vulnerabilities that might compromise the application itself.&lt;br /&gt;
&lt;br /&gt;
For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers and application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.&lt;br /&gt;
&lt;br /&gt;
In order to test the configuration management infrastructure, the following steps need to be taken:&lt;br /&gt;
&lt;br /&gt;
* The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security.&lt;br /&gt;
* All the elements of the infrastructure need to be reviewed in order to make sure that they don’t hold any known vulnerabilities.&lt;br /&gt;
* A review needs to be made of the administrative tools used to maintain all the different elements.&lt;br /&gt;
* The authentication systems, if any, need to be reviewed in order to assure that they serve the needs of the application and that they cannot be manipulated by external users to leverage access.&lt;br /&gt;
* A list of defined ports which are required for the application should be maintained and kept under change control.&lt;br /&gt;
&lt;br /&gt;
== Black Box Testing and examples==&lt;br /&gt;
&lt;br /&gt;
===Review of the application architecture===&lt;br /&gt;
&lt;br /&gt;
The application architecture needs to be reviewed through the test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might be used that runs the web server, which executes C, Perl, or shell CGI applications, and perhaps authentication is also based on the web server authentication mechanisms. In more complex setups, such as an online bank system, multiple servers might be involved, including: a reverse proxy, a front-end web server, an application server, and a database or LDAP server. Each of these servers will be used for different purposes and might even be divided into different networks with firewalling devices between them, creating different DMZs so that access to the web server will not grant a remote user access to the authentication mechanism itself, and so that compromises of the different elements of the architecture can be isolated in a way such that they will not compromise the whole architecture.&lt;br /&gt;
&lt;br /&gt;
Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove to be very difficult if doing a blind penetration test.&lt;br /&gt;
&lt;br /&gt;
In the latter case, a tester will start with the assumption that there is a simple setup (a single server) and will, through the information retrieved from other tests, derive the different elements, questioning this assumption and extending the architecture as needed. The tester will start by asking simple questions such as: “Is there a firewalling system protecting the web server?”, which will be answered based on the results of network scans targeted at the web server: are the network ports of the web server being filtered at the network edge (no answer, or ICMP unreachables, received), or is the server directly connected to the Internet (i.e. it returns RST packets for all non-listening ports)? This analysis can be enhanced in order to determine the type of firewall used based on network packet tests: is it a stateful firewall or an access list filter on a router? How is it configured? Can it be bypassed?&lt;br /&gt;
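The filtered-versus-closed distinction above can be sketched as a simple TCP connect probe. This is a hedged illustration only: the classification heuristic is deliberately naive, `probe_port` and `firewall_likely` are hypothetical helper names, and any probing of a real target must be authorised.

```python
import socket

def probe_port(host, port, timeout=2.0):
    """Classify a TCP port as 'open', 'closed' (an RST came back),
    or 'filtered' (no answer / ICMP unreachable, i.e. a timeout)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        # An RST for a non-listening port: the host answers directly.
        return "closed"
    except (socket.timeout, OSError):
        # Silence or ICMP unreachable: likely filtered at the edge.
        return "filtered"
    finally:
        sock.close()

def firewall_likely(port_states):
    """If non-listening ports mostly time out instead of refusing the
    connection, a filtering device probably sits in front of the server."""
    return port_states.count("filtered") > port_states.count("closed")
```

A server directly connected to the Internet tends to produce "closed" results for non-listening ports; a predominance of "filtered" results suggests an edge filter.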
&lt;br /&gt;
Detecting a reverse proxy in front of the web server is done by analysing the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined by obtaining the answers the web server gives to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request targeting an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, this might indicate a reverse proxy (or an application-level firewall) which is filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors, then there is probably something in between blocking them. In some cases, the protection system even gives itself away:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GET /web-console/ServerInfo.jsp%00 HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Content-Length: 83&lt;br /&gt;
&lt;br /&gt;
&amp;lt;TITLE&amp;gt;Error&amp;lt;/TITLE&amp;gt;&lt;br /&gt;
&amp;lt;BODY&amp;gt;&lt;br /&gt;
&amp;lt;H1&amp;gt;Error&amp;lt;/H1&amp;gt;&lt;br /&gt;
FW-1 at XXXXXX: Access denied.&amp;lt;/BODY&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server'''&lt;br /&gt;
&lt;br /&gt;
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done based, again, on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with that of subsequent requests.&lt;br /&gt;
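The timing comparison above can be expressed as a small heuristic. This is a sketch under stated assumptions: `cache_in_front` is a hypothetical name, the speed-up threshold is an arbitrary illustrative choice, and real measurements need many samples to smooth out network jitter.

```python
def cache_in_front(first_time, repeat_times, speedup=3.0):
    """Timing heuristic: if repeated requests for the same cacheable
    resource come back much faster than the first one, a caching
    reverse proxy may sit in front of the origin server.

    first_time   -- elapsed seconds for the first (cold) request
    repeat_times -- elapsed seconds for subsequent (warm) requests
    """
    avg_repeat = sum(repeat_times) / len(repeat_times)
    return avg_repeat > 0 and first_time / avg_repeat >= speedup
```

For example, a first request taking 0.9 s followed by repeats around 0.1 s would trip the heuristic, while uniformly similar timings would not.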
&lt;br /&gt;
Another element that can be detected is a network load balancer. Typically, these systems balance a given TCP/IP port across multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of these architecture elements needs to be done by examining multiple requests and comparing the results in order to determine whether the requests are going to the same or to different web servers; for example, based on the Date: header if the server clocks are not synchronised. In some cases, the load balancing process might inject new information into the headers that makes it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.&lt;br /&gt;
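The Date:-header comparison can be sketched as follows. This is an illustrative heuristic, not a definitive test: `date_skew_suggests_balancing` is a hypothetical name, the skew threshold is arbitrary, and it assumes the compared responses were gathered back to back (so real clock drift, not elapsed time, explains any gap).

```python
from email.utils import parsedate_to_datetime

def date_skew_suggests_balancing(date_headers, max_skew=2.0):
    """Compare the Date: headers of back-to-back responses. If two
    answers collected within moments of each other disagree by more
    than max_skew seconds, they likely came from different servers
    whose clocks are not synchronised (a load-balancing hint)."""
    stamps = sorted(parsedate_to_datetime(h) for h in date_headers)
    return any((late - early).total_seconds() > max_skew
               for early, late in zip(stamps, stamps[1:]))
```

A complementary check is simply to look for a tell-tale injected cookie (such as AlteonP) in the response headers.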
&lt;br /&gt;
Application web servers are usually easy to detect. The request for several resources is handled by the application server itself (not the web server), and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies which are indicative of an application server being in use (such as the JSESSIONID provided by some J2EE servers), or to see if it rewrites URLs automatically to do session tracking.&lt;br /&gt;
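The cookie-based fingerprinting mentioned above can be sketched like this. The cookie-to-technology mapping is illustrative and intentionally small (JSESSIONID is the only name the text itself cites; the others are common well-known defaults), and `fingerprint_from_cookies` is a hypothetical helper.

```python
# Illustrative mapping of session-cookie names to the back-end
# technology they commonly betray (not an exhaustive list).
KNOWN_SESSION_COOKIES = {
    "JSESSIONID": "Java application server",
    "PHPSESSID": "PHP",
    "ASP.NET_SessionId": "ASP.NET",
    "CFID": "ColdFusion",
}

def fingerprint_from_cookies(set_cookie_headers):
    """Return back-end hints based on the Set-Cookie headers seen."""
    hints = []
    for header in set_cookie_headers:
        # The cookie name is everything before the first '='.
        name = header.split("=", 1)[0].strip()
        if name in KNOWN_SESSION_COOKIES:
            hints.append(KNOWN_SESSION_COOKIES[name])
    return hints
```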
&lt;br /&gt;
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers), however, are not as easy to detect immediately from an external point of view, since they are hidden by the application itself.&lt;br /&gt;
&lt;br /&gt;
The use of a database backend can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly”, it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight into the existence of a database back-end; for example, an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when a vulnerability surfaces in the application, such as poor exception handling or susceptibility to SQL injection.&lt;br /&gt;
&lt;br /&gt;
===Known server vulnerabilities===&lt;br /&gt;
&lt;br /&gt;
Vulnerabilities found in the different elements that make up the application architecture, be it the web server itself or the database backend, can compromise the application as severely as a vulnerability found in the application itself. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server, or even to replace files. This vulnerability would compromise the application, since a rogue user would be able to replace the application itself or introduce code that would affect the backend servers, and this rogue code would be run just like any other application code.&lt;br /&gt;
&lt;br /&gt;
Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool. However, testing for some vulnerabilities can have unpredictable results on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test were successful. Also, some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed in it, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common with some operating system vendors that backport patches for security vulnerabilities to the software they provide with the operating system, but do not do a full upgrade to the latest software version. This happens in most GNU/Linux distributions such as Debian, Red Hat, or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication backends, the database backends, or reverse proxies in use.&lt;br /&gt;
&lt;br /&gt;
Finally, not all software vendors disclose vulnerability information publicly, and information on the vulnerabilities present in their different releases is not published in vulnerability databases[2]. This information is only disclosed to customers, or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser-known products.&lt;br /&gt;
&lt;br /&gt;
This is why reviewing vulnerabilities is best done when the tester is provided internal information about the software used, including the versions and releases in use and the patches applied to the software. With this information in hand, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested in order to determine their real effects and to detect whether there might be any external elements (such as intrusion detection or prevention systems) that reduce or negate the possibility of exploitation. Testers might even determine, through a configuration review, that the vulnerability is not even present, since it affects a software component that is not in use.&lt;br /&gt;
&lt;br /&gt;
It is also worth noting that vendors will sometimes silently fix vulnerabilities and make the fixes available in new software releases. Different vendors will have different release cycles that determine the support they provide for older releases. A tester with detailed information on the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it: no patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even in the event that they are aware that the vulnerability is present and the system is indeed vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.&lt;br /&gt;
&lt;br /&gt;
===Administrative tools===&lt;br /&gt;
&lt;br /&gt;
Any web server infrastructure requires administrative tools to maintain and update the information used by the application: static content (web pages, graphic files), application source code, user authentication databases, etc. Depending on the site, technology, or software used, the administrative tools will differ. For example, some web servers are managed using administrative interfaces which are themselves web servers (such as the iPlanet web server), some are administered via plain text configuration files (in the Apache case[3]), and some use operating-system GUI tools (when using Microsoft’s IIS server or ASP.NET). In most cases, however, the server configuration will be handled using different tools from those used for the maintenance of the files served by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS), or other mechanisms. Obviously, the operating systems of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.).&lt;br /&gt;
&lt;br /&gt;
Review of the administrative interfaces used to manage the different parts of the architecture is very important, since if a user gains access to any of them he can then compromise or damage the application architecture. Thus it is important to:&lt;br /&gt;
&lt;br /&gt;
* list all the possible administrative interfaces.&lt;br /&gt;
* determine if administrative interfaces are available from an internal network or are also available from the Internet.&lt;br /&gt;
* if they are available from the Internet, determine the access control methods used to protect these interfaces and their susceptibility to attack.&lt;br /&gt;
* change the default user &amp;amp; password.&lt;br /&gt;
&lt;br /&gt;
Some sites do not fully manage their web server applications themselves; they may have other companies manage the content provided by the web server application. This external company might either provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line connecting the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test whether the administrative interfaces are vulnerable to attacks. &lt;br /&gt;
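Enumerating candidate administrative interfaces, as described above, can be sketched as follows. The path list is illustrative only (hypothetical examples of commonly exposed admin locations), real deployments vary widely, and any actual probing must stay within the engagement's rules.

```python
from urllib.parse import urljoin

# Illustrative (hypothetical) list of commonly exposed admin paths.
COMMON_ADMIN_PATHS = [
    "/admin/",          # generic admin area
    "/manager/html",    # Tomcat manager
    "/web-console/",    # JBoss web console
    "/phpmyadmin/",     # database administration front-end
]

def admin_urls(base_url):
    """Build the list of candidate admin URLs to check on a target."""
    return [urljoin(base_url, path) for path in COMMON_ADMIN_PATHS]
```

Each candidate URL would then be requested over an authorised channel and any non-404 answer reviewed manually, including whether the interface still accepts a default user and password.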
&lt;br /&gt;
==References==&lt;br /&gt;
* [1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse Proxy from IBM which is part of the Tivoli framework.&lt;br /&gt;
* [2] Such as Symantec’s Bugtraq, ISS’ Xforce, or NIST’s National Vulnerability Database (NVD)&lt;br /&gt;
* [3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.&lt;br /&gt;
* [4] The use of database back-ends for authentication purposes is very common, with user tables that store in plain text the passwords granting users access.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16313</id>
		<title>Test Network/Infrastructure Configuration (OTG-CONFIG-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16313"/>
				<updated>2007-02-08T13:50:03Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Review of the application architecture */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can count hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application.&lt;br /&gt;
In fact it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and (almost) unimportant problems may evolve into severe risks for another application on the same server.&lt;br /&gt;
In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or introduce new vulnerabilities that might compromise the application itself.&lt;br /&gt;
&lt;br /&gt;
For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers and application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.&lt;br /&gt;
&lt;br /&gt;
In order to test the configuration management infrastructure, the following steps need to be taken:&lt;br /&gt;
&lt;br /&gt;
* The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security.&lt;br /&gt;
* All the elements of the infrastructure need to be reviewed in order to make sure that they don’t hold any known vulnerabilities.&lt;br /&gt;
* A review needs to be made of the administrative tools used to maintain all the different elements.&lt;br /&gt;
* The authentication systems, if any, need to be reviewed in order to ensure that they serve the needs of the application and that they cannot be manipulated by external users to gain access.&lt;br /&gt;
* A list of defined ports which are required for the application should be maintained and kept under change control.&lt;br /&gt;
&lt;br /&gt;
== Black Box Testing and examples==&lt;br /&gt;
&lt;br /&gt;
===Review of the application architecture===&lt;br /&gt;
&lt;br /&gt;
The application architecture needs to be reviewed during the test to determine what components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might run the web server, which in turn executes the C, Perl, or shell CGI application, and perhaps authentication is also based on the web server’s authentication mechanisms. In more complex setups, such as an online banking system, multiple servers might be involved, including a reverse proxy, a front-end web server, an application server, and a database or LDAP server. Each of these servers is used for a different purpose, and they might even be divided into different networks with firewalling devices between them, creating different DMZs. This way, access to the web server will not grant a remote user access to the authentication mechanism itself, and a compromise of one element of the architecture can be isolated so that it does not compromise the whole architecture.&lt;br /&gt;
&lt;br /&gt;
Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove to be very difficult if doing a blind penetration test.&lt;br /&gt;
&lt;br /&gt;
In the latter case, a tester will start with the assumption that there is a simple setup (a single server) and will, through the information retrieved from other tests, derive the different elements, questioning this assumption and extending the architecture as needed. The tester will start by asking simple questions such as: “Is there a firewalling system protecting the web server?”, which will be answered based on the results of network scans targeted at the web server: are the network ports of the web server being filtered at the network edge (no answer, or ICMP unreachables, received), or is the server directly connected to the Internet (i.e. it returns RST packets for all non-listening ports)? This analysis can be enhanced in order to determine the type of firewall used based on network packet tests: is it a stateful firewall or an access list filter on a router? How is it configured? Can it be bypassed?&lt;br /&gt;
&lt;br /&gt;
Detecting a reverse proxy in front of the web server is done by analysing the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined by obtaining the answers the web server gives to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request targeting an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, this might indicate a reverse proxy (or an application-level firewall) which is filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors, then there is probably something in between blocking them. In some cases, the protection system even gives itself away:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GET /web-console/ServerInfo.jsp%00 HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Content-Length: 83&lt;br /&gt;
&lt;br /&gt;
&amp;lt;TITLE&amp;gt;Error&amp;lt;/TITLE&amp;gt;&lt;br /&gt;
&amp;lt;BODY&amp;gt;&lt;br /&gt;
&amp;lt;H1&amp;gt;Error&amp;lt;/H1&amp;gt;&lt;br /&gt;
FW-1 at XXXXXX: Access denied.&amp;lt;/BODY&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server'''&lt;br /&gt;
&lt;br /&gt;
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done based, again, on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with that of subsequent requests.&lt;br /&gt;
&lt;br /&gt;
Another element that can be detected is a network load balancer. Typically, these systems balance a given TCP/IP port across multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of these architecture elements needs to be done by examining multiple requests and comparing the results in order to determine whether the requests are going to the same or to different web servers; for example, based on the Date: header if the server clocks are not synchronised. In some cases, the load balancing process might inject new information into the headers that makes it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.&lt;br /&gt;
&lt;br /&gt;
Application web servers are usually easy to detect. The request for several resources is handled by the application server itself (not the web server), and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies which are indicative of an application server being in use (such as the JSESSIONID provided by some J2EE servers), or to see if it rewrites URLs automatically to do session tracking.&lt;br /&gt;
&lt;br /&gt;
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers), however, are not as easy to detect immediately from an external point of view, since they are hidden by the application itself.&lt;br /&gt;
&lt;br /&gt;
The use of a database backend can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly”, it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight into the existence of a database back-end; for example, an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when some vulnerability surfaces in the application, such as an SQL injection, which indicates that the application is actually talking to a database (the vulnerability would not be possible otherwise).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;H1&amp;gt;Error&amp;lt;/H1&amp;gt;&lt;br /&gt;
FW-1 at XXXXXX: Access denied.&amp;lt;/BODY&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server'''&lt;br /&gt;
&lt;br /&gt;
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done based, again, on the server header or by timing requests that should be cached by the server and comparing the time taken to server the first request with subsequent requests.&lt;br /&gt;
&lt;br /&gt;
Another element that can be detected: network balancers. Typically, these systems will balance a given TCP/IP port to multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of this architecture elements needs to be done by eamining multiple requests and comparing results in order to determine if the requests are going to the same or different web servers. For example, based on the Date: header if the server clocks are not synchronised. In some cases, the network load balance might inject new information in the headers that will make it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.&lt;br /&gt;
&lt;br /&gt;
Application web servers are usually easy to detect -- the request for several resources is handled by the application server itself (not the web server) and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web servers tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers) or to rewrite URLs automatically to do session tracking.&lt;br /&gt;
&lt;br /&gt;
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers), however, are not as easy to detect from an external point of view, since they are hidden by the application itself.&lt;br /&gt;
&lt;br /&gt;
The use of a database backend can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly,” it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight into the existence of a database back-end; for example, an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when some vulnerability surfaces in the application, such as an SQL injection, which indicates that the application is actually talking to a database (the vulnerability would not be possible otherwise).&lt;br /&gt;
&lt;br /&gt;
===Known server vulnerabilities===&lt;br /&gt;
&lt;br /&gt;
Vulnerabilities found in the different elements that make up the application architecture, be it the web server itself or the database backend, can compromise the application as severely as a vulnerability in the application code itself. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server, or even to replace files. This vulnerability would compromise the application, since a rogue user would be able to replace the application itself or introduce code that would affect the backend servers, as the rogue code would be run just like any other application code.&lt;br /&gt;
&lt;br /&gt;
Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool; however, testing for some vulnerabilities can have unpredictable effects on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test was successful. Also, some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed in it, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common with operating system vendors that backport patches for security vulnerabilities to the software they provide in the operating system, but do not do a full upgrade to the latest software version. This happens in most GNU/Linux distributions such as Debian, Red Hat or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication backends, the database backends, or reverse proxies in use.&lt;br /&gt;
&lt;br /&gt;
Finally, not all software vendors disclose vulnerability information publicly, and information about the vulnerabilities present in their different releases is not always published in vulnerability databases[2]. This information is only disclosed to customers, or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser known products.&lt;br /&gt;
&lt;br /&gt;
This is why reviewing vulnerabilities is best done when the tester is provided with internal information about the software used, including the versions and releases in use and the patches applied to the software. With this information in hand, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested in order to determine their real effects and to detect if there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of exploiting them. Testers might even determine, through a configuration review, that the vulnerability is not even present, since it affects a software component that is not in use.&lt;br /&gt;
&lt;br /&gt;
It is also worthwhile to notice that vendors will sometimes silently fix vulnerabilities and make the fixes available only in new software releases. Different vendors will have different release cycles that determine the support they might provide for older releases. A tester with detailed information on the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it: no patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even in the event that they are aware that the vulnerability is present and the system is, indeed, vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.&lt;br /&gt;
===Administrative tools===&lt;br /&gt;
&lt;br /&gt;
Any web server infrastructure requires the existence of administrative tools to maintain and update the information used by the application: static content (web pages, graphic files), application source code, user authentication databases, etc. Depending on the site, technology or software used, administrative tools will differ. For example, some web servers will be managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server), will be administered through plain text configuration files (in the Apache case[3]), or will use operating-system GUI tools (when using Microsoft’s IIS server or ASP.Net). In most cases, however, the server configuration will be handled using different tools than the maintenance of the files used by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS) or other mechanisms. Obviously, the operating system of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.)&lt;br /&gt;
&lt;br /&gt;
Review of the administrative interfaces used to manage the different parts of the architecture is very important, since if a user gains access to any of them he can then compromise or damage the application architecture. Thus it is important to:&lt;br /&gt;
&lt;br /&gt;
* list all the possible administrative interfaces.&lt;br /&gt;
* determine if administrative interfaces are available from an internal network or are also available from the Internet.&lt;br /&gt;
* if available from the Internet, determine the access control methods used to access these interfaces and their susceptibilities.&lt;br /&gt;
* change the default user &amp;amp; password.&lt;br /&gt;
&lt;br /&gt;
Some sites do not manage their web server applications directly, but instead have other companies manage the content provided by the web server application. This external company might either provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line that connects the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test whether the administrative interfaces can be vulnerable to attacks. &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse Proxy from IBM which is part of the Tivoli framework.&lt;br /&gt;
* [2] Such as Symantec’s Bugtraq, ISS’ Xforce, or NIST’s National Vulnerability Database (NVD)&lt;br /&gt;
* [3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.&lt;br /&gt;
* [4] The use of database back-ends for authentication purposes is very common, often with user tables that store users’ passwords in plain text.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16312</id>
		<title>Test Network/Infrastructure Configuration (OTG-CONFIG-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=16312"/>
				<updated>2007-02-08T13:32:02Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can comprise hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application.&lt;br /&gt;
In fact, it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and (almost) unimportant problems may evolve into severe risks for other applications on the same server.&lt;br /&gt;
In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or introduce new vulnerabilities that might compromise the application itself.&lt;br /&gt;
&lt;br /&gt;
For example, a web server vulnerability that allows a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers and application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.&lt;br /&gt;
&lt;br /&gt;
In order to test the configuration management infrastructure, the following steps need to be taken:&lt;br /&gt;
&lt;br /&gt;
* The different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security.&lt;br /&gt;
* All the elements of the infrastructure need to be reviewed in order to make sure that they don’t hold any known vulnerabilities.&lt;br /&gt;
* A review needs to be made of the administrative tools used to maintain all the different elements.&lt;br /&gt;
* The authentication systems, if any, need to be reviewed in order to assure that they serve the needs of the application and that they cannot be manipulated by external users to leverage access.&lt;br /&gt;
* A list of defined ports which are required for the application should be maintained and kept under change control.&lt;br /&gt;
&lt;br /&gt;
== Black Box Testing and examples==&lt;br /&gt;
&lt;br /&gt;
===Review of the application architecture===&lt;br /&gt;
&lt;br /&gt;
The application architecture needs to be reviewed through the test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might be used that runs the web server which executes C, Perl, or shell CGI applications, with authentication perhaps also based on the web server authentication mechanisms. In more complex setups, such as an online bank system, multiple servers might be involved, including a reverse proxy, a front-end web server, an application server and a database or LDAP server. Each of these servers will be used for different purposes and might even be divided into different networks with firewalling devices between them, creating different DMZs so that access to the web server will not grant a remote user access to the authentication mechanism itself, and so that compromises of the different elements of the architecture can be isolated in a way such that they will not compromise the whole architecture.&lt;br /&gt;
&lt;br /&gt;
Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove to be very difficult if doing a blind penetration test.&lt;br /&gt;
&lt;br /&gt;
In the latter case, a tester will first start with the assumption that there is a simple setup (a single server) and will, through the information retrieved from other tests, derive the different elements, questioning this assumption and extending the architecture picture as new elements surface. The tester will start by asking simple questions such as: “Is there a firewalling system protecting the web server?”, which will be answered based on the results of network scans targeted at the web server and the analysis of whether the network ports of the web server are being filtered at the network edge (no answer or ICMP unreachables are received) or whether the server is directly connected to the Internet (i.e. returns RST packets for all non-listening ports). This analysis can be enhanced in order to determine the type of firewall system used based on network packet tests: is it a stateful firewall or is it an access list filter on a router? How is it configured? Can it be bypassed? &lt;br /&gt;
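The RST-versus-silence reasoning above can be sketched as a simple TCP connect probe. This is an illustrative sketch only: the function names are assumptions, the errno mapping covers only the common cases, and a real assessment would use a proper scanner rather than this heuristic.&lt;br /&gt;

```python
import errno
import socket

def classify(err):
    """Map a connect_ex() result to the inference described above:
    a refused connection (RST) suggests the port reaches the host
    directly, while a timeout or ICMP unreachable suggests that a
    filtering device sits at the network edge."""
    if err == 0:
        return "open"
    if err == errno.ECONNREFUSED:
        return "closed (RST received; likely no edge filtering)"
    if err in (errno.ETIMEDOUT, errno.EHOSTUNREACH, errno.EAGAIN):
        return "filtered (dropped or ICMP unreachable; firewall likely)"
    return "unknown"

def probe_port(host, port, timeout=3.0):
    """Hypothetical probe wrapper around the classifier."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            return classify(s.connect_ex((host, port)))
        except socket.timeout:
            return classify(errno.ETIMEDOUT)
```

Probing several non-listening ports and seeing consistent RSTs points at a directly connected server; consistent silence points at a filtering device in front of it.&lt;br /&gt;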
&lt;br /&gt;
Detecting a reverse proxy in front of the web server needs to be done by analysis of the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined by obtaining the answers given by the web server to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request which targets an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, it might be an indication of a reverse proxy (or an application-level firewall) which is filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors, then there is probably something in between blocking them. In some cases, even the protection system gives itself away:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GET /web-console/ServerInfo.jsp%00 HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Content-Length: 83&lt;br /&gt;
&lt;br /&gt;
&amp;lt;TITLE&amp;gt;Error&amp;lt;/TITLE&amp;gt;&lt;br /&gt;
&amp;lt;BODY&amp;gt;&lt;br /&gt;
&amp;lt;H1&amp;gt;Error&amp;lt;/H1&amp;gt;&lt;br /&gt;
FW-1 at XXXXXX: Access denied.&amp;lt;/BODY&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server'''&lt;br /&gt;
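The comparison technique described above, requesting a plainly missing page and an attack-looking URL and checking whether the error responses differ, can be sketched as follows. This is an illustrative sketch: the probe paths and helper names are assumptions, and differing status codes are only a hint, not proof, of an intermediary.&lt;br /&gt;

```python
import urllib.error
import urllib.request

# Hypothetical probe paths: one that should simply be missing, and one
# that looks like a classic CGI-scanner request a web-shield might block.
BENIGN_PATH = "/this-page-should-not-exist-12345"
SUSPICIOUS_PATH = "/cgi-bin/test-cgi"

def fetch_status(base_url, path):
    """Return the HTTP status code for base_url + path."""
    try:
        return urllib.request.urlopen(base_url + path).status
    except urllib.error.HTTPError as e:
        return e.code

def responses_differ(benign_status, suspicious_status):
    """If an attack-like request draws a different error than a plain
    missing page, something in front of the server may be filtering."""
    return benign_status != suspicious_status
```

For example, a 404 for the benign path but a 403 (or a firewall-branded error page, as in the Firewall-1 example above) for the suspicious one suggests a filtering device in between.&lt;br /&gt;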
&lt;br /&gt;
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done based, again, on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with that of subsequent requests.&lt;br /&gt;
&lt;br /&gt;
Another element that can be detected is a network load balancer. Typically, these systems will balance a given TCP/IP port to multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of these architecture elements needs to be done by examining multiple requests and comparing results in order to determine if the requests are going to the same or to different web servers; for example, the Date: header will differ between responses if the server clocks are not synchronised. In some cases, the network load balancer might inject new information in the headers that will make it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.&lt;br /&gt;
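Both hints mentioned above, clock skew in the Date: header and balancer-injected cookies, can be checked mechanically. The sketch below is illustrative: the function names are assumptions, and the cookie list is a small illustrative sample, not an exhaustive fingerprint database.&lt;br /&gt;

```python
from email.utils import parsedate_to_datetime

def max_clock_skew(date_headers):
    """Given the Date: header values collected from repeated requests,
    return the largest skew in seconds. Skew of several seconds across
    responses hints that different back-end servers answered."""
    stamps = [parsedate_to_datetime(h) for h in date_headers]
    return (max(stamps) - min(stamps)).total_seconds()

def balancer_cookies(set_cookie_names):
    """Flag cookie names that load balancers are known to inject, such
    as Nortel Alteon's AlteonP or F5 BIG-IP's BIGipServer prefix
    (illustrative sample only)."""
    known = {"AlteonP", "BIGipServer"}
    return sorted(known.intersection(set_cookie_names))
```

A nonzero skew is only meaningful if the same server would otherwise report a monotonically consistent clock, so several samples should be collected before drawing conclusions.&lt;br /&gt;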
&lt;br /&gt;
Application web servers are usually easy to detect. The request for several resources is handled by the application server itself (not the web server), and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers), or to rewrite URLs automatically to do session tracking.&lt;br /&gt;
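The session-cookie hint above lends itself to a small fingerprint table. The sketch below is illustrative: the mapping covers only a few well-known session cookie names and is not a complete fingerprint set.&lt;br /&gt;

```python
# Well-known session cookie names and the technology they suggest
# (illustrative sample; real fingerprint lists are much longer).
COOKIE_FINGERPRINTS = {
    "JSESSIONID": "J2EE application server (e.g. Tomcat, WebSphere)",
    "PHPSESSID": "PHP",
    "ASPSESSIONID": "Microsoft ASP",
    "ASP.NET_SessionId": "Microsoft ASP.NET",
    "CFID": "Adobe ColdFusion",
}

def fingerprint_cookies(cookie_names):
    """Match Set-Cookie names against known session-cookie prefixes to
    guess the application server behind the front end."""
    hits = {}
    for name in cookie_names:
        for prefix, tech in COOKIE_FINGERPRINTS.items():
            if name.startswith(prefix):
                hits[name] = tech
    return hits
```

Prefix matching is used because some servers append identifiers to the base name (classic ASP, for instance, appends a random suffix to ASPSESSIONID).&lt;br /&gt;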
&lt;br /&gt;
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers), however, are not as easy to detect from an external point of view, since they are hidden by the application itself.&lt;br /&gt;
&lt;br /&gt;
The use of a database backend can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly,” it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight into the existence of a database back-end; for example, an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when some vulnerability surfaces in the application, such as an SQL injection, which indicates that the application is actually talking to a database (the vulnerability would not be possible otherwise).&lt;br /&gt;
&lt;br /&gt;
===Known server vulnerabilities===&lt;br /&gt;
&lt;br /&gt;
Vulnerabilities found in the different elements that make up the application architecture, be it the web server itself or the database backend, can compromise the application as severely as a vulnerability in the application code itself. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server, or even to replace files. This vulnerability would compromise the application, since a rogue user would be able to replace the application itself or introduce code that would affect the backend servers, as the rogue code would be run just like any other application code.&lt;br /&gt;
&lt;br /&gt;
Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool; however, testing for some vulnerabilities can have unpredictable effects on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test was successful. Also, some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed in it, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common with operating system vendors that backport patches for security vulnerabilities to the software they provide in the operating system, but do not do a full upgrade to the latest software version. This happens in most GNU/Linux distributions such as Debian, Red Hat or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication backends, the database backends, or reverse proxies in use.&lt;br /&gt;
&lt;br /&gt;
Finally, not all software vendors disclose vulnerability information publicly, and information about the vulnerabilities present in their different releases is not always published in vulnerability databases[2]. This information is only disclosed to customers, or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser known products.&lt;br /&gt;
&lt;br /&gt;
This is why reviewing vulnerabilities is best done when the tester is provided with internal information about the software used, including the versions and releases in use and the patches applied to the software. With this information in hand, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested in order to determine their real effects and to detect if there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of exploiting them. Testers might even determine, through a configuration review, that the vulnerability is not even present, since it affects a software component that is not in use.&lt;br /&gt;
&lt;br /&gt;
It is also worthwhile to notice that vendors will sometimes silently fix vulnerabilities and make the fixes available only in new software releases. Different vendors will have different release cycles that determine the support they might provide for older releases. A tester with detailed information on the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it: no patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even in the event that they are aware that the vulnerability is present and the system is, indeed, vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.&lt;br /&gt;
===Administrative tools===&lt;br /&gt;
&lt;br /&gt;
Any web server infrastructure requires the existence of administrative tools to maintain and update the information used by the application: static content (web pages, graphic files), application source code, user authentication databases, etc. Depending on the site, technology or software used, administrative tools will differ. For example, some web servers will be managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server), will be administered through plain text configuration files (in the Apache case[3]), or will use operating-system GUI tools (when using Microsoft’s IIS server or ASP.Net). In most cases, however, the server configuration will be handled using different tools than the maintenance of the files used by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS) or other mechanisms. Obviously, the operating system of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.)&lt;br /&gt;
&lt;br /&gt;
Review of the administrative interfaces used to manage the different parts of the architecture is very important, since if a user gains access to any of them he can then compromise or damage the application architecture. Thus it is important to:&lt;br /&gt;
&lt;br /&gt;
* list all the possible administrative interfaces.&lt;br /&gt;
* determine if administrative interfaces are available from an internal network or are also available from the Internet.&lt;br /&gt;
* if available from the Internet, determine the access control methods used to access these interfaces and their susceptibilities.&lt;br /&gt;
* change the default user &amp;amp; password.&lt;br /&gt;
&lt;br /&gt;
Some sites do not manage their web server applications directly, but instead have other companies manage the content provided by the web server application. This external company might either provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line that connects the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test whether the administrative interfaces can be vulnerable to attacks. &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse Proxy from IBM which is part of the Tivoli framework.&lt;br /&gt;
* [2] Such as Symantec’s Bugtraq, ISS’ Xforce, or NIST’s National Vulnerability Database (NVD)&lt;br /&gt;
* [3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.&lt;br /&gt;
* [4] The use of database back-ends for authentication purposes is very common, often with user tables that store users’ passwords in plain text.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Application_Penetration_Testing&amp;diff=16311</id>
		<title>Web Application Penetration Testing</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Application_Penetration_Testing&amp;diff=16311"/>
				<updated>2007-02-08T11:34:08Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Testing: Introduction and objectives|'''4.1 Introduction and Objectives''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Information Gathering|'''4.2 Information Gathering''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Application Fingerprint|4.2.1 Testing Web Application Fingerprint]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Discovery|4.2.2 Application Discovery]]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Spidering and googling|4.2.3 Spidering and Googling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Error Code|4.2.4 Analysis of Error Codes]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for infrastructure configuration management|4.2.5 Infrastructure Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSL-TLS|4.2.5.1 SSL/TLS Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DB Listener|4.2.5.2 DB Listener Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for application configuration management|4.2.6 Application Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for file extensions handling|4.2.6.1 Testing for File Extensions Handling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for old_file|4.2.6.2 Old, backup and unreferenced files]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for business logic|'''4.3 Business logic testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for authentication|'''4.4 Authentication Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Default or Guessable User Account|4.4.1 Testing for guessable (dictionary) user account]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Brute Force|4.4.2 Brute Force Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Bypassing Authentication Schema|4.4.3 Testing for bypassing authentication schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Directory Traversal|4.4.4 Testing for directory traversal/file include]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Vulnerable Remember Password and Pwd Reset|4.4.5 Testing for vulnerable remember password and pwd reset]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Logout and Browser Cache Management|4.4.6 Testing for Logout and Browser Cache Management]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session Management|'''4.5 Session Management Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session_Management_Schema|4.5.1 Testing for Session Management Schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cookie and Session Token Manipulation|4.5.2 Testing for Cookie and Session Token Manipulation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Exposed Session Variables|4.5.3 Testing for Exposed Session Variables ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for CSRF|4.5.4 Testing for CSRF]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Exploit|4.5.5 Testing for HTTP Exploit ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Data Validation|'''4.6 Data Validation Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cross site scripting|4.6.1 Testing for Cross Site Scripting]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Methods and XST|4.6.1.1 Testing for HTTP Methods and XST ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Injection|4.6.2 Testing for SQL Injection ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Oracle|4.6.2.1 Oracle Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for MySQL|4.6.2.2 MySQL Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Server|4.6.2.3 SQL Server Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for LDAP Injection|4.6.3 Testing for LDAP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for ORM Injection|4.6.4 Testing for ORM Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Injection|4.6.5 Testing for XML Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSI Injection|4.6.6 Testing for SSI Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XPath Injection|4.6.7 Testing for XPath Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for IMAP/SMTP Injection|4.6.8 IMAP/SMTP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Code Injection|4.6.9 Testing for Code Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Command Injection|4.6.10 Testing for Command Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Buffer Overflow|4.6.11 Testing for Buffer overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Heap Overflow|4.6.11.1 Testing for Heap overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Stack Overflow|4.6.11.2 Testing for Stack overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Format String|4.6.11.3 Testing for Format string]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Incubated Vulnerability|4.6.12 Testing for incubated vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Denial of Service|'''4.7 Testing for Denial of Service''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Locking Customer Accounts|4.7.1 Testing for DoS Locking Customer Accounts]]	&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Buffer Overflows|4.7.2 Testing for DoS Buffer Overflows]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS User Specified Object Allocation|4.7.3 Testing for DoS User Specified Object Allocation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for User Input as a Loop Counter|4.7.4 Testing for User Input as a Loop Counter]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Writing User Provided Data to Disk|4.7.5 Testing for Writing User Provided Data to Disk]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Failure to Release Resources|4.7.6 Testing for DoS Failure to Release Resources]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Storing too Much Data in Session|4.7.7 Testing for Storing too Much Data in Session]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Services|'''4.8 Web Services Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Structural|4.8.1 XML Structural Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Content-Level|4.8.2 XML Content-level Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS HTTP GET parameters/REST attacks|4.8.3 HTTP GET parameters/REST Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Naughty SOAP Attachments|4.8.4 Testing for Naughty SOAP attachments]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS Replay|4.8.5 WS Replay Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_AJAX:_introduction|'''4.9 AJAX Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX Vulnerabilities|4.9.1 AJAX Vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX|4.9.2 How to test AJAX]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16310</id>
		<title>Testing for Error Code (OTG-ERR-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16310"/>
				<updated>2007-02-08T11:30:48Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Gray Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Often during a penetration test on web applications we come up against many error codes generated from applications or web servers.&lt;br /&gt;
It's possible to cause these errors to be displayed by using a particular request, either specially crafted with tools or created manually.&lt;br /&gt;
These codes are very useful to penetration testers during their activities because they reveal a lot of information about databases, bugs, and other technological components directly linked with web applications.&lt;br /&gt;
Within this section we'll analyse the more common codes (error messages) and bring into focus the steps of vulnerability assessment.&lt;br /&gt;
The most important aspect for this activity is to focus one's attention on these errors, seeing them as a collection of information that will aid in the next steps of our analysis. A good collection can facilitate assessment efficiency by decreasing the overall time taken to perform the penetration test.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
&lt;br /&gt;
A common error that we can see during our search is the HTTP 404 Not Found.&lt;br /&gt;
Often this error code provides useful details about the underlying web server and associated components. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Not Found&lt;br /&gt;
The requested URL /page.html was not found on this server.&lt;br /&gt;
Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g  DAV/2 PHP/5.1.2 Server at localhost Port 80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This error message can be generated by requesting a non-existent URL.&lt;br /&gt;
After the common message that shows a page not found, there is information about web server version, OS, modules and other products used.&lt;br /&gt;
This information can be very important from an OS and application type and version identification point of view.&lt;br /&gt;
&lt;br /&gt;
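The product/version tokens in such a banner can be pulled apart automatically. A minimal sketch in Python (the banner is the example above; the parsing logic is illustrative, not part of any standard tool):&lt;br /&gt;

```python
import re

def parse_server_banner(banner):
    """Split an Apache-style server banner into (product, version) pairs."""
    # Tokens look like "Apache/2.2.3", "mod_ssl/2.2.3", "PHP/5.1.2";
    # a parenthesised platform hint like "(Unix)" is captured separately.
    products = re.findall(r'([\w.-]+)/([\w.]+)', banner)
    platform = re.search(r'\(([^)]+)\)', banner)
    return products, platform.group(1) if platform else None

banner = "Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g DAV/2 PHP/5.1.2"
products, platform = parse_server_banner(banner)
```

Each name/version pair identifies a component whose known vulnerabilities can then be looked up.&lt;br /&gt;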
Web server errors aren't the only useful output that warrants security analysis. Consider the next example error message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[DBNETLIB][ConnectionOpen(Connect())] - SQL server does not exist or access denied &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happened? We will explain step-by-step below.&lt;br /&gt;
&lt;br /&gt;
In this example, 80004005 is a generic IIS error code indicating that the server could not establish a connection to its associated database. In many cases, the error message will detail the type of the database. This will often indicate the underlying operating system by association. With this information, the penetration tester can plan an appropriate strategy for the security test.&lt;br /&gt;
&lt;br /&gt;
By manipulating the variables that are passed to the database connect string, we can invoke more detailed errors.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers error '80004005'&lt;br /&gt;
[Microsoft][ODBC Access 97 ODBC driver Driver]General error Unable to open registry key 'DriverId'&lt;br /&gt;
&amp;lt;/pre&amp;gt; 	&lt;br /&gt;
&lt;br /&gt;
In this example, we can see a generic error in the same situation which reveals the type and version of the associated database system and a dependence on Windows operating system registry key values.&lt;br /&gt;
&lt;br /&gt;
Now we will look at a practical example with a security test against a web application that loses its link to its database server and does not handle the exception in a controlled manner. This could be caused by a database name resolution issue, processing of unexpected variable values, or other network problems.&lt;br /&gt;
&lt;br /&gt;
Consider the scenario where we have a database administration web portal which can be used as a front-end GUI to issue database queries, create tables, and modify database fields. During POST of the logon credentials, the following error message, which indicates the presence of a MySQL database server, is presented to the penetration tester:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
If we see in the HTML code of the logon page the presence of a '''hidden field''' with a database IP, we can try to change this value to the address of a database server under the penetration tester's control, in an attempt to fool the application into thinking that logon was successful.&lt;br /&gt;
&lt;br /&gt;
Another example: knowing the database server that services a web application, we can take advantage of this information to carry out a SQL Injection for that kind of database or a persistent XSS test.&lt;br /&gt;
&lt;br /&gt;
Information gathering on web applications with server-side technology is quite difficult, but the information discovered can be useful for the correct execution of an attempted exploit (for example, SQL injection or Cross Site Scripting (XSS) attacks) and can reduce false positives.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
telnet &amp;lt;host target&amp;gt; 80&lt;br /&gt;
GET /&amp;lt;wrong page&amp;gt; HTTP/1.1&lt;br /&gt;
&amp;lt;CRLF&amp;gt;&amp;lt;CRLF&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 404 Not Found&lt;br /&gt;
Date: Sat, 04 Nov 2006 15:26:48 GMT&lt;br /&gt;
Server: Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g&lt;br /&gt;
Content-Length: 310&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
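The telnet probe above can be scripted. A sketch that only builds the raw request bytes (the host and path are placeholders; sending them over a socket and reading the status line is left out):&lt;br /&gt;

```python
def build_probe(host, path="/nonexistent-page"):
    """Construct the raw HTTP/1.1 request used in the telnet probe above.

    HTTP/1.1 requires a Host header; Connection: close makes the server
    return the full error page and hang up, which simplifies reading.
    """
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            "\r\n").encode()

req = build_probe("example.org")
```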
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Network problems&lt;br /&gt;
2. Bad configuration of the database host address&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005) '&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Authentication failed&lt;br /&gt;
2. Credentials not provided&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
Firewall version used for authentication:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error 407&lt;br /&gt;
FW-1 at &amp;lt;firewall&amp;gt;: Unauthorized to access the document.&lt;br /&gt;
•  Authorization is needed for FW-1.&lt;br /&gt;
•  The authentication required by FW-1 is: unknown.&lt;br /&gt;
•  Reason for failure of last attempt: no user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
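The revealing strings collected in these results lend themselves to simple signature matching. A hypothetical helper, assuming only the signatures shown on this page (real tooling would need a far larger signature set):&lt;br /&gt;

```python
# Map revealing error substrings to the technology they disclose.
# These four signatures come from the example messages on this page.
SIGNATURES = {
    "Unknown MySQL server host": "MySQL (via ODBC)",
    "SQL server does not exist or access denied": "Microsoft SQL Server (DBNETLIB)",
    "Unable to open registry key": "Access/Jet on Windows (registry-backed DSN)",
    "FW-1 at": "Check Point FireWall-1",
}

def fingerprint(error_text):
    """Return the technologies suggested by substrings of an error message."""
    return [tech for needle, tech in SIGNATURES.items() if needle in error_text]
```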
&lt;br /&gt;
== Gray Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
Enumeration of the directories with access denied:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://&amp;lt;host&amp;gt;/&amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Directory Listing Denied&lt;br /&gt;
This Virtual Directory does not allow contents to be listed.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Forbidden&lt;br /&gt;
You don't have permission to access /&amp;lt;dir&amp;gt; on this server.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
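The two refusal pages differ by server family, which is itself useful fingerprinting data. A rough heuristic based solely on the two phrasings shown above:&lt;br /&gt;

```python
def classify_listing_response(body):
    """Guess the server family from a directory-access refusal page.

    The phrasings are the two examples above (IIS vs. Apache); this is
    an illustrative heuristic, not an exhaustive signature list.
    """
    if "Directory Listing Denied" in body:
        return "IIS"
    if "You don't have permission to access" in body:
        return "Apache"
    return "unknown"
```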
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* [1] [[http://www.ietf.org/rfc/rfc2616.txt?number=2616 RFC2616]] Hypertext Transfer Protocol -- HTTP/1.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16309</id>
		<title>Testing for Error Code (OTG-ERR-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16309"/>
				<updated>2007-02-08T11:30:18Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Often during a penetration test on web applications we come up against many error codes generated from applications or web servers.&lt;br /&gt;
It's possible to cause these errors to be displayed by using a particular request, either specially crafted with tools or created manually.&lt;br /&gt;
These codes are very useful to penetration testers during their activities because they reveal a lot of information about databases, bugs, and other technological components directly linked with web applications.&lt;br /&gt;
Within this section we'll analyse the more common codes (error messages) and bring into focus the steps of vulnerability assessment.&lt;br /&gt;
The most important aspect for this activity is to focus one's attention on these errors, seeing them as a collection of information that will aid in the next steps of our analysis. A good collection can facilitate assessment efficiency by decreasing the overall time taken to perform the penetration test.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
&lt;br /&gt;
A common error that we can see during our search is the HTTP 404 Not Found.&lt;br /&gt;
Often this error code provides useful details about the underlying web server and associated components. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Not Found&lt;br /&gt;
The requested URL /page.html was not found on this server.&lt;br /&gt;
Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g  DAV/2 PHP/5.1.2 Server at localhost Port 80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This error message can be generated by requesting a non-existent URL.&lt;br /&gt;
After the common message that shows a page not found, there is information about web server version, OS, modules and other products used.&lt;br /&gt;
This information can be very important from an OS and application type and version identification point of view.&lt;br /&gt;
&lt;br /&gt;
Web server errors aren't the only useful output that warrants security analysis. Consider the next example error message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[DBNETLIB][ConnectionOpen(Connect())] - SQL server does not exist or access denied &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happened? We will explain step-by-step below.&lt;br /&gt;
&lt;br /&gt;
In this example, 80004005 is a generic IIS error code indicating that the server could not establish a connection to its associated database. In many cases, the error message will detail the type of the database. This will often indicate the underlying operating system by association. With this information, the penetration tester can plan an appropriate strategy for the security test.&lt;br /&gt;
&lt;br /&gt;
By manipulating the variables that are passed to the database connect string, we can invoke more detailed errors.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers error '80004005'&lt;br /&gt;
[Microsoft][ODBC Access 97 ODBC driver Driver]General error Unable to open registry key 'DriverId'&lt;br /&gt;
&amp;lt;/pre&amp;gt; 	&lt;br /&gt;
&lt;br /&gt;
In this example, we can see a generic error in the same situation which reveals the type and version of the associated database system and a dependence on Windows operating system registry key values.&lt;br /&gt;
&lt;br /&gt;
Now we will look at a practical example with a security test against a web application that loses its link to its database server and does not handle the exception in a controlled manner. This could be caused by a database name resolution issue, processing of unexpected variable values, or other network problems.&lt;br /&gt;
&lt;br /&gt;
Consider the scenario where we have a database administration web portal which can be used as a front-end GUI to issue database queries, create tables, and modify database fields. During POST of the logon credentials, the following error message, which indicates the presence of a MySQL database server, is presented to the penetration tester:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
If we see in the HTML code of the logon page the presence of a '''hidden field''' with a database IP, we can try to change this value to the address of a database server under the penetration tester's control, in an attempt to fool the application into thinking that logon was successful.&lt;br /&gt;
&lt;br /&gt;
Another example: knowing the database server that services a web application, we can take advantage of this information to carry out a SQL Injection for that kind of database or a persistent XSS test.&lt;br /&gt;
&lt;br /&gt;
Information gathering on web applications with server-side technology is quite difficult, but the information discovered can be useful for the correct execution of an attempted exploit (for example, SQL injection or Cross Site Scripting (XSS) attacks) and can reduce false positives.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
telnet &amp;lt;host target&amp;gt; 80&lt;br /&gt;
GET /&amp;lt;wrong page&amp;gt; HTTP/1.1&lt;br /&gt;
&amp;lt;CRLF&amp;gt;&amp;lt;CRLF&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 404 Not Found&lt;br /&gt;
Date: Sat, 04 Nov 2006 15:26:48 GMT&lt;br /&gt;
Server: Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g&lt;br /&gt;
Content-Length: 310&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Network problems&lt;br /&gt;
2. Bad configuration of the database host address&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005) '&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Authentication failed&lt;br /&gt;
2. Credentials not provided&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
Firewall version used for authentication:&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error 407&lt;br /&gt;
FW-1 at &amp;lt;firewall&amp;gt;: Unauthorized to access the document.&lt;br /&gt;
•  Authorization is needed for FW-1.&lt;br /&gt;
•  The authentication required by FW-1 is: unknown.&lt;br /&gt;
•  Reason for failure of last attempt: no user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
Enumeration of the directories with access denied.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://&amp;lt;host&amp;gt;/&amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Directory Listing Denied&lt;br /&gt;
This Virtual Directory does not allow contents to be listed.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Forbidden&lt;br /&gt;
You don't have permission to access /&amp;lt;dir&amp;gt; on this server.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* [1] [[http://www.ietf.org/rfc/rfc2616.txt?number=2616 RFC2616]] Hypertext Transfer Protocol -- HTTP/1.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16308</id>
		<title>Testing for Error Code (OTG-ERR-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16308"/>
				<updated>2007-02-08T11:27:32Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Often during a penetration test on web applications we come up against many error codes generated from applications or web servers.&lt;br /&gt;
It's possible to cause these errors to be displayed by using a particular request, either specially crafted with tools or created manually.&lt;br /&gt;
These codes are very useful to penetration testers during their activities because they reveal a lot of information about databases, bugs, and other technological components directly linked with web applications.&lt;br /&gt;
Within this section we'll analyse the more common codes (error messages) and bring into focus the steps of vulnerability assessment.&lt;br /&gt;
The most important aspect for this activity is to focus one's attention on these errors, seeing them as a collection of information that will aid in the next steps of our analysis. A good collection can facilitate assessment efficiency by decreasing the overall time taken to perform the penetration test.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
&lt;br /&gt;
A common error that we can see during our search is the HTTP 404 Not Found.&lt;br /&gt;
Often this error code provides useful details about the underlying web server and associated components. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Not Found&lt;br /&gt;
The requested URL /page.html was not found on this server.&lt;br /&gt;
Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g  DAV/2 PHP/5.1.2 Server at localhost Port 80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This error message can be generated by requesting a non-existent URL.&lt;br /&gt;
After the common message that shows a page not found, there is information about web server version, OS, modules and other products used.&lt;br /&gt;
This information can be very important from an OS and application type and version identification point of view.&lt;br /&gt;
&lt;br /&gt;
Web server errors aren't the only useful output that warrants security analysis. Consider the next example error message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[DBNETLIB][ConnectionOpen(Connect())] - SQL server does not exist or access denied &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happened? We will explain step-by-step below.&lt;br /&gt;
&lt;br /&gt;
In this example, 80004005 is a generic IIS error code indicating that the server could not establish a connection to its associated database. In many cases, the error message will detail the type of the database. This will often indicate the underlying operating system by association. With this information, the penetration tester can plan an appropriate strategy for the security test.&lt;br /&gt;
&lt;br /&gt;
By manipulating the variables that are passed to the database connect string, we can invoke more detailed errors.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers error '80004005'&lt;br /&gt;
[Microsoft][ODBC Access 97 ODBC driver Driver]General error Unable to open registry key 'DriverId'&lt;br /&gt;
&amp;lt;/pre&amp;gt; 	&lt;br /&gt;
&lt;br /&gt;
In this example, we can see a generic error in the same situation which reveals the type and version of the associated database system and a dependence on Windows operating system registry key values.&lt;br /&gt;
&lt;br /&gt;
Now we will look at a practical example with a security test against a web application that loses its link to its database server and does not handle the exception in a controlled manner. This could be caused by a database name resolution issue, processing of unexpected variable values, or other network problems.&lt;br /&gt;
&lt;br /&gt;
Consider the scenario where we have a database administration web portal which can be used as a front-end GUI to issue database queries, create tables, and modify database fields. During POST of the logon credentials, the following error message, which indicates the presence of a MySQL database server, is presented to the penetration tester:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
If we see in the HTML code of the logon page the presence of a '''hidden field''' with a database IP, we can try to change this value to the address of a database server under the penetration tester's control, in an attempt to fool the application into thinking that logon was successful.&lt;br /&gt;
&lt;br /&gt;
Another example: knowing the database server that services a web application, we can take advantage of this information to carry out a SQL Injection for that kind of database or a persistent XSS test.&lt;br /&gt;
&lt;br /&gt;
Information gathering on web applications with server-side technology is quite difficult, but the information discovered can be useful for the correct execution of an attempted exploit (for example, SQL injection or Cross Site Scripting (XSS) attacks) and can reduce false positives.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
telnet &amp;lt;host target&amp;gt; 80&lt;br /&gt;
GET /&amp;lt;wrong page&amp;gt; HTTP/1.1&lt;br /&gt;
&amp;lt;CRLF&amp;gt;&amp;lt;CRLF&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 404 Not Found&lt;br /&gt;
Date: Sat, 04 Nov 2006 15:26:48 GMT&lt;br /&gt;
Server: Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g&lt;br /&gt;
Content-Length: 310&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Network problems&lt;br /&gt;
2. Bad configuration of the database host address&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005) '&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Authentication failed&lt;br /&gt;
2. Credentials not provided&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
Firewall version used for authentication&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error 407&lt;br /&gt;
FW-1 at &amp;lt;firewall&amp;gt;: Unauthorized to access the document.&lt;br /&gt;
•  Authorization is needed for FW-1.&lt;br /&gt;
•  The authentication required by FW-1 is: unknown.&lt;br /&gt;
•  Reason for failure of last attempt: no user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
Enumeration of the directories with access denied.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://&amp;lt;host&amp;gt;/&amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Directory Listing Denied&lt;br /&gt;
This Virtual Directory does not allow contents to be listed.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Forbidden&lt;br /&gt;
You don't have permission to access /&amp;lt;dir&amp;gt; on this server.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* [1] [[http://www.ietf.org/rfc/rfc2616.txt?number=2616 RFC2616]] Hypertext Transfer Protocol -- HTTP/1.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16307</id>
		<title>Testing for Error Code (OTG-ERR-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16307"/>
				<updated>2007-02-08T11:26:44Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Often during a penetration test on web applications we come up against many error codes generated from applications or web servers.&lt;br /&gt;
It's possible to cause these errors to be displayed by using a particular request, either specially crafted with tools or created manually.&lt;br /&gt;
These codes are very useful to penetration testers during their activities because they reveal a lot of information about databases, bugs, and other technological components directly linked with web applications.&lt;br /&gt;
Within this section we'll analyse the more common codes (error messages) and bring into focus the steps of vulnerability assessment.&lt;br /&gt;
The most important aspect for this activity is to focus one's attention on these errors, seeing them as a collection of information that will aid in the next steps of our analysis. A good collection can facilitate assessment efficiency by decreasing the overall time taken to perform the penetration test.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
&lt;br /&gt;
A common error that we can see during our search is the HTTP 404 Not Found.&lt;br /&gt;
Often this error code provides useful details about the underlying web server and associated components. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Not Found&lt;br /&gt;
The requested URL /page.html was not found on this server.&lt;br /&gt;
Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g  DAV/2 PHP/5.1.2 Server at localhost Port 80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This error message can be generated by requesting a non-existent URL.&lt;br /&gt;
After the common message that shows a page not found, there is information about web server version, OS, modules and other products used.&lt;br /&gt;
This information can be very important from an OS and application type and version identification point of view.&lt;br /&gt;
&lt;br /&gt;
Web server errors aren't the only useful output that warrants security analysis. Consider the next example error message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[DBNETLIB][ConnectionOpen(Connect())] - SQL server does not exist or access denied &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happened? We will explain step-by-step below.&lt;br /&gt;
&lt;br /&gt;
In this example, 80004005 is a generic IIS error code indicating that the server could not establish a connection to its associated database.&lt;br /&gt;
&lt;br /&gt;
In many cases, the error message will detail the type of the database. This will often indicate the underlying operating system by association. With this information, the penetration tester can plan an appropriate strategy for the security test.&lt;br /&gt;
&lt;br /&gt;
By manipulating the variables that are passed to the database connect string, we can invoke more detailed errors.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers error '80004005'&lt;br /&gt;
[Microsoft][ODBC Access 97 ODBC driver Driver]General error Unable to open registry key 'DriverId'&lt;br /&gt;
&amp;lt;/pre&amp;gt; 	&lt;br /&gt;
&lt;br /&gt;
In this example, we can see a generic error in the same situation which reveals the type and version of the associated database system and a dependence on Windows operating system registry key values.&lt;br /&gt;
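As a sketch of this fingerprinting step, the bracketed driver tags in ODBC error strings can be mapped to a likely back end. The `DRIVER_HINTS` table and `infer_database` helper below are hypothetical illustrations, not an exhaustive mapping:

```python
import re

# Map driver tags seen in ODBC error strings to a likely back end.
# Illustrative table only; extend it with patterns observed in the field.
DRIVER_HINTS = {
    "DBNETLIB": "Microsoft SQL Server",
    "MySQL": "MySQL",
    "ODBC Access 97": "Microsoft Access (registry-backed, so Windows)",
}

def infer_database(error_message):
    """Return back-end guesses based on [Tag] markers in an error string."""
    tags = re.findall(r"\[([^\]]+)\]", error_message)
    return [DRIVER_HINTS[t] for t in tags if t in DRIVER_HINTS]

err = ("Microsoft OLE DB Provider for ODBC Drivers (0x80004005) "
       "[DBNETLIB][ConnectionOpen(Connect())] - "
       "SQL server does not exist or access denied")
print(infer_database(err))  # → ['Microsoft SQL Server']
```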
&lt;br /&gt;
Now we will look at a practical example with a security test against a web application that loses its link to its database server and does not handle the exception in a controlled manner. This could be caused by a database name resolution issue, processing of unexpected variable values, or other network problems.&lt;br /&gt;
&lt;br /&gt;
Consider the scenario where we have a database administration web portal which can be used as a front end GUI to issue database queries, create tables and modify database fields. During POST of the logon credentials, the following error message is presented to the penetration tester, indicating the presence of a MySQL database server:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
If we see in the HTML code of the logon page the presence of a '''hidden field''' with a database IP, we can try to change this value in the URL with the address of database server under the penetration tester's control in an attempt to fool the application into thinking that logon was successful.&lt;br /&gt;
&lt;br /&gt;
Another example: knowing the database server that services a web application, we can take advantage of this information to carry out a SQL Injection for that kind of database or a persistent XSS test.&lt;br /&gt;
&lt;br /&gt;
Information Gathering on web applications with server-side technology is quite difficult, but the information discovered can be useful for the correct execution of an attempted exploit (for example, SQL injection or Cross Site Scripting (XSS) attacks) and can reduce false positives.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
telnet &amp;lt;host target&amp;gt; 80&lt;br /&gt;
GET /&amp;lt;wrong page&amp;gt; HTTP/1.1&lt;br /&gt;
&amp;lt;CRLF&amp;gt;&amp;lt;CRLF&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 404 Not Found&lt;br /&gt;
Date: Sat, 04 Nov 2006 15:26:48 GMT&lt;br /&gt;
Server: Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g&lt;br /&gt;
Content-Length: 310&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
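The telnet session above can also be scripted. The Python sketch below sends the same raw request over a socket and extracts the status line and Server header; for illustration it is pointed at a throwaway local server with a faked Apache banner (`raw_404_probe` and `demo_probe` are hypothetical helpers, not part of any tool named in this guide, and should only be aimed at targets you are authorized to test):

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def raw_404_probe(host, port):
    """Send the same raw request as the telnet test and return
    (status line, Server header or None)."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"GET /no-such-page HTTP/1.1\r\n"
                  b"Host: " + host.encode() + b"\r\n"
                  b"Connection: close\r\n\r\n")
        data = b""
        while chunk := s.recv(4096):
            data += chunk
    head = data.split(b"\r\n\r\n", 1)[0].decode("iso-8859-1")
    lines = head.split("\r\n")
    server = None
    for line in lines[1:]:
        if line.lower().startswith("server:"):
            server = line.split(":", 1)[1].strip()
    return lines[0], server

def demo_probe():
    """Run the probe against a throwaway local server that fakes the
    Apache banner; aim host/port at the real target during a test."""
    class Handler(BaseHTTPRequestHandler):
        server_version = "Apache/2.2.3 (Unix)"  # faked banner for the demo
        sys_version = ""
        def do_GET(self):
            self.send_error(404)
        def log_message(self, *args):
            pass  # keep the demo quiet
    srv = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    try:
        return raw_404_probe("127.0.0.1", srv.server_address[1])
    finally:
        srv.shutdown()

print(demo_probe())
```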
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. network problems&lt;br /&gt;
2. bad configuration about host database address&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Authentication Failed&lt;br /&gt;
2. Credentials not inserted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
Firewall version used for authentication&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error 407&lt;br /&gt;
FW-1 at &amp;lt;firewall&amp;gt;: Unauthorized to access the document.&lt;br /&gt;
•  Authorization is needed for FW-1.&lt;br /&gt;
•  The authentication required by FW-1 is: unknown.&lt;br /&gt;
•  Reason for failure of last attempt: no user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
Enumeration of the directories with access denied.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://&amp;lt;host&amp;gt;/&amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Directory Listing Denied&lt;br /&gt;
This Virtual Directory does not allow contents to be listed.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Forbidden&lt;br /&gt;
You don't have permission to access /&amp;lt;dir&amp;gt; on this server.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
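A minimal triage of such responses might look like the following Python sketch (`classify_dir_response` is a hypothetical helper; the string matches are hints only, since servers word these pages differently):

```python
def classify_dir_response(status, body):
    """Rough triage of a directory request during gray-box testing.

    Sketch only: custom error pages can mask the real condition, so the
    classification should be confirmed manually.
    """
    if status == 403 or "permission to access" in body.lower():
        return "forbidden (resource exists, access denied)"
    if "directory listing denied" in body.lower():
        return "listing denied (IIS-style virtual directory)"
    if status == 404:
        return "not found (or hidden behind a custom error page)"
    return "unclassified"

print(classify_dir_response(
    403, "Forbidden. You don't have permission to access /admin on this server."))
```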
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* [1] [[http://www.ietf.org/rfc/rfc2616.txt?number=2616 RFC2616]] Hypertext Transfer Protocol -- HTTP/1.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16304</id>
		<title>Testing for Error Code (OTG-ERR-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16304"/>
				<updated>2007-02-08T10:50:38Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Often during a penetration test on web applications we come up against many error codes generated from applications or web servers.&lt;br /&gt;
It's possible to cause these errors to be displayed by using a particular request, either specially crafted with tools or created manually.&lt;br /&gt;
These codes are very useful to penetration testers during their activities because they reveal a lot of information about databases, bugs, and other technological components directly linked with web applications.&lt;br /&gt;
Within this section we'll analyse the more common codes (error messages) and bring into focus the steps of vulnerability assessment.&lt;br /&gt;
The most important aspect for this activity is to focus one's attention on these errors, seeing them as a collection of information that will aid in the next steps of our analysis. A good collection can facilitate assessment efficiency by decreasing the overall time taken to perform the penetration test.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
&lt;br /&gt;
A common error that we can see during our search is the HTTP 404 Not Found.&lt;br /&gt;
Often this error code provides useful details about the underlying web server and associated components. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Not Found&lt;br /&gt;
The requested URL /page.html was not found on this server.&lt;br /&gt;
Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g  DAV/2 PHP/5.1.2 Server at localhost Port 80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This error message can be generated by requesting a non-existent URL.&lt;br /&gt;
After the common message that shows a page not found, there is information about the web server version, OS, modules and other products used.&lt;br /&gt;
This information can be very important for identifying the OS and the application type and version.&lt;br /&gt;
&lt;br /&gt;
Web server errors aren't the only useful output returned requiring security analysis. Consider the next example error message:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[DBNETLIB][ConnectionOpen(Connect())] - SQL server does not exist or access denied &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What happened? We will explain step-by-step below.&lt;br /&gt;
&lt;br /&gt;
In this example, the 80004005 is a generic IIS error code which indicates that it could not establish a connection to its associated database.&amp;lt;br&amp;gt;&lt;br /&gt;
In many cases, the error message will detail the type of the database. This will often indicate the underlying operating system by association. With this information, the penetration tester can plan an appropriate strategy for the security test.&lt;br /&gt;
&lt;br /&gt;
By manipulating the variables that are passed to the database connect string, we can invoke further, more detailed errors.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers error '80004005'&lt;br /&gt;
[Microsoft][ODBC Access 97 ODBC driver Driver]General error Unable to open registry key 'DriverId'&lt;br /&gt;
&amp;lt;/pre&amp;gt; 	&lt;br /&gt;
&lt;br /&gt;
In this example, we can see a generic error in the same situation which reveals the type and version of the associated database system and a dependence on Windows operating system registry key values.&lt;br /&gt;
&lt;br /&gt;
Now we will look at a practical example with a security test against a web application that loses its link to its database server and does not handle the exception in a controlled manner. This could be caused by a database name resolution issue, processing of unexpected variable values, or other network problems.&lt;br /&gt;
&lt;br /&gt;
For example, we have a database administration web portal which, after a log-on phase, can connect to the db server to issue queries, create tables and modify database fields.&lt;br /&gt;
During POST of the credentials for the log-on phase, we meet this message, which evidences the presence of a MySQL database server:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
If we see in the HTML code of the log-on page the presence of a '''hidden field''' with a database IP, we can try to replace this value in the URL with the address of another database server (one under our control, for example).&lt;br /&gt;
Another example: knowing the database server that services a web application, we can take advantage of this information to carry out a SQL Injection for that kind of database or a persistent XSS test.&lt;br /&gt;
&lt;br /&gt;
Information Gathering on web applications with server-side technology is quite difficult, but the information discovered can be useful for the correct execution of an attempted exploit and can reduce false positives.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
telnet &amp;lt;host target&amp;gt; 80&lt;br /&gt;
GET /&amp;lt;wrong page&amp;gt; HTTP/1.1&lt;br /&gt;
&amp;lt;CRLF&amp;gt;&amp;lt;CRLF&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 404 Not Found&lt;br /&gt;
Date: Sat, 04 Nov 2006 15:26:48 GMT&lt;br /&gt;
Server: Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g&lt;br /&gt;
Content-Length: 310&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. network problems&lt;br /&gt;
2. bad configuration about host database address&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Authentication Failed&lt;br /&gt;
2. Credentials not inserted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
Firewall version used for authentication&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error 407&lt;br /&gt;
FW-1 at &amp;lt;firewall&amp;gt;: Unauthorized to access the document.&lt;br /&gt;
•  Authorization is needed for FW-1.&lt;br /&gt;
•  The authentication required by FW-1 is: unknown.&lt;br /&gt;
•  Reason for failure of last attempt: no user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
Enumeration of the directories with access denied.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://&amp;lt;host&amp;gt;/&amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Directory Listing Denied&lt;br /&gt;
This Virtual Directory does not allow contents to be listed.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Forbidden&lt;br /&gt;
You don't have permission to access /&amp;lt;dir&amp;gt; on this server.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* [1] [[http://www.ietf.org/rfc/rfc2616.txt?number=2616 RFC2616]] Hypertext Transfer Protocol -- HTTP/1.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16303</id>
		<title>Testing for Error Code (OTG-ERR-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=16303"/>
				<updated>2007-02-08T09:36:44Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Brief Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Often during a penetration test on web applications we come up against many error codes generated from applications or web servers.&lt;br /&gt;
It's possible to cause these errors to be displayed by using a particular request, either specially crafted with tools or created manually.&lt;br /&gt;
These codes are very useful to penetration testers during their activities because they reveal a lot of information about databases, bugs, and other technological components directly linked with web applications.&lt;br /&gt;
Within this section we'll analyse the more common codes (error messages) and bring into focus the steps of vulnerability assessment.&lt;br /&gt;
The most important aspect for this activity is to focus one's attention on these errors, seeing them as a collection of information that will aid in the next steps of our analysis. A good collection can facilitate assessment efficiency by decreasing the overall time taken to perform the penetration test.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
&lt;br /&gt;
A common error that we can see during our search is the HTTP 404 Not Found.&lt;br /&gt;
Often this error code comes with many details about the web server and other components.&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Not Found&lt;br /&gt;
The requested URL /page.html was not found on this server.&lt;br /&gt;
Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g  DAV/2 PHP/5.1.2 Server at localhost Port 80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This error message can be generated by requesting a non-existent URL.&lt;br /&gt;
After the common message that shows a page not found, there is information about the web server version, OS, modules and other products used.&lt;br /&gt;
This information can be very important for identifying both the OS and the applications during a penetration test, but web server errors aren't the only errors useful in a security analysis.&lt;br /&gt;
&lt;br /&gt;
We will therefore examine the next occurrence, which shows abnormal behavior:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[DBNETLIB][ConnectionOpen(Connect())] - SQL server does not exist or access denied &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What's happened? We'll proceed step by step.&lt;br /&gt;
&lt;br /&gt;
For example, the 80004005 is a generic IIS error code which indicates that it isn't possible to access a database.&amp;lt;br&amp;gt;&lt;br /&gt;
In many cases we can see that this code is followed by the version of the database. With this information, the pentester can plan an appropriate strategy for the security test.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers error '80004005'&lt;br /&gt;
[Microsoft][ODBC Access 97 ODBC driver Driver]General error Unable to open registry key 'DriverId'&lt;br /&gt;
&amp;lt;/pre&amp;gt; 	&lt;br /&gt;
&lt;br /&gt;
The first example shows a connection error message returned by a SQL Server database because the database server linked to the application is down or the credentials don't allow access.&lt;br /&gt;
However, this isn't the only information we gain; in fact, in this way we have also discovered the kind of operating system.&lt;br /&gt;
In this case we could verify whether the web application permits changing the variable values used to connect to the database.&lt;br /&gt;
In the second case we can see a generic error in the same situation (we know the database version) but with a different error message and database server.&lt;br /&gt;
But in the end, it's the same thing!&lt;br /&gt;
&lt;br /&gt;
Now we will look at a practical example with a security test against a web application that loses its link to its database server and does not handle the exception in a controlled manner. This could be caused by a database name resolution issue, processing of unexpected variable values, or other network problems.&lt;br /&gt;
&lt;br /&gt;
For example, we have a database administration web portal which, after a log-on phase, can connect to the db server to issue queries, create tables and modify database fields.&lt;br /&gt;
During POST of the credentials for the log-on phase, we meet this message, which evidences the presence of a MySQL database server:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
If we see in the HTML code of the log-on page the presence of a '''hidden field''' with a database IP, we can try to replace this value in the URL with the address of another database server (one under our control, for example).&lt;br /&gt;
Another example: knowing the database server that services a web application, we can take advantage of this information to carry out a SQL Injection for that kind of database or a persistent XSS test.&lt;br /&gt;
&lt;br /&gt;
Information Gathering on web applications with server-side technology is quite difficult, but the information discovered can be useful for the correct execution of an attempted exploit and can reduce false positives.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
telnet &amp;lt;host target&amp;gt; 80&lt;br /&gt;
GET /&amp;lt;wrong page&amp;gt; HTTP/1.1&lt;br /&gt;
&amp;lt;CRLF&amp;gt;&amp;lt;CRLF&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 404 Not Found&lt;br /&gt;
Date: Sat, 04 Nov 2006 15:26:48 GMT&lt;br /&gt;
Server: Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g&lt;br /&gt;
Content-Length: 310&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. network problems&lt;br /&gt;
2. bad configuration about host database address&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Authentication Failed&lt;br /&gt;
2. Credentials not inserted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
Firewall version used for authentication&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error 407&lt;br /&gt;
FW-1 at &amp;lt;firewall&amp;gt;: Unauthorized to access the document.&lt;br /&gt;
•  Authorization is needed for FW-1.&lt;br /&gt;
•  The authentication required by FW-1 is: unknown.&lt;br /&gt;
•  Reason for failure of last attempt: no user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
Enumeration of the directories with access denied.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://&amp;lt;host&amp;gt;/&amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Directory Listing Denied&lt;br /&gt;
This Virtual Directory does not allow contents to be listed.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Forbidden&lt;br /&gt;
You don't have permission to access /&amp;lt;dir&amp;gt; on this server.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* [1] [[http://www.ietf.org/rfc/rfc2616.txt?number=2616 RFC2616]] Hypertext Transfer Protocol -- HTTP/1.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing:_Information_Gathering&amp;diff=16302</id>
		<title>Testing: Information Gathering</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing:_Information_Gathering&amp;diff=16302"/>
				<updated>2007-02-08T09:26:00Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents#Web_Application_Penetration_Testing Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
=== Information Gathering ===&lt;br /&gt;
----&lt;br /&gt;
The  first phase in security assessment is focused on collecting as much information as possible about a target application.&lt;br /&gt;
Information Gathering is a necessary step of a penetration test.&lt;br /&gt;
&lt;br /&gt;
This task can be carried out in many different ways.&lt;br /&gt;
&lt;br /&gt;
Using public tools (search engines), scanners, sending simple HTTP requests, or specially crafted requests, it is possible to force the application to leak information by sending back error messages or revealing the versions and technologies used by the application.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Often it is possible to gather information by receiving a response from the application that could reveal vulnerabilities caused by bad configuration or bad server management.&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Application Fingerprint|4.2.1 Testing Web Application Fingerprint]]&lt;br /&gt;
&lt;br /&gt;
Application fingerprint is the first step of the Information Gathering process; knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing. &lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Discovery|4.2.2 Application Discovery]]&lt;br /&gt;
&lt;br /&gt;
Application discovery is an activity oriented to the identification of the web applications hosted on a web server/application server.&amp;lt;br&amp;gt;&lt;br /&gt;
This analysis is important because many times there is no direct link connecting to the main application back end. Discovery analysis can be useful to reveal details such as web-apps used for administrative purposes. In addition, it can reveal old versions of files or artifacts such as undeleted, obsolete scripts crafted during the test/development phase or as the result of maintenance.&lt;br /&gt;
&lt;br /&gt;
[[Testing: Spidering and googling|4.2.3 Spidering and Googling]]&lt;br /&gt;
&lt;br /&gt;
This phase of the Information Gathering process consists of browsing and capturing resources related to the application being tested. Search engines, such as Google, can be used to discover issues related to the web application structure or error pages produced by the application that have been exposed to the public domain.&lt;br /&gt;
&lt;br /&gt;
[[Testing for Error Code|4.2.4 Analysis of Error Code]]&lt;br /&gt;
&lt;br /&gt;
Web applications may divulge information during a penetration test which is not intended to be seen by an end user. Information such as error codes can inform the tester about technologies and products being used by the application.&amp;lt;br&amp;gt;&lt;br /&gt;
In many cases, error codes can be easily invoked without the need for specialist skills or tools due to bad exception handling design and coding. &lt;br /&gt;
&lt;br /&gt;
[[Testing for infrastructure configuration management|4.2.5 Infrastructure Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
Often analysis of the infrastructure and topology architecture can reveal a great deal about a web application. Information such as source code, HTTP methods permitted, administrative functionality, authentication methods and infrastructural configurations can be obtained.&amp;lt;br&amp;gt;&lt;br /&gt;
Clearly, focusing only on the web application will not be an exhaustive test. It cannot be as comprehensive as the information possibly gathered by performing a broader infrastructure analysis.  &lt;br /&gt;
&lt;br /&gt;
[[Testing for SSL-TLS|4.2.5.1 SSL/TLS Testing]]&lt;br /&gt;
&lt;br /&gt;
SSL and TLS are two protocols that provide, with the support of cryptography, secure channels for the protection, confidentiality, and authentication of the information being transmitted.&amp;lt;br&amp;gt;&lt;br /&gt;
Considering the criticality of these security implementations, it is important to verify the usage of a strong cipher algorithm and its proper implementation.&lt;br /&gt;
&lt;br /&gt;
[[Testing for DB Listener|4.2.5.2 DB Listener Testing]]&lt;br /&gt;
&lt;br /&gt;
During the configuration of a database server, many DB administrators do not adequately consider the security of the DB listener component. The listener could reveal sensitive data as well as configuration settings or running database instances if insecurely configured and probed with manual or automated techniques. Information revealed will often be useful to a tester serving as input to more impacting follow-on tests.&lt;br /&gt;
&lt;br /&gt;
[[Testing for application configuration management|4.2.6 Application Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
Web applications hide some information that is usually not considered during the development or configuration of the application itself.&amp;lt;br&amp;gt;&lt;br /&gt;
This data can be discovered in the source code, in the log files or in the default error codes of the web servers. A correct approach to this topic is fundamental during a security assessment.&lt;br /&gt;
&lt;br /&gt;
[[Testing for file extensions handling|4.2.6.1 Testing for File Extensions Handling]]&lt;br /&gt;
&lt;br /&gt;
The file extensions present in a web server or a web application make it possible to identify the technologies which compose the target application, e.g. jsp and asp extensions. File extensions can also expose additional systems connected to the application.&lt;br /&gt;
&lt;br /&gt;
[[Testing for old_file|4.2.6.2 Old, Backup and Unreferenced Files]]&lt;br /&gt;
&lt;br /&gt;
Redundant, readable and downloadable files on a web server, such as old, backup and renamed files, are a big source of information leakage. It is necessary to verify the presence of these files because they may contain parts of source code, installation paths as well as passwords for applications and/or databases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16255</id>
		<title>OWASP Testing Guide v2 Table of Contents</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16255"/>
				<updated>2007-02-07T16:10:15Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* 4. Web Application Penetration Testing  */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Foreword|Foreword by OWASP Chair]]==&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Frontispiece |1. Frontispiece]]==&lt;br /&gt;
&lt;br /&gt;
'''[[Testing Guide Frontispiece|1.1 About the OWASP Testing Guide Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.1.1 Copyright&lt;br /&gt;
&lt;br /&gt;
1.1.2 Editors&lt;br /&gt;
&lt;br /&gt;
1.1.3 Authors and Reviewers&lt;br /&gt;
&lt;br /&gt;
1.1.4 Revision History&lt;br /&gt;
&lt;br /&gt;
1.1.5 Trademarks&lt;br /&gt;
&lt;br /&gt;
'''[[About The Open Web Application Security Project|1.2 About The Open Web Application Security Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.2.1 Overview&lt;br /&gt;
&lt;br /&gt;
1.2.2 Structure&lt;br /&gt;
&lt;br /&gt;
1.2.3 Licensing&lt;br /&gt;
&lt;br /&gt;
1.2.4 Participation and Membership&lt;br /&gt;
&lt;br /&gt;
1.2.5 Projects&lt;br /&gt;
&lt;br /&gt;
1.2.6 OWASP Privacy Policy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Introduction|2. Introduction]]==&lt;br /&gt;
&lt;br /&gt;
'''2.1 The OWASP Testing Project'''&lt;br /&gt;
&lt;br /&gt;
'''2.2 Principles of Testing'''&lt;br /&gt;
&lt;br /&gt;
'''2.3 Testing Techniques Explained''' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[The OWASP Testing Framework|3. The OWASP Testing Framework]]==&lt;br /&gt;
&lt;br /&gt;
'''3.1. Overview'''&lt;br /&gt;
&lt;br /&gt;
'''3.2. Phase 1: Before Development Begins '''&lt;br /&gt;
&lt;br /&gt;
'''3.3. Phase 2: During Definition and Design'''&lt;br /&gt;
&lt;br /&gt;
'''3.4. Phase 3: During Development'''&lt;br /&gt;
&lt;br /&gt;
'''3.5. Phase 4: During Deployment'''&lt;br /&gt;
&lt;br /&gt;
'''3.6. Phase 5: Maintenance and Operations'''&lt;br /&gt;
&lt;br /&gt;
'''3.7. A Typical SDLC Testing Workflow '''&lt;br /&gt;
&lt;br /&gt;
==[[Web Application Penetration Testing |4. Web Application Penetration Testing ]]==&lt;br /&gt;
&lt;br /&gt;
[[Testing: Introduction and objectives|'''4.1 Introduction and Objectives''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Information Gathering|'''4.2 Information Gathering''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Application Fingerprint|4.2.1 Testing Web Application Fingerprint]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Discovery|4.2.2 Application Discovery]]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Spidering and googling|4.2.3 Spidering and Googling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Error Code|4.2.4 Analysis of Error Codes]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for infrastructure configuration management|4.2.5 Infrastructure &lt;br /&gt;
Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSL-TLS|4.2.5.1 SSL/TLS Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DB Listener|4.2.5.2 DB Listener Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for application configuration management|4.2.6 Application Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for file extensions handling|4.2.6.1 Testing for File Extensions Handling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for old_file|4.2.6.2 Old, backup and unreferenced files]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for business logic|'''4.3 Business Logic Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for authentication|'''4.4 Authentication Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Default or Guessable User Account|4.4.1 Testing for Guessable (Dictionary) User Account]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Brute Force|4.4.2 Brute Force Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Bypassing Authentication Schema|4.4.3 Testing for bypassing authentication schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Directory Traversal|4.4.4 Testing for directory traversal/file include]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Vulnerable Remember Password and Pwd Reset|4.4.5 Testing for vulnerable remember &lt;br /&gt;
password and pwd reset]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Logout and Browser Cache Management|4.4.6 Logout and Browser Cache Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session Management|'''4.5 Session Management Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session_Management_Schema|4.5.1 Testing for Session Management Schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cookie and Session Token Manipulation|4.5.2 Testing for Cookie and Session Token Manipulation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Exposed Session Variables|4.5.3 Testing for Exposed Session Variables ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for CSRF|4.5.4 Testing for CSRF]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Exploit|4.5.5 Testing for HTTP Exploit ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Data Validation|'''4.6 Data Validation Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cross site scripting|4.6.1 Testing for Cross Site Scripting]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Methods and XST|4.6.1.1 Testing for HTTP Methods and XST ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Injection|4.6.2 Testing for SQL Injection ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Oracle|4.6.2.1 Oracle Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for MySQL|4.6.2.2 MySQL Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Server|4.6.2.3 SQL Server Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for LDAP Injection|4.6.3 Testing for LDAP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for ORM Injection|4.6.4 Testing for ORM Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Injection|4.6.5 Testing for XML Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSI Injection|4.6.6 Testing for SSI Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XPath Injection|4.6.7 Testing for XPath Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for IMAP/SMTP Injection|4.6.8 IMAP/SMTP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Code Injection|4.6.9 Testing for Code Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Command Injection|4.6.10 Testing for Command Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Buffer Overflow|4.6.11 Testing for Buffer overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Heap Overflow|4.6.11.1 Testing for Heap overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Stack Overflow|4.6.11.2 Testing for Stack overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Format String|4.6.11.3 Testing for Format string]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Incubated Vulnerability|4.6.12 Testing for incubated vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Denial of Service|'''4.7 Testing for Denial of Service''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Locking Customer Accounts|4.7.1 Testing for DoS Locking Customer Accounts]]	&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Buffer Overflows|4.7.2 Testing for DoS Buffer Overflows]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS User Specified Object Allocation|4.7.3 Testing for DoS User Specified Object Allocation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for User Input as a Loop Counter|4.7.4 Testing for User Input as a Loop Counter]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Writing User Provided Data to Disk|4.7.5 Testing for Writing User Provided Data to Disk]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Failure to Release Resources|4.7.6 Testing for DoS Failure to Release Resources]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Storing too Much Data in Session|4.7.7 Testing for Storing too Much Data in Session]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Services|'''4.8 Web Services Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Structural|4.8.1 XML Structural Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Content-Level|4.8.2 XML Content-level Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS HTTP GET parameters/REST attacks|4.8.3 HTTP GET parameters/REST Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Naughty SOAP Attachments|4.8.4 Testing for Naughty SOAP attachments]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS Replay|4.8.5 WS Replay Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_AJAX:_introduction|'''4.9 AJAX Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX Vulnerabilities|4.9.1 AJAX Vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX|4.9.2 How to test AJAX]]&lt;br /&gt;
&lt;br /&gt;
==[[Writing Reports: value the real risk |5. Writing Reports: value the real risk ]]==&lt;br /&gt;
&lt;br /&gt;
[[How to value the real risk |5.1 How to value the real risk]]&lt;br /&gt;
&lt;br /&gt;
[[How to write the report of the testing |5.2 How to write the report of the testing]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Appendix A: Testing Tools |Appendix A: Testing Tools ]]==&lt;br /&gt;
&lt;br /&gt;
* Black Box Testing Tools&lt;br /&gt;
* Source Code Analyzers&lt;br /&gt;
* Other Tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix B: Suggested Reading | Appendix B: Suggested Reading]]==&lt;br /&gt;
* Whitepapers&lt;br /&gt;
* Books&lt;br /&gt;
* Useful Websites&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix C: Fuzz Vectors | Appendix C: Fuzz Vectors]]==&lt;br /&gt;
&lt;br /&gt;
* Fuzz Categories&lt;br /&gt;
** Recursive fuzzing&lt;br /&gt;
** Replacive fuzzing&lt;br /&gt;
* Cross Site Scripting (XSS)&lt;br /&gt;
* Buffer Overflows and Format String Errors&lt;br /&gt;
** Buffer Overflows (BFO)&lt;br /&gt;
** Format String Errors (FSE)&lt;br /&gt;
** Integer Overflows (INT)&lt;br /&gt;
* SQL Injection&lt;br /&gt;
** Passive SQL Injection (SQP)&lt;br /&gt;
** Active SQL Injection (SQI)&lt;br /&gt;
* LDAP Injection&lt;br /&gt;
* XPath Injection&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Testing Project]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16254</id>
		<title>OWASP Testing Guide v2 Table of Contents</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16254"/>
				<updated>2007-02-07T16:09:42Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* 4. Web Application Penetration Testing  */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Foreword|Foreword by OWASP Chair]]==&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Frontispiece |1. Frontispiece]]==&lt;br /&gt;
&lt;br /&gt;
'''[[Testing Guide Frontispiece|1.1 About the OWASP Testing Guide Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.1.1 Copyright&lt;br /&gt;
&lt;br /&gt;
1.1.2 Editors&lt;br /&gt;
&lt;br /&gt;
1.1.3 Authors and Reviewers&lt;br /&gt;
&lt;br /&gt;
1.1.4 Revision History&lt;br /&gt;
&lt;br /&gt;
1.1.5 Trademarks&lt;br /&gt;
&lt;br /&gt;
'''[[About The Open Web Application Security Project|1.2 About The Open Web Application Security Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.2.1 Overview&lt;br /&gt;
&lt;br /&gt;
1.2.2 Structure&lt;br /&gt;
&lt;br /&gt;
1.2.3 Licensing&lt;br /&gt;
&lt;br /&gt;
1.2.4 Participation and Membership&lt;br /&gt;
&lt;br /&gt;
1.2.5 Projects&lt;br /&gt;
&lt;br /&gt;
1.2.6 OWASP Privacy Policy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Introduction|2. Introduction]]==&lt;br /&gt;
&lt;br /&gt;
'''2.1 The OWASP Testing Project'''&lt;br /&gt;
&lt;br /&gt;
'''2.2 Principles of Testing'''&lt;br /&gt;
&lt;br /&gt;
'''2.3 Testing Techniques Explained''' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[The OWASP Testing Framework|3. The OWASP Testing Framework]]==&lt;br /&gt;
&lt;br /&gt;
'''3.1. Overview'''&lt;br /&gt;
&lt;br /&gt;
'''3.2. Phase 1: Before Development Begins '''&lt;br /&gt;
&lt;br /&gt;
'''3.3. Phase 2: During Definition and Design'''&lt;br /&gt;
&lt;br /&gt;
'''3.4. Phase 3: During Development'''&lt;br /&gt;
&lt;br /&gt;
'''3.5. Phase 4: During Deployment'''&lt;br /&gt;
&lt;br /&gt;
'''3.6. Phase 5: Maintenance and Operations'''&lt;br /&gt;
&lt;br /&gt;
'''3.7. A Typical SDLC Testing Workflow '''&lt;br /&gt;
&lt;br /&gt;
==[[Web Application Penetration Testing |4. Web Application Penetration Testing ]]==&lt;br /&gt;
&lt;br /&gt;
[[Testing: Introduction and objectives|'''4.1 Introduction and Objectives''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Information Gathering|'''4.2 Information Gathering''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Application Fingerprint|4.2.1 Testing Web Application Fingerprint]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Discovery|4.2.2 Application Discovery]]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Spidering and googling|4.2.3 Spidering and Googling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Error Code|4.2.4 Analysis of Error Codes]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for infrastructure configuration management|4.2.5 Infrastructure &lt;br /&gt;
Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSL-TLS|4.2.5.1 SSL/TLS Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DB Listener|4.2.5.2 DB Listener Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for application configuration management|4.2.6 Application Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for File Extensions Handling|4.2.6.1 Testing for File Extensions Handling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for old_file|4.2.6.2 Old, backup and unreferenced files]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Business Logic|'''4.3 Business Logic Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Authentication|'''4.4 Authentication Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Default or Guessable User Account|4.4.1 Testing for Guessable (Dictionary) User Account]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Brute Force|4.4.2 Brute Force Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Bypassing Authentication Schema|4.4.3 Testing for bypassing authentication schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Directory Traversal|4.4.4 Testing for directory traversal/file include]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Vulnerable Remember Password and Pwd Reset|4.4.5 Testing for vulnerable remember &lt;br /&gt;
password and pwd reset]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Logout and Browser Cache Management|4.4.6 Logout and Browser Cache Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session Management|'''4.5 Session Management Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session_Management_Schema|4.5.1 Testing for Session Management Schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cookie and Session Token Manipulation|4.5.2 Testing for Cookie and Session Token Manipulation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Exposed Session Variables|4.5.3 Testing for Exposed Session Variables ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for CSRF|4.5.4 Testing for CSRF]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Exploit|4.5.5 Testing for HTTP Exploit ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Data Validation|'''4.6 Data Validation Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cross site scripting|4.6.1 Testing for Cross Site Scripting]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Methods and XST|4.6.1.1 Testing for HTTP Methods and XST ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Injection|4.6.2 Testing for SQL Injection ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Oracle|4.6.2.1 Oracle Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for MySQL|4.6.2.2 MySQL Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Server|4.6.2.3 SQL Server Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for LDAP Injection|4.6.3 Testing for LDAP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for ORM Injection|4.6.4 Testing for ORM Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Injection|4.6.5 Testing for XML Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSI Injection|4.6.6 Testing for SSI Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XPath Injection|4.6.7 Testing for XPath Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for IMAP/SMTP Injection|4.6.8 IMAP/SMTP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Code Injection|4.6.9 Testing for Code Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Command Injection|4.6.10 Testing for Command Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Buffer Overflow|4.6.11 Testing for Buffer overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Heap Overflow|4.6.11.1 Testing for Heap overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Stack Overflow|4.6.11.2 Testing for Stack overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Format String|4.6.11.3 Testing for Format string]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Incubated Vulnerability|4.6.12 Testing for incubated vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Denial of Service|'''4.7 Testing for Denial of Service''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Locking Customer Accounts|4.7.1 Testing for DoS Locking Customer Accounts]]	&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Buffer Overflows|4.7.2 Testing for DoS Buffer Overflows]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS User Specified Object Allocation|4.7.3 Testing for DoS User Specified Object Allocation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for User Input as a Loop Counter|4.7.4 Testing for User Input as a Loop Counter]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Writing User Provided Data to Disk|4.7.5 Testing for Writing User Provided Data to Disk]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Failure to Release Resources|4.7.6 Testing for DoS Failure to Release Resources]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Storing too Much Data in Session|4.7.7 Testing for Storing too Much Data in Session]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Services|'''4.8 Web Services Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Structural|4.8.1 XML Structural Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Content-Level|4.8.2 XML Content-level Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS HTTP GET parameters/REST attacks|4.8.3 HTTP GET parameters/REST Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Naughty SOAP Attachments|4.8.4 Testing for Naughty SOAP attachments]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS Replay|4.8.5 WS Replay Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_AJAX:_introduction|'''4.9 AJAX Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX Vulnerabilities|4.9.1 AJAX Vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX|4.9.2 How to test AJAX]]&lt;br /&gt;
&lt;br /&gt;
==[[Writing Reports: value the real risk |5. Writing Reports: value the real risk ]]==&lt;br /&gt;
&lt;br /&gt;
[[How to value the real risk |5.1 How to value the real risk]]&lt;br /&gt;
&lt;br /&gt;
[[How to write the report of the testing |5.2 How to write the report of the testing]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Appendix A: Testing Tools |Appendix A: Testing Tools ]]==&lt;br /&gt;
&lt;br /&gt;
* Black Box Testing Tools&lt;br /&gt;
* Source Code Analyzers&lt;br /&gt;
* Other Tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix B: Suggested Reading | Appendix B: Suggested Reading]]==&lt;br /&gt;
* Whitepapers&lt;br /&gt;
* Books&lt;br /&gt;
* Useful Websites&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix C: Fuzz Vectors | Appendix C: Fuzz Vectors]]==&lt;br /&gt;
&lt;br /&gt;
* Fuzz Categories&lt;br /&gt;
** Recursive fuzzing&lt;br /&gt;
** Replacive fuzzing&lt;br /&gt;
* Cross Site Scripting (XSS)&lt;br /&gt;
* Buffer Overflows and Format String Errors&lt;br /&gt;
** Buffer Overflows (BFO)&lt;br /&gt;
** Format String Errors (FSE)&lt;br /&gt;
** Integer Overflows (INT)&lt;br /&gt;
* SQL Injection&lt;br /&gt;
** Passive SQL Injection (SQP)&lt;br /&gt;
** Active SQL Injection (SQI)&lt;br /&gt;
* LDAP Injection&lt;br /&gt;
* XPath Injection&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Testing Project]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16253</id>
		<title>OWASP Testing Guide v2 Table of Contents</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16253"/>
				<updated>2007-02-07T16:09:06Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* 4. Web Application Penetration Testing  */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Foreword|Foreword by OWASP Chair]]==&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Frontispiece |1. Frontispiece]]==&lt;br /&gt;
&lt;br /&gt;
'''[[Testing Guide Frontispiece|1.1 About the OWASP Testing Guide Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.1.1 Copyright&lt;br /&gt;
&lt;br /&gt;
1.1.2 Editors&lt;br /&gt;
&lt;br /&gt;
1.1.3 Authors and Reviewers&lt;br /&gt;
&lt;br /&gt;
1.1.4 Revision History&lt;br /&gt;
&lt;br /&gt;
1.1.5 Trademarks&lt;br /&gt;
&lt;br /&gt;
'''[[About The Open Web Application Security Project|1.2 About The Open Web Application Security Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.2.1 Overview&lt;br /&gt;
&lt;br /&gt;
1.2.2 Structure&lt;br /&gt;
&lt;br /&gt;
1.2.3 Licensing&lt;br /&gt;
&lt;br /&gt;
1.2.4 Participation and Membership&lt;br /&gt;
&lt;br /&gt;
1.2.5 Projects&lt;br /&gt;
&lt;br /&gt;
1.2.6 OWASP Privacy Policy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Introduction|2. Introduction]]==&lt;br /&gt;
&lt;br /&gt;
'''2.1 The OWASP Testing Project'''&lt;br /&gt;
&lt;br /&gt;
'''2.2 Principles of Testing'''&lt;br /&gt;
&lt;br /&gt;
'''2.3 Testing Techniques Explained''' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[The OWASP Testing Framework|3. The OWASP Testing Framework]]==&lt;br /&gt;
&lt;br /&gt;
'''3.1. Overview'''&lt;br /&gt;
&lt;br /&gt;
'''3.2. Phase 1: Before Development Begins '''&lt;br /&gt;
&lt;br /&gt;
'''3.3. Phase 2: During Definition and Design'''&lt;br /&gt;
&lt;br /&gt;
'''3.4. Phase 3: During Development'''&lt;br /&gt;
&lt;br /&gt;
'''3.5. Phase 4: During Deployment'''&lt;br /&gt;
&lt;br /&gt;
'''3.6. Phase 5: Maintenance and Operations'''&lt;br /&gt;
&lt;br /&gt;
'''3.7. A Typical SDLC Testing Workflow '''&lt;br /&gt;
&lt;br /&gt;
==[[Web Application Penetration Testing |4. Web Application Penetration Testing ]]==&lt;br /&gt;
&lt;br /&gt;
[[Testing: Introduction and objectives|'''4.1 Introduction and Objectives''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Information Gathering|'''4.2 Information Gathering''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Application Fingerprint|4.2.1 Testing Web Application Fingerprint]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Discovery|4.2.2 Application Discovery]]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Spidering and googling|4.2.3 Spidering and Googling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Error Code|4.2.4 Analysis of Error Codes]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for infrastructure configuration management|4.2.5 Infrastructure &lt;br /&gt;
Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSL-TLS|4.2.5.1 SSL/TLS Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DB Listener|4.2.5.2 DB Listener Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Configuration Management|4.2.6 Application Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for File Extensions Handling|4.2.6.1 Testing for File Extensions Handling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for old_file|4.2.6.2 Old, backup and unreferenced files]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Business Logic|'''4.3 Business Logic Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Authentication|'''4.4 Authentication Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Default or Guessable User Account|4.4.1 Testing for Guessable (Dictionary) User Account]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Brute Force|4.4.2 Brute Force Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Bypassing Authentication Schema|4.4.3 Testing for bypassing authentication schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Directory Traversal|4.4.4 Testing for directory traversal/file include]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Vulnerable Remember Password and Pwd Reset|4.4.5 Testing for vulnerable remember &lt;br /&gt;
password and pwd reset]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Logout and Browser Cache Management|4.4.6 Logout and Browser Cache Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session Management|'''4.5 Session Management Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session_Management_Schema|4.5.1 Testing for Session Management Schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cookie and Session Token Manipulation|4.5.2 Testing for Cookie and Session Token Manipulation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Exposed Session Variables|4.5.3 Testing for Exposed Session Variables ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for CSRF|4.5.4 Testing for CSRF]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Exploit|4.5.5 Testing for HTTP Exploit ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Data Validation|'''4.6 Data Validation Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cross site scripting|4.6.1 Testing for Cross Site Scripting]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Methods and XST|4.6.1.1 Testing for HTTP Methods and XST ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Injection|4.6.2 Testing for SQL Injection ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Oracle|4.6.2.1 Oracle Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for MySQL|4.6.2.2 MySQL Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Server|4.6.2.3 SQL Server Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for LDAP Injection|4.6.3 Testing for LDAP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for ORM Injection|4.6.4 Testing for ORM Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Injection|4.6.5 Testing for XML Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSI Injection|4.6.6 Testing for SSI Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XPath Injection|4.6.7 Testing for XPath Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for IMAP/SMTP Injection|4.6.8 IMAP/SMTP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Code Injection|4.6.9 Testing for Code Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Command Injection|4.6.10 Testing for Command Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Buffer Overflow|4.6.11 Testing for Buffer overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Heap Overflow|4.6.11.1 Testing for Heap overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Stack Overflow|4.6.11.2 Testing for Stack overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Format String|4.6.11.3 Testing for Format string]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Incubated Vulnerability|4.6.12 Testing for incubated vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Denial of Service|'''4.7 Testing for Denial of Service''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Locking Customer Accounts|4.7.1 Testing for DoS Locking Customer Accounts]]	&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Buffer Overflows|4.7.2 Testing for DoS Buffer Overflows]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS User Specified Object Allocation|4.7.3 Testing for DoS User Specified Object Allocation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for User Input as a Loop Counter|4.7.4 Testing for User Input as a Loop Counter]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Writing User Provided Data to Disk|4.7.5 Testing for Writing User Provided Data to Disk]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Failure to Release Resources|4.7.6 Testing for DoS Failure to Release Resources]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Storing too Much Data in Session|4.7.7 Testing for Storing too Much Data in Session]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Services|'''4.8 Web Services Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Structural|4.8.1 XML Structural Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Content-Level|4.8.2 XML Content-level Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS HTTP GET parameters/REST attacks|4.8.3 HTTP GET parameters/REST Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Naughty SOAP Attachments|4.8.4 Testing for Naughty SOAP attachments]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS Replay|4.8.5 WS Replay Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_AJAX:_introduction|'''4.9 AJAX Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX Vulnerabilities|4.9.1 AJAX Vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX|4.9.2 How to test AJAX]]&lt;br /&gt;
&lt;br /&gt;
==[[Writing Reports: value the real risk |5. Writing Reports: value the real risk ]]==&lt;br /&gt;
&lt;br /&gt;
[[How to value the real risk |5.1 How to value the real risk]]&lt;br /&gt;
&lt;br /&gt;
[[How to write the report of the testing |5.2 How to write the report of the testing]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Appendix A: Testing Tools |Appendix A: Testing Tools ]]==&lt;br /&gt;
&lt;br /&gt;
* Black Box Testing Tools&lt;br /&gt;
* Source Code Analyzers&lt;br /&gt;
* Other Tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix B: Suggested Reading | Appendix B: Suggested Reading]]==&lt;br /&gt;
* Whitepapers&lt;br /&gt;
* Books&lt;br /&gt;
* Useful Websites&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix C: Fuzz Vectors | Appendix C: Fuzz Vectors]]==&lt;br /&gt;
&lt;br /&gt;
* Fuzz Categories&lt;br /&gt;
** Recursive fuzzing&lt;br /&gt;
** Replacive fuzzing&lt;br /&gt;
* Cross Site Scripting (XSS)&lt;br /&gt;
* Buffer Overflows and Format String Errors&lt;br /&gt;
** Buffer Overflows (BFO)&lt;br /&gt;
** Format String Errors (FSE)&lt;br /&gt;
** Integer Overflows (INT)&lt;br /&gt;
* SQL Injection&lt;br /&gt;
** Passive SQL Injection (SQP)&lt;br /&gt;
** Active SQL Injection (SQI)&lt;br /&gt;
* LDAP Injection&lt;br /&gt;
* XPath Injection&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Testing Project]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16252</id>
		<title>OWASP Testing Guide v2 Table of Contents</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16252"/>
				<updated>2007-02-07T16:07:49Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* 4. Web Application Penetration Testing  */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Foreword|Foreword by OWASP Chair]]==&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Frontispiece |1. Frontispiece]]==&lt;br /&gt;
&lt;br /&gt;
'''[[Testing Guide Frontispiece|1.1 About the OWASP Testing Guide Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.1.1 Copyright&lt;br /&gt;
&lt;br /&gt;
1.1.2 Editors&lt;br /&gt;
&lt;br /&gt;
1.1.3 Authors and Reviewers&lt;br /&gt;
&lt;br /&gt;
1.1.4 Revision History&lt;br /&gt;
&lt;br /&gt;
1.1.5 Trademarks&lt;br /&gt;
&lt;br /&gt;
'''[[About The Open Web Application Security Project|1.2 About The Open Web Application Security Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.2.1 Overview&lt;br /&gt;
&lt;br /&gt;
1.2.2 Structure&lt;br /&gt;
&lt;br /&gt;
1.2.3 Licensing&lt;br /&gt;
&lt;br /&gt;
1.2.4 Participation and Membership&lt;br /&gt;
&lt;br /&gt;
1.2.5 Projects&lt;br /&gt;
&lt;br /&gt;
1.2.6 OWASP Privacy Policy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Introduction|2. Introduction]]==&lt;br /&gt;
&lt;br /&gt;
'''2.1 The OWASP Testing Project'''&lt;br /&gt;
&lt;br /&gt;
'''2.2 Principles of Testing'''&lt;br /&gt;
&lt;br /&gt;
'''2.3 Testing Techniques Explained''' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[The OWASP Testing Framework|3. The OWASP Testing Framework]]==&lt;br /&gt;
&lt;br /&gt;
'''3.1. Overview'''&lt;br /&gt;
&lt;br /&gt;
'''3.2. Phase 1: Before Development Begins '''&lt;br /&gt;
&lt;br /&gt;
'''3.3. Phase 2: During Definition and Design'''&lt;br /&gt;
&lt;br /&gt;
'''3.4. Phase 3: During Development'''&lt;br /&gt;
&lt;br /&gt;
'''3.5. Phase 4: During Deployment'''&lt;br /&gt;
&lt;br /&gt;
'''3.6. Phase 5: Maintenance and Operations'''&lt;br /&gt;
&lt;br /&gt;
'''3.7. A Typical SDLC Testing Workflow '''&lt;br /&gt;
&lt;br /&gt;
==[[Web Application Penetration Testing |4. Web Application Penetration Testing ]]==&lt;br /&gt;
&lt;br /&gt;
[[Testing: Introduction and Objectives|'''4.1 Introduction and Objectives''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Information Gathering|'''4.2 Information Gathering''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Application Fingerprint|4.2.1 Testing Web Application Fingerprint]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Discovery|4.2.2 Application Discovery]]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Spidering and googling|4.2.3 Spidering and Googling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Error Code|4.2.4 Analysis of Error Codes]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for infrastructure configuration management|4.2.5 Infrastructure &lt;br /&gt;
Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSL-TLS|4.2.5.1 SSL/TLS Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DB Listener|4.2.5.2 DB Listener Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Configuration Management|4.2.6 Application Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for File Extensions Handling|4.2.6.1 Testing for File Extensions Handling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for old_file|4.2.6.2 Old, backup and unreferenced files]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Business Logic|'''4.3 Business Logic Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Authentication|'''4.4 Authentication Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Default or Guessable User Account|4.4.1 Testing for Guessable (Dictionary) User Account]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Brute Force|4.4.2 Brute Force Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Bypassing Authentication Schema|4.4.3 Testing for bypassing authentication schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Directory Traversal|4.4.4 Testing for directory traversal/file include]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Vulnerable Remember Password and Pwd Reset|4.4.5 Testing for vulnerable remember &lt;br /&gt;
password and pwd reset]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Logout and Browser Cache Management|4.4.6 Logout and Browser Cache Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session Management|'''4.5 Session Management Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session_Management_Schema|4.5.1 Testing for Session Management Schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cookie and Session Token Manipulation|4.5.2 Testing for Cookie and Session Token Manipulation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Exposed Session Variables|4.5.3 Testing for Exposed Session Variables ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for CSRF|4.5.4 Testing for CSRF]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Exploit|4.5.5 Testing for HTTP Exploit ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Data Validation|'''4.6 Data Validation Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cross site scripting|4.6.1 Testing for Cross Site Scripting]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Methods and XST|4.6.1.1 Testing for HTTP Methods and XST ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Injection|4.6.2 Testing for SQL Injection ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Oracle|4.6.2.1 Oracle Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for MySQL|4.6.2.2 MySQL Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Server|4.6.2.3 SQL Server Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for LDAP Injection|4.6.3 Testing for LDAP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for ORM Injection|4.6.4 Testing for ORM Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Injection|4.6.5 Testing for XML Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSI Injection|4.6.6 Testing for SSI Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XPath Injection|4.6.7 Testing for XPath Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for IMAP/SMTP Injection|4.6.8 IMAP/SMTP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Code Injection|4.6.9 Testing for Code Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Command Injection|4.6.10 Testing for Command Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Buffer Overflow|4.6.11 Testing for Buffer overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Heap Overflow|4.6.11.1 Testing for Heap overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Stack Overflow|4.6.11.2 Testing for Stack overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Format String|4.6.11.3 Testing for Format string]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Incubated Vulnerability|4.6.12 Testing for incubated vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Denial of Service|'''4.7 Testing for Denial of Service''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Locking Customer Accounts|4.7.1 Testing for DoS Locking Customer Accounts]]	&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Buffer Overflows|4.7.2 Testing for DoS Buffer Overflows]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS User Specified Object Allocation|4.7.3 Testing for DoS User Specified Object Allocation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for User Input as a Loop Counter|4.7.4 Testing for User Input as a Loop Counter]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Writing User Provided Data to Disk|4.7.5 Testing for Writing User Provided Data to Disk]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Failure to Release Resources|4.7.6 Testing for DoS Failure to Release Resources]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Storing too Much Data in Session|4.7.7 Testing for Storing too Much Data in Session]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Services|'''4.8 Web Services Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Structural|4.8.1 XML Structural Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Content-Level|4.8.2 XML Content-level Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS HTTP GET parameters/REST attacks|4.8.3 HTTP GET parameters/REST Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Naughty SOAP Attachments|4.8.4 Testing for Naughty SOAP attachments]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS Replay|4.8.5 WS Replay Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_AJAX:_introduction|'''4.9 AJAX Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX Vulnerabilities|4.9.1 AJAX Vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX|4.9.2 How to test AJAX]]&lt;br /&gt;
&lt;br /&gt;
==[[Writing Reports: value the real risk |5. Writing Reports: value the real risk ]]==&lt;br /&gt;
&lt;br /&gt;
[[How to value the real risk |5.1 How to value the real risk]]&lt;br /&gt;
&lt;br /&gt;
[[How to write the report of the testing |5.2 How to write the report of the testing]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Appendix A: Testing Tools |Appendix A: Testing Tools ]]==&lt;br /&gt;
&lt;br /&gt;
* Black Box Testing Tools&lt;br /&gt;
* Source Code Analyzers&lt;br /&gt;
* Other Tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix B: Suggested Reading | Appendix B: Suggested Reading]]==&lt;br /&gt;
* Whitepapers&lt;br /&gt;
* Books&lt;br /&gt;
* Useful Websites&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix C: Fuzz Vectors | Appendix C: Fuzz Vectors]]==&lt;br /&gt;
&lt;br /&gt;
* Fuzz Categories&lt;br /&gt;
** Recursive fuzzing&lt;br /&gt;
** Replacive fuzzing&lt;br /&gt;
* Cross Site Scripting (XSS)&lt;br /&gt;
* Buffer Overflows and Format String Errors&lt;br /&gt;
** Buffer Overflows (BFO)&lt;br /&gt;
** Format String Errors (FSE)&lt;br /&gt;
** Integer Overflows (INT)&lt;br /&gt;
* SQL Injection&lt;br /&gt;
** Passive SQL Injection (SQP)&lt;br /&gt;
** Active SQL Injection (SQI)&lt;br /&gt;
* LDAP Injection&lt;br /&gt;
* XPATH Injection&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Testing Project]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16251</id>
		<title>OWASP Testing Guide v2 Table of Contents</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16251"/>
				<updated>2007-02-07T16:07:22Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* 4. Web Application Penetration Testing  */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Foreword|Foreword by OWASP Chair]]==&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Frontispiece |1. Frontispiece]]==&lt;br /&gt;
&lt;br /&gt;
'''[[Testing Guide Frontispiece|1.1 About the OWASP Testing Guide Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.1.1 Copyright&lt;br /&gt;
&lt;br /&gt;
1.1.2 Editors&lt;br /&gt;
&lt;br /&gt;
1.1.3 Authors and Reviewers&lt;br /&gt;
&lt;br /&gt;
1.1.4 Revision History&lt;br /&gt;
&lt;br /&gt;
1.1.5 Trademarks&lt;br /&gt;
&lt;br /&gt;
'''[[About The Open Web Application Security Project|1.2 About The Open Web Application Security Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.2.1 Overview&lt;br /&gt;
&lt;br /&gt;
1.2.2 Structure&lt;br /&gt;
&lt;br /&gt;
1.2.3 Licensing&lt;br /&gt;
&lt;br /&gt;
1.2.4 Participation and Membership&lt;br /&gt;
&lt;br /&gt;
1.2.5 Projects&lt;br /&gt;
&lt;br /&gt;
1.2.6 OWASP Privacy Policy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Introduction|2. Introduction]]==&lt;br /&gt;
&lt;br /&gt;
'''2.1 The OWASP Testing Project'''&lt;br /&gt;
&lt;br /&gt;
'''2.2 Principles of Testing'''&lt;br /&gt;
&lt;br /&gt;
'''2.3 Testing Techniques Explained''' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[The OWASP Testing Framework|3. The OWASP Testing Framework]]==&lt;br /&gt;
&lt;br /&gt;
'''3.1. Overview'''&lt;br /&gt;
&lt;br /&gt;
'''3.2. Phase 1: Before Development Begins '''&lt;br /&gt;
&lt;br /&gt;
'''3.3. Phase 2: During Definition and Design'''&lt;br /&gt;
&lt;br /&gt;
'''3.4. Phase 3: During Development'''&lt;br /&gt;
&lt;br /&gt;
'''3.5. Phase 4: During Deployment'''&lt;br /&gt;
&lt;br /&gt;
'''3.6. Phase 5: Maintenance and Operations'''&lt;br /&gt;
&lt;br /&gt;
'''3.7. A Typical SDLC Testing Workflow '''&lt;br /&gt;
&lt;br /&gt;
==[[Web Application Penetration Testing |4. Web Application Penetration Testing ]]==&lt;br /&gt;
&lt;br /&gt;
[[Testing: Introduction and Objectives|'''4.1 Introduction and Objectives''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Information Gathering|'''4.2 Information Gathering''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Application Fingerprint|4.2.1 Testing Web Application Fingerprint]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Discovery|4.2.2 Application Discovery]]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Spidering and googling|4.2.3 Spidering and Googling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Error Code|4.2.4 Analysis of Error Codes]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Infrastructure Configuration Management|4.2.5 Infrastructure Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSL-TLS|4.2.5.1 SSL/TLS Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DB Listener|4.2.5.2 DB Listener Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Configuration Management|4.2.6 Application Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for File Extensions Handling|4.2.6.1 Testing for File Extensions Handling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for old_file|4.2.6.2 Old, backup and unreferenced files]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Business Logic|'''4.3 Business Logic Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Authentication|'''4.4 Authentication Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Default or Guessable User Account|4.4.1 Testing for Guessable (Dictionary) User Account]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Brute Force|4.4.2 Brute Force Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Bypassing Authentication Schema|4.4.3 Testing for bypassing authentication schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Directory Traversal|4.4.4 Testing for directory traversal/file include]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Vulnerable Remember Password and Pwd Reset|4.4.5 Testing for vulnerable remember password and pwd reset]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Logout and Browser Cache Management|4.4.6 Testing for Logout and Browser Cache Management]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session Management|'''4.5 Session Management Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session_Management_Schema|4.5.1 Testing for Session Management Schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cookie and Session Token Manipulation|4.5.2 Testing for Cookie and Session Token Manipulation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Exposed Session Variables|4.5.3 Testing for Exposed Session Variables ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for CSRF|4.5.4 Testing for CSRF]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Exploit|4.5.5 Testing for HTTP Exploit ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Data Validation|'''4.6 Data Validation Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cross site scripting|4.6.1 Testing for Cross Site Scripting]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Methods and XST|4.6.1.1 Testing for HTTP Methods and XST ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Injection|4.6.2 Testing for SQL Injection ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Oracle|4.6.2.1 Oracle Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for MySQL|4.6.2.2 MySQL Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Server|4.6.2.3 SQL Server Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for LDAP Injection|4.6.3 Testing for LDAP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for ORM Injection|4.6.4 Testing for ORM Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Injection|4.6.5 Testing for XML Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSI Injection|4.6.6 Testing for SSI Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XPath Injection|4.6.7 Testing for XPath Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for IMAP/SMTP Injection|4.6.8 IMAP/SMTP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Code Injection|4.6.9 Testing for Code Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Command Injection|4.6.10 Testing for Command Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Buffer Overflow|4.6.11 Testing for Buffer overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Heap Overflow|4.6.11.1 Testing for Heap overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Stack Overflow|4.6.11.2 Testing for Stack overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Format String|4.6.11.3 Testing for Format string]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Incubated Vulnerability|4.6.12 Testing for incubated vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Denial of Service|'''4.7 Testing for Denial of Service''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Locking Customer Accounts|4.7.1 Testing for DoS Locking Customer Accounts]]	&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Buffer Overflows|4.7.2 Testing for DoS Buffer Overflows]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS User Specified Object Allocation|4.7.3 Testing for DoS User Specified Object Allocation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for User Input as a Loop Counter|4.7.4 Testing for User Input as a Loop Counter]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Writing User Provided Data to Disk|4.7.5 Testing for Writing User Provided Data to Disk]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Failure to Release Resources|4.7.6 Testing for DoS Failure to Release Resources]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Storing too Much Data in Session|4.7.7 Testing for Storing too Much Data in Session]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Services|'''4.8 Web Services Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Structural|4.8.1 XML Structural Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Content-Level|4.8.2 XML Content-level Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS HTTP GET parameters/REST attacks|4.8.3 HTTP GET parameters/REST Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Naughty SOAP Attachments|4.8.4 Testing for Naughty SOAP attachments]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS Replay|4.8.5 WS Replay Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_AJAX:_introduction|'''4.9 AJAX Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX Vulnerabilities|4.9.1 AJAX Vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX|4.9.2 How to test AJAX]]&lt;br /&gt;
&lt;br /&gt;
==[[Writing Reports: value the real risk |5. Writing Reports: value the real risk ]]==&lt;br /&gt;
&lt;br /&gt;
[[How to value the real risk |5.1 How to value the real risk]]&lt;br /&gt;
&lt;br /&gt;
[[How to write the report of the testing |5.2 How to write the report of the testing]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Appendix A: Testing Tools |Appendix A: Testing Tools ]]==&lt;br /&gt;
&lt;br /&gt;
* Black Box Testing Tools&lt;br /&gt;
* Source Code Analyzers&lt;br /&gt;
* Other Tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix B: Suggested Reading | Appendix B: Suggested Reading]]==&lt;br /&gt;
* Whitepapers&lt;br /&gt;
* Books&lt;br /&gt;
* Useful Websites&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix C: Fuzz Vectors | Appendix C: Fuzz Vectors]]==&lt;br /&gt;
&lt;br /&gt;
* Fuzz Categories&lt;br /&gt;
** Recursive fuzzing&lt;br /&gt;
** Replacive fuzzing&lt;br /&gt;
* Cross Site Scripting (XSS)&lt;br /&gt;
* Buffer Overflows and Format String Errors&lt;br /&gt;
** Buffer Overflows (BFO)&lt;br /&gt;
** Format String Errors (FSE)&lt;br /&gt;
** Integer Overflows (INT)&lt;br /&gt;
* SQL Injection&lt;br /&gt;
** Passive SQL Injection (SQP)&lt;br /&gt;
** Active SQL Injection (SQI)&lt;br /&gt;
* LDAP Injection&lt;br /&gt;
* XPATH Injection&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Testing Project]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16250</id>
		<title>OWASP Testing Guide v2 Table of Contents</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Testing_Guide_v2_Table_of_Contents&amp;diff=16250"/>
				<updated>2007-02-07T16:05:58Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* 4. Web Application Penetration Testing  */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Foreword|Foreword by OWASP Chair]]==&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Frontispiece |1. Frontispiece]]==&lt;br /&gt;
&lt;br /&gt;
'''[[Testing Guide Frontispiece|1.1 About the OWASP Testing Guide Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.1.1 Copyright&lt;br /&gt;
&lt;br /&gt;
1.1.2 Editors&lt;br /&gt;
&lt;br /&gt;
1.1.3 Authors and Reviewers&lt;br /&gt;
&lt;br /&gt;
1.1.4 Revision History&lt;br /&gt;
&lt;br /&gt;
1.1.5 Trademarks&lt;br /&gt;
&lt;br /&gt;
'''[[About The Open Web Application Security Project|1.2 About The Open Web Application Security Project]]'''&lt;br /&gt;
&lt;br /&gt;
1.2.1 Overview&lt;br /&gt;
&lt;br /&gt;
1.2.2 Structure&lt;br /&gt;
&lt;br /&gt;
1.2.3 Licensing&lt;br /&gt;
&lt;br /&gt;
1.2.4 Participation and Membership&lt;br /&gt;
&lt;br /&gt;
1.2.5 Projects&lt;br /&gt;
&lt;br /&gt;
1.2.6 OWASP Privacy Policy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Testing Guide Introduction|2. Introduction]]==&lt;br /&gt;
&lt;br /&gt;
'''2.1 The OWASP Testing Project'''&lt;br /&gt;
&lt;br /&gt;
'''2.2 Principles of Testing'''&lt;br /&gt;
&lt;br /&gt;
'''2.3 Testing Techniques Explained''' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[The OWASP Testing Framework|3. The OWASP Testing Framework]]==&lt;br /&gt;
&lt;br /&gt;
'''3.1. Overview'''&lt;br /&gt;
&lt;br /&gt;
'''3.2. Phase 1: Before Development Begins '''&lt;br /&gt;
&lt;br /&gt;
'''3.3. Phase 2: During Definition and Design'''&lt;br /&gt;
&lt;br /&gt;
'''3.4. Phase 3: During Development'''&lt;br /&gt;
&lt;br /&gt;
'''3.5. Phase 4: During Deployment'''&lt;br /&gt;
&lt;br /&gt;
'''3.6. Phase 5: Maintenance and Operations'''&lt;br /&gt;
&lt;br /&gt;
'''3.7. A Typical SDLC Testing Workflow '''&lt;br /&gt;
&lt;br /&gt;
==[[Web Application Penetration Testing |4. Web Application Penetration Testing ]]==&lt;br /&gt;
&lt;br /&gt;
[[Testing: Introduction and Objectives|'''4.1 Introduction and Objectives''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Information Gathering|'''4.2 Information Gathering''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Application Fingerprint|4.2.1 Testing Web Application Fingerprint]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Discovery|4.2.2 Application Discovery]]&lt;br /&gt;
&lt;br /&gt;
[[Testing: Spidering and Googling|4.2.3 Spidering and Googling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Error Code|4.2.4 Analysis of Error Codes]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Infrastructure Configuration Management|4.2.5 Infrastructure Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSL-TLS|4.2.5.1 SSL/TLS Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DB Listener|4.2.5.2 DB Listener Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Application Configuration Management|4.2.6 Application Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for File Extensions Handling|4.2.6.1 Testing for File Extensions Handling]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for old_file|4.2.6.2 Old, backup and unreferenced files]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Business Logic|'''4.3 Business Logic Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Authentication|'''4.4 Authentication Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Default or Guessable User Account|4.4.1 Testing for Guessable (Dictionary) User Account]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Brute Force|4.4.2 Brute Force Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Bypassing Authentication Schema|4.4.3 Testing for bypassing authentication schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Directory Traversal|4.4.4 Testing for directory traversal/file include]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Vulnerable Remember Password and Pwd Reset|4.4.5 Testing for vulnerable remember password and pwd reset]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Logout and Browser Cache Management|4.4.6 Testing for Logout and Browser Cache Management]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session Management|'''4.5 Session Management Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Session_Management_Schema|4.5.1 Testing for Session Management Schema]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cookie and Session Token Manipulation|4.5.2 Testing for Cookie and Session Token Manipulation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Exposed Session Variables|4.5.3 Testing for Exposed Session Variables ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for CSRF|4.5.4 Testing for CSRF]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Exploit|4.5.5 Testing for HTTP Exploit ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Data Validation|'''4.6 Data Validation Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Cross site scripting|4.6.1 Testing for Cross Site Scripting]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for HTTP Methods and XST|4.6.1.1 Testing for HTTP Methods and XST ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Injection|4.6.2 Testing for SQL Injection ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Oracle|4.6.2.1 Oracle Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for MySQL|4.6.2.2 MySQL Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SQL Server|4.6.2.3 SQL Server Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for LDAP Injection|4.6.3 Testing for LDAP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for ORM Injection|4.6.4 Testing for ORM Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Injection|4.6.5 Testing for XML Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for SSI Injection|4.6.6 Testing for SSI Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XPath Injection|4.6.7 Testing for XPath Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for IMAP/SMTP Injection|4.6.8 IMAP/SMTP Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Code Injection|4.6.9 Testing for Code Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Command Injection|4.6.10 Testing for Command Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Buffer Overflow|4.6.11 Testing for Buffer overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Heap Overflow|4.6.11.1 Testing for Heap overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Stack Overflow|4.6.11.2 Testing for Stack overflow]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Format String|4.6.11.3 Testing for Format string]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Incubated Vulnerability|4.6.12 Testing for incubated vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Denial of Service|'''4.7 Testing for Denial of Service''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Locking Customer Accounts|4.7.1 Testing for DoS Locking Customer Accounts]]	&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Buffer Overflows|4.7.2 Testing for DoS Buffer Overflows]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS User Specified Object Allocation|4.7.3 Testing for DoS User Specified Object Allocation]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for User Input as a Loop Counter|4.7.4 Testing for User Input as a Loop Counter]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Writing User Provided Data to Disk|4.7.5 Testing for Writing User Provided Data to Disk]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for DoS Failure to Release Resources|4.7.6 Testing for DoS Failure to Release Resources]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Storing too Much Data in Session|4.7.7 Testing for Storing too Much Data in Session]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Web Services|'''4.8 Web Services Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Structural|4.8.1 XML Structural Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for XML Content-Level|4.8.2 XML Content-level Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS HTTP GET parameters/REST attacks|4.8.3 HTTP GET parameters/REST Testing ]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for Naughty SOAP Attachments|4.8.4 Testing for Naughty SOAP attachments]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for WS Replay|4.8.5 WS Replay Testing]]&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_AJAX:_introduction|'''4.9 AJAX Testing''']]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX Vulnerabilities|4.9.1 AJAX Vulnerabilities]]&lt;br /&gt;
&lt;br /&gt;
[[Testing for AJAX|4.9.2 How to test AJAX]]&lt;br /&gt;
&lt;br /&gt;
==[[Writing Reports: value the real risk |5. Writing Reports: value the real risk ]]==&lt;br /&gt;
&lt;br /&gt;
[[How to value the real risk |5.1 How to value the real risk]]&lt;br /&gt;
&lt;br /&gt;
[[How to write the report of the testing |5.2 How to write the report of the testing]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[Appendix A: Testing Tools |Appendix A: Testing Tools ]]==&lt;br /&gt;
&lt;br /&gt;
* Black Box Testing Tools&lt;br /&gt;
* Source Code Analyzers&lt;br /&gt;
* Other Tools&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix B: Suggested Reading | Appendix B: Suggested Reading]]==&lt;br /&gt;
* Whitepapers&lt;br /&gt;
* Books&lt;br /&gt;
* Useful Websites&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==[[OWASP Testing Guide Appendix C: Fuzz Vectors | Appendix C: Fuzz Vectors]]==&lt;br /&gt;
&lt;br /&gt;
* Fuzz Categories&lt;br /&gt;
** Recursive fuzzing&lt;br /&gt;
** Replacive fuzzing&lt;br /&gt;
* Cross Site Scripting (XSS)&lt;br /&gt;
* Buffer Overflows and Format String Errors&lt;br /&gt;
** Buffer Overflows (BFO)&lt;br /&gt;
** Format String Errors (FSE)&lt;br /&gt;
** Integer Overflows (INT)&lt;br /&gt;
* SQL Injection&lt;br /&gt;
** Passive SQL Injection (SQP)&lt;br /&gt;
** Active SQL Injection (SQI)&lt;br /&gt;
* LDAP Injection&lt;br /&gt;
* XPATH Injection&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Testing Project]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=16249</id>
		<title>Testing: Spidering and googling</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=16249"/>
				<updated>2007-02-07T16:00:39Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Googling */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
This section describes how to retrieve information about the application being tested using spidering and googling techniques.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Web spiders are among the most powerful and useful tools on the internet, developed for both good and bad intentions. A spider serves one major function: data mining. A typical spider (such as Google's) works by crawling a web site one page at a time, gathering and storing relevant information such as email addresses, meta tags, hidden form data, URL information, and links. The spider then follows all the links on that page, collecting relevant information from each subsequent page, and so on. Before long, the spider has crawled thousands of links and pages, gathering bits of information and storing them in a database. This web of paths is where the term 'spider' comes from. &lt;br /&gt;
&lt;br /&gt;
The Google search engine, found at http://www.google.com, offers many features, including language and document translation; web, image, newsgroup, catalog, and news searches; and more. These features offer obvious benefits to even the most uninitiated web surfer, but the same features offer far more nefarious possibilities to the most malicious Internet users, including hackers, computer criminals, identity thieves, and even terrorists. This section outlines the more harmful applications of the Google search engine, techniques that have collectively been termed &amp;quot;Google Hacking.&amp;quot;&lt;br /&gt;
In 1992 there were about 15,000 web sites; by 2006 the number had exceeded 100 million. What if a simple query to a search engine like Google, such as &amp;quot;Hackable Websites w/ Credit Card Information&amp;quot;, produced a list of websites that each exposed the credit card data of thousands of customers?  &lt;br /&gt;
If an attacker knows of a web application that stores a clear-text password file in a directory and wants to gather such targets, he can search on &amp;quot;intitle:&amp;quot;Index of&amp;quot; .mysql_history&amp;quot;, and the search engine will provide a list of target systems that may divulge these database usernames and passwords (out of the roughly 100 million web sites available). Or perhaps the attacker has a new method to attack Lotus Notes web servers and simply wants to see how many targets are on the internet: he can search on &amp;quot;inurl:domcfg.nsf&amp;quot;. Apply the same logic to a worm looking for its next victim.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
===Spidering===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
Our goal is to create a map of the application with all of its points of access (gates).&lt;br /&gt;
This map will be useful in the second, active phase of penetration testing.&lt;br /&gt;
You can use a tool such as wget (powerful and very easy to use) to retrieve all the information published by the application.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -s option (--save-headers) instructs wget to save the HTTP response headers sent by the server, prepending them to the downloaded content. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -s &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Server: Apache/1.3.37 (Unix) mod_jk/1.2.8 mod_deflate/1.0.21 PHP/5.1.6 mod_auth_passthrough/1.8 mod_log_bytes/1.2 mod_bwlimited/1.4 FrontPage/5.0.2.2634a mod_ssl/2.8.28 OpenSSL/0.9.7a&lt;br /&gt;
X-Powered-By: PHP/5.1.6&lt;br /&gt;
Set-Cookie: PHPSESSID=b7f5c903f8fdc254ccda8dc33651061f; expires=Friday, 05-Jan-07 00:19:59 GMT; path=/&lt;br /&gt;
Expires: Sun, 19 Nov 1978 05:00:00 GMT&lt;br /&gt;
Last-Modified: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Cache-Control: no-store, no-cache, must-revalidate&lt;br /&gt;
Cache-Control: post-check=0, pre-check=0&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=utf-8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
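Headers captured this way can be mined for fingerprinting data. As a minimal sketch (the file name and header text below are illustrative stand-ins, not output from a real target), standard text tools pull out the banner lines:&lt;br /&gt;

```shell
# Save a sample of captured response headers (stand-in data, not a real host).
printf '%s\n' 'HTTP/1.1 200 OK' \
  'Server: Apache/1.3.37 (Unix) PHP/5.1.6' \
  'X-Powered-By: PHP/5.1.6' > headers.txt

# Extract the lines that reveal server and platform versions.
grep -iE '^(Server|X-Powered-By):' headers.txt
```

The same pattern applies unchanged to the headers prepended by wget -s in a real run.&lt;br /&gt;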
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -r option retrieves the web site's content recursively, while the -D option restricts the requests to the specified domain.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -r -D &amp;lt;domain&amp;gt; &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
22:13:55 (15.73 KB/s) - `www.******.org/indice/13' saved [8379]&lt;br /&gt;
&lt;br /&gt;
--22:13:55--  http://www.******.org/*****/********&lt;br /&gt;
           =&amp;gt; `www.******.org/*****/********'&lt;br /&gt;
Connecting to www.******.org[xx.xxx.xxx.xx]:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/html]&lt;br /&gt;
&lt;br /&gt;
    [   &amp;lt;=&amp;gt;                                                                                                                                                                ] 11,308        17.72K/s                     &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
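The recursive download log itself already yields the application map. As a hedged sketch (the log lines and host name below are invented stand-ins for real wget -r output), the distinct URLs that wget visited can be listed like this:&lt;br /&gt;

```shell
# Create a stand-in wget log; a real run would use the actual wget output.
printf '%s\n' \
  '--22:13:55--  http://www.example.org/index.php' \
  '--22:13:56--  http://www.example.org/login.php' \
  '--22:13:57--  http://www.example.org/index.php' > wget.log

# Print each discovered URL once: this is the map of access points (gates).
grep -o 'http://[^ ]*' wget.log | sort -u
```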
&lt;br /&gt;
===Googling===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
The goal of this activity is to find information about a single web site published on the internet, or to find instances of a specific kind of application, such as Webmin or VNC.&lt;br /&gt;
Tools are available to assist with this technique, for example googlegath, but it is also possible to perform the searches manually using Google's web search facilities. This does not require specialist technical skills and is a good way to collect information about a web target.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Useful Google Advanced Search techniques '''&lt;br /&gt;
&lt;br /&gt;
* Use the plus sign (+) to force a search for an overly common word. Use the minus sign (-) to exclude a term from a search. No spaces follow these signs.&lt;br /&gt;
* To search for a phrase, supply the phrase surrounded by double quotes (&amp;quot; &amp;quot;).&lt;br /&gt;
* A period (.) serves as a single-character wildcard.&lt;br /&gt;
* An asterisk (*) represents any whole word, not the completion of a word as wildcards traditionally do.&lt;br /&gt;
&lt;br /&gt;
Google advanced operators help refine searches. Advanced operators use the following syntax: operator:search_term. Notice that there is no space between the operator, the colon, and the search term. A list of operators and search terms follows:&lt;br /&gt;
* The ''site'' operator instructs Google to restrict a search to a specific web site or domain. The web site to search must be supplied after the colon.&lt;br /&gt;
* The ''filetype'' operator instructs Google to search only within the text of a particular type of file. The file type to search must be supplied after the colon. Don't include a period before the file extension.&lt;br /&gt;
* The ''link'' operator instructs Google to search within hyperlinks for a search term.&lt;br /&gt;
* The ''cache'' operator displays the version of a web page as it appeared when Google crawled the site. The URL of the site must be supplied after the colon.&lt;br /&gt;
* The ''intitle'' operator instructs Google to search for a term within the title of a document.&lt;br /&gt;
* The ''inurl'' operator instructs Google to search only within the URL (web address) of a document. The search term must follow the colon.&lt;br /&gt;
&lt;br /&gt;
The following is a set of googling examples (for a complete list, see [1]):&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxxx.ca AND intitle:&amp;quot;index.of&amp;quot; &amp;quot;backup&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The site: operator restricts the search to a specific domain, while the intitle: operator makes it possible to find pages that contain &amp;quot;index of backup&amp;quot; in their title.&amp;lt;br&amp;gt;&lt;br /&gt;
The AND boolean operator is used to combine multiple conditions in the same query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Index of /backup/&lt;br /&gt;
&lt;br /&gt;
 Name                    Last modified       Size  Description&lt;br /&gt;
&lt;br /&gt;
 Parent Directory        21-Jul-2004 17:48      -  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;Login to Webmin&amp;quot; inurl:10000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The query returns every Webmin authentication interface that Google collected during its spidering process.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxx.org AND filetype:wsdl wsdl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The filetype operator is used to find specific kinds of files on the web site.&lt;br /&gt;
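&lt;br /&gt;
The ''cache'' operator described above can be used in the same way; the domain below is a placeholder:&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cache:www.xxxxx.ca&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
Google displays its stored copy of the page, which may still contain content that has since been removed from the live site.&lt;br /&gt;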
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] Johnny Long: &amp;quot;Google Hacking&amp;quot; - http://johnny.ihackstuff.com&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Google – http://www.google.com&amp;lt;br&amp;gt;&lt;br /&gt;
* wget - http://www.gnu.org/software/wget/&lt;br /&gt;
* Foundstone SiteDigger - http://www.foundstone.com/index.htm?subnav=resources/navigation.htm&amp;amp;subcontent=/resources/proddesc/sitedigger.htm&lt;br /&gt;
* NTOInsight - http://www.ntobjectives.com/freeware/index.php&amp;lt;br&amp;gt;&lt;br /&gt;
* Burp Spider - http://portswigger.net/spider/&amp;lt;br&amp;gt;&lt;br /&gt;
* Wikto - http://www.sensepost.com/research/wikto/&amp;lt;BR&amp;gt;&lt;br /&gt;
* Googlegath - http://www.nothink.org/perl/googlegath/&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=16247</id>
		<title>Testing: Spidering and googling</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=16247"/>
				<updated>2007-02-07T15:30:46Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Spidering */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
This section describes how to retrieve information about the application being tested using spidering and googling techniques.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Web spiders are the most powerful and useful tools developed for both good and bad intentions on the internet. A spider serves one major function, Data Mining. The way a typical spider (like Google) works is by crawling a web site one page at a time, gathering and storing the relevant information such as email addresses, meta-tags, hidden form data, URL information, links, etc. The spider then crawls all the links in that page, collecting relevant information in each following page, and so on. Before you know it, the spider has crawled thousands of links and pages gathering bits of information and storing it into a database. This web of paths is where the term 'spider' is derived from. &lt;br /&gt;
&lt;br /&gt;
The Google search engine found at http://www.google.com offers many features, including language and document translation; web, image, newsgroups, catalog, and news searches; and more. These features offer obvious benefits to even the most uninitiated web surfer, but these same features offer far more nefarious possibilities to the most malicious Internet users, including hackers, computer criminals, identity thieves, and even terrorists. This article outlines the more harmful applications of the Google search engine, techniques that have collectively been termed &amp;quot;Google Hacking.&amp;quot;&lt;br /&gt;
In 1992 there were about 15,000 web sites; by 2006 the number had exceeded 100 million.   What if a simple query to a search engine like Google such as &amp;quot;Hackable Websites w/ Credit Card Information&amp;quot; produced a list of websites that contained customer credit card data of thousands of customers per company?  &lt;br /&gt;
If the attacker is aware of a web application that stores a clear text password file in a directory and wants to gather these targets, then he could search on &amp;quot;intitle:&amp;quot;Index of&amp;quot; .mysql_history&amp;quot; and the search engine will provide him with a list of target systems that may divulge these database usernames and passwords (out of a possible 100 million web sites available). Or perhaps the attacker has a new method to attack a Lotus Notes web server and simply wants to see how many targets are on the internet, he could search on &amp;quot;inurl:domcfg.nsf&amp;quot;. Apply the same logic to a worm looking for its new victim.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
===Spidering===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
Our goal is to create a map of the application with all the points of access (gates) to the application.&lt;br /&gt;
This will be useful for the second active phase of penetration testing.&lt;br /&gt;
You can use a tool such as wget (powerful and very easy to use) to retrieve all the information published by the application.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -s option is used to save the HTTP headers of the server's responses. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -s &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Server: Apache/1.3.37 (Unix) mod_jk/1.2.8 mod_deflate/1.0.21 PHP/5.1.6 mod_auth_&lt;br /&gt;
passthrough/1.8 mod_log_bytes/1.2 mod_bwlimited/1.4 FrontPage/5.0.2.26&lt;br /&gt;
34a mod_ssl/2.8.28 OpenSSL/0.9.7a&lt;br /&gt;
X-Powered-By: PHP/5.1.6&lt;br /&gt;
Set-Cookie: PHPSESSID=b7f5c903f8fdc254ccda8dc33651061f; expires=Friday, 05-Jan-0&lt;br /&gt;
7 00:19:59 GMT; path=/&lt;br /&gt;
Expires: Sun, 19 Nov 1978 05:00:00 GMT&lt;br /&gt;
Last-Modified: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Cache-Control: no-store, no-cache, must-revalidate&lt;br /&gt;
Cache-Control: post-check=0, pre-check=0&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=utf-8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -r option is used to retrieve the web site's content recursively, and the -D option restricts requests to the specified domain.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -r -D &amp;lt;domain&amp;gt; &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
22:13:55 (15.73 KB/s) - `www.******.org/indice/13' saved [8379]&lt;br /&gt;
&lt;br /&gt;
--22:13:55--  http://www.******.org/*****/********&lt;br /&gt;
           =&amp;gt; `www.******.org/*****/********'&lt;br /&gt;
Connecting to www.******.org[xx.xxx.xxx.xx]:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/html]&lt;br /&gt;
&lt;br /&gt;
    [   &amp;lt;=&amp;gt;                                                                                                                                                                ] 11,308        17.72K/s                     &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Googling===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
The goal of this activity is to find information about a single web site published on the internet, or to find a specific kind of application such as Webmin or VNC.&lt;br /&gt;
There are many tools that carry out these specific queries, such as ''googlegath'', but it is also possible to perform this operation manually using Google's own search facilities.&lt;br /&gt;
This operation does not require specialist technical skills and is a good way to collect information about a web target.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Useful Google Advanced Search techniques'''&lt;br /&gt;
&lt;br /&gt;
* Use the plus sign (+) to force a search for an overly common word. Use the minus sign (-) to exclude a term from a search. No spaces follow these signs.&lt;br /&gt;
* To search for a phrase, supply the phrase surrounded by double quotes (&amp;quot; &amp;quot;).&lt;br /&gt;
* A period (.) serves as a single-character wildcard.&lt;br /&gt;
* An asterisk (*) represents any whole word, not the completion of a word as wildcards traditionally do.&lt;br /&gt;
&lt;br /&gt;
Google advanced operators help refine searches. Advanced operators use the following syntax: operator:search_term. Notice that there is no space between the operator, the colon, and the search term. A list of operators and search terms follows:&lt;br /&gt;
* The ''site'' operator instructs Google to restrict a search to a specific web site or domain. The web site to search must be supplied after the colon.&lt;br /&gt;
* The ''filetype'' operator instructs Google to search only within the text of a particular type of file. The file type to search must be supplied after the colon. Don't include a period before the file extension.&lt;br /&gt;
* The ''link'' operator instructs Google to search within hyperlinks for a search term.&lt;br /&gt;
* The ''cache'' operator displays the version of a web page as it appeared when Google crawled the site. The URL of the site must be supplied after the colon.&lt;br /&gt;
* The ''intitle'' operator instructs Google to search for a term within the title of a document.&lt;br /&gt;
* The ''inurl'' operator instructs Google to search only within the URL (web address) of a document. The search term must follow the colon.&lt;br /&gt;
&lt;br /&gt;
The following is a set of googling examples (for a complete list, see [1]):&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxxx.ca AND intitle:&amp;quot;index.of&amp;quot; &amp;quot;backup&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The site: operator restricts the search to a specific domain, while the intitle: operator makes it possible to find pages that contain &amp;quot;index of backup&amp;quot; in their title.&amp;lt;br&amp;gt;&lt;br /&gt;
The AND boolean operator is used to combine multiple conditions in the same query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Index of /backup/&lt;br /&gt;
&lt;br /&gt;
 Name                    Last modified       Size  Description&lt;br /&gt;
&lt;br /&gt;
 Parent Directory        21-Jul-2004 17:48      -  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;Login to Webmin&amp;quot; inurl:10000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The query returns every Webmin authentication interface that Google collected during its spidering process.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxx.org AND filetype:wsdl wsdl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The filetype operator is used to find specific kinds of files on the web site.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] Johnny Long: &amp;quot;Google Hacking&amp;quot; - http://johnny.ihackstuff.com&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Google – http://www.google.com&amp;lt;br&amp;gt;&lt;br /&gt;
* wget - http://www.gnu.org/software/wget/&lt;br /&gt;
* Foundstone SiteDigger - http://www.foundstone.com/index.htm?subnav=resources/navigation.htm&amp;amp;subcontent=/resources/proddesc/sitedigger.htm&lt;br /&gt;
* NTOInsight - http://www.ntobjectives.com/freeware/index.php&amp;lt;br&amp;gt;&lt;br /&gt;
* Burp Spider - http://portswigger.net/spider/&amp;lt;br&amp;gt;&lt;br /&gt;
* Wikto - http://www.sensepost.com/research/wikto/&amp;lt;BR&amp;gt;&lt;br /&gt;
* Googlegath - http://www.nothink.org/perl/googlegath/&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=16246</id>
		<title>Testing: Spidering and googling</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=16246"/>
				<updated>2007-02-07T15:23:57Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
This section describes how to retrieve information about the application being tested using spidering and googling techniques.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Web spiders are the most powerful and useful tools developed for both good and bad intentions on the internet. A spider serves one major function, Data Mining. The way a typical spider (like Google) works is by crawling a web site one page at a time, gathering and storing the relevant information such as email addresses, meta-tags, hidden form data, URL information, links, etc. The spider then crawls all the links in that page, collecting relevant information in each following page, and so on. Before you know it, the spider has crawled thousands of links and pages gathering bits of information and storing it into a database. This web of paths is where the term 'spider' is derived from. &lt;br /&gt;
&lt;br /&gt;
The Google search engine found at http://www.google.com offers many features, including language and document translation; web, image, newsgroups, catalog, and news searches; and more. These features offer obvious benefits to even the most uninitiated web surfer, but these same features offer far more nefarious possibilities to the most malicious Internet users, including hackers, computer criminals, identity thieves, and even terrorists. This article outlines the more harmful applications of the Google search engine, techniques that have collectively been termed &amp;quot;Google Hacking.&amp;quot;&lt;br /&gt;
In 1992 there were about 15,000 web sites; by 2006 the number had exceeded 100 million.   What if a simple query to a search engine like Google such as &amp;quot;Hackable Websites w/ Credit Card Information&amp;quot; produced a list of websites that contained customer credit card data of thousands of customers per company?  &lt;br /&gt;
If the attacker is aware of a web application that stores a clear text password file in a directory and wants to gather these targets, then he could search on &amp;quot;intitle:&amp;quot;Index of&amp;quot; .mysql_history&amp;quot; and the search engine will provide him with a list of target systems that may divulge these database usernames and passwords (out of a possible 100 million web sites available). Or perhaps the attacker has a new method to attack a Lotus Notes web server and simply wants to see how many targets are on the internet, he could search on &amp;quot;inurl:domcfg.nsf&amp;quot;. Apply the same logic to a worm looking for its new victim.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
===Spidering===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
Our goal is to create a map of the application with all the points of access (gates) to the application.&lt;br /&gt;
This will be useful for the second active phase of penetration testing.&lt;br /&gt;
You can use a tool such as wget (powerful and very easy to use) to retrieve all the information published by the application.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -s option is used to save the HTTP headers of the server's responses. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -s &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Server: Apache/1.3.37 (Unix) mod_jk/1.2.8 mod_deflate/1.0.21 PHP/5.1.6 mod_auth_&lt;br /&gt;
passthrough/1.8 mod_log_bytes/1.2 mod_bwlimited/1.4 FrontPage/5.0.2.26&lt;br /&gt;
34a mod_ssl/2.8.28 OpenSSL/0.9.7a&lt;br /&gt;
X-Powered-By: PHP/5.1.6&lt;br /&gt;
Set-Cookie: PHPSESSID=b7f5c903f8fdc254ccda8dc33651061f; expires=Friday, 05-Jan-0&lt;br /&gt;
7 00:19:59 GMT; path=/&lt;br /&gt;
Expires: Sun, 19 Nov 1978 05:00:00 GMT&lt;br /&gt;
Last-Modified: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Cache-Control: no-store, no-cache, must-revalidate&lt;br /&gt;
Cache-Control: post-check=0, pre-check=0&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=utf-8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -r option is used to retrieve the web site's content recursively, and the -D option restricts requests to the specified domain.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -r -D &amp;lt;domain&amp;gt; &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
22:13:55 (15.73 KB/s) - `www.******.org/indice/13' saved [8379]&lt;br /&gt;
&lt;br /&gt;
--22:13:55--  http://www.******.org/*****/********&lt;br /&gt;
           =&amp;gt; `www.******.org/*****/********'&lt;br /&gt;
Connecting to www.******.org[xx.xxx.xxx.xx]:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/html]&lt;br /&gt;
&lt;br /&gt;
    [   &amp;lt;=&amp;gt;                                                                                                                                                                ] 11,308        17.72K/s                     &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Googling===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
The goal of this activity is to find information about a single web site published on the internet, or to find a specific kind of application such as Webmin or VNC.&lt;br /&gt;
There are many tools that carry out these specific queries, such as ''googlegath'', but it is also possible to perform this operation manually using Google's own search facilities.&lt;br /&gt;
This operation does not require specialist technical skills and is a good way to collect information about a web target.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Useful Google Advanced Search techniques'''&lt;br /&gt;
&lt;br /&gt;
* Use the plus sign (+) to force a search for an overly common word. Use the minus sign (-) to exclude a term from a search. No spaces follow these signs.&lt;br /&gt;
* To search for a phrase, supply the phrase surrounded by double quotes (&amp;quot; &amp;quot;).&lt;br /&gt;
* A period (.) serves as a single-character wildcard.&lt;br /&gt;
* An asterisk (*) represents any whole word, not the completion of a word as wildcards traditionally do.&lt;br /&gt;
&lt;br /&gt;
Google advanced operators help refine searches. Advanced operators use the following syntax: operator:search_term. Notice that there is no space between the operator, the colon, and the search term. A list of operators and search terms follows:&lt;br /&gt;
* The ''site'' operator instructs Google to restrict a search to a specific web site or domain. The web site to search must be supplied after the colon.&lt;br /&gt;
* The ''filetype'' operator instructs Google to search only within the text of a particular type of file. The file type to search must be supplied after the colon. Don't include a period before the file extension.&lt;br /&gt;
* The ''link'' operator instructs Google to search within hyperlinks for a search term.&lt;br /&gt;
* The ''cache'' operator displays the version of a web page as it appeared when Google crawled the site. The URL of the site must be supplied after the colon.&lt;br /&gt;
* The ''intitle'' operator instructs Google to search for a term within the title of a document.&lt;br /&gt;
* The ''inurl'' operator instructs Google to search only within the URL (web address) of a document. The search term must follow the colon.&lt;br /&gt;
&lt;br /&gt;
The following is a set of googling examples (for a complete list, see [1]):&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxxx.ca AND intitle:&amp;quot;index.of&amp;quot; &amp;quot;backup&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The site: operator restricts the search to a specific domain, while the intitle: operator makes it possible to find pages that contain &amp;quot;index of backup&amp;quot; in their title.&amp;lt;br&amp;gt;&lt;br /&gt;
The AND boolean operator is used to combine multiple conditions in the same query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Index of /backup/&lt;br /&gt;
&lt;br /&gt;
 Name                    Last modified       Size  Description&lt;br /&gt;
&lt;br /&gt;
 Parent Directory        21-Jul-2004 17:48      -  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;Login to Webmin&amp;quot; inurl:10000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The query returns every Webmin authentication interface that Google collected during its spidering process.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxx.org AND filetype:wsdl wsdl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The filetype operator is used to find specific kinds of files on the web site.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] Johnny Long: &amp;quot;Google Hacking&amp;quot; - http://johnny.ihackstuff.com&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Google – http://www.google.com&amp;lt;br&amp;gt;&lt;br /&gt;
* wget - http://www.gnu.org/software/wget/&lt;br /&gt;
* Foundstone SiteDigger - http://www.foundstone.com/index.htm?subnav=resources/navigation.htm&amp;amp;subcontent=/resources/proddesc/sitedigger.htm&lt;br /&gt;
* NTOInsight - http://www.ntobjectives.com/freeware/index.php&amp;lt;br&amp;gt;&lt;br /&gt;
* Burp Spider - http://portswigger.net/spider/&amp;lt;br&amp;gt;&lt;br /&gt;
* Wikto - http://www.sensepost.com/research/wikto/&amp;lt;BR&amp;gt;&lt;br /&gt;
* Googlegath - http://www.nothink.org/perl/googlegath/&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=16242</id>
		<title>Testing: Spidering and googling</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=16242"/>
				<updated>2007-02-07T14:32:40Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Brief Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]] &amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
This section describes how to retrieve information about the application being tested using spidering and googling techniques.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Web spiders are the most powerful and useful tools developed for both good and bad intentions on the Internet. A spider serves one major function: data mining. The way a typical spider (like Google) works is by crawling a website one page at a time, gathering and storing the relevant information such as email addresses, meta-tags, hidden form data, URL information, links, etc. The spider then crawls all the links in that page, collecting relevant information in each following page, and so on. Before you know it, the spider has crawled thousands of links and pages, gathering bits of information and storing it into a database. This web of paths is where the term 'spider' is derived from. &lt;br /&gt;
&lt;br /&gt;
The Google search engine found at http://www.google.com offers many features, including language and document translation; web, image, newsgroups, catalog, and news searches; and more. These features offer obvious benefits to even the most uninitiated web surfer, but these same features offer far more nefarious possibilities to the most malicious Internet users, including hackers, computer criminals, identity thieves, and even terrorists. This article outlines the more harmful applications of the Google search engine, techniques that have collectively been termed &amp;quot;Google Hacking.&amp;quot;&lt;br /&gt;
In 1992 there were about 15,000 websites; by 2006 the number had exceeded 100 million.   What if a simple query to a search engine like Google such as &amp;quot;Hackable Websites w/ Credit Card Information&amp;quot; produced a list of websites that contained customer credit card data of thousands of customers per company?  &lt;br /&gt;
If the attacker is aware of a web application that stores a clear text password file in a directory and wants to gather these targets, he could search on &amp;quot;intitle:&amp;quot;Index of&amp;quot; .mysql_history&amp;quot; and the search engine will provide him with a list of target systems that may divulge these database usernames and passwords (out of a possible 100 million websites available). Or perhaps the attacker has a new method to attack a Lotus Notes web server and simply wants to see how many targets are on the Internet; he could search on &amp;quot;inurl:domcfg.nsf&amp;quot;. Apply the same logic to a worm looking for its new victim.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
===Spidering===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
Our goal is to create a map of the application with all the points of access (gates) to the application.&lt;br /&gt;
This will be useful for the second active phase of penetration testing.&lt;br /&gt;
You can use a tool such as wget (powerful and very easy to use) to retrieve all the information published by the application.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -s option is used to save the HTTP headers of the server's responses. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -s &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Server: Apache/1.3.37 (Unix) mod_jk/1.2.8 mod_deflate/1.0.21 PHP/5.1.6 mod_auth_&lt;br /&gt;
passthrough/1.8 mod_log_bytes/1.2 mod_bwlimited/1.4 FrontPage/5.0.2.26&lt;br /&gt;
34a mod_ssl/2.8.28 OpenSSL/0.9.7a&lt;br /&gt;
X-Powered-By: PHP/5.1.6&lt;br /&gt;
Set-Cookie: PHPSESSID=b7f5c903f8fdc254ccda8dc33651061f; expires=Friday, 05-Jan-0&lt;br /&gt;
7 00:19:59 GMT; path=/&lt;br /&gt;
Expires: Sun, 19 Nov 1978 05:00:00 GMT&lt;br /&gt;
Last-Modified: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Cache-Control: no-store, no-cache, must-revalidate&lt;br /&gt;
Cache-Control: post-check=0, pre-check=0&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=utf-8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -r option is used to retrieve the web site's content recursively, and the -D option restricts requests to the specified domain.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -r -D &amp;lt;domain&amp;gt; &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
22:13:55 (15.73 KB/s) - `www.******.org/indice/13' saved [8379]&lt;br /&gt;
&lt;br /&gt;
--22:13:55--  http://www.******.org/*****/********&lt;br /&gt;
           =&amp;gt; `www.******.org/*****/********'&lt;br /&gt;
Connecting to www.******.org[xx.xxx.xxx.xx]:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/html]&lt;br /&gt;
&lt;br /&gt;
    [   &amp;lt;=&amp;gt;                                                                                                                                                                ] 11,308        17.72K/s                     &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
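&lt;br /&gt;
After a recursive retrieval, wget leaves a local directory tree mirroring the site, and the application map can be derived from it. A minimal sketch follows; the host name and file paths are hypothetical stand-ins for a real mirror.&lt;br /&gt;

```shell
# Simulate the directory tree that wget -r would leave behind (hypothetical paths)
mkdir -p www.example.org/indice www.example.org/admin
printf 'x' > www.example.org/index.html
printf 'x' > www.example.org/indice/13
printf 'x' > www.example.org/admin/login.php

# Turn the mirrored files back into a flat list of gates to the application
find www.example.org -type f | sed 's|^|http://|' | sort
```

The resulting URL list is the starting point for the active testing phase.&lt;br /&gt;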
&lt;br /&gt;
===Googling===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
The scope of this activity is to find information published on the Internet about a single web site, or to locate a specific kind of application, such as Webmin or VNC.&lt;br /&gt;
There are tools, such as ''googlegath'', that automate these queries, but the same operation can also be performed directly with Google's web search.&lt;br /&gt;
This operation does not require a high level of technical skill and is a good way to collect information about a web target.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''' Tips for Advanced Search with Google '''&lt;br /&gt;
&lt;br /&gt;
* Use the plus sign (+) to force a search for an overly common word. Use the minus sign (-) to exclude a term from a search. No spaces follow these signs.&lt;br /&gt;
* To search for a phrase, supply the phrase surrounded by double quotes (&amp;quot; &amp;quot;).&lt;br /&gt;
* A period (.) serves as a single-character wildcard.&lt;br /&gt;
* An asterisk (*) represents any whole word, not the completion of a word as in traditional wildcard usage.&lt;br /&gt;
&lt;br /&gt;
Google advanced operators help refine searches. Advanced operators use the following syntax: operator:search_term (note that there is no space between the operator, the colon, and the search term). A list of operators and search terms follows:&lt;br /&gt;
* The ''site'' operator instructs Google to restrict a search to a specific web site or domain. The web site to search must be supplied after the colon.&lt;br /&gt;
* The ''filetype'' operator instructs Google to search only within the text of a particular type of file. The file type to search must be supplied after the colon. Don't include a period before the file extension.&lt;br /&gt;
* The ''link'' operator instructs Google to search within hyperlinks for a search term.&lt;br /&gt;
* The ''cache'' operator displays the version of a web page as it appeared when Google crawled the site. The URL of the site must be supplied after the colon.&lt;br /&gt;
* The ''intitle'' operator instructs Google to search for a term within the title of a document.&lt;br /&gt;
* The ''inurl'' operator instructs Google to search only within the URL (web address) of a document. The search term must follow the colon.&lt;br /&gt;
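&lt;br /&gt;
These operators can be combined mechanically once a target is chosen. The snippet below assembles a few typical reconnaissance queries for a domain; the domain name is a placeholder, and the resulting strings are meant to be pasted into the search box.&lt;br /&gt;

```shell
target="www.example.com"   # hypothetical target domain

# Browsable backup directories on the target
printf 'site:%s intitle:"index.of" "backup"\n' "$target"
# WSDL web service descriptors on the target
printf 'site:%s filetype:wsdl wsdl\n' "$target"
# Webmin login interfaces anywhere (not restricted to the target)
printf '"Login to Webmin" inurl:10000\n'
```
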
&lt;br /&gt;
The following is a set of Google search examples (for a complete list, see [1]):&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxxx.ca AND intitle:&amp;quot;index.of&amp;quot; &amp;quot;backup&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The ''site:'' operator restricts the search to a specific domain, while the ''intitle:'' operator makes it possible to find the pages that contain &amp;quot;index.of&amp;quot; in the title shown in Google's output.&amp;lt;br&amp;gt;&lt;br /&gt;
The AND boolean operator is used to combine multiple conditions in the same query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Index of /backup/&lt;br /&gt;
&lt;br /&gt;
 Name                    Last modified       Size  Description&lt;br /&gt;
&lt;br /&gt;
 Parent Directory        21-Jul-2004 17:48      -  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;Login to Webmin&amp;quot; inurl:10000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The query produces an output with every Webmin authentication interface collected by Google during the spidering process.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxx.org AND filetype:wsdl wsdl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The ''filetype'' operator is used to find specific kinds of files on the web site.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] Johnny Long: &amp;quot;Google Hacking&amp;quot; - http://johnny.ihackstuff.com&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Google – http://www.google.com&amp;lt;br&amp;gt;&lt;br /&gt;
* wget - http://www.gnu.org/software/wget/&lt;br /&gt;
* Foundstone SiteDigger - http://www.foundstone.com/index.htm?subnav=resources/navigation.htm&amp;amp;subcontent=/resources/proddesc/sitedigger.htm&lt;br /&gt;
* NTOInsight - http://www.ntobjectives.com/freeware/index.php&amp;lt;br&amp;gt;&lt;br /&gt;
* Burp Spider - http://portswigger.net/spider/&amp;lt;br&amp;gt;&lt;br /&gt;
* Wikto - http://www.sensepost.com/research/wikto/&amp;lt;BR&amp;gt;&lt;br /&gt;
* Googlegath - http://www.nothink.org/perl/googlegath/&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=16240</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=16240"/>
				<updated>2007-02-07T14:28:59Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Gray Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents#Web_Application_Penetration_Testing Up]]&lt;br /&gt;
&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A paramount step in testing for web application vulnerabilities is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications have known vulnerabilities and known attack strategies that can be exploited to gain remote control or to compromise application data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are misconfigured or left out of date, due to the perception that they are only used &amp;quot;internally&amp;quot; and therefore pose no threat.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use a common path for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1 relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites or applications whose symbolic names resolve to the same IP address (this scenario is not limited to hosting environments; it applies to ordinary corporate environments as well).&lt;br /&gt;
&lt;br /&gt;
As a security professional, you are sometimes given a set of IP addresses (or possibly just one) as a target to test, with no other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case, such an assignment is expected to test all web applications accessible through this target (and possibly other things). The problem is that the given IP address may host an http service on port 80 which, when accessed by specifying the IP address (which is all you know), reports &amp;quot;No web server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; any number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of the applications, only some of them, or none at all (because you do not notice them).&lt;br /&gt;
Sometimes the target specification is richer: maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names, and the client may not even be aware of that (this is more likely to happen in large organizations).&lt;br /&gt;
&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &lt;br /&gt;
&lt;br /&gt;
Web application discovery is a process aimed at identifying the web applications hosted on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix of the two.&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should be as comprehensive in scope as possible, i.e. it should identify all the applications accessible through the given target. The following examples present a few techniques that can be employed to achieve this goal. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&lt;br /&gt;
There are three factors influencing how many applications are related to a given DNS name (or an IP address):&lt;br /&gt;
&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as:&lt;br /&gt;
http://www.example.com/url1 &lt;br /&gt;
http://www.example.com/url2 &lt;br /&gt;
http://www.example.com/url3 &lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated to a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way, unless you don’t want them to be accessible in a standard way, and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location is not explicitly advertised.&lt;br /&gt;
&lt;br /&gt;
'''2. Non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&lt;br /&gt;
'''3. Virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows us to associate a single IP address with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not actually necessary that all the names belong to the same DNS domain). This 1-to-N relationship can be used to serve different content through so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&amp;lt;br&amp;gt;&lt;br /&gt;
There is no way to fully ascertain the existence of non-standard-named web applications. Being non-standard, there are no fixed criteria governing the naming convention; however, there are a number of techniques that the tester can use to gain some additional insight. &lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect.&lt;br /&gt;
Second, these applications may be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could do a bit of googling using the ''site'' operator and examining the result of a query for “site: www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from URLs such as https://www.example.com/webmail, https://webmail.example.com/, or https://mail.example.com/. The same holds for administrative interfaces, which may be published at hidden URLs (for example, a Tomcat administrative interface), and yet not referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
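&lt;br /&gt;
This dictionary-style probing is easy to script: expand a short wordlist of common application names into candidate URLs and feed each one to an HTTP client. The sketch below only generates the candidates; the wordlist and host are illustrative, and the actual request step is left as a comment since the appropriate client and error handling depend on the engagement.&lt;br /&gt;

```shell
host="www.example.com"   # hypothetical target
# Common names under which unpublished applications are often found
for app in webmail admin manager phpmyadmin tomcat; do
  printf 'https://%s/%s\n' "$host" "$app"
done
# Each printed URL would then be requested (e.g., with wget or curl) and
# any response other than 404 flagged for manual review.
```
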
&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is easy to check for the existence of web applications on non-standard ports. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or the indication of SSL-wrapped services (which should be probed to confirm they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that:&lt;br /&gt;
* There is an Apache http server running on port 80.&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser).&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface.&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon.&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents).&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is indeed an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD commands from the Perl libwww (LWP) suite, which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080.&lt;br /&gt;
&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – with respect to nmap – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
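&lt;br /&gt;
For large scans, nmap's grepable output format (-oG) makes it straightforward to filter just the http[s] candidates for follow-up. The sketch below works over a saved scan file; the file name and its contents are illustrative, not output from a real scan.&lt;br /&gt;

```shell
# Sample grepable nmap output (illustrative; a real file would come from nmap -oG)
printf 'Host: 192.168.1.100 () Ports: 22/open/tcp//ssh///, 80/open/tcp//http///, 443/open/tcp//ssl///, 8080/open/tcp//http///\n' > scan.gnmap

# List the ports whose detected service contains "http"
cat scan.gnmap | tr ',' '\n' | grep '/http/' | awk -F'/' '{gsub(/ /,"",$1); print $1}'
```

Each port printed (here 80 and 8080) is then probed with a browser or HTTP client to confirm that it serves a web application.&lt;br /&gt;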
&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify DNS names associated to a given IP address ''x.y.z.t''.&lt;br /&gt;
&lt;br /&gt;
''DNS zone transfers''&amp;lt;br&amp;gt;&lt;br /&gt;
This technique has limited use nowadays, given the fact that zone transfers are largely not honored by DNS servers. However, it may be worth a try.&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'', ''host'' or ''dig'' by requesting DNS NS records.&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example: identifying www.owasp.org name servers by using host&lt;br /&gt;
&lt;br /&gt;
$ host -t ns www.owasp.org&lt;br /&gt;
www.owasp.org is an alias for owasp.org.&lt;br /&gt;
owasp.org name server ns1.secure.net.&lt;br /&gt;
owasp.org name server ns2.secure.net.&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A zone transfer may now be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Trying to request a zone transfer for owasp.org from one of its name servers&lt;br /&gt;
&lt;br /&gt;
$ host -l www.owasp.org ns1.secure.net&lt;br /&gt;
Using domain server:&lt;br /&gt;
Name: ns1.secure.net&lt;br /&gt;
Address: 192.220.124.10#53&lt;br /&gt;
Aliases:&lt;br /&gt;
&lt;br /&gt;
Host www.owasp.org not found: 5(REFUSED)&lt;br /&gt;
; Transfer failed.&lt;br /&gt;
-bash-2.05b$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNS inverse queries''&amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic name maps, which is not guaranteed.&lt;br /&gt;
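&lt;br /&gt;
As a concrete illustration of what an inverse query asks for: a PTR lookup reverses the octets of the address and appends the in-addr.arpa suffix. The snippet below builds that query name offline from a placeholder address; the actual lookup would then be performed with a tool such as host or dig.&lt;br /&gt;

```shell
ip="192.168.1.100"   # placeholder address
# Build the in-addr.arpa name that a PTR query resolves
echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}'
# The real lookup would be:  host "$ip"   or:  dig -x "$ip" +short
```
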
&lt;br /&gt;
''Web-based DNS searches''&amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that perform name-based searches on DNS data. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names obtained are pertinent to the target you are examining.&lt;br /&gt;
&lt;br /&gt;
''Reverse-IP services''&amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &lt;br /&gt;
(requires free membership) &lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/  &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &lt;br /&gt;
(multiple services available) &lt;br /&gt;
&lt;br /&gt;
''msnpawn'': http://net-square.com/msnpawn/index.shtml &lt;br /&gt;
(multiple queries on domains and IP addresses, requires installation) &lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &lt;br /&gt;
(some services are still private at the time of writing) &lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &lt;br /&gt;
(reverse-IP/domain lookup) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example shows the result of querying one of the reverse-IP services above for 216.48.3.18, the IP address of www.owasp.org. Three additional, non-obvious symbolic names mapping to the same address are revealed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:Owasp-Info.jpg]]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Googling''&amp;lt;br&amp;gt;&lt;br /&gt;
Following information gathering with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs. &lt;br /&gt;
For instance, considering the previous example regarding ''www.owasp.org'', you could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of ''webgoat.org'', ''webscarab.com'', ''webscarab.net''.&lt;br /&gt;
Googling techniques are explained in [[Spidering and googling AoC]].&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology remains the same as listed in Black Box testing no matter how much information you start with.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
* DNS lookup tools such as ''nslookup'', ''dig'' or similar. &lt;br /&gt;
* Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &lt;br /&gt;
* Search engines (Google, and other major engines). &lt;br /&gt;
* Specialized DNS-related web-based search service: see text.&lt;br /&gt;
* nmap - http://www.insecure.org &lt;br /&gt;
* Nessus Vulnerability Scanner - http://www.nessus.org &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=16239</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=16239"/>
				<updated>2007-02-07T14:28:11Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents#Web_Application_Penetration_Testing Up]]&lt;br /&gt;
&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A paramount step in testing for web application vulnerabilities is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications have known vulnerabilities and known attack strategies that can be exploited to gain remote control or to compromise application data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are misconfigured or left out of date, due to the perception that they are only used &amp;quot;internally&amp;quot; and therefore pose no threat.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use a common path for their administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1 relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites or applications whose symbolic names resolve to the same IP address (this scenario is not limited to hosting environments; it applies to ordinary corporate environments as well).&lt;br /&gt;
&lt;br /&gt;
As a security professional, you are sometimes given a set of IP addresses (or possibly just one) as a target to test, with no other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case, such an assignment is expected to test all web applications accessible through this target (and possibly other things). The problem is that the given IP address may host an http service on port 80 which, when accessed by specifying the IP address (which is all you know), reports &amp;quot;No web server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; any number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of the applications, only some of them, or none at all (because you do not notice them).&lt;br /&gt;
Sometimes the target specification is richer: maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names, and the client may not even be aware of that (this is more likely to happen in large organizations).&lt;br /&gt;
&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &lt;br /&gt;
&lt;br /&gt;
Web application discovery is a process aimed at identifying the web applications hosted on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix of the two.&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should be as comprehensive in scope as possible, i.e. it should identify all the applications accessible through the given target. The following examples present a few techniques that can be employed to achieve this goal. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&lt;br /&gt;
There are three factors influencing how many applications are related to a given DNS name (or an IP address):&lt;br /&gt;
&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as:&lt;br /&gt;
http://www.example.com/url1 &lt;br /&gt;
http://www.example.com/url2 &lt;br /&gt;
http://www.example.com/url3 &lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated to a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way, unless you don’t want them to be accessible in a standard way, and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location is not explicitly advertised.&lt;br /&gt;
&lt;br /&gt;
'''2. Non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&lt;br /&gt;
'''3. Virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows us to associate a single IP address with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (actually, it is not necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content by means of so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&amp;lt;br&amp;gt;&lt;br /&gt;
There is no way to fully ascertain the existence of non-standard-named web applications. Being non-standard, there are no fixed criteria governing the naming convention; however, there are a number of techniques that the tester can use to gain some additional insight. &lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect.&lt;br /&gt;
Second, these applications may be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'', we could do a bit of googling using the ''site'' operator and examine the result of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from URLs such as https://www.example.com/webmail, https://webmail.example.com/, or https://mail.example.com/. The same holds for administrative interfaces, which may be published at hidden URLs (for example, a Tomcat administrative interface), and yet not referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
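As an illustration (not part of the original toolset), the dictionary-style probing described above can be sketched in a few lines of Python; the host name and candidate path list below are hypothetical examples, and a real engagement would use a much larger wordlist:&lt;br /&gt;

```python
# Sketch: probe a target for likely unpublished application paths.
# Host name and word list are hypothetical examples, not real targets.
import http.client

CANDIDATES = ["/webmail", "/admin", "/manager/html", "/phpmyadmin"]

def probe_paths(host, paths, port=80, timeout=5):
    """Return {path: status} for every path that does not answer 404."""
    found = {}
    for path in paths:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            # Some servers do not honor HEAD; fall back to GET if needed.
            conn.request("HEAD", path)
            status = conn.getresponse().status
            if status != 404:
                found[path] = status
        except OSError:
            pass  # connection refused, reset or timed out: treat as absent
        finally:
            conn.close()
    return found

# Usage (hypothetical): probe_paths("www.example.com", CANDIDATES)
```

Any non-404 answer (including redirects and 401/403 responses from protected administrative interfaces) is worth a closer manual look.&lt;br /&gt;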
&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is easy to check for the existence of web applications on non-standard ports. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or the indication of SSL-wrapped services (which should be probed to confirm they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that:&lt;br /&gt;
* There is an Apache http server running on port 80.&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser).&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface.&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon.&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents).&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is indeed an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD commands provided by Perl's libwww-perl, which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080.&lt;br /&gt;
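The manual probe above can also be scripted. The following Python sketch (an illustration, not part of the original text) sends the same minimal HTTP/1.0 request over a raw socket and extracts the status line and Server header:&lt;br /&gt;

```python
# Sketch: replicate the manual telnet probe with a raw socket, returning
# the status line and the Server header (if any) of the response.
import socket

def probe_http(host, port, timeout=5):
    """Send a minimal HTTP/1.0 GET and return (status_line, server_header)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(b"GET / HTTP/1.0\r\n\r\n")
        data = b""
        while b"\r\n\r\n" not in data:
            chunk = sock.recv(4096)
            if not chunk:
                break
            data += chunk
    header_block = data.split(b"\r\n\r\n", 1)[0].decode("latin-1")
    lines = header_block.split("\r\n")
    server = next((line.split(":", 1)[1].strip() for line in lines[1:]
                   if line.lower().startswith("server:")), None)
    return lines[0], server

# Usage (example values from the text): probe_http("192.168.1.100", 8000)
```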
&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and – unlike nmap – will also run a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify DNS names associated to a given IP address ''x.y.z.t''.&lt;br /&gt;
&lt;br /&gt;
''DNS zone transfers''&amp;lt;br&amp;gt;&lt;br /&gt;
This technique has limited use nowadays, given the fact that zone transfers are largely not honored by DNS servers. However, it may be worth a try.&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'', ''host'' or ''dig'' by requesting DNS NS records.&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example: identifying www.owasp.org name servers by using host&lt;br /&gt;
&lt;br /&gt;
$ host -t ns www.owasp.org&lt;br /&gt;
www.owasp.org is an alias for owasp.org.&lt;br /&gt;
owasp.org name server ns1.secure.net.&lt;br /&gt;
owasp.org name server ns2.secure.net.&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A zone transfer may now be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Trying to request a zone transfer for owasp.org from one of its name servers&lt;br /&gt;
&lt;br /&gt;
$ host -l www.owasp.org ns1.secure.net&lt;br /&gt;
Using domain server:&lt;br /&gt;
Name: ns1.secure.net&lt;br /&gt;
Address: 192.220.124.10#53&lt;br /&gt;
Aliases:&lt;br /&gt;
&lt;br /&gt;
Host www.owasp.org not found: 5(REFUSED)&lt;br /&gt;
; Transfer failed.&lt;br /&gt;
-bash-2.05b$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNS inverse queries''&amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic name maps, which is not guaranteed.&lt;br /&gt;
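As an illustration (not part of the original text), the standard library of most languages can issue this query directly; in Python:&lt;br /&gt;

```python
# Sketch: reverse (PTR) lookup using only the Python standard library.
# Many addresses simply have no PTR record, in which case the resolver
# raises an error and we report the address as unmapped.
import socket

def ptr_lookup(ip):
    """Return (primary_name, aliases) for ip, or None if no PTR mapping exists."""
    try:
        name, aliases, _addresses = socket.gethostbyaddr(ip)
        return name, aliases
    except OSError:
        return None

# Usage (hypothetical): ptr_lookup("192.168.1.100")
```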
&lt;br /&gt;
''Web-based DNS searches''&amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow you to perform name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', then check whether the names you obtained are pertinent to the target you are examining.&lt;br /&gt;
&lt;br /&gt;
''Reverse-IP services''&amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &lt;br /&gt;
(requires free membership) &lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/  &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &lt;br /&gt;
(multiple services available) &lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &lt;br /&gt;
(some services are still private at the time of writing) &lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &lt;br /&gt;
(reverse-IP/domain lookup) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example shows the result of a query submitted to one of the above reverse-IP services for 216.48.3.18, the IP address of www.owasp.org. Three additional non-obvious symbolic names mapping to the same address have been revealed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:Owasp-Info.jpg]]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Googling''&amp;lt;br&amp;gt;&lt;br /&gt;
Following information gathering with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs. &lt;br /&gt;
For instance, considering the previous example regarding ''www.owasp.org'', you could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of ''webgoat.org'', ''webscarab.com'', ''webscarab.net''.&lt;br /&gt;
Googling techniques are explained in [[Spidering and googling AoC]].&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology remains the same as listed in Black Box testing, no matter how much information you start with.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
* DNS lookup tools such as ''nslookup'', ''dig'' or similar. &lt;br /&gt;
* Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &lt;br /&gt;
* Search engines (Google, and other major engines). &lt;br /&gt;
* Specialized DNS-related web-based search service: see text.&lt;br /&gt;
* nmap - http://www.insecure.org &lt;br /&gt;
* Nessus Vulnerability Scanner - http://www.nessus.org &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=16237</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=16237"/>
				<updated>2007-02-07T12:43:34Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents#Web_Application_Penetration_Testing Up]]&lt;br /&gt;
&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A paramount step in testing for web application vulnerabilities is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many applications have known vulnerabilities and known attack strategies that can be exploited to gain remote control or to extract data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition, many applications are often misconfigured or not updated, due to the perception that they are only used &amp;quot;internally&amp;quot; and that therefore no threat exists.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use a common path for administrative interfaces, which can be used to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve to the same IP address (this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&lt;br /&gt;
As a security professional, you are sometimes given a set of IP addresses (or possibly just one) as a target to test, with no other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case, such an assignment would be expected to test all web applications accessible through this target (and possibly other things). The problem is that the given IP address may host an http service on port 80 which, when accessed by IP address (which is all you know), reports &amp;quot;No web server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; a number of web applications, associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, only some of them, or none at all because you failed to notice them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client may not even be aware of that (this is more likely to happen in large organizations).&lt;br /&gt;
&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &lt;br /&gt;
&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix of the two.&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive as possible in scope, i.e. it should identify all the applications accessible through the given target. In the following examples, we will examine a few techniques that can be employed to achieve this goal. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Note&amp;lt;/u&amp;gt;: Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as:&lt;br /&gt;
http://www.example.com/url1 &lt;br /&gt;
http://www.example.com/url2 &lt;br /&gt;
http://www.example.com/url3 &lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated to a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way, unless you don’t want them to be accessible in a standard way, and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location is not explicitly advertised.&lt;br /&gt;
&lt;br /&gt;
'''2. Non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&lt;br /&gt;
A third factor affects how many web applications are related to a given IP address.&lt;br /&gt;
&lt;br /&gt;
'''3. Virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows us to associate a single IP address with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (actually, it is not necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content by means of so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
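To make the mechanism concrete, here is a small Python sketch (an illustration using the hypothetical names above, not part of the original text) that requests the same IP address twice, varying only the Host header; a server configured with virtual hosts may return entirely different applications:&lt;br /&gt;

```python
# Sketch: fetch the same IP twice with different Host headers to
# observe virtual hosting. The names and address are the hypothetical
# examples used in the text.
import http.client

def fetch_vhost(ip, hostname, port=80, timeout=5):
    """GET / from ip, presenting hostname in the HTTP/1.1 Host header."""
    conn = http.client.HTTPConnection(ip, port, timeout=timeout)
    try:
        conn.request("GET", "/", headers={"Host": hostname})
        response = conn.getresponse()
        return response.status, response.read()
    finally:
        conn.close()

# Usage (hypothetical):
#   fetch_vhost("192.168.1.100", "www.example.com")
#   fetch_vhost("192.168.1.100", "webmail.example.com")
```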
&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&amp;lt;br&amp;gt;&lt;br /&gt;
There is no way to fully ascertain the existence of non-standard-named web applications. Being non-standard, there are no fixed criteria governing the naming convention; however, there are a number of techniques that the tester can use to gain some additional insight.&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect.&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'', we could do a bit of googling using the ''site'' operator and examine the result of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail, while this URL may not be referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to advertise it to outsiders on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is easy to check for the existence of web applications on non-standard ports. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or the indication of SSL-wrapped services (which should be probed to confirm they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that:&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is indeed an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD commands provided by Perl's libwww-perl, which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and – unlike nmap – will also run a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
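For completeness, the connect-scan technique behind nmap's -sT option can be sketched in a few lines of Python (an illustration only: it performs none of nmap's service fingerprinting and is far slower over the full port range):&lt;br /&gt;

```python
# Sketch: a bare-bones TCP connect scan, the mechanism behind nmap -sT.
# It only reports ports that accept a connection; unlike nmap it does
# no service or version detection.
import socket

def connect_scan(host, ports, timeout=0.5):
    """Return the subset of ports on host that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # refused, filtered or timed out: the port is not open
    return open_ports

# Usage (hypothetical): connect_scan("192.168.1.100", range(1, 65536))
```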
&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify DNS names associated to a given IP address ''x.y.z.t''.&lt;br /&gt;
&lt;br /&gt;
''DNS zone transfers''&amp;lt;br&amp;gt;&lt;br /&gt;
This technique has limited use nowadays, given the fact that zone transfers are largely not honored by DNS servers. However, it may be worth a try.&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'', ''host'' or ''dig'' by requesting DNS NS records.&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example: identifying www.owasp.org name servers by using host&lt;br /&gt;
&lt;br /&gt;
$ host -t ns www.owasp.org&lt;br /&gt;
www.owasp.org is an alias for owasp.org.&lt;br /&gt;
owasp.org name server ns1.secure.net.&lt;br /&gt;
owasp.org name server ns2.secure.net.&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A zone transfer may now be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Trying to request a zone transfer for owasp.org from one of its name servers&lt;br /&gt;
&lt;br /&gt;
$ host -l www.owasp.org ns1.secure.net&lt;br /&gt;
Using domain server:&lt;br /&gt;
Name: ns1.secure.net&lt;br /&gt;
Address: 192.220.124.10#53&lt;br /&gt;
Aliases:&lt;br /&gt;
&lt;br /&gt;
Host www.owasp.org not found: 5(REFUSED)&lt;br /&gt;
; Transfer failed.&lt;br /&gt;
-bash-2.05b$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNS inverse queries''&amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic name maps, which is not guaranteed.&lt;br /&gt;
&lt;br /&gt;
''Web-based DNS searches''&amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow you to perform name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', then check whether the names you obtained are pertinent to the target you are examining.&lt;br /&gt;
&lt;br /&gt;
''Reverse-IP services''&amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &lt;br /&gt;
(requires free membership) &lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/  &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &lt;br /&gt;
(multiple services available) &lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &lt;br /&gt;
(some services are still private at the time of writing) &lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &lt;br /&gt;
(reverse ip/domain lookup) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example shows the result of querying one of the above reverse-IP services for 216.48.3.18, the IP address of www.owasp.org. Three additional, non-obvious symbolic names mapping to the same address have been revealed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:Owasp-Info.jpg]]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Googling''&amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to further refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs. &lt;br /&gt;
For instance, considering the previous example regarding ''www.owasp.org'', you could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of ''webgoat.org'', ''webscarab.com'', ''webscarab.net''.&lt;br /&gt;
Googling techniques are explained in [[Spidering and googling AoC]].&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology is the same as that listed under Black Box testing, no matter how much information you start with.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
* DNS lookup tools such as ''nslookup'', ''dig'' or similar. &lt;br /&gt;
* Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &lt;br /&gt;
* Search engines (Google, and other major engines). &lt;br /&gt;
* Specialized DNS-related web-based search service: see text.&lt;br /&gt;
* nmap - http://www.insecure.org &lt;br /&gt;
* Nessus Vulnerability Scanner - http://www.nessus.org &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=16236</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=16236"/>
				<updated>2007-02-07T12:22:26Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Brief Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents#Web_Application_Penetration_Testing Up]]&lt;br /&gt;
&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A paramount step for testing for web application vulnerabilities is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control of the host and/or access to its data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition to this, many applications are often misconfigured or not updated due to the perception that they are only used &amp;quot;internally&amp;quot; and therefore no threat exists.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications share common paths for administrative interfaces, which can be leveraged to guess or brute-force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1 relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve to the same IP address (this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&lt;br /&gt;
As a security professional, you are sometimes given a set of IP addresses (or possibly just one) as a target to test, with no other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but, in any case, such an assignment is expected to test all web applications accessible through this target (and possibly other things). The problem is that the given IP address may host an http service on port 80 which, when accessed by IP address (which is all you know), reports &amp;quot;No web server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; a number of web applications associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all the applications or, because you do not notice them, only some of them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e., it could omit some symbolic names – and the client may not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &lt;br /&gt;
&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix of the two.&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive as possible in scope, i.e. it should identify all the applications accessible through the given target. In the following examples, we will examine a few techniques that can be employed to achieve this goal. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Note&amp;lt;/u&amp;gt;: Some of the following techniques apply only to Internet-facing web servers, namely the web-based DNS and reverse-IP search services and the use of search engines. The examples use private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies to https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as:&lt;br /&gt;
http://www.example.com/url1 &lt;br /&gt;
http://www.example.com/url2 &lt;br /&gt;
http://www.example.com/url3 &lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e., we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way unless they are not meant to be accessible in a standard way and their users have been informed of their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&lt;br /&gt;
'''2. Non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&lt;br /&gt;
'''3. Virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows us to associate a single IP address with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not even necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content by means of so-called virtual hosts. The information specifying which virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
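The Host-header mechanism just described can be sketched as follows. This is only an illustration: it builds the raw HTTP/1.1 probe requests (using the example host names from the text) but does not actually send them over a socket.

```python
# Build raw HTTP/1.1 probes that differ only in the Host: header.
# The server selects the virtual host based on this header [1].
def vhost_request(host: str) -> bytes:
    return (
        f"GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"        # virtual host selector
        f"Connection: close\r\n"
        f"\r\n"
    ).encode("ascii")

# The three symbolic names from the example all resolve to the same IP address
requests = [vhost_request(h) for h in
            ("www.example.com", "helpdesk.example.com", "webmail.example.com")]
```

Sending each probe to the same IP and comparing the responses reveals whether the server discriminates between the names.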
&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&amp;lt;br&amp;gt;&lt;br /&gt;
There is no way to absolutely ascertain the existence of non-standard-named web applications. Being non-standard, there is no fixed recipe for finding them. However, a few criteria can aid the search.&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect.&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'' we could do a bit of googling using the ''site'' operator and examining the result of a query for “site: www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail, even though this URL is not referenced anywhere else (after all, employees know where the webmail application is located, and there is no reason to advertise it to outsiders on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So, a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
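The dictionary-style guessing mentioned above can be sketched as follows. The wordlist here is a small assumed sample, not an authoritative list of administrative paths, and the probing itself (requesting each URL and checking the status code) is left out.

```python
from urllib.parse import urljoin

# Small, assumed wordlist of likely unpublished entry points (illustrative only)
COMMON_PATHS = ["webmail", "admin", "manager/html", "phpmyadmin"]

def candidate_urls(base: str, paths=COMMON_PATHS):
    """Return candidate URLs to probe for non-published applications."""
    base = base.rstrip("/") + "/"   # ensure urljoin appends the path, not replaces it
    return [urljoin(base, p) for p in paths]

urls = candidate_urls("https://www.example.com")
```

Each candidate would then be requested, with responses other than 404 flagged for manual review.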
&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is easy to check for the existence of web applications on non-standard ports. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output, looking for http or for indications of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is, in fact, an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD commands provided by Perl's libwww (LWP), which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will, unlike nmap, additionally run a number of tests for known web server vulnerabilities, as well as tests on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
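The manual confirmation performed above with telnet reduces to a simple check: does the first line of the reply look like an HTTP status line? A minimal sketch, using the MX4J reply quoted above as canned input (no sockets are opened here):

```python
def looks_like_http(raw: bytes) -> bool:
    """True if the raw reply starts with an HTTP status line."""
    first_line = raw.split(b"\r\n", 1)[0]
    return first_line.startswith(b"HTTP/")

# Canned replies: the MX4J answer from the text, and an SSH banner for contrast
mx4j = b"HTTP/1.0 200 OK\r\nServer: MX4J-HTTPD/1.0\r\n\r\n<html>..."
ssh  = b"SSH-1.99-OpenSSH_3.5p1\r\n"
```

In practice the same check would be applied to whatever bytes come back after sending `GET / HTTP/1.0` to each candidate port.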
&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify DNS names associated to a given IP address ''x.y.z.t''.&lt;br /&gt;
&lt;br /&gt;
''DNS zone transfers''&amp;lt;br&amp;gt;&lt;br /&gt;
This technique has limited use nowadays, given the fact that zone transfers are largely not honored by DNS servers. However, it may be worth a try.&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'', ''host'' or ''dig'' by requesting DNS NS records.&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least one symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' is served by that name server as well). For example, if your target consists of the IP address ''x.y.z.t'' and the name ''mail.example.com'', determine the name servers for the domain ''example.com''.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example: identifying www.owasp.org name servers by using host&lt;br /&gt;
&lt;br /&gt;
$ host -t ns www.owasp.org&lt;br /&gt;
www.owasp.org is an alias for owasp.org.&lt;br /&gt;
owasp.org name server ns1.secure.net.&lt;br /&gt;
owasp.org name server ns2.secure.net.&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A zone transfer may now be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Trying to request a zone transfer for owasp.org from one of its name servers&lt;br /&gt;
&lt;br /&gt;
$ host -l www.owasp.org ns1.secure.net&lt;br /&gt;
Using domain server:&lt;br /&gt;
Name: ns1.secure.net&lt;br /&gt;
Address: 192.220.124.10#53&lt;br /&gt;
Aliases:&lt;br /&gt;
&lt;br /&gt;
Host www.owasp.org not found: 5(REFUSED)&lt;br /&gt;
; Transfer failed.&lt;br /&gt;
-bash-2.05b$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNS inverse queries''&amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issuing a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name mappings, which is not guaranteed.&lt;br /&gt;
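For illustration, the name queried in an inverse lookup is the IP's octets reversed under the in-addr.arpa zone. The sketch below only constructs that name; issuing the actual PTR query (e.g. with nslookup or dig -x) requires network access and is omitted.

```python
def ptr_name(ip: str) -> str:
    """Return the in-addr.arpa name whose PTR record is queried for an IPv4 address."""
    octets = ip.split(".")
    # PTR queries use the octets in reverse order under the in-addr.arpa zone
    return ".".join(reversed(octets)) + ".in-addr.arpa"
```

For example, a PTR query for 216.48.3.18 asks the name server for the record at 18.3.48.216.in-addr.arpa.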
&lt;br /&gt;
''Web-based DNS searches''&amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow you to perform name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtain are pertinent to the target you are examining.&lt;br /&gt;
&lt;br /&gt;
''Reverse-IP services''&amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &lt;br /&gt;
(requires free membership) &lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/  &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &lt;br /&gt;
(multiple services available) &lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &lt;br /&gt;
(some services are still private at the time of writing) &lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &lt;br /&gt;
(reverse ip/domain lookup) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example shows the result of querying one of the above reverse-IP services for 216.48.3.18, the IP address of www.owasp.org. Three additional, non-obvious symbolic names mapping to the same address have been revealed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:Owasp-Info.jpg]]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Googling''&amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to further refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs. &lt;br /&gt;
For instance, considering the previous example regarding ''www.owasp.org'', you could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of ''webgoat.org'', ''webscarab.com'', ''webscarab.net''.&lt;br /&gt;
Googling techniques are explained in [[Spidering and googling AoC]].&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology is the same as that listed under Black Box testing, no matter how much information you start with.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
* DNS lookup tools such as ''nslookup'', ''dig'' or similar. &lt;br /&gt;
* Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &lt;br /&gt;
* Search engines (Google, and other major engines). &lt;br /&gt;
* Specialized DNS-related web-based search service: see text.&lt;br /&gt;
* nmap - http://www.insecure.org &lt;br /&gt;
* Nessus Vulnerability Scanner - http://www.nessus.org &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15928</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15928"/>
				<updated>2007-01-29T13:57:10Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Malformed requests test */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely, however, do different versions react the same to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field, we understand that the server is Apache, version 1.3.3, running on the Red Hat Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Tue, 17 Jun 2003 01:41:33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41:33 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32:21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31:89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19:04 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology has limited accuracy. There are several techniques that allow a web site to obfuscate or modify the server banner string.&lt;br /&gt;
For example, we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 403 Forbidden &lt;br /&gt;
Date: Mon, 16 Jun 2003 02:41:27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the ''Server'' field of the response is obfuscated, so we cannot tell what type of web server is running.&lt;br /&gt;
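The banner check described above amounts to extracting the ''Server'' field from a raw response. A minimal sketch follows (the sample response is the Apache one quoted earlier; as noted, the returned value may be obfuscated, so treat it as a hint only):

```python
def server_banner(raw: str):
    """Return the value of the Server header, or None if it is absent."""
    for line in raw.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None

apache = ("HTTP/1.1 200 OK\r\n"
          "Date: Mon, 16 Jun 2003 02:53:29 GMT\r\n"
          "Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)\r\n"
          "\r\n")
```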
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration various characteristics of the web servers available on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the several headers in the response. Every web server has its own internal ordering of headers. Consider the following responses as examples:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01:40 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Notice that the ordering of the ''Date'' and ''Server'' fields differs between Apache, Netscape Enterprise, and IIS: Apache sends ''Date'' first, while IIS and Netscape Enterprise send ''Server'' first.&lt;br /&gt;
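The ordering test can be sketched as code. The rule below encodes only the Date/Server order observed in the sample responses above; a real fingerprint database would hold many more signatures, and the sketch assumes both headers are present.

```python
def header_order(raw: str):
    """Return header names, lowercased, in the order they appear."""
    names = []
    for line in raw.split("\r\n")[1:]:   # skip the status line
        if not line:
            break                        # blank line ends the headers
        names.append(line.split(":", 1)[0].lower())
    return names

def guess_by_order(raw: str) -> str:
    order = header_order(raw)
    # Apache sends Date before Server; IIS and Netscape Enterprise do the opposite
    if order.index("date") < order.index("server"):
        return "Apache-like"
    return "IIS/Netscape-like"

iis = "HTTP/1.1 200 OK\r\nServer: Microsoft-IIS/5.0\r\nDate: Fri, 01 Jan 1999 20:13:52 GMT\r\n\r\n"
apache = "HTTP/1.1 200 OK\r\nDate: Sun, 15 Jun 2003 17:10:49 GMT\r\nServer: Apache/1.3.23\r\n\r\n"
```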
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test involves sending malformed requests, or requests for nonexistent pages, to the server.&lt;br /&gt;
Consider the following HTTP responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12:37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer-Encoding: chunked &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04:04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We notice that every server answers in a different way, and that answers also differ across versions of the same server. A similar behavior occurs if we send requests with a nonexistent protocol. Consider the following responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17:47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:34 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; &lt;br /&gt;
Your browser sent a query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
As can be appreciated from the tests above, a number of probes are needed in order to accurately determine the type and version of the web server in use. Tools are available which automate these tests. One such example is &amp;quot;''httprint''&amp;quot;, which compares the web server's responses against a signature dictionary and then makes a judgement with an associated confidence rating.&amp;lt;br&amp;gt;&lt;br /&gt;
An example of such a tool is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15927</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15927"/>
				<updated>2007-01-29T13:56:02Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Automated Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely, however, do different versions react the same way to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
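The banner-grabbing step described above can be sketched in a few lines of Python: open a socket, send a HEAD request exactly as done with netcat, and pull out the ''Server'' header. This is a minimal illustrative sketch, not part of the guide; `grab_banner` and `parse_server_header` are names introduced here for illustration.

```python
import socket

def grab_banner(host: str, port: int = 80, timeout: float = 5.0) -> bytes:
    """Send a HEAD request, netcat-style, and return the raw response bytes."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):   # HTTP/1.0: server closes when done
            raw += chunk
    return raw

def parse_server_header(raw_response: bytes):
    """Extract the Server header value from a raw HTTP response, if present."""
    head = raw_response.split(b"\r\n\r\n", 1)[0].decode("iso-8859-1")
    for line in head.split("\r\n")[1:]:        # skip the status line
        name, _, value = line.partition(":")
        if name.strip().lower() == "server":
            return value.strip()
    return None
```

Comparing the returned value against a list of known banners reproduces the manual inspection shown in the examples that follow.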
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field we understand that the server is Apache, version 1.3.3, running on the Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Tue, 17 Jun 2003 01:41:33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41:33 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32:21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31:89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19:04 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology is not reliable, as there are several techniques that allow a web site to obfuscate or to modify the server banner string.&lt;br /&gt;
For example we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 403 Forbidden &lt;br /&gt;
Date: Mon, 16 Jun 2003 02:41:27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the ''Server'' field of the response is obfuscated: we cannot tell what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration various behavioural characteristics of the web servers available on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the headers in the response: every web server has its own internal header ordering. Consider the following answers as an example:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01:40 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
We can notice that the ordering of the ''Date'' field and the ''Server'' field differs between Apache, Netscape Enterprise and IIS.&lt;br /&gt;
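The header-ordering observation can be automated by recording the sequence of header names and comparing it against known orderings. The signature table below is transcribed from the three example responses and is only illustrative; real orderings vary with server configuration and loaded modules.

```python
# Header-order signatures transcribed from the example responses above.
# Assumption: these orderings are stable for the given server versions;
# in practice, configuration and modules can alter them.
KNOWN_ORDERS = {
    "Apache/1.3.x": ["date", "server", "last-modified", "etag",
                     "accept-ranges", "content-length", "connection",
                     "content-type"],
    "Microsoft-IIS/5.0": ["server", "content-location", "date",
                          "content-type", "accept-ranges",
                          "last-modified", "etag", "content-length"],
    "Netscape-Enterprise/4.1": ["server", "date", "content-type",
                                "last-modified", "content-length",
                                "accept-ranges", "connection"],
}

def header_order(raw_response: bytes) -> list:
    """Return the lower-cased header names in order of appearance."""
    head = raw_response.split(b"\r\n\r\n", 1)[0].decode("iso-8859-1")
    return [line.partition(":")[0].strip().lower()
            for line in head.split("\r\n")[1:] if ":" in line]

def guess_server(raw_response: bytes):
    """Match the observed header order against the known signatures."""
    order = header_order(raw_response)
    for name, signature in KNOWN_ORDERS.items():
        if order == signature:
            return name
    return None
```

Feeding the `HEAD / HTTP/1.0` responses shown above through `guess_server` reproduces the manual comparison of the ''Date'' and ''Server'' positions.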
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test to execute involves sending malformed requests or requests for nonexistent pages to the server.&lt;br /&gt;
We consider the following HTTP response: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12:37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer-Encoding: chunked &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04:04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We notice that every server answers in a different way, and that the answer also varies with the server version. An analogous issue arises if we craft requests with a non-existent protocol. Consider the following responses: &lt;br /&gt;
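The responses above can be told apart by a small classifier over the status line alone: 400 from Apache, 200 from IIS, 505 from Netscape Enterprise, and sometimes a bare HTML page with no status line at all. A minimal sketch, with class labels of our own choosing:

```python
def classify_status(raw_response: bytes) -> str:
    """Map the status line of a raw response to a coarse behaviour class.

    The class labels are our own; they summarise the behaviours seen in
    the example responses (200, 400, 505, or no HTTP status line at all).
    """
    first = raw_response.split(b"\r\n", 1)[0].decode("iso-8859-1")
    if not first.startswith("HTTP/"):
        return "no-status-line"   # e.g. a bare HTML error page
    parts = first.split(" ", 2)
    if len(parts) < 2:
        return "other"
    return {"200": "accepted",
            "400": "bad-request",
            "505": "version-not-supported"}.get(parts[1], "other")
```

Collecting the classes produced by several probes yields a small behavioural fingerprint that can be compared across servers.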
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17:47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:34 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; Your browser sent a query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
As can be appreciated from the tests above, a number of probes are needed in order to accurately determine the type and version of the web server in use. Tools are available which automate these tests. One such example is &amp;quot;''httprint''&amp;quot;, which compares the web server's responses against a signature dictionary and then makes a judgement with an associated confidence rating.&amp;lt;br&amp;gt;&lt;br /&gt;
An example of such a tool is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15926</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15926"/>
				<updated>2007-01-29T13:54:53Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Automated Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely, however, do different versions react the same way to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field we understand that the server is Apache, version 1.3.3, running on the Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Tue, 17 Jun 2003 01:41:33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41:33 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32:21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31:89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19:04 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology is not reliable, as there are several techniques that allow a web site to obfuscate or to modify the server banner string.&lt;br /&gt;
For example we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 403 Forbidden &lt;br /&gt;
Date: Mon, 16 Jun 2003 02:41:27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the ''Server'' field of the response is obfuscated: we cannot tell what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration various behavioural characteristics of the web servers available on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the headers in the response: every web server has its own internal header ordering. Consider the following answers as an example:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01:40 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
We can notice that the ordering of the ''Date'' field and the ''Server'' field differs between Apache, Netscape Enterprise and IIS.&lt;br /&gt;
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test to execute involves sending malformed requests or requests for nonexistent pages to the server.&lt;br /&gt;
We consider the following HTTP response: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12:37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer-Encoding: chunked &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04:04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We notice that every server answers in a different way, and that the answer also varies with the server version. An analogous issue arises if we craft requests with a non-existent protocol. Consider the following responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17:47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:34 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; Your browser sent a query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
As can be appreciated from the tests above, a number of probes are needed in order to accurately determine the type and version of the web server in use. Tools are available which automate these tests. One such example is &amp;quot;''httprint''&amp;quot;, which compares the web server's responses against a signature dictionary and then makes a judgement with an associated confidence rating.&amp;lt;br&amp;gt;&lt;br /&gt;
An example of such a tool is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15925</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15925"/>
				<updated>2007-01-29T13:53:08Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Malformed requests test */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely, however, do different versions react the same way to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field we understand that the server is Apache, version 1.3.3, running on the Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Tue, 17 Jun 2003 01:41:33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41:33 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32:21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31:89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19:04 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology is not reliable, as there are several techniques that allow a web site to obfuscate or to modify the server banner string.&lt;br /&gt;
For example we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 403 Forbidden &lt;br /&gt;
Date: Mon, 16 Jun 2003 02:41:27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the ''Server'' field of the response is obfuscated: we cannot tell what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration the characteristic behaviours of the various web servers on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the headers in the response: every web server orders its headers in a characteristic way. Consider the following answers as an example:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01:40 GMT &lt;br /&gt;
Content-type: text/html &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Notice that the ordering of the ''Date'' field and the ''Server'' field differs between Apache, Netscape Enterprise, and IIS.&lt;br /&gt;
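The ordering check above can be automated. The sketch below assumes a tiny hand-made signature table read off the sample transcripts (a real tool ships a far larger dictionary); note that IIS and Netscape Enterprise share the Server-first ordering, so this single test cannot separate them on its own.

```python
# Hypothetical signature table: header-name orderings drawn from the
# transcripts above. Netscape Enterprise also sends Server before Date,
# so ordering alone narrows the guess rather than deciding it.
SIGNATURES = {
    "Apache/1.3.x": ("Date", "Server"),       # Date precedes Server
    "Microsoft-IIS/5.0": ("Server", "Date"),  # Server precedes Date
}

def observed_order(header_lines, names=("Date", "Server")):
    """Return the requested header names in the order the server sent them."""
    position = {line.split(":", 1)[0]: i for i, line in enumerate(header_lines)}
    return tuple(sorted(names, key=lambda n: position[n]))

def candidates(header_lines):
    """Return every signature whose Date/Server ordering matches the response."""
    order = observed_order(header_lines)
    return [name for name, sig in SIGNATURES.items() if sig == order]

apache_headers = [
    "Date: Sun, 15 Jun 2003 17:10:49 GMT",
    "Server: Apache/1.3.23",
    "Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT",
]
print(candidates(apache_headers))  # ['Apache/1.3.x']
```

In practice a fingerprinting tool combines this ordering test with the other probes described in this section before committing to a guess.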
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test involves sending malformed requests, or requests for nonexistent pages, to the server.&lt;br /&gt;
Consider the following HTTP responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12:37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer-Encoding: chunked &lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04:04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/html &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We notice that every server answers in a different way, and that the answer also varies with the server version. A similar effect can be observed by creating requests with a non-existent protocol. Consider the following responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17:47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:34 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; Your browser sent a query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
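The netcat probes above can be driven from a short script instead of being typed by hand. A minimal sketch, with placeholder host names; `status_of` classifies each reply by its first line, since some servers (as in the Netscape Enterprise JUNK/1.0 transcript) answer with bare HTML rather than an HTTP status line:

```python
import socket

# The three probes used in this section: a baseline, a non-existent
# HTTP version, and a non-existent protocol.
PROBES = (
    b"HEAD / HTTP/1.0\r\n\r\n",
    b"GET / HTTP/3.0\r\n\r\n",
    b"GET / JUNK/1.0\r\n\r\n",
)

def send_probe(host, probe, port=80, timeout=5):
    """Send one raw probe (netcat-style) and return the full response bytes."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(probe)
        data = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return data

def status_of(raw):
    """Return the status line, or a marker when the reply is not HTTP at all."""
    first = raw.split(b"\r\n", 1)[0].decode("latin-1", "replace")
    return first if first.startswith("HTTP/") else "non-HTTP reply"
```

Collecting `status_of(send_probe(host, p))` for each probe yields a behaviour triple that can then be matched against known server reactions.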
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
As can be appreciated from the above, a number of tests are needed to accurately determine the type and version of the web server in use. There are tools available which can automate these tests. One such example is &amp;quot;''httprint''&amp;quot;, which compares the web server's responses against a signature dictionary and then delivers a judgement with an associated confidence rating.&amp;lt;br&amp;gt;&lt;br /&gt;
The output of such a tool is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15924</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15924"/>
				<updated>2007-01-29T13:51:32Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server you are testing significantly helps the testing process, and can also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands, and by keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command; rarely, however, do different versions react the same way to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field, we understand that the server is Apache, version 1.3.3, running on the Linux operating system.&lt;br /&gt;
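The netcat session above can be reproduced in a few lines of Python; a minimal sketch, with the target host as a placeholder. `parse_server` does the banner extraction and works on any captured response:

```python
import socket

def parse_server(raw):
    """Pull the Server banner out of a raw HTTP response, if present."""
    for line in raw.decode("latin-1", "replace").split("\r\n"):
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None

def grab_banner(host, port=80):
    """Netcat-style banner grab: send HEAD / over a raw socket, read the reply."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        data = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return parse_server(data)
```

As the rest of this section shows, the banner returned this way should be treated as a hint, not as proof, since it is trivial to change.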
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Tue, 17 Jun 2003 01:41:33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41:33 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32:21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31:89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19:04 GMT &lt;br /&gt;
Content-type: text/html &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology has limited accuracy, since there are several techniques that allow a web site to obfuscate or modify the server banner string.&lt;br /&gt;
For example, we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 403 Forbidden &lt;br /&gt;
Date: Mon, 16 Jun 2003 02:41:27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the ''Server'' field of the response is obfuscated: we cannot tell what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration the characteristic behaviours of the various web servers on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the headers in the response: every web server orders its headers in a characteristic way. Consider the following answers as an example:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01:40 GMT &lt;br /&gt;
Content-type: text/html &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Notice that the ordering of the ''Date'' field and the ''Server'' field differs between Apache, Netscape Enterprise, and IIS.&lt;br /&gt;
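This ordering difference can be checked directly on a captured raw response; a small sketch, with sample bytes taken from the transcripts above:

```python
def header_order(raw):
    """Return header field names in the order the server emitted them."""
    head = raw.decode("latin-1", "replace").split("\r\n\r\n", 1)[0]
    return [line.split(":", 1)[0] for line in head.split("\r\n")[1:] if ":" in line]

def date_before_server(raw):
    """True for Apache-style responses, where Date precedes Server."""
    names = header_order(raw)
    return names.index("Date") < names.index("Server")

apache = b"HTTP/1.1 200 OK\r\nDate: Sun, 15 Jun 2003 17:10:49 GMT\r\nServer: Apache/1.3.23\r\n\r\n"
iis = b"HTTP/1.1 200 OK\r\nServer: Microsoft-IIS/5.0\r\nDate: Fri, 01 Jan 1999 20:13:52 GMT\r\n\r\n"
print(date_before_server(apache), date_before_server(iis))  # True False
```

Since Netscape Enterprise also emits Server first, this check separates Apache from the other two but must be combined with further probes.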
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test involves sending malformed requests, or requests for nonexistent pages, to the server.&lt;br /&gt;
Consider the following HTTP responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12:37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer-Encoding: chunked &lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04:04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/html &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We notice that every server answers in a different way, and that the answer also varies with the server version. A similar effect can be observed by creating requests with a non-existent protocol. Consider the following responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17:47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:34 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; Your browser sent a query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
As can be appreciated from the above, a number of tests are needed to accurately determine the type and version of the web server in use. There are tools available which can automate these tests. One such example is &amp;quot;''httprint''&amp;quot;, which compares the web server's responses against a signature dictionary and then delivers a judgement with an associated confidence rating.&amp;lt;br&amp;gt;&lt;br /&gt;
The output of such a tool is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15923</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15923"/>
				<updated>2007-01-29T13:47:27Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Automated Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server you are testing significantly helps the testing process, and can also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands, and by keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command; rarely, however, do different versions react the same way to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field, we understand that the server is Apache, version 1.3.3, running on the Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Tue, 17 Jun 2003 01:41:33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41:33 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32:21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31:89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19:04 GMT &lt;br /&gt;
Content-type: text/html &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology has limited accuracy, since there are several techniques that allow a web site to obfuscate or modify the server banner string.&lt;br /&gt;
For example, we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 403 Forbidden &lt;br /&gt;
Date: Mon, 16 Jun 2003 02:41:27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the ''Server'' field of the response is obfuscated: we cannot tell what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration the characteristic behaviours of the various web servers on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the headers in the response: every web server orders its headers in a characteristic way. Consider the following answers as an example:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01:40 GMT &lt;br /&gt;
Content-type: text/html &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Notice that the ordering of the ''Date'' field and the ''Server'' field differs between Apache, Netscape Enterprise, and IIS.&lt;br /&gt;
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test involves sending malformed requests, or requests for nonexistent pages, to the server.&lt;br /&gt;
Consider the following HTTP responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12:37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer-Encoding: chunked &lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04:04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/html &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We notice that every server answers in a different way, and that the answer also varies with the server version. A similar effect can be observed by creating requests with a non-existent protocol. Consider the following responses: &lt;br /&gt;
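The reactions to the non-existent-version probe, combined with those to the non-existent-protocol probe used in this section, can be condensed into a small lookup table. A sketch, with entries read off the sample transcripts (illustrative only, not a real fingerprint database):

```python
# Key: (reaction to "GET / HTTP/3.0", reaction to "GET / JUNK/1.0"),
# where the reaction is the status code, or "non-HTTP" when the server
# replies with bare HTML instead of a status line.
FINGERPRINTS = {
    ("400", "200"): "Apache 1.3.x",
    ("200", "400"): "Microsoft IIS 5.0",
    ("505", "non-HTTP"): "Netscape Enterprise 4.1",
}

def classify(http3_reaction, junk_reaction):
    """Map a pair of probe reactions to a server guess."""
    return FINGERPRINTS.get((http3_reaction, junk_reaction), "unknown")

print(classify("400", "200"))  # Apache 1.3.x
```

A real tool would key on many more probes and tolerate partial matches, but the principle is the same: the combination of behaviours, not any single response, identifies the server.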
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17:47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:34 GMT &lt;br /&gt;
Content-Type: text/html &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; Your browser sent a query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
As can be appreciated from the above, a number of tests are needed to accurately determine the type and version of the web server in use. There are tools available which can automate these tests. One such example is &amp;quot;''httprint''&amp;quot;, which compares the web server's responses against a signature dictionary and then delivers a judgement with an associated confidence rating.&amp;lt;br&amp;gt;&lt;br /&gt;
The output of such a tool is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15922</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15922"/>
				<updated>2007-01-29T13:35:52Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Automated Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server you are testing significantly helps the testing process, and can also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands, and by keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command; rarely, however, do different versions react the same way to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field, we understand that the server is Apache, version 1.3.3, running on the Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10: 49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Yours, 17 Jun 2003 01:41: 33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41: 33 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32: 21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31: 89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19: 04 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology is not completely reliable: there are several techniques that allow a web site to obfuscate or modify the server banner string.&lt;br /&gt;
For example we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
403 HTTP/1.1 &lt;br /&gt;
Forbidden Date: Mon, 16 Jun 2003 02:41: 27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML; &lt;br /&gt;
charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the Server field of the response is obfuscated, so we cannot tell from the banner what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
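The banner grab shown above can be sketched in a few lines of Python. This is an illustrative sketch, not part of the original guide: the helper names (''grab_banner'', ''server_header'') and any hostname passed to them are assumptions, and the parser simply extracts the Server field from a raw response, returning nothing when the banner is absent or stripped.

```python
import socket

def grab_banner(host, port=80, timeout=5):
    """Send a HEAD request and return the raw response headers as text."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        data = b""
        while b"\r\n\r\n" not in data:
            chunk = s.recv(4096)
            if not chunk:
                break
            data += chunk
    return data.decode("iso-8859-1", errors="replace")

def server_header(raw_response):
    """Extract the Server field, or None if it is absent or removed."""
    for line in raw_response.split("\r\n")[1:]:
        name, _, value = line.partition(":")
        if name.strip().lower() == "server":
            return value.strip()
    return None
```

Run against the sample response above, ''server_header'' would return the "Apache/1.3.3 (Unix)  (Red Hat/Linux)" banner string.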
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration various characteristics of the several web servers available on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the several headers in the response. Every web server orders its response headers in a characteristic way. We consider the following answers as an example:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10: 49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13: 52 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13: 52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1: ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01: 40 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Notice that the ordering of the ''Date'' field and the ''Server'' field differs between Apache, Netscape Enterprise, and IIS.&lt;br /&gt;
&lt;br /&gt;
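The header-ordering check can be automated with a small lookup table. The sketch below is illustrative only: the ''KNOWN_ORDERS'' table is derived solely from the three sample responses above, not from a real signature database such as the one httprint ships with.

```python
# Illustrative signature table: leading header names -> server guess.
KNOWN_ORDERS = {
    ("date", "server"): "Apache 1.3.x",
    ("server", "content-location", "date"): "Microsoft IIS 5.0",
    ("server", "date", "content-type"): "Netscape Enterprise 4.1",
}

def header_order(raw_response, limit=3):
    """Return the lowercased names of the first `limit` response headers."""
    names = []
    for line in raw_response.split("\r\n")[1:]:
        if not line.strip():
            break
        names.append(line.partition(":")[0].strip().lower())
    return tuple(names[:limit])

def guess_server(raw_response):
    """Compare the observed header order against each known signature."""
    for prefix, name in KNOWN_ORDERS.items():
        if header_order(raw_response, len(prefix)) == prefix:
            return name
    return "unknown"
```

Fed the Apache sample above (''Date'' before ''Server''), ''guess_server'' would report "Apache 1.3.x"; fed the IIS sample (''Server'', then ''Content-Location''), it would report "Microsoft IIS 5.0".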
=== Malformed requests test === &lt;br /&gt;
Another useful test to execute involves sending malformed requests or requests of nonexistent pages to the server.&lt;br /&gt;
We consider the following HTTP response: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12: 37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer: chunked &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14: 02 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14: 02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1: ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04: 04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice that every server answers in a different way, and the answer also varies with the server version. A similar behavior occurs if we create requests with a nonexistent protocol. Consider the following responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17: 47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14: 34 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; &lt;br /&gt;
Your browser sent to query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
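The probe responses above can be combined into a simple decision rule. The following sketch is hypothetical and covers only the three sample servers shown; a real fingerprinting tool would compare against a far larger signature set.

```python
# The three probes described above: a well-formed baseline, a nonexistent
# HTTP version, and a nonexistent protocol name.
PROBES = [
    b"HEAD / HTTP/1.0\r\n\r\n",
    b"GET / HTTP/3.0\r\n\r\n",
    b"GET / JUNK/1.0\r\n\r\n",
]

def classify(status_lines):
    """Map the first line of each probe response to a coarse guess.

    `status_lines` holds the first response line per probe, in PROBES
    order. The mapping mirrors only the sample responses shown above.
    """
    baseline, bad_version, bad_protocol = status_lines
    if "400" in bad_version and "200" in bad_protocol:
        return "Apache 1.3.x"       # rejects HTTP/3.0, tolerates JUNK
    if "200" in bad_version and "400" in bad_protocol:
        return "Microsoft IIS 5.0"  # tolerates HTTP/3.0, rejects JUNK
    if "505" in bad_version:
        return "Netscape Enterprise 4.1"  # explicit 505 for HTTP/3.0
    return "unknown"
```

Note that the Netscape JUNK response has no status line at all (it returns a bare HTML error page), which is itself a distinguishing behavior; the sketch therefore keys Netscape off the 505 answer to the HTTP/3.0 probe.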
== Automated Testing == &lt;br /&gt;
As can be appreciated from the above, a number of tests are needed in order to accurately determine the type and version of web server in use. A tool that automates these tests is &amp;quot;''httprint''&amp;quot;, which uses a signature dictionary to recognize the type and the version of the web server in use.&amp;lt;br&amp;gt;&lt;br /&gt;
An example of this tool's output is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15921</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15921"/>
				<updated>2007-01-29T13:35:13Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Protocol behaviour */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely, however, do different versions react the same way to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field, we understand that the server is Apache, version 1.3.3, running on the Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10: 49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Yours, 17 Jun 2003 01:41: 33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41: 33 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32: 21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31: 89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19: 04 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology is not completely reliable: there are several techniques that allow a web site to obfuscate or modify the server banner string.&lt;br /&gt;
For example we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
403 HTTP/1.1 &lt;br /&gt;
Forbidden Date: Mon, 16 Jun 2003 02:41: 27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML; &lt;br /&gt;
charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the Server field of the response is obfuscated, so we cannot tell from the banner what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration various characteristics of the several web servers available on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the several headers in the response. Every web server orders its response headers in a characteristic way. We consider the following answers as an example:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10: 49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13: 52 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13: 52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1: ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01: 40 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Notice that the ordering of the ''Date'' field and the ''Server'' field differs between Apache, Netscape Enterprise, and IIS.&lt;br /&gt;
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test to execute involves sending malformed requests or requests of nonexistent pages to the server.&lt;br /&gt;
We consider the following HTTP response: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12: 37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer: chunked &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14: 02 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14: 02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1: ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04: 04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice that every server answers in a different way, and the answer also varies with the server version. A similar behavior occurs if we create requests with a nonexistent protocol. Consider the following responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17: 47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14: 34 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; &lt;br /&gt;
Your browser sent to query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
Several different tests are needed in order to accurately determine the type and version of web server in use. A tool that automates these tests is &amp;quot;''httprint''&amp;quot;, which uses a signature dictionary to recognize the type and the version of the web server in use.&amp;lt;br&amp;gt;&lt;br /&gt;
An example of this tool's output is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Saumil Shah: &amp;quot;An Introduction to HTTP fingerprinting&amp;quot; - http://net-square.com/httprint/httprint_paper.html&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* httprint - http://net-square.com/httprint/index.shtml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15920</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15920"/>
				<updated>2007-01-29T13:34:54Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely, however, do different versions react the same way to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field, we understand that the server is Apache, version 1.3.3, running on the Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10: 49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Yours, 17 Jun 2003 01:41: 33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41: 33 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32: 21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31: 89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19: 04 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology is not completely reliable: there are several techniques that allow a web site to obfuscate or modify the server banner string.&lt;br /&gt;
For example we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
403 HTTP/1.1 &lt;br /&gt;
Forbidden Date: Mon, 16 Jun 2003 02:41: 27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML; &lt;br /&gt;
charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, the Server field of the response is obfuscated, so we cannot tell from the banner what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration various characteristics of the several web servers available on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the several headers in the response. Every web server orders its response headers in a characteristic way. We consider the following answers as an example:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10: 49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13: 52 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13: 52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1: ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01: 40 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Notice that the ordering of the ''Date'' field and the ''Server'' field differs between Apache, Netscape Enterprise, and IIS.&lt;br /&gt;
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test to execute involves sending malformed requests or requests of nonexistent pages to the server.&lt;br /&gt;
We consider the following HTTP response: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12: 37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer: chunked &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14: 02 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14: 02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1: ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04: 04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice that every server answers in a different way, and the answer also varies with the server version. A similar behavior occurs if we create requests with a nonexistent protocol. Consider the following responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17: 47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14: 34 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; &lt;br /&gt;
Your browser sent to query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
Several such tests can be carried out. A tool that automates them is &amp;quot;''httprint''&amp;quot;, which uses a signature dictionary to recognize the type and version of the web server in use.&amp;lt;br&amp;gt;&lt;br /&gt;
An example of such tool is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Saumil Shah: &amp;quot;An Introduction to HTTP fingerprinting&amp;quot; - http://net-square.com/httprint/httprint_paper.html&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* httprint - http://net-square.com/httprint/index.shtml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15919</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=15919"/>
				<updated>2007-01-29T13:34:30Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Brief Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely, however, do different versions react the same to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a Web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field we understand that the server is Apache, version 1.3.3, running on the Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10: 49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Yours, 17 Jun 2003 01:41: 33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41: 33 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32: 21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31: 89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19: 04 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
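The manual banner grab above is easy to script. The following is a minimal sketch (Python standard library only; host names and the timeout are placeholder assumptions) that sends the same HEAD request as the netcat examples and parses out the ''Server'' header:&lt;br /&gt;

```python
# Sketch of a banner grab over a raw socket, like the nc examples above.
# Host names and the timeout are illustrative assumptions.
import socket


def server_banner(raw_response: str):
    """Return the value of the Server header, or None if absent."""
    for line in raw_response.split("\r\n"):
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return None


def grab_banner(host: str, port: int = 80):
    """Send HEAD / HTTP/1.0 and parse the Server header from the reply."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        return server_banner(s.recv(4096).decode("latin-1"))


# Offline demonstration on the response shown earlier:
sample = ("HTTP/1.1 200 OK\r\n"
          "Date: Mon, 16 Jun 2003 02:53:29 GMT\r\n"
          "Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)\r\n")
print(server_banner(sample))  # Apache/1.3.3 (Unix)  (Red Hat/Linux)
```

The parsing helper is kept separate from the socket code so it can also be run against saved responses.&lt;br /&gt;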
However, this testing methodology is of limited reliability: there are several techniques that allow a web site to obfuscate or modify the server banner string.&lt;br /&gt;
For example, we could obtain the following answer:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 403 Forbidden &lt;br /&gt;
Date: Mon, 16 Jun 2003 02:41: 27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the ''Server'' field of the response is obfuscated: we cannot tell from it what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
More refined testing techniques take into consideration various characteristics of the several web servers available on the market. Below we list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the headers in the response. Every web server has its own internal ordering of headers. Consider the following answers as an example:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10: 49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13: 52 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13: 52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1: ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01: 40 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37: 56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Notice that the ordering of the ''Date'' and ''Server'' fields differs between Apache, Netscape Enterprise, and IIS.&lt;br /&gt;
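This observation can be turned into a simple lookup. The sketch below (Python; the orderings are read off the three responses above, first four headers only, and are illustrative rather than a complete signature set) records header names in the order received and matches them against known orderings:&lt;br /&gt;

```python
# Header-order fingerprinting sketch; orderings taken from the three
# example responses above (first four headers only), purely illustrative.
KNOWN_ORDERINGS = {
    "Apache/1.3.x":            ["date", "server", "last-modified", "etag"],
    "Microsoft-IIS/5.0":       ["server", "content-location", "date", "content-type"],
    "Netscape-Enterprise/4.1": ["server", "date", "content-type", "last-modified"],
}


def header_order(raw_response: str):
    """Lower-cased header names in the order the server sent them."""
    names = []
    for line in raw_response.split("\r\n")[1:]:  # skip the status line
        if not line:
            break                                # blank line ends the headers
        names.append(line.split(":", 1)[0].strip().lower())
    return names


def guess_server(raw_response: str):
    order = header_order(raw_response)
    for name, signature in KNOWN_ORDERINGS.items():
        if order[:len(signature)] == signature:
            return name
    return None


iis = ("HTTP/1.1 200 OK\r\n"
       "Server: Microsoft-IIS/5.0\r\n"
       "Content-Location: http://iis.example.com/Default.htm\r\n"
       "Date: Fri, 01 Jan 1999 20:13:52 GMT\r\n"
       "Content-Type: text/html\r\n\r\n")
print(guess_server(iis))  # Microsoft-IIS/5.0
```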
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test involves sending malformed requests, or requests for nonexistent pages, to the server.&lt;br /&gt;
Consider the following HTTP responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12: 37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer: chunked &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14: 02 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14: 02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1: ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04: 04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We notice that every server answers in a different way; the answer also differs between versions of the same server. An analogous issue arises if we craft requests with a nonexistent protocol. Consider the following responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17: 47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48: 19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14: 34 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; &lt;br /&gt;
Your browser sent to query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
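The reactions above can be condensed into a small decision table: the pair of status lines returned for the two probes is enough to separate the three servers. A sketch follows (the signature values are read off the example responses and are not exhaustive):&lt;br /&gt;

```python
# Decision table built from the malformed-request examples above.
# Key: (reaction to "GET / HTTP/3.0", reaction to "GET / JUNK/1.0").
SIGNATURES = {
    ("400", "200"):       "Apache 1.3.x",
    ("200", "400"):       "Microsoft-IIS/5.0",
    ("505", "no-status"): "Netscape-Enterprise/4.1",  # bare HTML, no status line
}


def reaction(raw_response: str) -> str:
    """Status code of the response, or 'no-status' for a bare HTML body."""
    parts = raw_response.split("\r\n", 1)[0].split()
    if len(parts) >= 2 and parts[0].startswith("HTTP/"):
        return parts[1]
    return "no-status"


def classify(resp_http30: str, resp_junk: str) -> str:
    key = (reaction(resp_http30), reaction(resp_junk))
    return SIGNATURES.get(key, "unknown")


print(classify("HTTP/1.1 400 Bad Request\r\nDate: x\r\n",
               "HTTP/1.1 200 OK\r\nDate: x\r\n"))  # Apache 1.3.x
```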
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
Several such tests can be carried out. A tool that automates them is &amp;quot;''httprint''&amp;quot;, which uses a signature dictionary to recognize the type and version of the web server in use.&amp;lt;br&amp;gt;&lt;br /&gt;
An example of such tool is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;br /&gt;
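Rather than requiring exact matches, httprint scores a fingerprint against every signature in its dictionary and reports the best match with a confidence rating. Very roughly, and purely as an illustration (the probe names and signature values below are invented for this sketch, not httprint's real format):&lt;br /&gt;

```python
# Toy version of dictionary-based scoring: the score is the fraction of
# probes whose reaction matches the stored signature. Values are invented.
SIGNATURE_DB = {
    "Apache/1.3.23":           ("date-first", "400", "200"),
    "Microsoft-IIS/5.0":       ("server-first", "200", "400"),
    "Netscape-Enterprise/4.1": ("server-first", "505", "no-status"),
}


def best_match(fingerprint):
    """Return (server, score): score is the fraction of matching probes."""
    scored = {
        name: sum(a == b for a, b in zip(fingerprint, sig)) / len(sig)
        for name, sig in SIGNATURE_DB.items()
    }
    name = max(scored, key=scored.get)
    return name, scored[name]


# A banner-stripped server that still behaves like IIS on two of three probes:
name, score = best_match(("server-first", "200", "unknown"))
print(name, round(score, 2))  # Microsoft-IIS/5.0 0.67
```

This is why obfuscating the ''Server'' banner alone does not defeat fingerprinting: the protocol behaviour still votes for the real implementation.&lt;br /&gt;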
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Saumil Shah: &amp;quot;An Introduction to HTTP fingerprinting&amp;quot; - http://net-square.com/httprint/httprint_paper.html&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* httprint - http://net-square.com/httprint/index.shtml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing:_Information_Gathering&amp;diff=15824</id>
		<title>Testing: Information Gathering</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing:_Information_Gathering&amp;diff=15824"/>
				<updated>2007-01-26T14:49:02Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Information Gathering */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents#Web_Application_Penetration_Testing Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
=== Information Gathering ===&lt;br /&gt;
----&lt;br /&gt;
The first phase in a security assessment is focused on collecting as much information as possible about a target application.&lt;br /&gt;
Information Gathering is a necessary step of a penetration test.&lt;br /&gt;
&lt;br /&gt;
This task can be carried out in many different ways.&lt;br /&gt;
&lt;br /&gt;
Using public tools (search engines), scanners, sending simple HTTP requests, or specially crafted requests, it is possible to force the application to leak information by sending back error messages or revealing the versions and technologies used by the application.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Often it is possible to gather information by receiving a response from the application that could reveal vulnerabilities in the bad configuration or bad server management.&lt;br /&gt;
&lt;br /&gt;
[[Application Discovery AoC|4.2.1 Application Fingerprint]]&lt;br /&gt;
&lt;br /&gt;
Application fingerprinting is the first step of the Information Gathering process; knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing. &lt;br /&gt;
&lt;br /&gt;
[[Application Discovery AoC|4.2.2 Application Discovery]]&lt;br /&gt;
&lt;br /&gt;
Application discovery is an activity oriented to the identification of the web applications hosted on a web server/application server.&amp;lt;br&amp;gt;&lt;br /&gt;
This analysis is important because often there is no direct link pointing to the main application backend. Discovery analysis can be useful to reveal details such as web apps used for administrative purposes. In addition, it can reveal old versions of files or artifacts such as undeleted, obsolete scripts crafted during the test/development phase or as the result of maintenance.&lt;br /&gt;
&lt;br /&gt;
[[Spidering and googling AoC|4.2.3 Spidering and Googling]]&lt;br /&gt;
&lt;br /&gt;
This phase of the Information Gathering process consists of browsing and capturing resources related to the application being tested. Search engines, such as Google, can be used to discover issues related to the web application structure or error pages produced by the application that have been exposed to the public domain.&lt;br /&gt;
&lt;br /&gt;
[[Testing for Error Code|4.2.4 Analysis of Error Code]]&lt;br /&gt;
&lt;br /&gt;
Web applications may divulge information during a penetration test which is not intended to be seen by an end user. Information such as error codes can inform the tester about technologies and products being used by the application.&amp;lt;br&amp;gt;&lt;br /&gt;
In many cases, error codes can be easily invoked without the need for specialist skills or tools due to bad exception handling design and coding. &lt;br /&gt;
&lt;br /&gt;
[[Infrastructure configuration management testing AoC|4.2.5 Infrastructure Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
Often analysis of the infrastructure and topology architecture can reveal a great deal about a web application. Information such as source code, HTTP methods permitted, administrative functionality, authentication methods and infrastructural configurations can be obtained.&amp;lt;br&amp;gt;&lt;br /&gt;
Clearly, focusing only on the web application will not be an exhaustive test: it cannot be as comprehensive as a broader infrastructure analysis. &lt;br /&gt;
&lt;br /&gt;
[[SSL/TLS Testing AoC|4.2.5.1 SSL/TLS Testing]]&lt;br /&gt;
&lt;br /&gt;
SSL and TLS are two protocols that provide, with the support of cryptography, secure channels for the protection, confidentiality, and authentication of the information being transmitted.&amp;lt;br&amp;gt;&lt;br /&gt;
Considering the criticality of these security implementations, it is important to verify the usage of a strong cipher algorithm and its proper implementation.&lt;br /&gt;
&lt;br /&gt;
[[DB Listener Testing AoC|4.2.5.2 DB Listener Testing]]&lt;br /&gt;
&lt;br /&gt;
During the configuration of a database server, many DB administrators do not adequately consider the security of the DB listener component. The listener could reveal sensitive data as well as configuration settings or running database instances if insecurely configured and probed with manual or automated techniques. Information revealed will often be useful to a tester serving as input to more impacting follow-on tests.&lt;br /&gt;
&lt;br /&gt;
[[Application configuration management testing AoC|4.2.6 Application Configuration Management Testing]]&lt;br /&gt;
&lt;br /&gt;
Web applications hide some information that is usually not considered during the development or configuration of the application itself.&amp;lt;br&amp;gt;&lt;br /&gt;
This data can be discovered in the source code, in the log files or in the default error codes of the web servers. A correct approach to this topic is fundamental during a security assessment.&lt;br /&gt;
&lt;br /&gt;
[[File extensions handling AoC|4.2.6.1 File Extensions Handling]]&lt;br /&gt;
&lt;br /&gt;
The file extensions present in a web server or a web application make it possible to identify the technologies which compose the target application, e.g. jsp and asp extensions. File extensions can also expose additional systems connected to the application.&lt;br /&gt;
&lt;br /&gt;
[[Old file testing AoC|4.2.6.2 Old, Backup and Unreferenced files]]&lt;br /&gt;
&lt;br /&gt;
Redundant, readable and downloadable files on a web server, such as old, backup and renamed files, are a big source of information leakage. It is necessary to verify the presence of these files because they may contain parts of source code, installation paths as well as passwords for applications and/or databases.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing:_Introduction_and_objectives&amp;diff=15818</id>
		<title>Testing: Introduction and objectives</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing:_Introduction_and_objectives&amp;diff=15818"/>
				<updated>2007-01-26T13:50:55Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents#Web_Application_Penetration_Testing Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
This Chapter describes the OWASP Web Application Penetration testing methodology and explains how to test each vulnerability.&lt;br /&gt;
&lt;br /&gt;
'''What is Web Application Penetration Testing?'''&amp;lt;br&amp;gt;&lt;br /&gt;
A penetration test is a method of evaluating the security of a computer system or network by simulating an attack. A Web Application Penetration Test focuses only on evaluating the security of a web application.&amp;lt;br&amp;gt;&lt;br /&gt;
The process involves an active analysis of the application for any weaknesses, technical flaws or vulnerabilities. Any security issues that are found will be presented to the system owner together with an assessment of their impact and often with a proposal for mitigation or a technical solution.&lt;br /&gt;
&lt;br /&gt;
'''What is a vulnerability?'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given that an application owns a set of assets (resources of value such as the data in a database or on the file system), a vulnerability is a weakness in an asset that makes a threat possible.&lt;br /&gt;
So a threat is a potential occurrence that may harm an asset by exploiting a vulnerability.&lt;br /&gt;
A test is an action that tends to show a vulnerability in the application.&lt;br /&gt;
&lt;br /&gt;
'''Our approach in writing this guide'''&lt;br /&gt;
&lt;br /&gt;
The OWASP approach is Open and Collaborative:&lt;br /&gt;
* Open: every security expert can participate with his experience in the project. Everything is free.&lt;br /&gt;
* Collaborative: we usually perform brainstorming before the articles are written so we can share our ideas and develop a collective vision of the project. That means rough consensus, wider audience and participation.&amp;lt;br&amp;gt;&lt;br /&gt;
This approach tends to create a defined Testing Methodology that will be:&lt;br /&gt;
* Consistent&lt;br /&gt;
* Reproducible&lt;br /&gt;
* Under quality control&amp;lt;br&amp;gt;&lt;br /&gt;
The problems that we want to be addressed are:&lt;br /&gt;
* Document all&lt;br /&gt;
* Test all&lt;br /&gt;
We think it is important to use a method to test all known vulnerabilities and document all the pen test activities.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''What is the OWASP testing methodology?'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Penetration testing will never be an exact science where a complete list of all possible issues that should be tested can be defined. Indeed, penetration testing is only an appropriate technique for testing the security of web applications under certain circumstances. &lt;br /&gt;
The goal is to collect all the possible testing techniques, explain them and keep the guide updated.&amp;lt;br&amp;gt;&lt;br /&gt;
The OWASP Web Application Penetration Testing method is based on the black box approach: the tester knows nothing, or very little, about the application to be tested.&lt;br /&gt;
The testing model consists of:&lt;br /&gt;
* Tester: Who performs the testing activities &lt;br /&gt;
* Tools and methodology: The core of this Testing Guide project&lt;br /&gt;
* Application: The black box to test&lt;br /&gt;
The test is divided into 2 phases:&lt;br /&gt;
* Passive mode: in the passive mode the tester tries to understand the application's logic and plays with the application; a tool such as an HTTP proxy can be used for information gathering, observing all the HTTP requests and responses. At the end of this phase the tester should understand all the access points (gates) of the application (e.g., HTTP headers, parameters, cookies). For example, the tester could find the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
https://www.example.com/login/Authentic_Form.html&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This may indicate an authentication form in which the application requests a username and a password. &amp;lt;br&amp;gt;&lt;br /&gt;
The following parameters represent two access points (gates) to the application:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://www.example.com/Appx.jsp?a=1&amp;amp;b=1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
In this case, the application shows two gates (parameters a and b).&lt;br /&gt;
All the gates found in this phase represent a point of testing. A spreadsheet with the directory tree of the application and all the access points would be useful for the second phase.&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
* Active mode: in this phase the tester begins to test using the methodology described in the following paragraphs.&lt;br /&gt;
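As a sketch of cataloguing the gates found in passive mode (using the example URL from the text), each query parameter name can be recorded as an access point to test:&lt;br /&gt;

```python
# Record each query parameter of a discovered URL as a "gate" (test point).
from urllib.parse import urlsplit, parse_qsl


def gates(url: str) -> dict:
    """Map a URL to its path and the parameter names (gates) it exposes."""
    parts = urlsplit(url)
    return {"path": parts.path, "gates": [k for k, _ in parse_qsl(parts.query)]}


print(gates("http://www.example.com/Appx.jsp?a=1&b=1"))
# {'path': '/Appx.jsp', 'gates': ['a', 'b']}
```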
&lt;br /&gt;
We have split the set of tests into 8 sub-categories:&lt;br /&gt;
*Information Gathering &lt;br /&gt;
*Business Logic Testing&lt;br /&gt;
*Authentication Testing &lt;br /&gt;
*Session Management Testing&lt;br /&gt;
*Data Validation Testing &lt;br /&gt;
*Denial of Service Testing &lt;br /&gt;
*Web Services Testing &lt;br /&gt;
*AJAX Testing &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Here is the list of tests that we will explain in the next paragraphs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:Table1.PNG]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:Table2.PNG]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:Table3.PNG]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=The_OWASP_Testing_Framework&amp;diff=15817</id>
		<title>The OWASP Testing Framework</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=The_OWASP_Testing_Framework&amp;diff=15817"/>
				<updated>2007-01-26T13:48:27Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Phase 3B: Code Reviews */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
&lt;br /&gt;
This section describes a typical testing framework that can be developed within an organization. It can be seen as a reference framework that comprises techniques and tasks that are appropriate at various phases of the software development life cycle (SDLC). Companies and project teams can use this model to develop their own testing framework and to scope testing services from vendors. This framework should not be seen as prescriptive, but as a flexible approach that can be extended and molded to fit an organization’s development process and culture.&lt;br /&gt;
&lt;br /&gt;
This section aims to help organizations build a complete strategic testing process, and is not aimed at consultants or contractors who tend to be engaged in more tactical, specific areas of testing. &lt;br /&gt;
&lt;br /&gt;
It is critical to understand why building an end-to-end testing framework is crucial to assessing and improving software security. Howard and LeBlanc note in ''Writing Secure Code'' that issuing a security bulletin costs Microsoft at least $100,000, and it costs their customers collectively far more than that to implement the security patches. They also note that the US government’s CyberCrime web site (&amp;lt;u&amp;gt;http://www.cybercrime.gov/cccases.html&amp;lt;/u&amp;gt;) details recent criminal cases and the loss to organizations. Typical losses far exceed USD $100,000.&lt;br /&gt;
&lt;br /&gt;
With economics like this, it is little wonder that software vendors are moving from solely performing black box security testing, which can only be performed on applications that have already been developed, to concentrating on the early cycles of application development such as definition, design, and development.&lt;br /&gt;
&lt;br /&gt;
Many security practitioners still see security testing as being in the realm of penetration testing. As shown in Chapter 3 and by the framework, while penetration testing has a role to play, it is generally inefficient at finding bugs and relies excessively on the skill of the tester. It should only be considered as an implementation technique, or to raise awareness of production issues. To improve the security of applications, the security quality of the software must be improved. That means testing the security at the definition, design, develop, deploy, and maintenance stages, and not relying on the costly strategy of waiting until code is completely built. &lt;br /&gt;
&lt;br /&gt;
As discussed in the introduction of this document, there are many development methodologies such as the Rational Unified Process, eXtreme and Agile development, and traditional waterfall methodologies. The intent of this guide is to suggest neither a particular development methodology nor provide specific guidance that adheres to any particular methodology. Instead, we are presenting a generic development model, and the reader should follow it according to their company process.&lt;br /&gt;
&lt;br /&gt;
This testing framework consists of the following activities that should take place:&lt;br /&gt;
&lt;br /&gt;
* Before Development Begins&lt;br /&gt;
* During Definition and Design&lt;br /&gt;
* During Development&lt;br /&gt;
* During Deployment&lt;br /&gt;
* Maintenance and Operations&lt;br /&gt;
&lt;br /&gt;
==Phase 1: Before Development Begins==&lt;br /&gt;
&lt;br /&gt;
Before application development has started:&lt;br /&gt;
&lt;br /&gt;
* Test to ensure that there is an adequate SDLC where security is inherent.&lt;br /&gt;
* Test to ensure that the appropriate policy and standards are in place for the development team.&lt;br /&gt;
* Develop the metrics and measurement criteria. &lt;br /&gt;
&lt;br /&gt;
===Phase 1A: Policies and Standards Review===&lt;br /&gt;
&lt;br /&gt;
Ensure that there are appropriate policies, standards, and documentation in place. Documentation is extremely important as it gives development teams guidelines and policies that they can follow. &lt;br /&gt;
&lt;br /&gt;
''People can only do the right thing if they know what the right thing is.''&lt;br /&gt;
&lt;br /&gt;
If the application is to be developed in Java, it is essential that there is a Java secure coding standard. If the application is to use cryptography, it is essential that there is a cryptography standard. No policies or standards can cover every situation that the development team will face. By documenting the common and predictable issues, there will be fewer decisions that need to be made during the development process. &lt;br /&gt;
&lt;br /&gt;
===Phase 1B: Develop Measurement and Metrics Criteria (Ensure Traceability)===&lt;br /&gt;
&lt;br /&gt;
Before development begins, plan the measurement program. Defining the criteria that need to be measured provides visibility into defects in both the process and the product. It is essential to define the metrics before development begins, as there may be a need to modify the process in order to capture the data.&lt;br /&gt;
&lt;br /&gt;
==Phase 2: During Definition and Design==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Phase 2A: Security Requirements Review===&lt;br /&gt;
&lt;br /&gt;
Security requirements define how an application works from a security perspective. It is essential that the security requirements be tested. Testing in this case means testing the assumptions that are made in the requirements, and testing to see if there are gaps in the requirements definitions. &lt;br /&gt;
&lt;br /&gt;
For example, if there is a security requirement that states that users must be registered before they can get access to the whitepapers section of a website, does this mean that the user must be registered with the system, or should the user be authenticated? Ensure that requirements are as unambiguous as possible. &lt;br /&gt;
&lt;br /&gt;
When looking for requirements gaps, consider looking at security mechanisms such as:&lt;br /&gt;
&lt;br /&gt;
* User Management (password reset etc.)&lt;br /&gt;
* Authentication&lt;br /&gt;
* Authorization&lt;br /&gt;
* Data Confidentiality&lt;br /&gt;
* Integrity&lt;br /&gt;
* Accountability&lt;br /&gt;
* Session Management&lt;br /&gt;
* Transport Security&lt;br /&gt;
* Tiered System Segregation&lt;br /&gt;
* Privacy&lt;br /&gt;
&lt;br /&gt;
===Phase 2B: Design and Architecture Review===&lt;br /&gt;
&lt;br /&gt;
Applications should have a documented design and architecture. By documented we mean models, textual documents, and other similar artifacts. It is essential to test these artifacts to ensure that the design and architecture enforce the appropriate level of security as defined in the requirements. &lt;br /&gt;
&lt;br /&gt;
Identifying security flaws in the design phase is not only one of the most cost-efficient places to identify flaws, but can be one of the most effective places to make changes. For example, if the design calls for authorization decisions to be made in multiple places, it may be appropriate to consider a central authorization component. If the application performs data validation in multiple places, it may be appropriate to develop a central validation framework (fixing input validation in one place, rather than hundreds of places, is far cheaper).&lt;br /&gt;
&lt;br /&gt;
If weaknesses are discovered, they should be given to the system architect for alternative approaches.&lt;br /&gt;
&lt;br /&gt;
===Phase 2C: Create and Review UML Models===&lt;br /&gt;
&lt;br /&gt;
Once the design and architecture is complete, build Unified Modeling Language (UML) models that describe how the application works. In some cases, these may already be available. Use these models to confirm with the systems designers an exact understanding of how the application works. If weaknesses are discovered, they should be given to the system architect for alternative approaches.&lt;br /&gt;
&lt;br /&gt;
===Phase 2D: Create and Review Threat Models===&lt;br /&gt;
&lt;br /&gt;
Armed with design and architecture reviews, and the UML models explaining exactly how the system works, undertake a threat modeling exercise. Develop realistic threat scenarios. Analyze the design and architecture to ensure that these threats have been mitigated, accepted by the business, or assigned to a third party, such as an insurance firm. When identified threats have no mitigation strategies, revisit the design and architecture with the systems architect to modify the design.&lt;br /&gt;
&lt;br /&gt;
==Phase 3: During Development==&lt;br /&gt;
&lt;br /&gt;
Theoretically, development is the implementation of a design. However, in the real world, many design decisions are made during code development. These are often smaller decisions that were either too detailed to be described in the design, or issues for which no policy or standards guidance was offered. If the design and architecture were not adequate, the developer will be faced with many decisions. If there were insufficient policies and standards, the developer will be faced with even more.&lt;br /&gt;
&lt;br /&gt;
===Phase 3A: Code Walkthroughs===&lt;br /&gt;
&lt;br /&gt;
The security team should perform a code walkthrough with the developers, and in some cases, the system architects. A code walkthrough is a high-level walkthrough of the code where the developers can explain the logic and flow. It allows the code review team to obtain a general understanding of the code, and allows the developers to explain why certain things were developed the way they were. &lt;br /&gt;
&lt;br /&gt;
The purpose is not to perform a code review, but to understand the flow at a high-level, the layout, and the structure of the code that makes up the application.&lt;br /&gt;
&lt;br /&gt;
===Phase 3B: Code Reviews===&lt;br /&gt;
&lt;br /&gt;
Armed with a good understanding of how the code is structured and why certain things were coded the way they were, the tester can now examine the actual code for security defects. &lt;br /&gt;
&lt;br /&gt;
Static code reviews validate the code against a set of checklists, including:&lt;br /&gt;
&lt;br /&gt;
* Business requirements for availability, confidentiality, and integrity.&lt;br /&gt;
* OWASP Guide or Top 10 Checklists (depending on the depth of the review) for technical exposures.&lt;br /&gt;
* Specific issues relating to the language or framework in use, such as the Scarlet paper for PHP or Microsoft Secure Coding checklists for ASP.NET.&lt;br /&gt;
* Any industry specific requirements, such as Sarbanes-Oxley 404, COPPA, ISO 17799, APRA, HIPAA, Visa Merchant guidelines or other regulatory regimes.&lt;br /&gt;
In terms of return on resources invested (mostly time), static code reviews produce far higher quality returns than any other security review method, and rely least on the skill of the reviewer, within reason. However, they are not a silver bullet, and need to be considered carefully within a full-spectrum testing regime. &lt;br /&gt;
&lt;br /&gt;
For more details on OWASP checklists, please refer to the OWASP Guide for Secure Web Applications, or the latest edition of the OWASP Top 10.&lt;br /&gt;
&lt;br /&gt;
==Phase 4: During Deployment==&lt;br /&gt;
&lt;br /&gt;
===Phase 4A: Application Penetration Testing===&lt;br /&gt;
&lt;br /&gt;
Having tested the requirements, analyzed the design, and performed code review, it might be assumed that all issues have been caught. Hopefully, this is the case, but penetration testing the application after it has been deployed provides a last check to ensure that nothing has been missed. &lt;br /&gt;
&lt;br /&gt;
===Phase 4B: Configuration Management Testing===&lt;br /&gt;
&lt;br /&gt;
The application penetration test should include checking how the infrastructure was deployed and secured. While the application itself may be secure, a small aspect of the configuration could still be at a default installation stage and vulnerable to exploitation.&lt;br /&gt;
&lt;br /&gt;
==Phase 5: Maintenance and Operations==&lt;br /&gt;
&lt;br /&gt;
===Phase 5A: Conduct Operational Management Reviews===&lt;br /&gt;
&lt;br /&gt;
There needs to be a process in place that details how the operational side of the application and infrastructure is managed.&lt;br /&gt;
&lt;br /&gt;
===Phase 5B: Conduct Periodic Health Checks===&lt;br /&gt;
&lt;br /&gt;
Monthly or quarterly health checks should be performed on both the application and infrastructure to ensure no new security risks have been introduced and that the level of security is still intact.&lt;br /&gt;
&lt;br /&gt;
===Phase 5C: Ensure Change Verification===&lt;br /&gt;
&lt;br /&gt;
After every change has been approved and tested in the QA environment and deployed into the production environment, it is vital that, as part of the change management process, the change is checked to ensure that the level of security hasn’t been affected.&lt;br /&gt;
&lt;br /&gt;
==A Typical SDLC Testing Workflow==&lt;br /&gt;
&lt;br /&gt;
The following figure shows a typical SDLC Testing Workflow.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[[Image: Typical SDLC Testing Workflow.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=The_OWASP_Testing_Framework&amp;diff=15816</id>
		<title>The OWASP Testing Framework</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=The_OWASP_Testing_Framework&amp;diff=15816"/>
				<updated>2007-01-26T13:47:40Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Phase 2C: Create and Review UML Models */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
&lt;br /&gt;
This section describes a typical testing framework that can be developed within an organization. It can be seen as a reference framework that comprises techniques and tasks that are appropriate at various phases of the software development life cycle (SDLC). Companies and project teams can use this model to develop their own testing framework and to scope testing services from vendors. This framework should not be seen as prescriptive, but as a flexible approach that can be extended and molded to fit an organization’s development process and culture.&lt;br /&gt;
&lt;br /&gt;
This section aims to help organizations build a complete strategic testing process, and is not aimed at consultants or contractors who tend to be engaged in more tactical, specific areas of testing. &lt;br /&gt;
&lt;br /&gt;
It is critical to understand why building an end-to-end testing framework is crucial to assessing and improving software security. Howard and LeBlanc note in ''Writing Secure Code'' that issuing a security bulletin costs Microsoft at least $100,000, and that it costs their customers collectively far more than that to implement the security patches. They also note that the US government’s CyberCrime web site (&amp;lt;u&amp;gt;http://www.cybercrime.gov/cccases.html&amp;lt;/u&amp;gt;) details recent criminal cases and the loss to organizations. Typical losses far exceed USD 100,000.&lt;br /&gt;
&lt;br /&gt;
With economics like this, it is little wonder that software vendors are moving from solely performing black box security testing, which can only be performed on applications that have already been developed, to concentrating on the early cycles of application development such as definition, design, and development.&lt;br /&gt;
&lt;br /&gt;
Many security practitioners still see security testing in the realm of penetration testing. As shown in Chapter 3, and by this framework, while penetration testing has a role to play, it is generally inefficient at finding bugs and relies excessively on the skill of the tester. It should only be considered as an implementation technique, or to raise awareness of production issues. To improve the security of applications, the security quality of the software must be improved. That means testing security at the definition, design, development, deployment, and maintenance stages, and not relying on the costly strategy of waiting until code is completely built.&lt;br /&gt;
&lt;br /&gt;
As discussed in the introduction of this document, there are many development methodologies such as the Rational Unified Process, eXtreme and Agile development, and traditional waterfall methodologies. The intent of this guide is to suggest neither a particular development methodology nor provide specific guidance that adheres to any particular methodology. Instead, we are presenting a generic development model, and the reader should follow it according to their company process.&lt;br /&gt;
&lt;br /&gt;
This testing framework consists of the following activities that should take place:&lt;br /&gt;
&lt;br /&gt;
* Before Development Begins&lt;br /&gt;
* During Definition and Design&lt;br /&gt;
* During Development&lt;br /&gt;
* During Deployment&lt;br /&gt;
* Maintenance and Operations&lt;br /&gt;
&lt;br /&gt;
==Phase 1: Before Development Begins==&lt;br /&gt;
&lt;br /&gt;
Before application development has started:&lt;br /&gt;
&lt;br /&gt;
* Test to ensure that there is an adequate SDLC where security is inherent.&lt;br /&gt;
* Test to ensure that the appropriate policy and standards are in place for the development team.&lt;br /&gt;
* Develop the metrics and measurement criteria. &lt;br /&gt;
&lt;br /&gt;
===Phase 1A: Policies and Standards Review===&lt;br /&gt;
&lt;br /&gt;
Ensure that there are appropriate policies, standards, and documentation in place. Documentation is extremely important as it gives development teams guidelines and policies that they can follow. &lt;br /&gt;
&lt;br /&gt;
''People can only do the right thing if they know what the right thing is.''&lt;br /&gt;
&lt;br /&gt;
If the application is to be developed in Java, it is essential that there is a Java secure coding standard. If the application is to use cryptography, it is essential that there is a cryptography standard. No policies or standards can cover every situation that the development team will face. By documenting the common and predictable issues, there will be fewer decisions that need to be made during the development process. &lt;br /&gt;
&lt;br /&gt;
===Phase 1B: Develop Measurement and Metrics Criteria (Ensure Traceability)===&lt;br /&gt;
&lt;br /&gt;
Before development begins, plan the measurement program. Defining the criteria to be measured provides visibility into defects in both the process and the product. It is essential to define the metrics before development begins, as the process may need to be modified to capture the data.&lt;br /&gt;
&lt;br /&gt;
==Phase 2: During Definition and Design==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Phase 2A: Security Requirements Review===&lt;br /&gt;
&lt;br /&gt;
Security requirements define how an application works from a security perspective. It is essential that the security requirements be tested. Testing in this case means testing the assumptions that are made in the requirements, and testing to see if there are gaps in the requirements definitions. &lt;br /&gt;
&lt;br /&gt;
For example, if there is a security requirement that states that users must be registered before they can get access to the whitepapers section of a website, does this mean that the user must be registered with the system, or should the user be authenticated? Ensure that requirements are as unambiguous as possible. &lt;br /&gt;
&lt;br /&gt;
When looking for requirements gaps, consider looking at security mechanisms such as:&lt;br /&gt;
&lt;br /&gt;
* User Management (password reset etc.)&lt;br /&gt;
* Authentication&lt;br /&gt;
* Authorization&lt;br /&gt;
* Data Confidentiality&lt;br /&gt;
* Integrity&lt;br /&gt;
* Accountability&lt;br /&gt;
* Session Management&lt;br /&gt;
* Transport Security&lt;br /&gt;
* Tiered System Segregation&lt;br /&gt;
* Privacy&lt;br /&gt;
&lt;br /&gt;
===Phase 2B: Design and Architecture Review===&lt;br /&gt;
&lt;br /&gt;
Applications should have a documented design and architecture. By documented we mean models, textual documents, and other similar artifacts. It is essential to test these artifacts to ensure that the design and architecture enforce the appropriate level of security as defined in the requirements. &lt;br /&gt;
&lt;br /&gt;
Identifying security flaws in the design phase is not only one of the most cost-efficient places to identify flaws, but can be one of the most effective places to make changes. For example, if the design calls for authorization decisions to be made in multiple places, it may be appropriate to consider a central authorization component. If the application performs data validation in multiple places, it may be appropriate to develop a central validation framework (fixing input validation in one place, rather than hundreds of places, is far cheaper).&lt;br /&gt;
&lt;br /&gt;
If weaknesses are discovered, they should be given to the system architect for alternative approaches.&lt;br /&gt;
&lt;br /&gt;
===Phase 2C: Create and Review UML Models===&lt;br /&gt;
&lt;br /&gt;
Once the design and architecture is complete, build Unified Modeling Language (UML) models that describe how the application works. In some cases, these may already be available. Use these models to confirm with the systems designers an exact understanding of how the application works. If weaknesses are discovered, they should be given to the system architect for alternative approaches.&lt;br /&gt;
&lt;br /&gt;
===Phase 2D: Create and Review Threat Models===&lt;br /&gt;
&lt;br /&gt;
Armed with design and architecture reviews, and the UML models explaining exactly how the system works, undertake a threat modeling exercise. Develop realistic threat scenarios. Analyze the design and architecture to ensure that these threats have been mitigated, accepted by the business, or assigned to a third party, such as an insurance firm. When identified threats have no mitigation strategies, revisit the design and architecture with the systems architect to modify the design.&lt;br /&gt;
&lt;br /&gt;
==Phase 3: During Development==&lt;br /&gt;
&lt;br /&gt;
Theoretically, development is the implementation of a design. However, in the real world, many design decisions are made during code development. These are often smaller decisions that were either too detailed to be described in the design, or issues for which no policy or standards guidance was offered. If the design and architecture were not adequate, the developer will be faced with many decisions. If there were insufficient policies and standards, the developer will be faced with even more.&lt;br /&gt;
&lt;br /&gt;
===Phase 3A: Code Walkthroughs===&lt;br /&gt;
&lt;br /&gt;
The security team should perform a code walkthrough with the developers, and in some cases, the system architects. A code walkthrough is a high-level walkthrough of the code where the developers can explain the logic and flow. It allows the code review team to obtain a general understanding of the code, and allows the developers to explain why certain things were developed the way they were. &lt;br /&gt;
&lt;br /&gt;
The purpose is not to perform a code review, but to understand the flow at a high-level, the layout, and the structure of the code that makes up the application.&lt;br /&gt;
&lt;br /&gt;
===Phase 3B: Code Reviews===&lt;br /&gt;
&lt;br /&gt;
Armed with a good understanding of how the code is structured and why certain things were coded the way they were, the tester can now examine the actual code for security defects. &lt;br /&gt;
&lt;br /&gt;
Static code reviews validate the code against a set of checklists, including:&lt;br /&gt;
&lt;br /&gt;
* Business requirements for availability, confidentiality, and integrity&lt;br /&gt;
* OWASP Guide or Top 10 Checklists (depending on the depth of the review) for technical exposures&lt;br /&gt;
* Specific issues relating to the language or framework in use, such as the Scarlet paper for PHP or Microsoft Secure Coding checklists for ASP.NET&lt;br /&gt;
* Any industry specific requirements, such as Sarbanes-Oxley 404, COPPA, ISO 17799, APRA, HIPAA, Visa Merchant guidelines or other regulatory regimes&lt;br /&gt;
In terms of return on resources invested (mostly time), static code reviews produce far higher quality returns than any other security review method, and rely least on the skill of the reviewer, within reason. However, they are not a silver bullet, and need to be considered carefully within a full-spectrum testing regime. &lt;br /&gt;
&lt;br /&gt;
For more details on OWASP checklists, please refer to the OWASP Guide for Secure Web Applications, or the latest edition of the OWASP Top 10.&lt;br /&gt;
&lt;br /&gt;
==Phase 4: During Deployment==&lt;br /&gt;
&lt;br /&gt;
===Phase 4A: Application Penetration Testing===&lt;br /&gt;
&lt;br /&gt;
Having tested the requirements, analyzed the design, and performed code review, it might be assumed that all issues have been caught. Hopefully, this is the case, but penetration testing the application after it has been deployed provides a last check to ensure that nothing has been missed. &lt;br /&gt;
&lt;br /&gt;
===Phase 4B: Configuration Management Testing===&lt;br /&gt;
&lt;br /&gt;
The application penetration test should include checking how the infrastructure was deployed and secured. While the application itself may be secure, a small aspect of the configuration could still be at a default installation stage and vulnerable to exploitation.&lt;br /&gt;
&lt;br /&gt;
==Phase 5: Maintenance and Operations==&lt;br /&gt;
&lt;br /&gt;
===Phase 5A: Conduct Operational Management Reviews===&lt;br /&gt;
&lt;br /&gt;
There needs to be a process in place that details how the operational side of the application and infrastructure is managed.&lt;br /&gt;
&lt;br /&gt;
===Phase 5B: Conduct Periodic Health Checks===&lt;br /&gt;
&lt;br /&gt;
Monthly or quarterly health checks should be performed on both the application and infrastructure to ensure no new security risks have been introduced and that the level of security is still intact.&lt;br /&gt;
&lt;br /&gt;
===Phase 5C: Ensure Change Verification===&lt;br /&gt;
&lt;br /&gt;
After every change has been approved and tested in the QA environment and deployed into the production environment, it is vital that, as part of the change management process, the change is checked to ensure that the level of security hasn’t been affected.&lt;br /&gt;
&lt;br /&gt;
==A Typical SDLC Testing Workflow==&lt;br /&gt;
&lt;br /&gt;
The following figure shows a typical SDLC Testing Workflow.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[[Image: Typical SDLC Testing Workflow.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=The_OWASP_Testing_Framework&amp;diff=15815</id>
		<title>The OWASP Testing Framework</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=The_OWASP_Testing_Framework&amp;diff=15815"/>
				<updated>2007-01-26T13:46:46Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Phase 2A: Security Requirements Review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
&lt;br /&gt;
This section describes a typical testing framework that can be developed within an organization. It can be seen as a reference framework that comprises techniques and tasks that are appropriate at various phases of the software development life cycle (SDLC). Companies and project teams can use this model to develop their own testing framework and to scope testing services from vendors. This framework should not be seen as prescriptive, but as a flexible approach that can be extended and molded to fit an organization’s development process and culture.&lt;br /&gt;
&lt;br /&gt;
This section aims to help organizations build a complete strategic testing process, and is not aimed at consultants or contractors who tend to be engaged in more tactical, specific areas of testing. &lt;br /&gt;
&lt;br /&gt;
It is critical to understand why building an end-to-end testing framework is crucial to assessing and improving software security. Howard and LeBlanc note in ''Writing Secure Code'' that issuing a security bulletin costs Microsoft at least $100,000, and that it costs their customers collectively far more than that to implement the security patches. They also note that the US government’s CyberCrime web site (&amp;lt;u&amp;gt;http://www.cybercrime.gov/cccases.html&amp;lt;/u&amp;gt;) details recent criminal cases and the loss to organizations. Typical losses far exceed USD 100,000.&lt;br /&gt;
&lt;br /&gt;
With economics like this, it is little wonder that software vendors are moving from solely performing black box security testing, which can only be performed on applications that have already been developed, to concentrating on the early cycles of application development such as definition, design, and development.&lt;br /&gt;
&lt;br /&gt;
Many security practitioners still see security testing in the realm of penetration testing. As shown in Chapter 3, and by this framework, while penetration testing has a role to play, it is generally inefficient at finding bugs and relies excessively on the skill of the tester. It should only be considered as an implementation technique, or to raise awareness of production issues. To improve the security of applications, the security quality of the software must be improved. That means testing security at the definition, design, development, deployment, and maintenance stages, and not relying on the costly strategy of waiting until code is completely built.&lt;br /&gt;
&lt;br /&gt;
As discussed in the introduction of this document, there are many development methodologies such as the Rational Unified Process, eXtreme and Agile development, and traditional waterfall methodologies. The intent of this guide is to suggest neither a particular development methodology nor provide specific guidance that adheres to any particular methodology. Instead, we are presenting a generic development model, and the reader should follow it according to their company process.&lt;br /&gt;
&lt;br /&gt;
This testing framework consists of the following activities that should take place:&lt;br /&gt;
&lt;br /&gt;
* Before Development Begins&lt;br /&gt;
* During Definition and Design&lt;br /&gt;
* During Development&lt;br /&gt;
* During Deployment&lt;br /&gt;
* Maintenance and Operations&lt;br /&gt;
&lt;br /&gt;
==Phase 1: Before Development Begins==&lt;br /&gt;
&lt;br /&gt;
Before application development has started:&lt;br /&gt;
&lt;br /&gt;
* Test to ensure that there is an adequate SDLC where security is inherent.&lt;br /&gt;
* Test to ensure that the appropriate policy and standards are in place for the development team.&lt;br /&gt;
* Develop the metrics and measurement criteria. &lt;br /&gt;
&lt;br /&gt;
===Phase 1A: Policies and Standards Review===&lt;br /&gt;
&lt;br /&gt;
Ensure that there are appropriate policies, standards, and documentation in place. Documentation is extremely important as it gives development teams guidelines and policies that they can follow. &lt;br /&gt;
&lt;br /&gt;
''People can only do the right thing if they know what the right thing is.''&lt;br /&gt;
&lt;br /&gt;
If the application is to be developed in Java, it is essential that there is a Java secure coding standard. If the application is to use cryptography, it is essential that there is a cryptography standard. No policies or standards can cover every situation that the development team will face. By documenting the common and predictable issues, there will be fewer decisions that need to be made during the development process. &lt;br /&gt;
&lt;br /&gt;
===Phase 1B: Develop Measurement and Metrics Criteria (Ensure Traceability)===&lt;br /&gt;
&lt;br /&gt;
Before development begins, plan the measurement program. Defining the criteria to be measured provides visibility into defects in both the process and the product. It is essential to define the metrics before development begins, as the process may need to be modified to capture the data.&lt;br /&gt;
&lt;br /&gt;
==Phase 2: During Definition and Design==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Phase 2A: Security Requirements Review===&lt;br /&gt;
&lt;br /&gt;
Security requirements define how an application works from a security perspective. It is essential that the security requirements be tested. Testing in this case means testing the assumptions that are made in the requirements, and testing to see if there are gaps in the requirements definitions. &lt;br /&gt;
&lt;br /&gt;
For example, if there is a security requirement that states that users must be registered before they can get access to the whitepapers section of a website, does this mean that the user must be registered with the system, or should the user be authenticated? Ensure that requirements are as unambiguous as possible. &lt;br /&gt;
&lt;br /&gt;
When looking for requirements gaps, consider looking at security mechanisms such as:&lt;br /&gt;
&lt;br /&gt;
* User Management (password reset etc.)&lt;br /&gt;
* Authentication&lt;br /&gt;
* Authorization&lt;br /&gt;
* Data Confidentiality&lt;br /&gt;
* Integrity&lt;br /&gt;
* Accountability&lt;br /&gt;
* Session Management&lt;br /&gt;
* Transport Security&lt;br /&gt;
* Tiered System Segregation&lt;br /&gt;
* Privacy&lt;br /&gt;
&lt;br /&gt;
===Phase 2B: Design and Architecture Review===&lt;br /&gt;
&lt;br /&gt;
Applications should have a documented design and architecture. By documented we mean models, textual documents, and other similar artifacts. It is essential to test these artifacts to ensure that the design and architecture enforce the appropriate level of security as defined in the requirements. &lt;br /&gt;
&lt;br /&gt;
Identifying security flaws in the design phase is not only one of the most cost-efficient places to identify flaws, but can be one of the most effective places to make changes. For example, if the design calls for authorization decisions to be made in multiple places, it may be appropriate to consider a central authorization component. If the application performs data validation in multiple places, it may be appropriate to develop a central validation framework (fixing input validation in one place, rather than hundreds of places, is far cheaper).&lt;br /&gt;
&lt;br /&gt;
If weaknesses are discovered, they should be given to the system architect for alternative approaches.&lt;br /&gt;
&lt;br /&gt;
===Phase 2C: Create and Review UML Models===&lt;br /&gt;
&lt;br /&gt;
Once the design and architecture is complete, build UML models that describe how the application works. In some cases, these may already be available. Use these models to confirm with the systems designers an exact understanding of how the application works. If weaknesses are discovered, they should be given to the system architect for alternative approaches. &lt;br /&gt;
&lt;br /&gt;
===Phase 2D: Create and Review Threat Models===&lt;br /&gt;
&lt;br /&gt;
Armed with design and architecture reviews, and the UML models explaining exactly how the system works, undertake a threat modeling exercise. Develop realistic threat scenarios. Analyze the design and architecture to ensure that these threats have been mitigated, accepted by the business, or assigned to a third party, such as an insurance firm. When identified threats have no mitigation strategies, revisit the design and architecture with the systems architect to modify the design.&lt;br /&gt;
&lt;br /&gt;
==Phase 3: During Development==&lt;br /&gt;
&lt;br /&gt;
Theoretically, development is the implementation of a design. However, in the real world, many design decisions are made during code development. These are often smaller decisions that were either too detailed to be described in the design, or issues for which no policy or standards guidance was offered. If the design and architecture were not adequate, the developer will be faced with many decisions. If there were insufficient policies and standards, the developer will be faced with even more.&lt;br /&gt;
&lt;br /&gt;
===Phase 3A: Code Walkthroughs===&lt;br /&gt;
&lt;br /&gt;
The security team should perform a code walkthrough with the developers, and in some cases, the system architects. A code walkthrough is a high-level walkthrough of the code where the developers can explain the logic and flow. It allows the code review team to obtain a general understanding of the code, and allows the developers to explain why certain things were developed the way they were. &lt;br /&gt;
&lt;br /&gt;
The purpose is not to perform a code review, but to understand the flow at a high-level, the layout, and the structure of the code that makes up the application.&lt;br /&gt;
&lt;br /&gt;
===Phase 3B: Code Reviews===&lt;br /&gt;
&lt;br /&gt;
Armed with a good understanding of how the code is structured and why certain things were coded the way they were, the tester can now examine the actual code for security defects. &lt;br /&gt;
&lt;br /&gt;
Static code reviews validate the code against a set of checklists, including:&lt;br /&gt;
&lt;br /&gt;
* Business requirements for availability, confidentiality, and integrity&lt;br /&gt;
* OWASP Guide or Top 10 Checklists (depending on the depth of the review) for technical exposures&lt;br /&gt;
* Specific issues relating to the language or framework in use, such as the Scarlet paper for PHP or Microsoft Secure Coding checklists for ASP.NET&lt;br /&gt;
* Any industry specific requirements, such as Sarbanes-Oxley 404, COPPA, ISO 17799, APRA, HIPAA, Visa Merchant guidelines or other regulatory regimes&lt;br /&gt;
In terms of return on resources invested (mostly time), static code reviews produce far higher-quality returns than any other security review method and, within reason, rely least on the skill of the reviewer. However, they are not a silver bullet and need to be considered carefully within a full-spectrum testing regime. &lt;br /&gt;
&lt;br /&gt;
For more details on OWASP checklists, please refer to the OWASP Guide for Secure Web Applications or the latest edition of the OWASP Top 10.&lt;br /&gt;
&lt;br /&gt;
==Phase 4: During Deployment==&lt;br /&gt;
&lt;br /&gt;
===Phase 4A: Application Penetration Testing===&lt;br /&gt;
&lt;br /&gt;
After the requirements have been tested, the design analyzed, and a code review performed, it might be assumed that all issues have been caught. Hopefully this is the case, but penetration testing the application after it has been deployed provides a final check to ensure that nothing has been missed. &lt;br /&gt;
&lt;br /&gt;
===Phase 4B: Configuration Management Testing===&lt;br /&gt;
&lt;br /&gt;
The application penetration test should include checking how the infrastructure was deployed and secured. While the application may be secure, a small aspect of the configuration could still be at a default install stage and vulnerable to exploitation.&lt;br /&gt;
&lt;br /&gt;
==Phase 5: Maintenance and Operations==&lt;br /&gt;
&lt;br /&gt;
===Phase 5A: Conduct Operational Management Reviews===&lt;br /&gt;
&lt;br /&gt;
There needs to be a process in place that details how the operational side of the application and infrastructure is managed.&lt;br /&gt;
&lt;br /&gt;
===Phase 5B: Conduct Periodic Health Checks===&lt;br /&gt;
&lt;br /&gt;
Monthly or quarterly health checks should be performed on both the application and infrastructure to ensure no new security risks have been introduced and that the level of security is still intact.&lt;br /&gt;
&lt;br /&gt;
===Phase 5C: Ensure Change Verification===&lt;br /&gt;
&lt;br /&gt;
After every change has been approved and tested in the QA environment and deployed into the production environment, it is vital that, as part of the change management process, the change is checked to ensure that the level of security has not been affected.&lt;br /&gt;
&lt;br /&gt;
==A Typical SDLC Testing Workflow==&lt;br /&gt;
&lt;br /&gt;
The following figure shows a typical SDLC Testing Workflow.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[[Image: Typical SDLC Testing Workflow.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=The_OWASP_Testing_Framework&amp;diff=15814</id>
		<title>The OWASP Testing Framework</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=The_OWASP_Testing_Framework&amp;diff=15814"/>
				<updated>2007-01-26T13:45:47Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Phase 1 — Before Development Begins */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
==Overview==&lt;br /&gt;
&lt;br /&gt;
This section describes a typical testing framework that can be developed within an organization. It can be seen as a reference framework that comprises techniques and tasks that are appropriate at various phases of the software development life cycle (SDLC). Companies and project teams can use this model to develop their own testing framework and to scope testing services from vendors. This framework should not be seen as prescriptive, but as a flexible approach that can be extended and molded to fit an organization’s development process and culture.&lt;br /&gt;
&lt;br /&gt;
This section aims to help organizations build a complete strategic testing process, and is not aimed at consultants or contractors who tend to be engaged in more tactical, specific areas of testing. &lt;br /&gt;
&lt;br /&gt;
It is critical to understand why building an end-to-end testing framework is crucial to assessing and improving software security. Howard and LeBlanc note in ''Writing Secure Code'' that issuing a security bulletin costs Microsoft at least $100,000, and it costs their customers collectively far more than that to implement the security patches. They also note that the US government’s CyberCrime web site (&amp;lt;u&amp;gt;http://www.cybercrime.gov/cccases.html&amp;lt;/u&amp;gt;) details recent criminal cases and the loss to organizations. Typical losses far exceed USD $100,000.&lt;br /&gt;
&lt;br /&gt;
With economics like this, it is little wonder that software vendors are moving from solely performing black box security testing, which can only be performed on applications that have already been developed, to concentrating on the early cycles of application development, such as definition, design, and development.&lt;br /&gt;
&lt;br /&gt;
Many security practitioners still see security testing as belonging to the realm of penetration testing. As shown in Chapter 3 and by this framework, while penetration testing has a role to play, it is generally inefficient at finding bugs and relies excessively on the skill of the tester. It should only be considered an implementation technique, or a way to raise awareness of production issues. To improve the security of applications, the security quality of the software must be improved. That means testing security at the definition, design, development, deployment, and maintenance stages, and not relying on the costly strategy of waiting until code is completely built. &lt;br /&gt;
&lt;br /&gt;
As discussed in the introduction of this document, there are many development methodologies, such as the Rational Unified Process, eXtreme and Agile development, and traditional waterfall methodologies. The intent of this guide is neither to suggest a particular development methodology nor to provide specific guidance that adheres to any particular methodology. Instead, we present a generic development model, which the reader should adapt to their company's process.&lt;br /&gt;
&lt;br /&gt;
This testing framework consists of the following activities that should take place:&lt;br /&gt;
&lt;br /&gt;
* Before Development Begins&lt;br /&gt;
* During Definition and Design&lt;br /&gt;
* During Development&lt;br /&gt;
* During Deployment&lt;br /&gt;
* Maintenance and Operations&lt;br /&gt;
&lt;br /&gt;
==Phase 1: Before Development Begins==&lt;br /&gt;
&lt;br /&gt;
Before application development has started:&lt;br /&gt;
&lt;br /&gt;
* Test to ensure that there is an adequate SDLC where security is inherent.&lt;br /&gt;
* Test to ensure that the appropriate policy and standards are in place for the development team.&lt;br /&gt;
* Develop the metrics and measurement criteria. &lt;br /&gt;
&lt;br /&gt;
===Phase 1A: Policies and Standards Review===&lt;br /&gt;
&lt;br /&gt;
Ensure that there are appropriate policies, standards, and documentation in place. Documentation is extremely important as it gives development teams guidelines and policies that they can follow. &lt;br /&gt;
&lt;br /&gt;
''People can only do the right thing if they know what the right thing is.''&lt;br /&gt;
&lt;br /&gt;
If the application is to be developed in Java, it is essential that there is a Java secure coding standard. If the application is to use cryptography, it is essential that there is a cryptography standard. No policies or standards can cover every situation that the development team will face. By documenting the common and predictable issues, there will be fewer decisions that need to be made during the development process. &lt;br /&gt;
&lt;br /&gt;
===Phase 1B: Develop Measurement and Metrics Criteria (Ensure Traceability)===&lt;br /&gt;
&lt;br /&gt;
Before development begins, plan the measurement program. Defining the criteria that need to be measured provides visibility into defects in both the process and the product. It is essential to define the metrics before development begins, as there may be a need to modify the process in order to capture the data.&lt;br /&gt;
&lt;br /&gt;
==Phase 2: During Definition and Design==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Phase 2A: Security Requirements Review===&lt;br /&gt;
&lt;br /&gt;
Security requirements define how an application works from a security perspective. It is essential that the security requirements be tested. Testing in this case means testing the assumptions that are made in the requirements, and testing to see if there are gaps in the requirements definitions. &lt;br /&gt;
&lt;br /&gt;
For example, if there is a security requirement that states that users must be registered before they can get access to the whitepapers section of a website, does this mean that the user must be registered with the system, or should the user be authenticated? Ensure that requirements are as unambiguous as possible. &lt;br /&gt;
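The two readings of this requirement diverge as soon as they are written down. The sketch below (in Python; the `User` class and function names are hypothetical, invented purely for illustration) shows how a registered-but-not-logged-in visitor is allowed under one interpretation and denied under the other:

```python
class User:
    def __init__(self, registered=False, authenticated=False):
        self.registered = registered        # has an account on file
        self.authenticated = authenticated  # logged in this session

def can_view_whitepapers_loose(user):
    # Reading 1: "must be registered" means an account is enough,
    # even if the visitor has not logged in.
    return user.registered

def can_view_whitepapers_strict(user):
    # Reading 2: the user must actually be authenticated.
    return user.registered and user.authenticated

# A registered visitor who has not logged in is allowed by one
# reading and denied by the other: the requirement is ambiguous.
visitor = User(registered=True, authenticated=False)
```

Testing the requirement means forcing a decision between the two functions before any code that depends on them is written.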
&lt;br /&gt;
When looking for requirements gaps, consider looking at security mechanisms such as:&lt;br /&gt;
&lt;br /&gt;
* User Management (password reset etc.)&lt;br /&gt;
* Authentication&lt;br /&gt;
* Authorization&lt;br /&gt;
* Data Confidentiality&lt;br /&gt;
* Integrity&lt;br /&gt;
* Accountability&lt;br /&gt;
* Session Management&lt;br /&gt;
* Transport Security&lt;br /&gt;
* Privacy&lt;br /&gt;
&lt;br /&gt;
===Phase 2B: Design and Architecture Review===&lt;br /&gt;
&lt;br /&gt;
Applications should have a documented design and architecture. By documented we mean models, textual documents, and other similar artifacts. It is essential to test these artifacts to ensure that the design and architecture enforce the appropriate level of security as defined in the requirements. &lt;br /&gt;
&lt;br /&gt;
The design phase is not only one of the most cost-efficient places to identify security flaws, but can also be one of the most effective places to make changes. For example, if the design calls for authorization decisions to be made in multiple places, it may be appropriate to consider a central authorization component. If the application performs data validation in multiple places, it may be appropriate to develop a central validation framework (fixing input validation in one place, rather than in hundreds of places, is far cheaper).&lt;br /&gt;
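As a rough illustration of the central-validation idea (a minimal sketch only; the field names and rules are invented for the example), every entry point funnels its input through one registry of rules, so a fix lands in one place:

```python
import re

# Hypothetical central validation component: every input field is
# checked against one registry, so a rule is corrected in one place
# rather than at each call site that accepts the field.
VALIDATORS = {
    "username": re.compile(r"^[A-Za-z0-9_]{3,32}$"),
    "zip_code": re.compile(r"^\d{5}$"),
}

def validate(field, value):
    """Return True only if value matches the central rule for field."""
    rule = VALIDATORS.get(field)
    if rule is None:
        return False  # fail closed: fields without a rule are rejected
    return rule.fullmatch(value) is not None
```

The design point is the registry, not the particular rules: a reviewer checking the design can ask whether any input path bypasses it.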
&lt;br /&gt;
If weaknesses are discovered, they should be given to the system architect for alternative approaches.&lt;br /&gt;
&lt;br /&gt;
===Phase 2C: Create and Review UML Models===&lt;br /&gt;
&lt;br /&gt;
Once the design and architecture is complete, build UML models that describe how the application works. In some cases, these may already be available. Use these models to confirm with the systems designers an exact understanding of how the application works. If weaknesses are discovered, they should be given to the system architect for alternative approaches. &lt;br /&gt;
&lt;br /&gt;
===Phase 2D: Create and Review Threat Models===&lt;br /&gt;
&lt;br /&gt;
Armed with design and architecture reviews, and the UML models explaining exactly how the system works, undertake a threat modeling exercise. Develop realistic threat scenarios. Analyze the design and architecture to ensure that these threats have been mitigated, accepted by the business, or assigned to a third party, such as an insurance firm. When identified threats have no mitigation strategies, revisit the design and architecture with the systems architect to modify the design.&lt;br /&gt;
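The bookkeeping this phase calls for can be as simple as a threat register that surfaces any threat without a disposition. A minimal sketch (the threat names and the `disposition` vocabulary are invented for illustration, not a prescribed format):

```python
# Hypothetical threat register: every identified threat must end up
# mitigated, accepted by the business, or transferred (e.g., insured).
THREATS = [
    {"name": "SQL injection via search form", "disposition": "mitigated"},
    {"name": "Session fixation",              "disposition": None},
    {"name": "DoS against login endpoint",    "disposition": "transferred"},
]

VALID_DISPOSITIONS = {"mitigated", "accepted", "transferred"}

def unresolved(threats):
    """Threats with no valid disposition: these go back to the architect."""
    return [t["name"] for t in threats
            if t["disposition"] not in VALID_DISPOSITIONS]
```

Whatever the format, the exit criterion is the same: the unresolved list must be empty before the design is signed off.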
&lt;br /&gt;
==Phase 3: During Development==&lt;br /&gt;
&lt;br /&gt;
Theoretically, development is the implementation of a design. However, in the real world, many design decisions are made during code development. These are often smaller decisions that were either too detailed to be described in the design or, in other cases, issues for which no policy or standards guidance was offered. If the design and architecture were not adequate, the developer will be faced with many decisions. If there were insufficient policies and standards, the developer will be faced with even more.&lt;br /&gt;
&lt;br /&gt;
===Phase 3A: Code Walkthroughs===&lt;br /&gt;
&lt;br /&gt;
The security team should perform a code walkthrough with the developers, and in some cases, the system architects. A code walkthrough is a high-level walkthrough of the code where the developers can explain the logic and flow. It allows the code review team to obtain a general understanding of the code, and allows the developers to explain why certain things were developed the way they were. &lt;br /&gt;
&lt;br /&gt;
The purpose is not to perform a code review, but to understand the flow at a high-level, the layout, and the structure of the code that makes up the application.&lt;br /&gt;
&lt;br /&gt;
===Phase 3B: Code Reviews===&lt;br /&gt;
&lt;br /&gt;
Armed with a good understanding of how the code is structured and why certain things were coded the way they were, the tester can now examine the actual code for security defects. &lt;br /&gt;
&lt;br /&gt;
Static code reviews validate the code against a set of checklists, including:&lt;br /&gt;
&lt;br /&gt;
* Business requirements for availability, confidentiality, and integrity&lt;br /&gt;
* OWASP Guide or Top 10 Checklists (depending on the depth of the review) for technical exposures&lt;br /&gt;
* Specific issues relating to the language or framework in use, such as the Scarlet paper for PHP or Microsoft Secure Coding checklists for ASP.NET&lt;br /&gt;
* Any industry specific requirements, such as Sarbanes-Oxley 404, COPPA, ISO 17799, APRA, HIPAA, Visa Merchant guidelines or other regulatory regimes&lt;br /&gt;
In terms of return on resources invested (mostly time), static code reviews produce far higher-quality returns than any other security review method and, within reason, rely least on the skill of the reviewer. However, they are not a silver bullet and need to be considered carefully within a full-spectrum testing regime. &lt;br /&gt;
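One checklist item from such a review can even be partially automated. The fragment below is a deliberately naive sketch (the pattern and function names are invented; real static analysis goes far beyond regular expressions) that flags SQL apparently built by string concatenation, a classic injection-prone pattern:

```python
import re

# Naive checklist item: lines that seem to splice a variable into a
# SQL statement with "+" deserve a reviewer's attention.
SQL_CONCAT = re.compile(
    r'(SELECT|INSERT|UPDATE|DELETE)\b.*["\']\s*\+', re.IGNORECASE)

def review_source(lines):
    """Return (line_number, line) pairs flagged for manual review."""
    return [(n, line.strip())
            for n, line in enumerate(lines, start=1)
            if SQL_CONCAT.search(line)]

code = [
    'q = "SELECT * FROM users WHERE name = " + name',
    'cursor.execute("SELECT * FROM users WHERE name = %s", (name,))',
]
findings = review_source(code)  # flags only the concatenated query
```

Such a script only surfaces candidates; the reviewer, armed with the walkthrough context, decides whether each finding is a defect.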
&lt;br /&gt;
For more details on OWASP checklists, please refer to the OWASP Guide for Secure Web Applications or the latest edition of the OWASP Top 10.&lt;br /&gt;
&lt;br /&gt;
==Phase 4: During Deployment==&lt;br /&gt;
&lt;br /&gt;
===Phase 4A: Application Penetration Testing===&lt;br /&gt;
&lt;br /&gt;
After the requirements have been tested, the design analyzed, and a code review performed, it might be assumed that all issues have been caught. Hopefully this is the case, but penetration testing the application after it has been deployed provides a final check to ensure that nothing has been missed. &lt;br /&gt;
&lt;br /&gt;
===Phase 4B: Configuration Management Testing===&lt;br /&gt;
&lt;br /&gt;
The application penetration test should include checking how the infrastructure was deployed and secured. While the application may be secure, a small aspect of the configuration could still be at a default install stage and vulnerable to exploitation.&lt;br /&gt;
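A fragment like the one below can illustrate the kind of check involved (the header names are real HTTP headers, but the "known default" strings and the function are assumptions made up for this sketch, not an authoritative fingerprint list):

```python
# Telltale signs of a default or unhardened install in response headers.
# The values listed here are illustrative examples only.
DEFAULT_TELLTALES = {
    "Server": ("Apache/2", "Microsoft-IIS/6.0"),   # version-revealing banners
    "X-Powered-By": ("PHP/", "ASP.NET"),           # technology disclosure
}

def config_findings(headers):
    """Return header names whose values match a default-install telltale."""
    findings = []
    for name, telltales in DEFAULT_TELLTALES.items():
        value = headers.get(name, "")
        if any(t in value for t in telltales):
            findings.append(name)
    return findings
```

A hardened deployment would strip or genericize such headers, so an empty findings list is one small signal that the configuration was reviewed.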
&lt;br /&gt;
==Phase 5: Maintenance and Operations==&lt;br /&gt;
&lt;br /&gt;
===Phase 5A: Conduct Operational Management Reviews===&lt;br /&gt;
&lt;br /&gt;
There needs to be a process in place that details how the operational side of the application and infrastructure is managed.&lt;br /&gt;
&lt;br /&gt;
===Phase 5B: Conduct Periodic Health Checks===&lt;br /&gt;
&lt;br /&gt;
Monthly or quarterly health checks should be performed on both the application and infrastructure to ensure no new security risks have been introduced and that the level of security is still intact.&lt;br /&gt;
&lt;br /&gt;
===Phase 5C: Ensure Change Verification===&lt;br /&gt;
&lt;br /&gt;
After every change has been approved and tested in the QA environment and deployed into the production environment, it is vital that, as part of the change management process, the change is checked to ensure that the level of security has not been affected.&lt;br /&gt;
&lt;br /&gt;
==A Typical SDLC Testing Workflow==&lt;br /&gt;
&lt;br /&gt;
The following figure shows a typical SDLC Testing Workflow.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
[[Image: Typical SDLC Testing Workflow.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15384</id>
		<title>Testing Guide Introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15384"/>
				<updated>2007-01-15T14:56:35Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Principles of Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
=== The OWASP Testing Project ===&lt;br /&gt;
----&lt;br /&gt;
The OWASP Testing Project has been in development for many years. We wanted to help people understand the what, why, when, where, and how of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. We wanted to build a testing framework from which others can build their own testing programs or qualify other people’s processes.&lt;br /&gt;
&lt;br /&gt;
Writing the Testing Project has proven to be a difficult task. It has been a challenge to obtain consensus and develop content that would allow people to apply the overall framework described here while working in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle.&lt;br /&gt;
&lt;br /&gt;
Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework, presented as OWASP Testing Parts 1 and 2. This framework aims to help organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience, rather than take the path of the “least common denominator.”&lt;br /&gt;
&lt;br /&gt;
'''The Economics of Insecure Software'''&amp;lt;br&amp;gt;&lt;br /&gt;
The cost of insecure software to the world economy is seemingly immeasurable. In June 2002, the US National Institute of Standards and Technology (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing ''(The economic impacts of inadequate infrastructure for software testing. (2002, June 28). Retrieved May 4, 2004, from http://www.nist.gov/public_affairs/releases/n02-10.htm)''&amp;lt;br&amp;gt;&lt;br /&gt;
Most people understand at least the basic issues, or have a deeper technical understanding of the vulnerabilities. Sadly, few are able to translate that knowledge into monetary value and thereby quantify the costs to their business. We believe that until this happens, CIOs will not be able to develop an accurate return on security investment and subsequently assign appropriate budgets for software security. See Ross Anderson’s page at http://www.cl.cam.ac.uk/users/rja14/econsec.html for more information about the economics of security. &lt;br /&gt;
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently make appropriate business decisions about the resources needed to manage the risk. Insecure software has its consequences, but insecure web applications, exposed to millions of users through the Internet, are a growing concern. Even now, the confidence of customers using the World Wide Web for their purchases is decreasing as more and more web applications are exposed to attacks. &lt;br /&gt;
This introduction covers the processes involved in testing web applications: &lt;br /&gt;
* The scope of what to test &lt;br /&gt;
* Principles of testing &lt;br /&gt;
* Testing techniques explained &lt;br /&gt;
* The OWASP testing framework explained &lt;br /&gt;
The second part of this guide covers how to test each software development life cycle phase using the techniques described in this document. For example, Part 2 covers how to test for specific vulnerabilities, such as SQL injection, by code inspection and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Scope of this Document'''&amp;lt;br&amp;gt;&lt;br /&gt;
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. It allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and remediate their software, or prepare for an audit. This document does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the Part 2 document mentioned above.&lt;br /&gt;
&lt;br /&gt;
'''What Do We Mean By Testing?'''&amp;lt;br&amp;gt;&lt;br /&gt;
During the development lifecycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: &lt;br /&gt;
* To put to test or proof &lt;br /&gt;
* To undergo a test &lt;br /&gt;
* To be assigned a standing or evaluation based on tests. &lt;br /&gt;
For the purposes of this document, testing is a process of comparing the state of something against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. &lt;br /&gt;
&lt;br /&gt;
'''The Software Development Life Cycle Process'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:SDLC.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 1: Generic SDLC Model'' &amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Scope of What To Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that “create” software, then it is logical that these are the factors that must be tested. Today most people test only the technology or the software itself. In fact, most people today don’t test the software until it has already been created and is in the deployment phase of its life cycle (i.e., code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice.&lt;br /&gt;
&lt;br /&gt;
An effective testing program should have components that test: people, to ensure that there is adequate education and awareness; process, to ensure that there are adequate policies and standards and that people know how to follow them; and technology, to ensure that the process has been effective in its implementation. Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policies, and processes, you can catch issues that would later manifest themselves as defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment.&lt;br /&gt;
&lt;br /&gt;
Denis Verdon, Head of Information Security at Fidelity National Financial (http://www.fnf.com), presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York: “If cars were built like applications…safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact and resistance to theft.” &amp;lt;br&amp;gt;&lt;br /&gt;
'''Feedback and Comments'''&amp;lt;br&amp;gt;&lt;br /&gt;
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.&lt;br /&gt;
&lt;br /&gt;
==Principles of Testing==&lt;br /&gt;
&lt;br /&gt;
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. &lt;br /&gt;
&lt;br /&gt;
'''There is No Silver Bullet'''&amp;lt;br&amp;gt;&lt;br /&gt;
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. &lt;br /&gt;
&lt;br /&gt;
'''Think Strategically, Not Tactically'''&amp;lt;br&amp;gt;&lt;br /&gt;
Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990s. The patch-and-penetrate model involves fixing a reported bug without proper investigation of the root cause. This model is usually associated with the window of vulnerability ''(1)'' shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. Vulnerability studies ''(2)'' have shown that, given the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between the discovery of a vulnerability and the release of an automated attack against it is decreasing every year. There are also several wrong assumptions in the patch-and-penetrate model: patches interfere with normal operations and might break existing applications, and not all users may (in the end) be aware of a patch’s availability. Consequently, not all of a product's users will apply patches, either because of this issue or because they lack knowledge about the patch's existence.&amp;lt;br&amp;gt;&lt;br /&gt;
''Note: (1) For more information about the window of vulnerability, please refer to Bruce Schneier’s Cryptogram Issue #9, available at http://www.schneier.com/crypto-gram-0009.html'' &amp;lt;br&amp;gt;&lt;br /&gt;
''(2) Such as those included in Symantec’s Threat Reports''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:WindowExposure.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 2: Window of Vulnerability''&amp;lt;/center&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
To prevent reoccurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The SDLC is King'''&amp;lt;br&amp;gt;&lt;br /&gt;
The SDLC is a process that is well known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e., define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Test Early and Test Often'''&amp;lt;br&amp;gt;&lt;br /&gt;
Detecting a bug early within the SDLC allows it to be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect &amp;amp; prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly, and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Scope of Security'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g. confidential, secret, top secret). Discussions should occur with legal counsel to ensure that any specific security needs will be met. In the USA, these needs might come from federal regulations such as the Gramm-Leach-Bliley Act (http://www.ftc.gov/privacy/glbact/), or from state laws such as California SB-1386 (http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html). For organizations based in EU countries, both country-specific regulation and EU directives might apply; for example, Directive 95/46/EC makes it mandatory to treat personal data in applications with due care, whatever the application. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Mindset'''&amp;lt;br&amp;gt;&lt;br /&gt;
Successfully testing an application for security vulnerabilities requires thinking “outside of the box”. Normal use cases will test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find which assumptions made by web developers are not always true, and how they can be subverted. This is one of the reasons why automated tools are poor at testing for vulnerabilities: such creative thinking must be applied on a case-by-case basis, and most web applications are developed in a unique way (even when they use common frameworks). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understanding the Subject'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data flow diagrams, use cases, and more should be written in formal documents and available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use cases. Finally, it is good to have at least a basic security infrastructure that allows monitoring and trending of any attacks against your applications &amp;amp; network (e.g. IDS systems). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use the Right Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
While we have already stated that there is no tool silver bullet, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can assist in automation of many routine security tasks. These tools can simplify and speed the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Devil is in the Details'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use Source Code When Available'''&amp;lt;br&amp;gt;&lt;br /&gt;
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop Metrics'''&amp;lt;br&amp;gt;&lt;br /&gt;
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization. These metrics can show whether more education and training is required, whether there is a particular security mechanism that is not clearly understood by development, and whether the total number of security-related problems found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics, like those provided by the OWASP Metrics project and other organizations, might be a good head start.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Documenting the Test Results'''&amp;lt;br&amp;gt;&lt;br /&gt;
To conclude the testing process, it is important to produce a formal record of what testing actions were taken, by whom, when they were performed, and details of the test findings. It is wise to agree on an acceptable format for the report that is useful to all concerned parties, which may include developers, project management, business owners, the IT department, audit, and compliance. The report must be clear to the business owner in identifying where material risks exist, sufficient to get their backing for subsequent mitigation actions. The report must be clear to the developer in pin-pointing the exact function that is affected by the vulnerability, with an associated recommendation for resolution in a language that the developer will understand (no pun intended). Last but not least, report writing should not be overly burdensome on the security testers themselves; security testers are not generally renowned for their creative writing skills, and agreeing on a complex report can lead to instances where test results do not get properly documented.&lt;br /&gt;
&lt;br /&gt;
==Testing Techniques Explained==&lt;br /&gt;
&lt;br /&gt;
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Part 2 of the OWASP Testing project will address this information. This section is included to provide context for the framework presented in the next Chapter and to highlight the advantages and disadvantages of some of the techniques that can be considered.&lt;br /&gt;
* Manual Inspections &amp;amp; Reviews &lt;br /&gt;
* Threat Modeling &lt;br /&gt;
* Code Review &lt;br /&gt;
* Penetration Testing &lt;br /&gt;
&lt;br /&gt;
=== Manual Inspections &amp;amp; Reviews ===&lt;br /&gt;
Manual inspections are human-driven reviews that typically test the security implications of the people, policies, and processes, but can include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or through interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. Asking someone how something works and why it was implemented in a specific way allows the tester to quickly determine if any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development lifecycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model. Not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design and/or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Requires no supporting technology &lt;br /&gt;
* Can be applied to a variety of situations&lt;br /&gt;
* Flexible &lt;br /&gt;
* Promotes team work &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Can be time consuming &lt;br /&gt;
* Supporting material not always available &lt;br /&gt;
* Requires significant human thought and skill to be effective!&lt;br /&gt;
&lt;br /&gt;
=== Threat Modeling ===&lt;br /&gt;
&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
In the context of the technical scope, threat modeling has become a popular technique to help system designers think about the security threats that their systems will face. It enables them to develop mitigation strategies for potential vulnerabilities. Threat modeling helps people focus their inevitably limited resources and attention on the parts of the system that most require it. Threat models should be created as early as possible in the software development life cycle, and should be revisited as the application evolves and development progresses. Threat modeling is essentially risk assessment for applications. It is recommended that all applications have a threat model developed and documented. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 ''(3)'' standard for risk assessment. This approach involves: &lt;br /&gt;
* Decomposing the application – through a process of manual inspection understanding how the application works, its assets, functionality and connectivity. &lt;br /&gt;
* Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business criticality. &lt;br /&gt;
* Exploring potential vulnerabilities (technical, operational, and management). &lt;br /&gt;
* Exploring potential threats – through a process of developing threat scenarios or attack trees, build a realistic view of potential attack vectors from an attacker’s perspective. &lt;br /&gt;
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic. &lt;br /&gt;
The output from a threat model itself can vary, but it is typically a collection of lists and diagrams. Part 2 of the OWASP Testing Guide (the detailed “How To” text) will outline a specific Threat Modeling methodology. There is no right or wrong way to develop threat models, and several techniques have evolved. The OCTAVE model from Carnegie Mellon (http://www.cert.org/octave/) is worth exploring. &amp;lt;br&amp;gt;&lt;br /&gt;
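The output of the steps above is typically recorded as ranked lists. As a purely illustrative sketch (the class and field names below are ours, not part of NIST 800-30 or the guide), a threat list ranked by business criticality might be modeled as:

```java
public class ThreatModel {
    // One row of threat-model output: asset at risk, its business
    // criticality rank, the attack scenario, and the mitigating control.
    static class Threat {
        final String asset;
        final int criticality;
        final String scenario;
        final String mitigation;
        Threat(String asset, int criticality, String scenario, String mitigation) {
            this.asset = asset;
            this.criticality = criticality;
            this.scenario = scenario;
            this.mitigation = mitigation;
        }
    }

    public static void main(String[] args) {
        Threat[] threats = {
            new Threat("customer PII database", 3, "SQL injection via search form", "parameterized queries"),
            new Threat("session cookie", 2, "theft over plain HTTP", "set Secure and HttpOnly flags"),
        };
        // Rank by business criticality, most critical first
        java.util.Arrays.sort(threats, (x, y) -> y.criticality - x.criticality);
        for (Threat t : threats) {
            System.out.println(t.criticality + " | " + t.asset + " | " + t.scenario + " | " + t.mitigation);
        }
    }
}
```

However the rows are stored, the point of the ranking is the one made above: it focuses inevitably limited resources on the assets that most require attention.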
'''Advantages:'''&lt;br /&gt;
* Practical attacker's view of the system &lt;br /&gt;
* Flexible &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Relatively new technique &lt;br /&gt;
* Good threat models don’t automatically mean good software &lt;br /&gt;
''Note: (3) Stoneburner, G., Goguen, A., &amp;amp; Feringa, A. (2001, October). Risk management guide for information technology systems. Retrieved May 7, 2004, from http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf''&lt;br /&gt;
&lt;br /&gt;
=== Source Code Review ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes, “if you want to know what’s really going on, go straight to the source”. Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third-party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guesswork of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses, as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient at finding implementation issues such as places where input validation was not performed or where fail-open control procedures may be present. But keep in mind that operational procedures also need to be reviewed, since the source code being deployed might not be the same as the one being analyzed ''(4)''.&amp;lt;br&amp;gt;&lt;br /&gt;
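As an illustration of the kind of fail-open control procedure mentioned above, consider the following sketch (hypothetical code written for this example, not taken from the guide), in which an exception inside the password check is swallowed and the caller is treated as authenticated:

```java
public class FailOpen {
    // BUG under illustration: on any error the control "fails open"
    // (returns authenticated) instead of failing closed.
    static boolean isAuthenticated(String user, String password) {
        try {
            return checkPassword(user, password);
        } catch (Exception e) {
            return true; // should be: return false (fail closed)
        }
    }

    // Stand-in for a real credential check; throws on an unknown user.
    static boolean checkPassword(String user, String password) throws Exception {
        if (user == null) throw new Exception("no such user");
        return "secret".equals(password);
    }

    public static void main(String[] args) {
        // An unknown user triggers the exception path and is let in
        System.out.println(isAuthenticated(null, "anything"));
    }
}
```

A reviewer reading the source spots the fail-open catch block immediately, whereas a black box test might never happen to trigger the exception path.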
'''Advantages'''&lt;br /&gt;
* Completeness and effectiveness &lt;br /&gt;
* Accuracy &lt;br /&gt;
* Fast (for competent reviewers) &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Requires highly skilled security developers &lt;br /&gt;
* Can miss issues in calls to compiled libraries &lt;br /&gt;
* Cannot easily detect run-time errors &lt;br /&gt;
* The source code actually deployed might differ from the one being analyzed&lt;br /&gt;
''Note: (4) See &amp;quot;Reflections on Trusting Trust&amp;quot; by Ken Thompson (http://cm.bell-labs.com/who/ken/trust.html)''&lt;br /&gt;
&lt;br /&gt;
* '''For more on code review, see the OWASP Code Review project''':&amp;lt;BR&amp;gt;&lt;br /&gt;
http://www.owasp.org/index.php/Category:OWASP_Code_Review_Project&lt;br /&gt;
&lt;br /&gt;
=== Penetration Testing ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Penetration testing has been a common technique for testing network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team has access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process but, given the nature of web applications, their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e. testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting whether specific vulnerabilities are actually fixed in the source code deployed at the web site. &amp;lt;br&amp;gt;&lt;br /&gt;
'''Advantages'''&lt;br /&gt;
* Can be fast (and therefore cheap) &lt;br /&gt;
* Requires a relatively lower skill-set than source code review &lt;br /&gt;
* Tests the code that is actually being exposed &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Too late in the SDLC &lt;br /&gt;
* Front impact testing only!&lt;br /&gt;
&lt;br /&gt;
=== The Need for a Balanced Approach ===&lt;br /&gt;
With so many techniques and so many approaches to testing the security of your web applications, it can be difficult to understand which techniques to use and when to use them.&lt;br /&gt;
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). &lt;br /&gt;
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases in the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. &lt;br /&gt;
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of complete testing. &lt;br /&gt;
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionSDLC.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 3: Proportion of Test Effort in SDLC''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The following figure shows a typical proportional representation overlaid onto testing techniques. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionTest.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 4: Proportion of Test Effort According to Test Technique''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''A Note about Web Application Scanners'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.&lt;br /&gt;
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 1: Magic Parameters'''&amp;lt;br&amp;gt;&lt;br /&gt;
Imagine a simple web application that accepts a name-value pair of “magic” and then the value. For simplicity, the GET request may be: ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=value&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt; To further simplify the example, the values in this case can only be ASCII characters a – z (upper or lowercase) and integers 0 – 9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will then be logged in and presented with an administrative screen with total control of the application. The HTTP request is now:&amp;lt;br&amp;gt; ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt;&lt;br /&gt;
Given that all of the other parameters were simple two- and three-character fields, it is not possible to start guessing combinations at approximately 28 characters. A web application scanner will need to brute force (or guess) the entire key space of 30 characters. That is up to 30^28 permutations, or trillions of HTTP requests! That is an electron in a digital haystack! &lt;br /&gt;
The code for this may look like the following: &amp;lt;br&amp;gt;&lt;br /&gt;
 public void doPost( HttpServletRequest request, HttpServletResponse response) &lt;br /&gt;
 { &lt;br /&gt;
 String magic = &amp;quot;sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;quot;; &lt;br /&gt;
 boolean admin = magic.equals( request.getParameter(&amp;quot;magic&amp;quot;));&lt;br /&gt;
 if (admin) doAdmin( request, response); &lt;br /&gt;
 else …. // normal processing &lt;br /&gt;
 } &lt;br /&gt;
By looking at the code, the vulnerability practically leaps off the page as a potential problem. &lt;br /&gt;
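The scale of the brute-force claim can be checked with a quick calculation. The sketch below (the class name KeySpace is ours, not from the guide) computes the size of the full 30-character key space over the 62-symbol alphabet described above:

```java
import java.math.BigInteger;

public class KeySpace {
    public static void main(String[] args) {
        // Alphabet from the example: a-z, A-Z and 0-9, i.e. 62 symbols
        BigInteger alphabet = BigInteger.valueOf(62);
        // The hidden "magic" value is 30 characters long
        BigInteger keySpace = alphabet.pow(30);
        // Print the number of decimal digits in 62^30
        System.out.println(keySpace.toString().length()); // prints 54
    }
}
```

Even at millions of HTTP requests per second, enumerating a 54-digit space is not feasible, which is exactly the point of the example: a scanner that can only guess from the outside has no realistic chance of finding the backdoor.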
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 2: Bad Cryptography'''&amp;lt;br&amp;gt;&lt;br /&gt;
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' &amp;lt;br&amp;gt;&lt;br /&gt;
When a user is passed to site B, he/she will send the key on the query string to site B in an HTTP redirect. Site B independently computes the hash, and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, as we explain the scheme, the inadequacies can be worked out, and it can be seen how anyone that figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.&lt;br /&gt;
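The weakness is easy to demonstrate: because the key is just a hash of public inputs, anyone who knows the recipe can mint a valid key for any user. The sketch below is a hypothetical reconstruction (class and method names are ours) of the Hash { username : date } construction using MD5:

```java
import java.math.BigInteger;
import java.security.MessageDigest;

public class WeakToken {
    // Hypothetical reconstruction of the flawed scheme: the sign-on key
    // is MD5(username + ":" + date), with no secret input at all.
    static String token(String username, String date) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest((username + ":" + date).getBytes("UTF-8"));
        // Render the 128-bit digest as 32 lowercase hex characters
        return String.format("%032x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws Exception {
        // An attacker who learns the format simply computes a key for any user
        System.out.println(token("admin", "2007-01-15"));
    }
}
```

A keyed construction (for example an HMAC computed with a secret shared between site A and site B) would close this hole, because an outsider could no longer recompute the key from public inputs alone.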
&amp;lt;br&amp;gt;&lt;br /&gt;
'''A Note about Static Source Code Review Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot understand the context of semantic constructs in code, and is therefore prone to a significant number of false positives. This is particularly true with C and C++. The technology is useful for identifying interesting places in the code; however, significant manual effort is required to validate the findings. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example:&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 char szTarget[12];&amp;lt;BR&amp;gt;&lt;br /&gt;
 char *s = &amp;quot;Hello, World&amp;quot;; &amp;lt;BR&amp;gt;&lt;br /&gt;
 size_t cSource = strlen_s(s,20); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncpy_s(szTarget,sizeof(szTarget),s,cSource); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncat_s(szTarget,sizeof(szTarget),s,cSource);&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15378</id>
		<title>Testing Guide Introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15378"/>
				<updated>2007-01-15T14:06:20Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Penetration Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
=== The OWASP Testing Project ===&lt;br /&gt;
----&lt;br /&gt;
The OWASP Testing Project has been in development for many years. We wanted to help people understand the what, why, when, where, and how of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. We wanted to build a testing framework from which others can build their own testing programs or qualify other people’s processes. Writing the Testing Project has proven to be a difficult task. It has been a challenge to obtain consensus and develop the appropriate content, which would allow people to apply the overall content and framework described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework, presented as OWASP Testing Parts 1 and 2. This framework aims at helping organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience, rather than take the path of the “least common denominator.”&lt;br /&gt;
&lt;br /&gt;
'''The Economics of Insecure Software'''&amp;lt;br&amp;gt;&lt;br /&gt;
The cost of insecure software to the world economy is seemingly immeasurable. In June 2002, the US National Institute of Standards and Technology (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing ''(The economic impacts of inadequate infrastructure for software testing. (2002, June 28). Retrieved May 4, 2004, from http://www.nist.gov/public_affairs/releases/n02-10.htm)''&amp;lt;br&amp;gt;&lt;br /&gt;
Most people understand at least the basic issues, or have a deeper technical understanding of the vulnerabilities. Sadly, few are able to translate that knowledge into monetary value and thereby quantify the costs to their business. We believe that until this happens, CIOs will not be able to develop an accurate return on a security investment and subsequently assign appropriate budgets for software security. See Ross Anderson’s page at http://www.cl.cam.ac.uk/users/rja14/econsec.html for more information about the economics of security. &lt;br /&gt;
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently develop appropriate business decisions (resources) to manage the risk. Insecure software has its consequences, but insecure web applications, exposed to millions of users through the Internet are a growing concern. Even now, the confidence of customers using the World Wide Web to purchase or cover their needs is decreasing as more and more web applications are exposed to attacks. &lt;br /&gt;
This introduction covers the processes involved in testing web applications: &lt;br /&gt;
* The scope of what to test &lt;br /&gt;
* Principles of testing &lt;br /&gt;
* Testing techniques explained &lt;br /&gt;
* The OWASP testing framework explained &lt;br /&gt;
The second part of this guide covers how to test each software development life cycle phase using the techniques described in this document. For example, Part 2 covers how to test for specific vulnerabilities such as SQL injection by code inspection and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Scope of this Document'''&amp;lt;br&amp;gt;&lt;br /&gt;
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and remediate their software, or prepare for an audit. This document does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the Part 2 document mentioned above.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''What Do We Mean By Testing?'''&amp;lt;br&amp;gt;&lt;br /&gt;
During the development lifecycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: &lt;br /&gt;
* To put to test or proof &lt;br /&gt;
* To undergo a test &lt;br /&gt;
* To be assigned a standing or evaluation based on tests. &lt;br /&gt;
For the purposes of this document, testing is a process of comparing the state of something against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. &lt;br /&gt;
&lt;br /&gt;
'''The Software Development Life Cycle Process'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security. If a SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:SDLC.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 1: Generic SDLC Model'' &amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Scope of What To Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that “create” software, then it is logical that these are the factors that must be tested. Today most people generally test the technology or the software itself. In fact, most people today don’t test the software until it has already been created and is in the deployment phase of its lifecycle (i.e. code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. An effective testing program should have components that test: People – to ensure that there is adequate education and awareness; Process – to ensure that there are adequate policies and standards and that people know how to follow these policies; Technology – to ensure that the process has been effective in its implementation. Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policy, and process, you can catch issues that would later manifest themselves into defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at Fidelity National Financial (http://www.fnf.com), presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York: “If cars were built like applications…safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact and resistance to theft.” &amp;lt;br&amp;gt;&lt;br /&gt;
'''Feedback and Comments'''&amp;lt;br&amp;gt;&lt;br /&gt;
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.&lt;br /&gt;
&lt;br /&gt;
==Principles of Testing==&lt;br /&gt;
&lt;br /&gt;
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. &lt;br /&gt;
&lt;br /&gt;
'''There is No Silver Bullet'''&amp;lt;br&amp;gt;&lt;br /&gt;
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. &lt;br /&gt;
&lt;br /&gt;
'''Think Strategically, Not Tactically'''&amp;lt;br&amp;gt;&lt;br /&gt;
Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990’s. The patch-and-penetrate model involves fixing a reported bug without proper investigation of the root cause. This model is usually associated with the window of vulnerability ''(1)'' shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. Vulnerability studies ''(2)'' have shown that, with the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year. The patch-and-penetrate model also rests on several wrong assumptions: patches interfere with normal operations and might break existing applications, and not all users will (in the end) be aware of a patch’s availability. Consequently not all of a product's users will apply patches, either because of this issue or because they lack knowledge about the patch's existence.&amp;lt;br&amp;gt;&lt;br /&gt;
''Note: (1) For more information about the window of vulnerability please refer to Bruce Schneier’s Crypto-Gram Issue #9, available at http://www.schneier.com/crypto-gram-0009.html'' &amp;lt;br&amp;gt;&lt;br /&gt;
''(2) Such as those included in Symantec’s Threat Reports''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:WindowExposure.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 2: Window of Vulnerability''&amp;lt;/center&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
To prevent reoccurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The SDLC is King'''&amp;lt;br&amp;gt;&lt;br /&gt;
The SDLC is a process that is well known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetypal SDLC will be used to develop the application (i.e. define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Test Early and Test Often'''&amp;lt;br&amp;gt;&lt;br /&gt;
Detecting a bug early in the SDLC allows it to be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect &amp;amp; prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly, and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Scope of Security'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g. confidential, secret, top secret). Discussions should occur with legal counsel to ensure that any specific security requirements will be met. In the USA these might come from federal regulations such as the Gramm-Leach-Bliley Act (http://www.ftc.gov/privacy/glbact/), or from state laws such as California SB-1386 (http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html). For organizations based in EU countries, both country-specific regulation and EU Directives might apply; for example, Directive 95/46/EC makes it mandatory to treat personal data in applications with due care, whatever the application. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Mindset'''&amp;lt;br&amp;gt;&lt;br /&gt;
Successfully testing an application for security vulnerabilities requires thinking “outside of the box”. Normal use cases test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find which assumptions made by web developers are not always true, and how they can be subverted. This is one of the reasons why automated tools are actually bad at automatically testing for vulnerabilities: this creative thinking must be applied on a case-by-case basis, as most web applications are developed in a unique way (even when using common frameworks). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understanding the Subject'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data flow diagrams, use cases, and more should be written in formal documents and available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use cases. Finally, it is good to have at least a basic security infrastructure that allows monitoring and trending of any attacks against your applications &amp;amp; network (e.g. IDS systems). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use the Right Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
While we have already stated that there is no tool silver bullet, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can assist in automation of many routine security tasks. These tools can simplify and speed the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Devil is in the Details'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use Source Code When Available'''&amp;lt;br&amp;gt;&lt;br /&gt;
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop Metrics'''&amp;lt;br&amp;gt;&lt;br /&gt;
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization. These metrics can show whether more education and training are required, whether there is a particular security mechanism that is not clearly understood by the development team, and whether the total number of security-related problems being found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics like those provided by the OWASP Metrics project and other organizations might be a good head start.&lt;br /&gt;
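As a minimal sketch of the kind of automated trend metric described above (the class, the method, and the idea of an exported per-month findings array are illustrative assumptions, not an OWASP tool):&lt;br /&gt;

```java
// Hypothetical sketch: given a per-month count of security findings
// exported from a defect tracker, report whether the trend is improving.
public class FindingsTrend {
    // True when the latest month has fewer findings than the month before.
    static boolean improving(int[] monthlyFindings) {
        int n = monthlyFindings.length;
        if (n >= 2) {
            return monthlyFindings[n - 2] > monthlyFindings[n - 1];
        }
        return false; // not enough history to establish a trend
    }
}
```

A real program would of course normalize such counts (e.g. per thousand lines of code) before comparing months.&lt;br /&gt;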
&lt;br /&gt;
==Testing Techniques Explained==&lt;br /&gt;
&lt;br /&gt;
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Part 2 of the OWASP Testing project will address this information. This section is included to provide context for the framework presented in the next Chapter and to highlight the advantages and disadvantages of some of the techniques that can be considered.&lt;br /&gt;
* Manual Inspections &amp;amp; Reviews &lt;br /&gt;
* Threat Modeling &lt;br /&gt;
* Code Review &lt;br /&gt;
* Penetration Testing &lt;br /&gt;
&lt;br /&gt;
=== Manual Inspections &amp;amp; Reviews ===&lt;br /&gt;
Manual inspections are human-driven reviews that typically test the security implications of the people, policies, and processes, but can also include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or by interviewing the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. Asking someone how something works and why it was implemented in a specific way allows the tester to quickly determine whether any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development lifecycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model. Not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design and/or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Requires no supporting technology &lt;br /&gt;
* Can be applied to a variety of situations&lt;br /&gt;
* Flexible &lt;br /&gt;
* Promotes team work &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Can be time consuming &lt;br /&gt;
* Supporting material not always available &lt;br /&gt;
* Requires significant human thought and skill to be effective!&lt;br /&gt;
&lt;br /&gt;
=== Threat Modeling ===&lt;br /&gt;
&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
In the context of the technical scope, threat modeling has become a popular technique to help system designers think about the security threats that their systems will face. It enables them to develop mitigation strategies for potential vulnerabilities. Threat modeling helps people focus their inevitably limited resources and attention on the parts of the system that most require it. Threat models should be created as early as possible in the software development life cycle, and should be revisited as the application evolves and development progresses. Threat modeling is essentially risk assessment for applications. It is recommended that all applications have a threat model developed and documented. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 ''(3)'' standard for risk assessment. This approach involves: &lt;br /&gt;
* Decomposing the application – through a process of manual inspection, understand how the application works, its assets, functionality, and connectivity. &lt;br /&gt;
* Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business criticality. &lt;br /&gt;
* Exploring potential vulnerabilities (technical, operational, and management). &lt;br /&gt;
* Exploring potential threats – through a process of developing threat scenarios or attack trees, develop a realistic view of potential attack vectors from an attacker’s perspective. &lt;br /&gt;
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic. &lt;br /&gt;
The output from a threat model itself can vary, but is typically a collection of lists and diagrams. Part 2 of the OWASP Testing Guide (the detailed “How To” text) will outline a specific Threat Modeling methodology. There is no right or wrong way to develop threat models, and several techniques have evolved. The OCTAVE model from Carnegie Mellon (http://www.cert.org/octave/) is worth exploring. &amp;lt;br&amp;gt;&lt;br /&gt;
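The classification and ranking steps above can be sketched in code. The following is a minimal, hypothetical illustration of a NIST 800-30-style risk rating (likelihood times impact, each on a simple 1 to 3 scale); the class name and thresholds are illustrative assumptions, not taken from the guide or the NIST document.&lt;br /&gt;

```java
// Hypothetical NIST 800-30-style rating: rank each threat scenario by a
// simple likelihood x impact product, then bucket the result.
public class ThreatRanking {
    // likelihood and impact are assumed to be on a 1 (low) to 3 (high) scale.
    static String rate(int likelihood, int impact) {
        int risk = likelihood * impact; // simple product, as in SP 800-30
        if (risk >= 6) return "High";
        if (risk >= 3) return "Medium";
        return "Low";
    }
}
```

A real threat model would attach such a rating to each scenario or attack-tree node identified during decomposition, and drive the mitigation effort from the highest-rated items.&lt;br /&gt;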
'''Advantages:'''&lt;br /&gt;
* Practical attacker's view of the system &lt;br /&gt;
* Flexible &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Relatively new technique &lt;br /&gt;
* Good threat models don’t automatically mean good software &lt;br /&gt;
''Note: (3) Stoneburner, G., Goguen, A., &amp;amp; Feringa, A. (2001, October). Risk management guide for information technology systems. Retrieved May 7, 2004, from http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf''&lt;br /&gt;
&lt;br /&gt;
=== Source Code Review ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes, “if you want to know what’s really going on, go straight to the source”. Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third-party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guesswork of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses, as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient for finding implementation issues such as places where input validation was not performed or where fail-open control procedures may be present. Keep in mind, however, that operational procedures also need to be reviewed, since the source code being deployed might not be the same as the one being analyzed ''(4)''.&amp;lt;br&amp;gt;&lt;br /&gt;
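As an illustration of the fail-open control procedures mentioned above, here is a minimal sketch; the class and the checkPassword helper are hypothetical, not from the guide. The fail-open version treats an internal error as success, which only a code review is likely to catch.&lt;br /&gt;

```java
// Hypothetical fail-open vs fail-closed access check.
public class FailOpen {
    // Fail-open: any exception in the check grants access.
    static boolean isAdminFailOpen(String user, String pass) {
        try {
            return checkPassword(user, pass);
        } catch (Exception e) {
            return true; // BUG: error treated as success
        }
    }
    // Fail-closed: errors deny access by default.
    static boolean isAdminFailClosed(String user, String pass) {
        try {
            return checkPassword(user, pass);
        } catch (Exception e) {
            return false; // safe default
        }
    }
    // Stand-in for a real credential store; throws on malformed input.
    static boolean checkPassword(String user, String pass) {
        if (pass == null) throw new IllegalArgumentException("no password");
        return "s3cret".equals(pass);
    }
}
```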
'''Advantages'''&lt;br /&gt;
* Completeness and effectiveness &lt;br /&gt;
* Accuracy &lt;br /&gt;
* Fast (for competent reviewers) &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Requires highly skilled security developers &lt;br /&gt;
* Can miss issues in compiled libraries &lt;br /&gt;
* Cannot detect run-time errors easily &lt;br /&gt;
* The source code actually deployed might differ from the one being analyzed&lt;br /&gt;
''Note: (4) See &amp;quot;Reflections on Trusting Trust&amp;quot; by Ken Thompson (http://cm.bell-labs.com/who/ken/trust.html)''&lt;br /&gt;
&lt;br /&gt;
* '''For more on code review, OWASP maintains a Code Review project''':&amp;lt;BR&amp;gt;&lt;br /&gt;
http://www.owasp.org/index.php/Category:OWASP_Code_Review_Project&lt;br /&gt;
&lt;br /&gt;
=== Penetration Testing ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Penetration testing has been a common technique for testing network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team has access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but given the nature of web applications their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e. testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting whether some specific vulnerabilities are actually fixed in the source code deployed at the web site. &amp;lt;br&amp;gt;&lt;br /&gt;
'''Advantages'''&lt;br /&gt;
* Can be fast (and therefore cheap) &lt;br /&gt;
* Requires a relatively lower skill-set than source code review &lt;br /&gt;
* Tests the code that is actually being exposed &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Too late in the SDLC &lt;br /&gt;
* Frontal impact testing only!&lt;br /&gt;
&lt;br /&gt;
=== The Need for a Balanced Approach ===&lt;br /&gt;
With so many techniques and so many approaches to testing the security of your web applications, it can be difficult to understand which techniques to use and when to use them.&lt;br /&gt;
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). &lt;br /&gt;
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases in the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. &lt;br /&gt;
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of complete testing. &lt;br /&gt;
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionSDLC.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 3: Proportion of Test Effort in SDLC''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The following figure shows a typical proportional representation overlaid onto testing techniques. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionTest.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 4: Proportion of Test Effort According to Test Technique''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''A Note about Web Application Scanners'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.&lt;br /&gt;
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 1: Magic Parameters'''&amp;lt;br&amp;gt;&lt;br /&gt;
Imagine a simple web application that accepts a parameter named “magic” together with its value. For simplicity, the GET request may be: ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=value&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt; To further simplify the example, the values in this case can only be ASCII characters a – z (upper or lowercase) and integers 0 – 9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user is then logged in and presented with an administrative screen with total control of the application. The HTTP request is now:&amp;lt;br&amp;gt; ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt;&lt;br /&gt;
Given that all of the other parameters were simple two- and three-character fields, it is not feasible to start guessing values of approximately 30 characters. A web application scanner would need to brute force (or guess) the entire key space of 30 characters: with 62 possible characters in each position, that is up to 62^30 permutations, or trillions upon trillions of HTTP requests. That is an electron in a digital haystack! &lt;br /&gt;
The code for this may look like the following: &amp;lt;br&amp;gt;&lt;br /&gt;
 public void doPost( HttpServletRequest request, HttpServletResponse response) &lt;br /&gt;
 { &lt;br /&gt;
 String magic = "sf8g7sfjdsurtsdieerwqredsgnfg8d"; &lt;br /&gt;
 boolean admin = magic.equals( request.getParameter("magic"));&lt;br /&gt;
 if (admin) doAdmin( request, response); &lt;br /&gt;
 else ... // normal processing &lt;br /&gt;
 } &lt;br /&gt;
By looking in the code, the vulnerability practically leaps off the page as a potential problem. &lt;br /&gt;
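As a back-of-the-envelope check of the brute-force effort above, assuming the stated alphabet (a to z in both cases plus 0 to 9, i.e. 62 symbols) and a 30-character value; the class is an illustrative sketch, not part of any scanner:&lt;br /&gt;

```java
import java.math.BigInteger;

// Size of the key space a black-box scanner would have to search:
// alphabet^length candidate values.
public class KeySpace {
    static BigInteger permutations(int alphabet, int length) {
        return BigInteger.valueOf(alphabet).pow(length);
    }
}
```

KeySpace.permutations(62, 30) is a 54-digit number, far beyond any feasible number of HTTP requests.&lt;br /&gt;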
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 2: Bad Cryptography'''&amp;lt;br&amp;gt;&lt;br /&gt;
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' &amp;lt;br&amp;gt;&lt;br /&gt;
When a user is passed to site B, he/she will send the key on the query string to site B in an HTTP redirect. Site B independently computes the hash and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, as we explain the scheme, the inadequacies can be worked out, and it can be seen how anyone who figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen only a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.&lt;br /&gt;
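A minimal sketch of the weak token scheme described above, assuming the key is the hex MD5 of “username:date” (the exact formatting is an assumption): because no secret enters the hash, anyone who knows the scheme can compute a valid key for any user.&lt;br /&gt;

```java
import java.math.BigInteger;
import java.security.MessageDigest;

// Sketch of the flawed single-sign-on token: MD5(username + ":" + date).
// An attacker who learns the recipe can forge this for any username.
public class WeakToken {
    static String token(String username, String date) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest((username + ":" + date).getBytes("UTF-8"));
            return String.format("%032x", new BigInteger(1, digest)); // 32 hex chars
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The token is fully determined by public inputs, which is exactly why a manual review spots the flaw while a black-box scanner sees only an opaque 128-bit value.&lt;br /&gt;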
&amp;lt;br&amp;gt;&lt;br /&gt;
'''A Note about Static Source Code Review Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot understand the context of semantic constructs in code, and therefore is prone to a significant number of false positives. This is particularly true with C and C++. The technology is useful in determining interesting places in the code, however significant manual effort is required to validate the findings. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example:&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 char szTarget[12];&amp;lt;BR&amp;gt;&lt;br /&gt;
 char *s = &amp;quot;Hello, World&amp;quot;; &amp;lt;BR&amp;gt;&lt;br /&gt;
 size_t cSource = strlen_s(s,20); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncpy_s(szTarget,sizeof(szTarget),s,cSource); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncat_s(szTarget,sizeof(szTarget),s,cSource);&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15377</id>
		<title>Testing Guide Introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15377"/>
				<updated>2007-01-15T14:05:47Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Source Code Review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
=== The OWASP Testing Project ===&lt;br /&gt;
----&lt;br /&gt;
The OWASP Testing Project has been in development for many years. We wanted to help people understand the what, why, when, where, and how of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. We wanted to build a testing framework from which others can build their own testing programs or qualify other people’s processes. Writing the Testing Project has proven to be a difficult task. It has been a challenge to obtain consensus and develop the appropriate content, which would allow people to apply the overall content and framework described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework, presented as OWASP Testing Parts 1 and 2. This framework aims at helping organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience, rather than take the path of the “least common denominator.”&lt;br /&gt;
&lt;br /&gt;
'''The Economics of Insecure Software'''&amp;lt;br&amp;gt;&lt;br /&gt;
The cost of insecure software to the world economy is seemingly immeasurable. In June 2002, the US National Institute of Standards and Technology (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing ''(The economic impacts of inadequate infrastructure for software testing. (2002, June 28). Retrieved May 4, 2004, from http://www.nist.gov/public_affairs/releases/n02-10.htm)''&amp;lt;br&amp;gt;&lt;br /&gt;
Most people understand at least the basic issues, or have a deeper technical understanding of the vulnerabilities. Sadly, few are able to translate that knowledge into monetary value and thereby quantify the costs to their business. We believe that until this happens, CIOs will not be able to determine an accurate return on security investment, and subsequently assign appropriate budgets for software security. See Ross Anderson’s page at http://www.cl.cam.ac.uk/users/rja14/econsec.html for more information about the economics of security. &lt;br /&gt;
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently make appropriate business decisions about resources to manage the risk. Insecure software has its consequences, but insecure web applications, exposed to millions of users through the Internet, are a growing concern. Even now, the confidence of customers using the World Wide Web to purchase or cover their needs is decreasing as more and more web applications are exposed to attacks. &lt;br /&gt;
This introduction covers the processes involved in testing web applications: &lt;br /&gt;
* The scope of what to test &lt;br /&gt;
* Principles of testing &lt;br /&gt;
* Testing techniques explained &lt;br /&gt;
* The OWASP testing framework explained &lt;br /&gt;
The second part of this guide covers how to test each software development life cycle phase using the techniques described in this document. For example, Part 2 covers how to test for specific vulnerabilities such as SQL Injection by code inspection and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Scope of this Document'''&amp;lt;br&amp;gt;&lt;br /&gt;
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and remediate their software, or prepare for an audit. This document does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the Part 2 document mentioned above.&lt;br /&gt;
&lt;br /&gt;
'''What Do We Mean By Testing?'''&amp;lt;br&amp;gt;&lt;br /&gt;
During the development lifecycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: &lt;br /&gt;
* To put to test or proof &lt;br /&gt;
* To undergo a test &lt;br /&gt;
* To be assigned a standing or evaluation based on tests. &lt;br /&gt;
For the purposes of this document, testing is a process of comparing the state of something against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. &lt;br /&gt;
&lt;br /&gt;
'''The Software Development Life Cycle Process'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:SDLC.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 1: Generic SDLC Model'' &amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Scope of What To Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that “create” software, then it is logical that these are the factors that must be tested. Today most people generally test the technology or the software itself. In fact, most people today don’t test the software until it has already been created and is in the deployment phase of its lifecycle (i.e. code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. An effective testing program should have components that test: &lt;br /&gt;
* People – to ensure that there is adequate education and awareness &lt;br /&gt;
* Process – to ensure that there are adequate policies and standards and that people know how to follow these policies &lt;br /&gt;
* Technology – to ensure that the process has been effective in its implementation &lt;br /&gt;
Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policy, and process, you can catch issues that would later manifest themselves as defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at Fidelity National Financial (http://www.fnf.com) presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York: “If cars were built like applications…safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact and resistance to theft.” &amp;lt;br&amp;gt;&lt;br /&gt;
'''Feedback and Comments'''&amp;lt;br&amp;gt;&lt;br /&gt;
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.&lt;br /&gt;
&lt;br /&gt;
==Principles of Testing==&lt;br /&gt;
&lt;br /&gt;
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. &lt;br /&gt;
&lt;br /&gt;
'''There is No Silver Bullet'''&amp;lt;br&amp;gt;&lt;br /&gt;
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. &lt;br /&gt;
&lt;br /&gt;
'''Think Strategically, Not Tactically'''&amp;lt;br&amp;gt;&lt;br /&gt;
Over the last few years, security professionals have come to realize the fallacy of the patch and penetrate model that was pervasive in information security during the 1990s. The patch and penetrate model involves fixing a reported bug, but without proper investigation of the root cause. This model is usually associated with the window of vulnerability ''(1)'' shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. Vulnerability studies ''(2)'' have shown that, given the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year. The patch and penetrate model also rests on several wrong assumptions: patches can interfere with normal operations and might break existing applications, and not all users will (in the end) be aware of a patch’s availability. Consequently, not all of a product's users will apply patches, either because of this issue or because they lack knowledge about the patch's existence.&amp;lt;br&amp;gt;&lt;br /&gt;
''Note: (1) For more information about the window of vulnerability please refer to Bruce Schneier’s Crypto-Gram Issue #9, available at http://www.schneier.com/crypto-gram-0009.html'' &amp;lt;br&amp;gt;&lt;br /&gt;
''(2) Such as those included in Symantec’s Threat Reports''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:WindowExposure.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 2: Window of Vulnerability''&amp;lt;/center&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
To prevent reoccurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The SDLC is King'''&amp;lt;br&amp;gt;&lt;br /&gt;
The SDLC is a process that is well known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e. define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Test Early and Test Often'''&amp;lt;br&amp;gt;&lt;br /&gt;
Detecting a bug early within the SDLC allows it to be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect &amp;amp; prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Scope of Security'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g. confidential, secret, top secret). Discussions should occur with legal counsel to ensure that any specific security needs will be met. In the USA these might come from federal regulations such as the Gramm-Leach-Bliley Act (http://www.ftc.gov/privacy/glbact/), or from state laws such as California SB-1386 (http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html). For organizations based in EU countries, both country-specific regulation and EU Directives might apply; for example, Directive 95/46/EC makes it mandatory to treat personal data in applications with due care, whatever the application. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Mindset'''&amp;lt;br&amp;gt;&lt;br /&gt;
Successfully testing an application for security vulnerabilities requires thinking “outside of the box”. Normal use cases will test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find which assumptions made by web developers are not always true, and how they can be subverted. This is one of the reasons why automated tools are poor at testing for vulnerabilities: this creative thinking must be done on a case-by-case basis, and most web applications are developed in a unique way (even if using common frameworks). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understanding the Subject'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data flow diagrams, use cases, and more should be written in formal documents and available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use cases. Finally, it is good to have at least a basic security infrastructure that allows monitoring and trending of any attacks against your applications &amp;amp; network (e.g. IDS systems). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use the Right Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
While we have already stated that there is no tool silver bullet, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can assist in automation of many routine security tasks. These tools can simplify and speed the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Devil is in the Details'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use Source Code When Available'''&amp;lt;br&amp;gt;&lt;br /&gt;
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop Metrics'''&amp;lt;br&amp;gt;&lt;br /&gt;
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements and develop metrics that will reveal the application security trends within the organization. These metrics can show whether more education and training are required, whether there is a particular security mechanism that is not clearly understood by development, and whether the total number of security-related problems being found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics like those provided by the OWASP Metrics project and other organizations might be a good head start.&lt;br /&gt;
&lt;br /&gt;
==Testing Techniques Explained==&lt;br /&gt;
&lt;br /&gt;
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Part 2 of the OWASP Testing project will address this information. This section is included to provide context for the framework presented in the next Chapter and to highlight the advantages and disadvantages of some of the techniques that can be considered.&lt;br /&gt;
* Manual Inspections &amp;amp; Reviews &lt;br /&gt;
* Threat Modeling &lt;br /&gt;
* Code Review &lt;br /&gt;
* Penetration Testing &lt;br /&gt;
&lt;br /&gt;
=== Manual Inspections &amp;amp; Reviews ===&lt;br /&gt;
Manual inspections are human-driven reviews that typically test the security implications of the people, policies, and processes, but can include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or using interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. Asking someone how something works and why it was implemented in a specific way allows the tester to quickly determine if any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development lifecycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model. Not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design and/or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Requires no supporting technology &lt;br /&gt;
* Can be applied to a variety of situations&lt;br /&gt;
* Flexible &lt;br /&gt;
* Promotes team work &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Can be time consuming &lt;br /&gt;
* Supporting material not always available &lt;br /&gt;
* Requires significant human thought and skill to be effective!&lt;br /&gt;
&lt;br /&gt;
=== Threat Modeling ===&lt;br /&gt;
&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
In the context of the technical scope, threat modeling has become a popular technique to help system designers think about the security threats that their systems will face. It enables them to develop mitigation strategies for potential vulnerabilities. Threat modeling helps people focus their inevitably limited resources and attention on the parts of the system that most require it. Threat models should be created as early as possible in the software development life cycle, and should be revisited as the application evolves and development progresses. Threat modeling is essentially risk assessment for applications. It is recommended that all applications have a threat model developed and documented. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 ''(3)'' standard for risk assessment. This approach involves: &lt;br /&gt;
* Decomposing the application – through a process of manual inspection understanding how the application works, its assets, functionality and connectivity. &lt;br /&gt;
* Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business criticality. &lt;br /&gt;
* Exploring potential vulnerabilities (technical, operational, and management). &lt;br /&gt;
* Exploring potential threats – through a process of developing threat scenarios or attack trees, developing a realistic view of potential attack vectors from an attacker’s perspective. &lt;br /&gt;
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic. &lt;br /&gt;
The output from a threat model itself can vary, but is typically a collection of lists and diagrams. Part 2 of the OWASP Testing Guide (the detailed “How To” text) will outline a specific threat modeling methodology. There is no right or wrong way to develop threat models, and several techniques have evolved. The OCTAVE model from Carnegie Mellon (http://www.cert.org/octave/) is worth exploring. &amp;lt;br&amp;gt;&lt;br /&gt;
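The attack-tree step above can be sketched as a small data structure. This is a hypothetical minimal model (the class, method names, and the example scenario are our own, not part of the guide): a goal is feasible if any branch of an OR node, or every branch of an AND node, is feasible.&lt;br /&gt;

```java
// Minimal attack-tree sketch: the goal at the root is feasible if any (OR)
// or all (AND) of its sub-attacks are feasible; leaves carry the judgment
// made during the exploration step directly.
public class AttackTree {
    enum Kind { AND, OR, LEAF }
    final Kind kind;
    final boolean feasibleLeaf;      // used only when kind == LEAF
    final AttackTree[] children;

    AttackTree(Kind kind, boolean feasibleLeaf, AttackTree... children) {
        this.kind = kind;
        this.feasibleLeaf = feasibleLeaf;
        this.children = children;
    }

    static AttackTree leaf(boolean feasible) { return new AttackTree(Kind.LEAF, feasible); }
    static AttackTree and(AttackTree... c)   { return new AttackTree(Kind.AND, false, c); }
    static AttackTree or(AttackTree... c)    { return new AttackTree(Kind.OR, false, c); }

    boolean feasible() {
        if (kind == Kind.LEAF) return feasibleLeaf;
        for (AttackTree c : children) {
            if (kind == Kind.OR) {
                if (c.feasible()) return true;   // one feasible branch suffices
            } else if (!c.feasible()) {
                return false;                    // one blocked step kills the path
            }
        }
        return kind == Kind.AND;   // AND: all steps passed; OR: none did
    }

    public static void main(String[] args) {
        // Hypothetical goal: read another user's data = (guess a session id
        // AND sessions never expire) OR exploit an injectable search form.
        AttackTree goal = or(and(leaf(true), leaf(false)), leaf(true));
        System.out.println(goal.feasible());   // prints "true"
    }
}
```

Ranking the feasible leaves by business criticality of the assets they touch then drives the mitigation step.&lt;br /&gt;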
'''Advantages:'''&lt;br /&gt;
* Practical attacker's view of the system &lt;br /&gt;
* Flexible &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Relatively new technique &lt;br /&gt;
* Good threat models don’t automatically mean good software &lt;br /&gt;
''Note: (3) Stoneburner, G., Goguen, A., &amp;amp; Feringa, A. (2001, October). Risk management guide for information technology systems. Retrieved May 7, 2004, from http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf''&lt;br /&gt;
&lt;br /&gt;
=== Source Code Review ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes, “if you want to know what’s really going on, go straight to the source”. Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third-party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guesswork of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses, as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient at finding implementation issues such as places where input validation was not performed or where fail-open control procedures may be present. But keep in mind that operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one being analyzed ''(4)''.&amp;lt;br&amp;gt;&lt;br /&gt;
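As an illustration of the fail-open issue mentioned above, consider the following contrived sketch (hypothetical code, not taken from any real application): an error in the credential check is silently treated as success, something a reviewer sees at a glance but a black-box test only hits if it happens to trigger the exception.&lt;br /&gt;

```java
// Contrived example of a "fail open" control: if the credential check throws
// (database down, malformed input), the exception is swallowed and the user
// is let through anyway.
public class FailOpenCheck {
    interface CredentialStore { boolean isValid(String user, String pass) throws Exception; }

    // Fail-open: any error grants access.
    static boolean failOpen(CredentialStore store, String user, String pass) {
        try {
            return store.isValid(user, pass);
        } catch (Exception e) {
            return true;   // BUG: error treated as success
        }
    }

    // Fail-closed: any error denies access.
    static boolean failClosed(CredentialStore store, String user, String pass) {
        try {
            return store.isValid(user, pass);
        } catch (Exception e) {
            return false;  // error treated as failure
        }
    }

    public static void main(String[] args) {
        CredentialStore broken = (u, p) -> { throw new Exception("db down"); };
        System.out.println(failOpen(broken, "guest", "x"));    // prints "true": attacker is in
        System.out.println(failClosed(broken, "guest", "x"));  // prints "false": denied
    }
}
```

Penetration testing would only surface the first variant if the tester managed to make the backing store fail; in the source, the `return true` in a catch block stands out immediately.&lt;br /&gt;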
'''Advantages'''&lt;br /&gt;
* Completeness and effectiveness &lt;br /&gt;
* Accuracy &lt;br /&gt;
* Fast (for competent reviewers) &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Requires highly skilled security developers &lt;br /&gt;
* Can miss issues in compiled libraries &lt;br /&gt;
* Cannot easily detect run-time errors &lt;br /&gt;
* The source code actually deployed might differ from the one being analyzed&lt;br /&gt;
''Note: (4) See &amp;quot;Reflections on Trusting Trust&amp;quot; by Ken Thompson (http://cm.bell-labs.com/who/ken/trust.html)''&lt;br /&gt;
&lt;br /&gt;
* '''For more on code review OWASP manage a code review project''':&amp;lt;BR&amp;gt;&lt;br /&gt;
http://www.owasp.org/index.php/Category:OWASP_Code_Review_Project&lt;br /&gt;
&lt;br /&gt;
=== Penetration Testing ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Penetration testing has been a common technique used to test network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team would have access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but given the nature of web applications their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e. testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting whether some specific vulnerabilities are actually fixed in the source code deployed at the web site. &amp;lt;br&amp;gt;&lt;br /&gt;
'''Advantages'''&lt;br /&gt;
* Can be fast (and therefore cheap) &lt;br /&gt;
* Requires a relatively lower skill-set than source code review &lt;br /&gt;
* Tests the code that is actually being exposed &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Too late in the SDLC &lt;br /&gt;
* Front-impact testing only!&lt;br /&gt;
&lt;br /&gt;
=== The Need for a Balanced Approach ===&lt;br /&gt;
With so many techniques and so many approaches to testing the security of your web applications, it can be difficult to understand which techniques to use and when to use them.&lt;br /&gt;
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). &lt;br /&gt;
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases in the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. &lt;br /&gt;
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of complete testing. &lt;br /&gt;
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionSDLC.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 3: Proportion of Test Effort in SDLC''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The following figure shows a typical proportional representation overlaid onto testing techniques. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionTest.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 4: Proportion of Test Effort According to Test Technique''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''A Note about Web Application Scanners'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.&lt;br /&gt;
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 1: Magic Parameters'''&amp;lt;br&amp;gt;&lt;br /&gt;
Imagine a simple web application that accepts a name-value pair of “magic” and then the value. For simplicity, the GET request may be: ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=value&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt; To further simplify the example, the values in this case can only be ASCII characters a–z (upper or lowercase) and integers 0–9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will then be logged in and presented with an administrative screen with total control of the application. The HTTP request is now:&amp;lt;br&amp;gt; ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt;&lt;br /&gt;
Given that all of the other parameters were simple two- and three-character fields, it is not possible to start guessing combinations at approximately 28 characters. A web application scanner would need to brute force (or guess) the entire key space of 30 characters. That is up to 30^28 permutations, or trillions of HTTP requests! That is an electron in a digital haystack! &lt;br /&gt;
The code for this may look like the following: &amp;lt;br&amp;gt;&lt;br /&gt;
 public void doPost( HttpServletRequest request, HttpServletResponse response) &lt;br /&gt;
 { &lt;br /&gt;
 String magic = "sf8g7sfjdsurtsdieerwqredsgnfg8d"; &lt;br /&gt;
 boolean admin = magic.equals( request.getParameter("magic"));&lt;br /&gt;
 if (admin) doAdmin( request, response); &lt;br /&gt;
 else …. // normal processing &lt;br /&gt;
 } &lt;br /&gt;
Looking at the code, the vulnerability practically leaps off the page as a potential problem. &lt;br /&gt;
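To put the brute-force arithmetic in perspective, here is a rough sketch of the full key space, assuming the 62-symbol alphabet (a-z, A-Z, 0-9) and the roughly 30-character key length from the example (the class name is illustrative):

```java
import java.math.BigInteger;

// Rough sketch of the search space a black-box scanner would face
// for the "magic" backdoor parameter described above.
public class KeySpace {
    public static void main(String[] args) {
        // 26 lowercase letters, 26 uppercase letters, 10 digits
        BigInteger alphabet = BigInteger.valueOf(62);
        // Key length taken from the example (roughly 30 characters)
        BigInteger keySpace = alphabet.pow(30);
        // Prints a 54-digit number: far beyond what any scanner
        // could enumerate one HTTP request at a time.
        System.out.println(keySpace);
    }
}
```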
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 2: Bad Cryptography'''&amp;lt;br&amp;gt;&lt;br /&gt;
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' &amp;lt;br&amp;gt;&lt;br /&gt;
When a user is passed to site B, the key is sent on the query string in an HTTP redirect. Site B independently computes the hash and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, once the scheme is explained, its inadequacies can be worked out, and anyone who figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen only a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.&lt;br /&gt;
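A minimal sketch of why such a scheme is forgeable, assuming the token is simply MD5 over the username and date joined by a colon (the class, method names, and date format are illustrative assumptions, not from the guide):

```java
import java.math.BigInteger;
import java.security.MessageDigest;

// Because the token is an unkeyed hash of public values, an attacker
// who learns the recipe can mint a valid token for any username.
public class ForgedToken {
    static String token(String username, String date) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest((username + ":" + date).getBytes("UTF-8"));
        // Render the 128-bit digest as 32 hex characters
        return String.format("%032x", new BigInteger(1, digest));
    }

    public static void main(String[] args) throws Exception {
        // Site B would accept this token for the "admin" account,
        // even though the attacker never logged into site A.
        System.out.println(token("admin", "2007-01-15"));
    }
}
```

A keyed construction (for example, an HMAC with a server-side secret) or a server-side session token avoids this failure, since the attacker can no longer compute valid values on their own.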
&amp;lt;br&amp;gt;&lt;br /&gt;
'''A Note about Static Source Code Review Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot understand the context of semantic constructs in code, and is therefore prone to a significant number of false positives. This is particularly true with C and C++. The technology is useful in determining interesting places in the code; however, significant manual effort is required to validate the findings. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example:&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 char szTarget[12];&amp;lt;BR&amp;gt;&lt;br /&gt;
 char *s = &amp;quot;Hello, World&amp;quot;; &amp;lt;BR&amp;gt;&lt;br /&gt;
 size_t cSource = strlen_s(s,20); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncpy_s(szTarget,sizeof(szTarget),s,cSource); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncat_s(szTarget,sizeof(szTarget),s,cSource);&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15376</id>
		<title>Testing Guide Introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15376"/>
				<updated>2007-01-15T14:05:21Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Source Code Review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
=== The OWASP Testing Project ===&lt;br /&gt;
----&lt;br /&gt;
The OWASP Testing Project has been in development for many years. We wanted to help people understand the what, why, when, where, and how of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. We wanted to build a testing framework from which others can build their own testing programs or qualify other people’s processes. Writing the Testing Project has proven to be a difficult task. It has been a challenge to obtain consensus and develop the appropriate content, which would allow people to apply the overall content and framework described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework, presented as OWASP Testing Parts 1 and 2. This framework aims at helping organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience, rather than take the path of the “least common denominator.”&lt;br /&gt;
&lt;br /&gt;
'''The Economics of Insecure Software'''&amp;lt;br&amp;gt;&lt;br /&gt;
The cost of insecure software to the world economy is seemingly immeasurable. In June 2002, the US National Institute of Standards and Technology (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing ''(The economic impacts of inadequate infrastructure for software testing. (2002, June 28). Retrieved May 4, 2004, from http://www.nist.gov/public_affairs/releases/n02-10.htm)''&amp;lt;br&amp;gt;&lt;br /&gt;
Most people understand at least the basic issues, or have a deeper technical understanding of the vulnerabilities. Sadly, few are able to translate that knowledge into monetary value and thereby quantify the costs to their business. We believe that until this happens, CIOs will not be able to calculate an accurate return on security investment and subsequently assign appropriate budgets for software security. See Ross Anderson’s page at http://www.cl.cam.ac.uk/users/rja14/econsec.html for more information about the economics of security. &lt;br /&gt;
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently develop appropriate business decisions (resources) to manage the risk. Insecure software has its consequences, but insecure web applications, exposed to millions of users through the Internet, are a growing concern. Even now, the confidence of customers using the World Wide Web to make purchases or cover their needs is decreasing as more and more web applications are exposed to attacks. &lt;br /&gt;
This introduction covers the processes involved in testing web applications: &lt;br /&gt;
* The scope of what to test &lt;br /&gt;
* Principles of testing &lt;br /&gt;
* Testing techniques explained &lt;br /&gt;
* The OWASP testing framework explained &lt;br /&gt;
The second part of this guide covers how to test each software development life cycle phase using the techniques described in this document. For example, Part 2 covers how to test for specific vulnerabilities such as SQL Injection by code inspection and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Scope of this Document'''&amp;lt;br&amp;gt;&lt;br /&gt;
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and remediate their software, or prepare for an audit. This document does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the Part 2 document mentioned above. &lt;br /&gt;
&lt;br /&gt;
'''What Do We Mean By Testing?'''&amp;lt;br&amp;gt;&lt;br /&gt;
During the development lifecycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: &lt;br /&gt;
* To put to test or proof &lt;br /&gt;
* To undergo a test &lt;br /&gt;
* To be assigned a standing or evaluation based on tests. &lt;br /&gt;
For the purposes of this document, testing is a process of comparing the state of something against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. &lt;br /&gt;
&lt;br /&gt;
'''The Software Development Life Cycle Process'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model, as well as the (estimated) increasing cost of fixing security bugs in such a model. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:SDLC.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 1: Generic SDLC Model'' &amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Scope of What To Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that “create” software, then it is logical that these are the factors that must be tested. Today most people generally test the technology or the software itself. In fact, most people today don’t test the software until it has already been created and is in the deployment phase of its lifecycle (i.e. code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. An effective testing program should have components that test: People, to ensure that there is adequate education and awareness; Process, to ensure that there are adequate policies and standards and that people know how to follow these policies; and Technology, to ensure that the process has been effective in its implementation. Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policy, and process, you can catch issues that would later manifest themselves as defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at Fidelity National Financial (http://www.fnf.com) presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York: “If cars were built like applications…safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact and resistance to theft.” &amp;lt;br&amp;gt;&lt;br /&gt;
'''Feedback and Comments'''&amp;lt;br&amp;gt;&lt;br /&gt;
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.&lt;br /&gt;
&lt;br /&gt;
==Principles of Testing==&lt;br /&gt;
&lt;br /&gt;
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. &lt;br /&gt;
&lt;br /&gt;
'''There is No Silver Bullet'''&amp;lt;br&amp;gt;&lt;br /&gt;
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. &lt;br /&gt;
&lt;br /&gt;
'''Think Strategically, Not Tactically'''&amp;lt;br&amp;gt;&lt;br /&gt;
Over the last few years, security professionals have come to realize the fallacy of the patch and penetrate model that was pervasive in information security during the 1990’s. The patch and penetrate model involves fixing a reported bug without proper investigation of the root cause. This model is usually associated with the window of vulnerability ''(1)'' shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. Vulnerability studies ''(2)'' have shown that, with the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year. The patch and penetrate model also rests on several wrong assumptions: patches can interfere with normal operations and might break existing applications, and not all users are (in the end) aware of a patch’s availability. Consequently, not all of a product's users will apply patches, either because of this issue or because they lack knowledge about the patch's existence.&amp;lt;br&amp;gt;&lt;br /&gt;
''Note: (1) For more information about the window of vulnerability please refer to Bruce Schneier’s Cryptogram Issue #9, available at http://www.schneier.com/crypto-gram-0009.html'' &amp;lt;br&amp;gt;&lt;br /&gt;
''(2) Such as those included in Symantec’s Threat Reports''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:WindowExposure.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 2: Window of Vulnerability''&amp;lt;/center&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
To prevent reoccurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The SDLC is King'''&amp;lt;br&amp;gt;&lt;br /&gt;
The SDLC is a process that is well known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e. define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Test Early and Test Often'''&amp;lt;br&amp;gt;&lt;br /&gt;
Detecting a bug early in the SDLC allows it to be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect &amp;amp; prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Scope of Security'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g. confidential, secret, top secret). Discussions should occur with legal counsel to ensure that any specific security needs will be met. In the USA they might come from federal regulations such as the Gramm-Leach-Bliley Act (http://www.ftc.gov/privacy/glbact/), or from state laws such as California SB-1386 (http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html). For organizations based in EU countries, both country-specific regulation and EU Directives might apply; for example, Directive 95/46/EC makes it mandatory to treat personal data in applications with due care, whatever the application. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Mindset'''&amp;lt;br&amp;gt;&lt;br /&gt;
Successfully testing an application for security vulnerabilities requires thinking “outside of the box”. Normal use cases will test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find which assumptions made by web developers are not always true, and how they can be subverted. This is one of the reasons why automated tools are poor at testing for vulnerabilities: this creative thinking must be applied on a case-by-case basis, and most web applications are developed in a unique way (even if they use common frameworks). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understanding the Subject'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data flow diagrams, use cases, and more should be written in formal documents and available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use cases. Finally, it is good to have at least a basic security infrastructure that allows monitoring and trending of any attacks against your applications &amp;amp; network (e.g. an IDS). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use the Right Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
While we have already stated that there is no tool silver bullet, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can assist in automation of many routine security tasks. These tools can simplify and speed the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Devil is in the Details'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use Source Code When Available'''&amp;lt;br&amp;gt;&lt;br /&gt;
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop Metrics'''&amp;lt;br&amp;gt;&lt;br /&gt;
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements and develop metrics that will reveal the application security trends within the organization. These metrics can show if more education and training is required, if there is a particular security mechanism that is not clearly understood by development, and if the total number of security-related problems being found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics, like those provided by the OWASP Metrics project and other organizations, might be a good head start.&lt;br /&gt;
&lt;br /&gt;
==Testing Techniques Explained==&lt;br /&gt;
&lt;br /&gt;
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Part 2 of the OWASP Testing project will address this information. This section is included to provide context for the framework presented in the next Chapter and to highlight the advantages and disadvantages of some of the techniques that can be considered.&lt;br /&gt;
* Manual Inspections &amp;amp; Reviews &lt;br /&gt;
* Threat Modeling &lt;br /&gt;
* Code Review &lt;br /&gt;
* Penetration Testing &lt;br /&gt;
&lt;br /&gt;
=== Manual Inspections &amp;amp; Reviews ===&lt;br /&gt;
Manual inspections are human-driven reviews that typically test the security implications of the people, policies, and processes, but can include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or performing interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. Asking someone how something works and why it was implemented in a specific way allows the tester to quickly determine if any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development lifecycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model: not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design and/or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Requires no supporting technology &lt;br /&gt;
* Can be applied to a variety of situations&lt;br /&gt;
* Flexible &lt;br /&gt;
* Promotes team work &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Can be time consuming &lt;br /&gt;
* Supporting material not always available &lt;br /&gt;
* Requires significant human thought and skill to be effective!&lt;br /&gt;
&lt;br /&gt;
=== Threat Modeling ===&lt;br /&gt;
&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
In the context of the technical scope, threat modeling has become a popular technique to help system designers think about the security threats that their systems will face. It enables them to develop mitigation strategies for potential vulnerabilities. Threat modeling helps people focus their inevitably limited resources and attention on the parts of the system that most require it. Threat models should be created as early as possible in the software development life cycle, and should be revisited as the application evolves and development progresses. Threat modeling is essentially risk assessment for applications. It is recommended that all applications have a threat model developed and documented. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 ''(3)'' standard for risk assessment. This approach involves: &lt;br /&gt;
* Decomposing the application – through a process of manual inspection, understanding how the application works, its assets, functionality, and connectivity. &lt;br /&gt;
* Defining and classifying the assets – classify the assets as tangible or intangible and rank them according to business criticality. &lt;br /&gt;
* Exploring potential vulnerabilities (technical, operational, and management). &lt;br /&gt;
* Exploring potential threats – through a process of developing threat scenarios or attack trees, which develop a realistic view of potential attack vectors from an attacker’s perspective. &lt;br /&gt;
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic. &lt;br /&gt;
The output from a threat model itself can vary, but is typically a collection of lists and diagrams. Part 2 of the OWASP Testing Guide (the detailed “How To” text) will outline a specific Threat Modeling methodology. There is no right or wrong way to develop threat models, and several techniques have evolved. The OCTAVE model from Carnegie Mellon (http://www.cert.org/octave/) is worth exploring. &amp;lt;br&amp;gt;&lt;br /&gt;
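The decompose, classify, and rank steps above can be sketched as a simple likelihood-times-impact calculation in the spirit of NIST 800-30 (the class, the 1-3 scales, and the sample threats are illustrative assumptions, not part of the standard):

```java
import java.util.Arrays;
import java.util.Comparator;

// Illustrative threat ranking: likelihood and impact on a 1-3 scale
// (low/medium/high), with risk as their product, highest first.
public class ThreatRanking {
    static class Threat {
        final String name;
        final int likelihood, impact;
        Threat(String name, int likelihood, int impact) {
            this.name = name; this.likelihood = likelihood; this.impact = impact;
        }
        int risk() { return likelihood * impact; }
    }

    public static void main(String[] args) {
        Threat[] threats = {
            new Threat("SQL injection in login form", 3, 3),
            new Threat("Session fixation", 2, 2),
            new Threat("Verbose error pages", 3, 1)
        };
        // Sort highest risk first so mitigation effort goes where it matters
        Arrays.sort(threats, Comparator.comparingInt((Threat t) -> t.risk()).reversed());
        for (Threat t : threats) {
            System.out.println(t.risk() + "  " + t.name);
        }
    }
}
```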
'''Advantages:'''&lt;br /&gt;
* Practical attacker's view of the system &lt;br /&gt;
* Flexible &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Relatively new technique &lt;br /&gt;
* Good threat models don’t automatically mean good software &lt;br /&gt;
''Note: (3) Stoneburner, G., Goguen, A., &amp;amp; Feringa, A. (2001, October). Risk management guide for information technology systems. Retrieved May 7, 2004, from http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf''&lt;br /&gt;
&lt;br /&gt;
=== Source Code Review ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes, “if you want to know what’s really going on, go straight to the source”. Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third-party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guesswork of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses, as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient at finding implementation issues such as places where input validation was not performed or where fail-open control procedures may be present. But keep in mind that operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one being analyzed ''(4)''.&amp;lt;br&amp;gt;&lt;br /&gt;
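As a minimal illustration of an implementation issue that leaps out in source but is guesswork from outside, consider a query built by string concatenation (hypothetical code, not from the guide):

```java
// Hypothetical example of a flaw a source review spots immediately:
// user input concatenated straight into SQL with no validation.
public class UserLookup {
    // Vulnerable: input like  ' OR '1'='1  changes the query's meaning.
    static String buildQuery(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    public static void main(String[] args) {
        // Shows how attacker-supplied input rewrites the query structure
        System.out.println(buildQuery("' OR '1'='1"));
    }
}
```

A reviewer would recommend a parameterized query (for example, java.sql.PreparedStatement with a ? placeholder), so that input can never alter the query structure.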
'''Advantages'''&lt;br /&gt;
* Completeness and effectiveness &lt;br /&gt;
* Accuracy &lt;br /&gt;
* Fast (for competent reviewers) &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Requires highly skilled security developers &lt;br /&gt;
* Can miss issues in compiled libraries &lt;br /&gt;
* Cannot detect run-time errors easily &lt;br /&gt;
* The source code actually deployed might differ from the one being analyzed.&lt;br /&gt;
''Note: (4) See &amp;quot;Reflections on Trusting Trust&amp;quot; by Ken Thompson (http://cm.bell-labs.com/who/ken/trust.html)''&lt;br /&gt;
&lt;br /&gt;
* '''For more on code review, OWASP maintains a Code Review project:'''&amp;lt;BR&amp;gt;&lt;br /&gt;
http://www.owasp.org/index.php/Category:OWASP_Code_Review_Project&lt;br /&gt;
&lt;br /&gt;
=== Penetration Testing ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Penetration testing has been a common technique for testing network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team has access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but, given the nature of web applications, their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e. testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting whether some specific vulnerabilities are actually fixed in the source code deployed at the web site. &amp;lt;br&amp;gt;&lt;br /&gt;
'''Advantages'''&lt;br /&gt;
* Can be fast (and therefore cheap) &lt;br /&gt;
* Requires a relatively lower skill-set than source code review &lt;br /&gt;
* Tests the code that is actually being exposed &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Too late in the SDLC &lt;br /&gt;
* Front-impact testing only!&lt;br /&gt;
&lt;br /&gt;
=== The Need for a Balanced Approach ===&lt;br /&gt;
With so many techniques and so many approaches to testing the security of your web applications, it can be difficult to understand which techniques to use and when to use them.&lt;br /&gt;
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). &lt;br /&gt;
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases in the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. &lt;br /&gt;
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of complete testing. &lt;br /&gt;
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionSDLC.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 3: Proportion of Test Effort in SDLC''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The following figure shows a typical proportional representation overlaid onto testing techniques. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionTest.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 4: Proportion of Test Effort According to Test Technique''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''A Note about Web Application Scanners'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.&lt;br /&gt;
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 1: Magic Parameters'''&amp;lt;br&amp;gt;&lt;br /&gt;
Imagine a simple web application that accepts a name-value pair of “magic” and then the value. For simplicity, the GET request may be: ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=value&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt; To further simplify the example, the values in this case can only be ASCII characters a – z (upper or lowercase) and digits 0 – 9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will be logged in and presented with an administrative screen with total control of the application. The HTTP request is now:&amp;lt;br&amp;gt; ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt;&lt;br /&gt;
Given that all of the other parameters were simple two- and three-character fields, it is not feasible to start guessing combinations at approximately 28 characters. A web application scanner would need to brute force (or guess) the entire key space of 30 characters over a 62-symbol alphabet. That is up to 62^30 permutations, or trillions upon trillions of HTTP requests: an electron in a digital haystack! &lt;br /&gt;
The code for this may look like the following: &amp;lt;br&amp;gt;&lt;br /&gt;
 public void doPost( HttpServletRequest request, HttpServletResponse response) &lt;br /&gt;
 { &lt;br /&gt;
 String magic = "sf8g7sfjdsurtsdieerwqredsgnfg8d"; &lt;br /&gt;
 boolean admin = magic.equals( request.getParameter("magic"));&lt;br /&gt;
 if (admin) doAdmin( request, response); &lt;br /&gt;
 else ... // normal processing &lt;br /&gt;
 } &lt;br /&gt;
In a source code review, this vulnerability practically leaps off the page as a potential problem. &lt;br /&gt;
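To make the search-space claim concrete, the key space implied by the example (62 possible symbols, 30-character key) can be computed directly. This is an illustrative sketch, not part of the guide’s example; the class name is invented:&lt;br /&gt;

```java
import java.math.BigInteger;

public class KeySpace {
    public static void main(String[] args) {
        // Alphabet from the example: a-z, A-Z and the digits 0-9 -> 62 symbols.
        BigInteger symbols = BigInteger.valueOf(62);
        // A 30-character key drawn from that alphabet.
        BigInteger keySpace = symbols.pow(30);
        // Prints the total number of candidate keys a scanner would
        // have to try in the worst case.
        System.out.println(keySpace);
    }
}
```

Running it prints a 54-digit number, which is why exhaustive guessing over HTTP is hopeless while a code reviewer spots the constant immediately.&lt;br /&gt;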
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 2: Bad Cryptography'''&amp;lt;br&amp;gt;&lt;br /&gt;
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' &amp;lt;br&amp;gt;&lt;br /&gt;
When a user is passed to site B, the key is sent on the query string in an HTTP redirect. Site B independently computes the hash and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, as the scheme is explained, its inadequacies become obvious: anyone who figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen only a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.&lt;br /&gt;
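The attack on this scheme can be sketched in a few lines: once the recipe ''Hash { username : date }'' is known, anyone can mint a valid token for any user. The class name, method name, and date format below are illustrative assumptions:&lt;br /&gt;

```java
import java.security.MessageDigest;

public class TokenForger {
    // Recreates the predictable single-sign-on token described above.
    // The "username:date" layout is the scheme from the example, not a real API.
    static String token(String username, String date) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest((username + ":" + date).getBytes("UTF-8"));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // Anyone who learns the recipe can mint a token for any user:
        System.out.println(token("admin", "2007-01-15"));
    }
}
```

Site B cannot distinguish this forged token from one produced legitimately by site A, which is exactly what an interview or a code review would have revealed.&lt;br /&gt;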
&amp;lt;br&amp;gt;&lt;br /&gt;
'''A Note about Static Source Code Review Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot understand the context of semantic constructs in code, and is therefore prone to a significant number of false positives. This is particularly true with C and C++. The technology is useful in determining interesting places in the code; however, significant manual effort is required to validate the findings. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example:&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 char szTarget[12];&amp;lt;BR&amp;gt;&lt;br /&gt;
 char *s = &amp;quot;Hello, World&amp;quot;; &amp;lt;BR&amp;gt;&lt;br /&gt;
 size_t cSource = strlen_s(s,20); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncpy_s(szTarget,sizeof(szTarget),s,cSource); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncat_s(szTarget,sizeof(szTarget),s,cSource);&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15375</id>
		<title>Testing Guide Introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15375"/>
				<updated>2007-01-15T14:04:57Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Source Code Review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
=== The OWASP Testing Project ===&lt;br /&gt;
----&lt;br /&gt;
The OWASP Testing Project has been in development for many years. We wanted to help people understand the what, why, when, where, and how of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. We wanted to build a testing framework from which others can build their own testing programs or qualify other people’s processes. Writing the Testing Project has proven to be a difficult task. It has been a challenge to obtain consensus and develop the appropriate content, which would allow people to apply the overall content and framework described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated into the software development life cycle. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework, presented as OWASP Testing Parts 1 and 2. This framework aims to help organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience, rather than take the path of the “least common denominator.”&lt;br /&gt;
&lt;br /&gt;
'''The Economics of Insecure Software'''&amp;lt;br&amp;gt;&lt;br /&gt;
The cost of insecure software to the world economy is seemingly immeasurable. In June 2002, the US National Institute of Standards (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing ''(The economic impacts of inadequate infrastructure for software testing. (2002, June 28). Retrieved May 4, 2004, from http://www.nist.gov/public_affairs/releases/n02-10.htm)''&amp;lt;br&amp;gt;&lt;br /&gt;
Most people understand at least the basic issues, or have a deeper technical understanding of the vulnerabilities. Sadly, few are able to translate that knowledge into monetary value and thereby quantify the costs to their business. We believe that until this happens, CIOs will not be able to calculate an accurate return on security investment and subsequently assign appropriate budgets for software security. See Ross Anderson’s page at http://www.cl.cam.ac.uk/users/rja14/econsec.html for more information about the economics of security. &lt;br /&gt;
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently make appropriate business decisions (and assign resources) to manage the risk. Insecure software has its consequences, but insecure web applications, exposed to millions of users through the Internet, are a growing concern. Even now, the confidence of customers using the World Wide Web to purchase or cover their needs is decreasing as more and more web applications are exposed to attacks. &lt;br /&gt;
This introduction covers the processes involved in testing web applications: &lt;br /&gt;
* The scope of what to test &lt;br /&gt;
* Principles of testing &lt;br /&gt;
* Testing techniques explained &lt;br /&gt;
* The OWASP testing framework explained &lt;br /&gt;
The second part of this guide covers how to test each software development life cycle phase using the techniques described in this document. For example, Part 2 covers how to test for specific vulnerabilities such as SQL Injection by code inspection and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Scope of this Document'''&amp;lt;br&amp;gt;&lt;br /&gt;
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and remediate their software, or prepare for an audit. This document does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the Part 2 document mentioned above.&lt;br /&gt;
'''What Do We Mean By Testing?'''&amp;lt;br&amp;gt;&lt;br /&gt;
During the development lifecycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: &lt;br /&gt;
* To put to test or proof &lt;br /&gt;
* To undergo a test &lt;br /&gt;
* To be assigned a standing or evaluation based on tests. &lt;br /&gt;
For the purposes of this document, testing is a process of comparing the state of something against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. &lt;br /&gt;
&lt;br /&gt;
'''The Software Development Life Cycle Process'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security in it. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:SDLC.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 1: Generic SDLC Model'' &amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Scope of What To Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that “create” software, then it is logical that these are the factors that must be tested. Today most people generally test only the technology or the software itself. In fact, most people today don’t test the software until it has already been created and is in the deployment phase of its lifecycle (i.e. code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. An effective testing program should have components that test:&lt;br /&gt;
* People – to ensure that there is adequate education and awareness &lt;br /&gt;
* Process – to ensure that there are adequate policies and standards and that people know how to follow these policies &lt;br /&gt;
* Technology – to ensure that the process has been effective in its implementation &lt;br /&gt;
Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policy, and process, you can catch issues that would later manifest themselves as defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at Fidelity National Financial (http://www.fnf.com), presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York: “If cars were built like applications…safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact and resistance to theft.” &amp;lt;br&amp;gt;&lt;br /&gt;
'''Feedback and Comments'''&amp;lt;br&amp;gt;&lt;br /&gt;
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.&lt;br /&gt;
&lt;br /&gt;
==Principles of Testing==&lt;br /&gt;
&lt;br /&gt;
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. &lt;br /&gt;
&lt;br /&gt;
'''There is No Silver Bullet'''&amp;lt;br&amp;gt;&lt;br /&gt;
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. &lt;br /&gt;
&lt;br /&gt;
'''Think Strategically, Not Tactically'''&amp;lt;br&amp;gt;&lt;br /&gt;
Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990s. The patch-and-penetrate model involves fixing a reported bug without proper investigation of the root cause. This model is usually associated with the window of vulnerability ''(1)'' shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. Vulnerability studies ''(2)'' have shown that, given the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year. The patch-and-penetrate model also rests on several wrong assumptions: that patches do not interfere with normal operations or break existing applications, and that all users are aware of a patch’s availability. Consequently, not all of a product’s users will apply patches, either because of these issues or because they lack knowledge of the patch’s existence.&amp;lt;br&amp;gt;&lt;br /&gt;
''Note: (1) For more information about the window of vulnerability please refer to Bruce Schneier’s Cryptogram Issue #9, available at http://www.schneier.com/crypto-gram-0009.html'' &amp;lt;br&amp;gt;&lt;br /&gt;
''(2) Such as those included in Symantec’s Threat Reports''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:WindowExposure.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 2: Window of Vulnerability''&amp;lt;/center&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
To prevent reoccurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The SDLC is King'''&amp;lt;br&amp;gt;&lt;br /&gt;
The SDLC is a process that is well known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e. define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Test Early and Test Often'''&amp;lt;br&amp;gt;&lt;br /&gt;
Detecting a bug early within the SDLC allows it to be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect &amp;amp; prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Scope of Security'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g. confidential, secret, top secret). Discussions should occur with legal counsel to ensure that any specific security needs will be met. In the USA these requirements might come from federal regulations such as the Gramm-Leach-Bliley Act (http://www.ftc.gov/privacy/glbact/), or from state laws such as California SB-1386 (http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html). For organizations based in EU countries, both country-specific regulation and EU directives might apply; for example, Directive 96/46/EC4 makes it mandatory to treat personal data in applications with due care, whatever the application. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Mindset'''&amp;lt;br&amp;gt;&lt;br /&gt;
Successfully testing an application for security vulnerabilities requires thinking “outside of the box”. Normal use cases test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find which assumptions made by web developers are not always true, and how they can be subverted. This is one of the reasons why automated tools are poor at testing for vulnerabilities: this creative thinking must be applied on a case-by-case basis, as most web applications are developed in a unique way (even when using common frameworks). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understanding the Subject'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data flow diagrams, use cases, and more should be written in formal documents and available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use cases. Finally, it is good to have at least a basic security infrastructure that allows monitoring and trending of any attacks against your applications &amp;amp; network (e.g. IDS systems). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use the Right Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
While we have already stated that there is no tool silver bullet, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can assist in automation of many routine security tasks. These tools can simplify and speed the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Devil is in the Details'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use Source Code When Available'''&amp;lt;br&amp;gt;&lt;br /&gt;
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop Metrics'''&amp;lt;br&amp;gt;&lt;br /&gt;
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization. These metrics can show if more education and training is required, if there is a particular security mechanism that is not clearly understood by development, and if the total number of security related problems being found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed so using standard metrics like those provided by the OWASP Metrics project and other organizations might be a good head start.&lt;br /&gt;
&lt;br /&gt;
==Testing Techniques Explained==&lt;br /&gt;
&lt;br /&gt;
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Part 2 of the OWASP Testing project will address this information. This section is included to provide context for the framework presented in the next Chapter and to highlight the advantages and disadvantages of some of the techniques that can be considered.&lt;br /&gt;
* Manual Inspections &amp;amp; Reviews &lt;br /&gt;
* Threat Modeling &lt;br /&gt;
* Code Review &lt;br /&gt;
* Penetration Testing &lt;br /&gt;
&lt;br /&gt;
=== Manual Inspections &amp;amp; Reviews ===&lt;br /&gt;
Manual inspections are human-driven reviews that typically test the security implications of the people, policies, and processes, but can include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or using interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. By asking someone how something works and why it was implemented in a specific way, it allows the tester to quickly determine if any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development lifecycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust but verify model. Not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design and/or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Requires no supporting technology &lt;br /&gt;
* Can be applied to a variety of situations&lt;br /&gt;
* Flexible &lt;br /&gt;
* Promotes team work &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Can be time consuming &lt;br /&gt;
* Supporting material not always available &lt;br /&gt;
* Requires significant human thought and skill to be effective!&lt;br /&gt;
&lt;br /&gt;
=== Threat Modeling ===&lt;br /&gt;
&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
In the context of the technical scope, threat modeling has become a popular technique to help system designers think about the security threats that their systems will face. It enables them to develop mitigation strategies for potential vulnerabilities. Threat modeling helps people focus their inevitably limited resources and attention on the parts of the system that most require it. Threat models should be created as early as possible in the software development life cycle, and should be revisited as the application evolves and development progresses. Threat modeling is essentially risk assessment for applications. It is recommended that all applications have a threat model developed and documented. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 ''(3)'' standard for risk assessment. This approach involves: &lt;br /&gt;
* Decomposing the application – through a process of manual inspection, understanding how the application works, its assets, functionality, and connectivity. &lt;br /&gt;
* Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business criticality. &lt;br /&gt;
* Exploring potential vulnerabilities (technical, operational, and management). &lt;br /&gt;
* Exploring potential threats – through a process of developing threat scenarios or attack trees, which develops a realistic view of potential attack vectors from an attacker’s perspective. &lt;br /&gt;
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic. &lt;br /&gt;
The output from a threat model itself can vary but is typically a collection of lists and diagrams. Part 2 of the OWASP Testing Guide (the detailed “How To” text) will outline a specific Threat Modeling methodology. There is no right or wrong way to develop threat models, and several techniques have evolved. The OCTAVE model from Carnegie Mellon (http://www.cert.org/octave/) is worth exploring. &amp;lt;br&amp;gt;&lt;br /&gt;
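The classification and ranking steps above can be sketched as a simple likelihood-times-impact score, which is how NIST 800-30 characterizes risk. The class name, threat names, and 1–3 rating scale below are illustrative assumptions, not part of the guide:

```java
// Minimal sketch of NIST 800-30-style risk scoring for a threat model.
// Threat names and ratings are hypothetical examples.
public class ThreatModelSketch {
    // risk = likelihood x impact, each rated 1 (low) to 3 (high)
    static int risk(int likelihood, int impact) {
        return likelihood * impact;
    }

    public static void main(String[] args) {
        String[] threats = { "SQL injection in login form", "Session token theft", "Verbose error pages" };
        int[] likelihood = { 3, 2, 3 };
        int[] impact     = { 3, 3, 1 };
        // Rank-ready scores: mitigation effort goes to the highest numbers first.
        for (int i = 0; i != threats.length; i++) {
            System.out.println(threats[i] + " -> risk " + risk(likelihood[i], impact[i]));
        }
    }
}
```

In practice the ratings come out of the decomposition and interview steps; the point is only that once assets and threats are enumerated, ranking them for mitigation is straightforward arithmetic.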
'''Advantages:'''&lt;br /&gt;
* Practical attacker's view of the system &lt;br /&gt;
* Flexible &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Relatively new technique &lt;br /&gt;
* Good threat models don’t automatically mean good software &lt;br /&gt;
''Note: (3) Stoneburner, G., Goguen, A., &amp;amp; Feringa, A. (2001, October). Risk management guide for information technology systems. Retrieved May 7, 2004, from http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf''&lt;br /&gt;
&lt;br /&gt;
=== Source Code Review ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes, “if you want to know what’s really going on, go straight to the source”. Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guesswork of black box testing (such as penetration testing). Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses, as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient for finding implementation issues such as places where input validation was not performed or where fail open control procedures may be present. But keep in mind that operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one being analyzed ''(4)''.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Advantages'''&lt;br /&gt;
* Completeness and effectiveness &lt;br /&gt;
* Accuracy &lt;br /&gt;
* Fast (for competent reviewers) &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Requires highly skilled security developers &lt;br /&gt;
* Can miss issues in calls to compiled libraries &lt;br /&gt;
* Cannot detect run-time errors easily &lt;br /&gt;
* The source code actually deployed might differ from the one being analyzed.&lt;br /&gt;
''Note: (4) See &amp;quot;Reflections on Trusting Trust&amp;quot; by Ken Thompson (http://cm.bell-labs.com/who/ken/trust.html)''&lt;br /&gt;
&lt;br /&gt;
* '''For more on code review, see the OWASP Code Review Project''':&amp;lt;BR&amp;gt;&lt;br /&gt;
http://www.owasp.org/index.php/Category:OWASP_Code_Review_Project&lt;br /&gt;
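As an illustration of the “fail open control procedures” mentioned above, the hypothetical check below grants access whenever the role lookup throws an exception, a bug that leaps out during a source review but is invisible to most black box tests (the method and user names are invented for the example):

```java
// Illustrative fail-open access check of the kind a code review catches quickly.
public class FailOpenExample {
    static boolean isAuthorizedFailOpen(String user) {
        try {
            return lookupRole(user).equals("admin");
        } catch (Exception e) {
            return true;   // fail open: any lookup error grants access
        }
    }

    static boolean isAuthorizedFailClosed(String user) {
        try {
            return lookupRole(user).equals("admin");
        } catch (Exception e) {
            return false;  // fail closed: lookup errors deny access
        }
    }

    // Stand-in for a directory or database lookup; unknown users throw.
    static String lookupRole(String user) {
        if (user.equals("alice")) return "admin";
        throw new RuntimeException("user not found");
    }

    public static void main(String[] args) {
        System.out.println(isAuthorizedFailOpen("mallory"));   // true  -- the bug
        System.out.println(isAuthorizedFailClosed("mallory")); // false
    }
}
```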
&lt;br /&gt;
=== Penetration Testing ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Penetration testing has been a common technique for testing network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team has access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but, given the nature of web applications, their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e. testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting whether specific vulnerabilities are actually fixed in the source code deployed at the web site. &amp;lt;br&amp;gt;&lt;br /&gt;
'''Advantages'''&lt;br /&gt;
* Can be fast (and therefore cheap) &lt;br /&gt;
* Requires a relatively lower skill-set than source code review &lt;br /&gt;
* Tests the code that is actually being exposed &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Too late in the SDLC &lt;br /&gt;
* Front impact testing only!&lt;br /&gt;
&lt;br /&gt;
=== The Need for a Balanced Approach ===&lt;br /&gt;
With so many techniques and so many approaches to testing the security of your web applications, it can be difficult to understand which techniques to use and when to use them.&lt;br /&gt;
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). &lt;br /&gt;
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases in the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. &lt;br /&gt;
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of complete testing. &lt;br /&gt;
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionSDLC.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 3: Proportion of Test Effort in SDLC''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The following figure shows a typical proportional representation overlaid onto testing techniques. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionTest.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 4: Proportion of Test Effort According to Test Technique''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''A Note about Web Application Scanners'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.&lt;br /&gt;
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 1: Magic Parameters'''&amp;lt;br&amp;gt;&lt;br /&gt;
Imagine a simple web application that accepts a name-value pair of “magic” and then the value. For simplicity, the GET request may be: ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=value&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt; To further simplify the example, the values in this case can only be ASCII characters a – z (upper or lowercase) and integers 0 – 9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will then be logged in and presented with an administrative screen with total control of the application. The HTTP request is now:&amp;lt;br&amp;gt; ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt;&lt;br /&gt;
Given that all of the other parameters were simple two- and three-character fields, it is not feasible to start guessing combinations at approximately 28 characters. A web application scanner would need to brute force (or guess) the entire key space of 30 characters. That is up to 30^28 permutations, or trillions of HTTP requests! That is an electron in a digital haystack! &lt;br /&gt;
The code for this may look like the following: &amp;lt;br&amp;gt;&lt;br /&gt;
 public void doPost( HttpServletRequest request, HttpServletResponse response) &lt;br /&gt;
 { &lt;br /&gt;
 String magic = "sf8g7sfjdsurtsdieerwqredsgnfg8d"; &lt;br /&gt;
 boolean admin = magic.equals( request.getParameter("magic"));&lt;br /&gt;
 if (admin) doAdmin( request, response); &lt;br /&gt;
 else …. // normal processing &lt;br /&gt;
 } &lt;br /&gt;
By looking in the code, the vulnerability practically leaps off the page as a potential problem. &lt;br /&gt;
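To make the scanner's brute-force problem concrete, the key space described above (a–z in both cases plus 0–9 gives 62 possible symbols in each of 30 positions) can be counted directly. This is hypothetical arithmetic added for illustration, not code from the guide:

```java
import java.math.BigInteger;

// Counts the key space a black-box scanner faces: 'alphabet' possible
// characters in each of 'length' positions.
public class KeySpace {
    static BigInteger keySpace(int alphabet, int length) {
        return BigInteger.valueOf(alphabet).pow(length);
    }

    public static void main(String[] args) {
        BigInteger requests = keySpace(62, 30);
        // 62^30 is a 54-digit count of candidate HTTP requests
        System.out.println(requests.toString().length());
    }
}
```

A reviewer reading the servlet source, by contrast, finds the constant in one line.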
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 2: Bad Cryptography'''&amp;lt;br&amp;gt;&lt;br /&gt;
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' &amp;lt;br&amp;gt;&lt;br /&gt;
When a user is passed to site B, he/she will send the key on the query string to site B in an HTTP re-direct. Site B independently computes the hash, and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, as we explain the scheme, the inadequacies can be worked out, and it can be seen how anyone that figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.&lt;br /&gt;
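A sketch of why the scheme fails: because ''Hash { username : date }'' contains no secret, an attacker who learns the format can compute exactly the token site B expects for any username. The class, token format, and sample values below are assumptions for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Hypothetical reconstruction of the single sign-on token described in the
// text: MD5 over "username:date" with no secret key.
public class TokenForgery {
    static String token(String username, String date) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest((username + ":" + date).getBytes(StandardCharsets.UTF_8));
            StringBuilder hex = new StringBuilder();
            for (int i = 0; i != digest.length; i++) {
                hex.append(String.format("%02x", digest[i]));
            }
            return hex.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Site B recomputes the token for the user named in the request...
        String expectedBySiteB = token("admin", "2007-01-15");
        // ...so an attacker who merely knows the scheme forges the same value.
        String forged = token("admin", "2007-01-15");
        System.out.println(forged.equals(expectedBySiteB)); // true: no secret, no security
    }
}
```

The scanner sees only an opaque 128-bit value; the interview or code review reveals in minutes that the value is reproducible by anyone.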
&amp;lt;br&amp;gt;&lt;br /&gt;
'''A Note about Static Source Code Review Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot understand the context of semantic constructs in code, and is therefore prone to a significant number of false positives. This is particularly true with C and C++. The technology is useful for determining interesting places in the code; however, significant manual effort is required to validate the findings. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example:&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 char szTarget[12];&amp;lt;BR&amp;gt;&lt;br /&gt;
 char *s = &amp;quot;Hello, World&amp;quot;; &amp;lt;BR&amp;gt;&lt;br /&gt;
 size_t cSource = strlen_s(s,20); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncpy_s(szTarget,sizeof(szTarget),s,cSource); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncat_s(szTarget,sizeof(szTarget),s,cSource);&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15374</id>
		<title>Testing Guide Introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=15374"/>
				<updated>2007-01-15T14:04:11Z</updated>
		
		<summary type="html">&lt;p&gt;Darrellgrundy: /* Source Code Review */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
=== The OWASP Testing Project ===&lt;br /&gt;
----&lt;br /&gt;
The OWASP Testing Project has been in development for many years. We wanted to help people understand the what, why, when, where, and how of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. We wanted to build a testing framework from which others can build their own testing programs or qualify other people’s processes. Writing the Testing Project has proven to be a difficult task. It has been a challenge to obtain consensus and develop the appropriate content, which would allow people to apply the overall content and framework described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated into the software development life cycle. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework, presented as OWASP Testing Parts 1 and 2. This framework aims at helping organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience, rather than take the path of the “least common denominator.”&lt;br /&gt;
&lt;br /&gt;
'''The Economics of Insecure Software'''&amp;lt;br&amp;gt;&lt;br /&gt;
The cost of insecure software to the world economy is seemingly immeasurable. In June 2002, the US National Institute of Standards and Technology (NIST) published a survey on the cost to the US economy of inadequate software testing ''(The economic impacts of inadequate infrastructure for software testing. (2002, June 28). Retrieved May 4, 2004, from http://www.nist.gov/public_affairs/releases/n02-10.htm)''&amp;lt;br&amp;gt;&lt;br /&gt;
Most people understand at least the basic issues, or have a deeper technical understanding of the vulnerabilities. Sadly, few are able to translate that knowledge into monetary value and thereby quantify the costs to their business. We believe that until this happens, CIOs will not be able to calculate an accurate return on security investment and subsequently assign appropriate budgets for software security. See Ross Anderson’s page at http://www.cl.cam.ac.uk/users/rja14/econsec.html for more information about the economics of security. &lt;br /&gt;
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business and consequently develop appropriate business decisions (resources) to manage the risk. Insecure software has its consequences, but insecure web applications, exposed to millions of users through the Internet, are a growing concern. Even now, the confidence of customers using the World Wide Web for their purchases and other needs is decreasing as more and more web applications are exposed to attacks. &lt;br /&gt;
This introduction covers the processes involved in testing web applications: &lt;br /&gt;
* The scope of what to test &lt;br /&gt;
* Principles of testing &lt;br /&gt;
* Testing techniques explained &lt;br /&gt;
* The OWASP testing framework explained &lt;br /&gt;
The second part of this guide covers how to test each software development life cycle phase using the techniques described in this document. For example, Part 2 covers how to test for specific vulnerabilities such as SQL Injection by code inspection and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Scope of this Document'''&amp;lt;br&amp;gt;&lt;br /&gt;
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and remediate their software, or prepare for an audit. This document does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the Part 2 document mentioned above. &lt;br /&gt;
&lt;br /&gt;
'''What Do We Mean By Testing?'''&amp;lt;br&amp;gt;&lt;br /&gt;
During the development lifecycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: &lt;br /&gt;
* To put to test or proof &lt;br /&gt;
* To undergo a test &lt;br /&gt;
* To be assigned a standing or evaluation based on tests. &lt;br /&gt;
For the purposes of this document, testing is a process of comparing the state of something against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. &lt;br /&gt;
&lt;br /&gt;
'''The Software Development Life Cycle Process'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security. If a SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:SDLC.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 1: Generic SDLC Model'' &amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Scope of What To Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that “create” software, then it is logical that these are the factors that must be tested. Today most people generally test the technology or the software itself. In fact, most people today don’t test the software until it has already been created and is in the deployment phase of its lifecycle (i.e. code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. An effective testing program should have components that test: &lt;br /&gt;
* People – to ensure that there is adequate education and awareness &lt;br /&gt;
* Process – to ensure that there are adequate policies and standards and that people know how to follow these policies &lt;br /&gt;
* Technology – to ensure that the process has been effective in its implementation &lt;br /&gt;
Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policy, and process, you can catch issues that would later manifest themselves as defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at Fidelity National Financial (http://www.fnf.com), presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York: “If cars were built like applications…safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact and resistance to theft.” &amp;lt;br&amp;gt;&lt;br /&gt;
'''Feedback and Comments'''&amp;lt;br&amp;gt;&lt;br /&gt;
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.&lt;br /&gt;
&lt;br /&gt;
==Principles of Testing==&lt;br /&gt;
&lt;br /&gt;
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. &lt;br /&gt;
&lt;br /&gt;
'''There is No Silver Bullet'''&amp;lt;br&amp;gt;&lt;br /&gt;
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. &lt;br /&gt;
&lt;br /&gt;
'''Think Strategically, Not Tactically'''&amp;lt;br&amp;gt;&lt;br /&gt;
Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990’s. The patch-and-penetrate model involves fixing a reported bug without proper investigation of the root cause. This model is usually associated with the window of vulnerability ''(1)'' shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. Vulnerability studies ''(2)'' have shown that, given the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between when a vulnerability is uncovered and when an automated attack against it is developed and released is decreasing every year. There are also several wrong assumptions in the patch-and-penetrate model: patches interfere with normal operations and might break existing applications, and not all users might (in the end) be aware of a patch’s availability. Consequently, not all of a product's users will apply patches, either because of this issue or because they lack knowledge about the patch's existence.&amp;lt;br&amp;gt;&lt;br /&gt;
''Note: (1) For more information about the window of vulnerability please refer to Bruce Schneier’s Cryptogram Issue #9, available at http://www.schneier.com/crypto-gram-0009.html'' &amp;lt;br&amp;gt;&lt;br /&gt;
''(2) Such as those included in Symantec’s Threat Reports''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:WindowExposure.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 2: Window of Vulnerability''&amp;lt;/center&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
To prevent reoccurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The SDLC is King'''&amp;lt;br&amp;gt;&lt;br /&gt;
The SDLC is a process that is well known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e. define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Test Early and Test Often'''&amp;lt;br&amp;gt;&lt;br /&gt;
Detecting a bug early within the SDLC allows it to be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect &amp;amp; prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Scope of Security'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g. confidential, secret, top secret). Discussions should occur with legal counsel to ensure that any specific security requirements will be met. In the USA these requirements might come from federal regulations such as the Gramm-Leach-Bliley Act (http://www.ftc.gov/privacy/glbact/), or from state laws such as California SB-1386 (http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html). For organizations based in EU countries, both country-specific regulation and EU directives might apply; for example, Directive 95/46/EC makes it mandatory to treat personal data in applications with due care, whatever the application. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Mindset'''&amp;lt;br&amp;gt;&lt;br /&gt;
Successfully testing an application for security vulnerabilities requires thinking “outside of the box”. Normal use cases test the behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find which assumptions made by web developers are not always true, and how they can be subverted. This is one of the reasons why automated tools are actually bad at automatically testing for vulnerabilities: this creative thinking must be applied on a case-by-case basis, and most web applications are developed in a unique way (even if they use common frameworks). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understanding the Subject'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data flow diagrams, use cases, and more should be written in formal documents and available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use cases. Finally, it is good to have at least a basic security infrastructure that allows monitoring and trending of any attacks against your applications &amp;amp; network (e.g. IDS systems). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use the Right Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
While we have already stated that there is no tool silver bullet, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can assist in automation of many routine security tasks. These tools can simplify and speed the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Devil is in the Details'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use Source Code When Available'''&amp;lt;br&amp;gt;&lt;br /&gt;
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop Metrics'''&amp;lt;br&amp;gt;&lt;br /&gt;
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements and to develop metrics that reveal the application security trends within the organization. These metrics can show whether more education and training are required, whether there is a particular security mechanism that is not clearly understood by development, and whether the total number of security-related problems found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization assess the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics, such as those provided by the OWASP Metrics project and other organizations, can provide a good head start.&lt;br /&gt;
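As a minimal sketch of the automated metric generation described above (the class name and the sample data are hypothetical, not part of any OWASP tool), a month-over-month trend in the number of findings can be computed directly from engagement results:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: compute the month-over-month change in the number of
// security findings, one simple trend metric an organization might track.
public class FindingsTrend {
    // Input: findings per month, in chronological order (hence an ordered map).
    // Output: the delta for each month after the first; negative means improving.
    public static Map<String, Integer> deltas(Map<String, Integer> findingsPerMonth) {
        Map<String, Integer> result = new LinkedHashMap<>();
        Integer previous = null;
        for (Map.Entry<String, Integer> e : findingsPerMonth.entrySet()) {
            if (previous != null) {
                result.put(e.getKey(), e.getValue() - previous);
            }
            previous = e.getValue();
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        counts.put("2007-01", 12);
        counts.put("2007-02", 9);
        counts.put("2007-03", 5);
        System.out.println(deltas(counts)); // {2007-02=-3, 2007-03=-4}
    }
}
```

A shrinking (negative) delta over several months is the kind of evidence the text calls for when assessing whether training or new mechanisms are working.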
&lt;br /&gt;
==Testing Techniques Explained==&lt;br /&gt;
&lt;br /&gt;
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Part 2 of the OWASP Testing project will address this information. This section is included to provide context for the framework presented in the next Chapter and to highlight the advantages and disadvantages of some of the techniques that can be considered.&lt;br /&gt;
* Manual Inspections &amp;amp; Reviews &lt;br /&gt;
* Threat Modeling &lt;br /&gt;
* Code Review &lt;br /&gt;
* Penetration Testing &lt;br /&gt;
&lt;br /&gt;
=== Manual Inspections &amp;amp; Reviews ===&lt;br /&gt;
Manual inspections are human-driven reviews that typically test the security implications of people, policies, and processes, but can also include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or by interviewing the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. By asking someone how something works and why it was implemented in a specific way, the tester can quickly determine whether any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development lifecycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model: not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design and/or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Requires no supporting technology &lt;br /&gt;
* Can be applied to a variety of situations&lt;br /&gt;
* Flexible &lt;br /&gt;
* Promotes team work &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Can be time consuming &lt;br /&gt;
* Supporting material not always available &lt;br /&gt;
* Requires significant human thought and skill to be effective!&lt;br /&gt;
&lt;br /&gt;
=== Threat Modeling ===&lt;br /&gt;
&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
In the context of the technical scope, threat modeling has become a popular technique to help system designers think about the security threats that their systems will face. It enables them to develop mitigation strategies for potential vulnerabilities. Threat modeling helps people focus their inevitably limited resources and attention on the parts of the system that most require it. Threat models should be created as early as possible in the software development life cycle, and should be revisited as the application evolves and development progresses. Threat modeling is essentially risk assessment for applications. It is recommended that all applications have a threat model developed and documented. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 ''(3)'' standard for risk assessment. This approach involves: &lt;br /&gt;
* Decomposing the application – through a process of manual inspection understanding how the application works, its assets, functionality and connectivity. &lt;br /&gt;
* Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business criticality. &lt;br /&gt;
* Exploring potential vulnerabilities (technical, operational, and management). &lt;br /&gt;
* Exploring potential threats – through a process of developing threat scenarios or attack trees, which develops a realistic view of potential attack vectors from an attacker’s perspective. &lt;br /&gt;
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic. &lt;br /&gt;
The output from a threat model itself can vary, but it is typically a collection of lists and diagrams. Part 2 of the OWASP Testing Guide (the detailed “How To” text) will outline a specific threat modeling methodology. There is no right or wrong way to develop threat models, and several techniques have evolved. The OCTAVE model from Carnegie Mellon (http://www.cert.org/octave/) is worth exploring. &amp;lt;br&amp;gt;&lt;br /&gt;
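The ranking step in the approach above can be sketched in code. As a hedged illustration (the class, the 1–3 scales, and the sample threats below are invented for the example, not prescribed by NIST 800-30), threats can be ordered by a simple likelihood-times-impact score so that mitigation effort goes to the most realistic, most damaging scenarios first:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hypothetical sketch: rank threat scenarios by likelihood x impact,
// a simple stand-in for the risk ranking a threat model produces.
public class ThreatRanking {
    static class Threat {
        final String name;
        final int likelihood; // 1 (rare) .. 3 (likely)   -- illustrative scale
        final int impact;     // 1 (low)  .. 3 (high)     -- illustrative scale
        Threat(String name, int likelihood, int impact) {
            this.name = name;
            this.likelihood = likelihood;
            this.impact = impact;
        }
        int risk() { return likelihood * impact; }
    }

    // Returns the threats sorted from highest to lowest risk score.
    public static List<Threat> rank(List<Threat> threats) {
        List<Threat> sorted = new ArrayList<>(threats);
        sorted.sort(Comparator.comparingInt(Threat::risk).reversed());
        return sorted;
    }

    public static void main(String[] args) {
        List<Threat> threats = new ArrayList<>();
        threats.add(new Threat("Physical theft of backup tapes", 1, 2));
        threats.add(new Threat("SQL injection in login form", 3, 3));
        threats.add(new Threat("Session hijacking", 2, 3));
        for (Threat t : rank(threats)) {
            System.out.println(t.name + " -> risk " + t.risk());
        }
    }
}
```

The scores themselves matter less than the discipline: every threat deemed realistic gets a ranked entry, and the mitigation list is worked from the top down.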
'''Advantages:'''&lt;br /&gt;
* Practical attacker's view of the system &lt;br /&gt;
* Flexible &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Relatively new technique &lt;br /&gt;
* Good threat models don’t automatically mean good software &lt;br /&gt;
''Note: (3) Stoneburner, G., Goguen, A., &amp;amp; Feringa, A. (2001, October). Risk management guide for information technology systems. Retrieved May 7, 2004, from http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf''&lt;br /&gt;
&lt;br /&gt;
=== Source Code Review ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes, “if you want to know what’s really going on, go straight to the source”. Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third-party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guesswork of black box testing (such as penetration testing). Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses, as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient at finding implementation issues such as places where input validation was not performed or where fail-open control procedures may be present. Keep in mind, however, that operational procedures also need to be reviewed, since the source code being deployed might not be the same as the one being analyzed ''(4)''.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Advantages'''&lt;br /&gt;
* Completeness and effectiveness &lt;br /&gt;
* Accuracy &lt;br /&gt;
* Fast (for competent reviewers) &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Requires highly skilled security developers &lt;br /&gt;
* Can miss issues in compiled libraries &lt;br /&gt;
* Cannot detect run-time errors easily &lt;br /&gt;
* The source code actually deployed might differ from the one being analyzed.&lt;br /&gt;
''Note: (4) See &amp;quot;Reflections on Trusting Trust&amp;quot; by Ken Thompson (http://cm.bell-labs.com/who/ken/trust.html)''&lt;br /&gt;
&lt;br /&gt;
* '''For more on code review, see the OWASP Code Review project''':&amp;lt;BR&amp;gt;&lt;br /&gt;
http://www.owasp.org/index.php/Category:OWASP_Code_Review_Project&lt;br /&gt;
&lt;br /&gt;
=== Penetration Testing ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Penetration testing has been a common technique for testing network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely to find security vulnerabilities, without knowing the inner workings of the application itself. Typically, the penetration test team has access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but given the bespoke nature of web applications, their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e. testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting whether specific vulnerabilities have actually been fixed in the source code deployed at the web site. &amp;lt;br&amp;gt;&lt;br /&gt;
'''Advantages'''&lt;br /&gt;
* Can be fast (and therefore cheap) &lt;br /&gt;
* Requires a relatively lower skill-set than source code review &lt;br /&gt;
* Tests the code that is actually being exposed &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages'''&lt;br /&gt;
* Too late in the SDLC &lt;br /&gt;
* Front impact testing only!&lt;br /&gt;
&lt;br /&gt;
=== The Need for a Balanced Approach ===&lt;br /&gt;
With so many techniques and so many approaches to testing the security of your web applications, it can be difficult to understand which techniques to use and when to use them.&lt;br /&gt;
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). &lt;br /&gt;
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases in the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. &lt;br /&gt;
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of complete testing. &lt;br /&gt;
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionSDLC.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 3: Proportion of Test Effort in SDLC''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The following figure shows a typical proportional representation overlaid onto testing techniques. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionTest.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 4: Proportion of Test Effort According to Test Technique''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''A Note about Web Application Scanners'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.&lt;br /&gt;
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 1: Magic Parameters'''&amp;lt;br&amp;gt;&lt;br /&gt;
Imagine a simple web application that accepts a name-value pair of “magic” and then the value. For simplicity, the GET request may be: ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=value&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt; To further simplify the example, the values in this case can only be ASCII characters a – z (upper or lowercase) and integers 0 – 9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will be logged in and presented with an administrative screen with total control of the application. The HTTP request is now:&amp;lt;br&amp;gt; ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt;&lt;br /&gt;
Given that all of the other parameters were simple two- and three-character fields, it is not possible to start guessing combinations at approximately 28 characters. A web application scanner would need to brute force (or guess) the entire key space of 30 characters. That is up to 30^28 permutations, or trillions of HTTP requests! That is an electron in a digital haystack! &lt;br /&gt;
The code for this may look like the following: &amp;lt;br&amp;gt;&lt;br /&gt;
 public void doPost( HttpServletRequest request, HttpServletResponse response) &lt;br /&gt;
 { &lt;br /&gt;
 String magic = &amp;quot;sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;quot;; &lt;br /&gt;
 boolean admin = magic.equals( request.getParameter(&amp;quot;magic&amp;quot;));&lt;br /&gt;
 if (admin) doAdmin( request, response); &lt;br /&gt;
 else { ... } // normal processing &lt;br /&gt;
 } &lt;br /&gt;
By looking in the code, the vulnerability practically leaps off the page as a potential problem. &lt;br /&gt;
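To put the brute-force arithmetic above in concrete terms, a short sketch (the class name is illustrative) computes the size of the key space a black-box scanner faces for the stated alphabet of 62 symbols (a–z, A–Z, 0–9) at each of 30 positions:

```java
import java.math.BigInteger;

// Sketch of the key-space arithmetic behind the "magic parameter" example:
// with 62 allowed symbols per position, a 30-character value gives 62^30
// candidates in the worst case, far beyond what request-based guessing covers.
public class KeySpace {
    public static BigInteger permutations(int alphabetSize, int length) {
        return BigInteger.valueOf(alphabetSize).pow(length);
    }

    public static void main(String[] args) {
        // A three-character parameter is trivially enumerable:
        System.out.println(permutations(62, 3));  // 238328
        // The 30-character backdoor value is not (a 54-digit request count):
        System.out.println(permutations(62, 30));
    }
}
```

The contrast between the two numbers is the whole point: a scanner can exhaust short parameters easily, while the obfuscated backdoor value is effectively unreachable from the outside, yet obvious in the source.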
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 2: Bad Cryptography'''&amp;lt;br&amp;gt;&lt;br /&gt;
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' &amp;lt;br&amp;gt;&lt;br /&gt;
When a user is passed to site B, the key is sent on the query string to site B in an HTTP redirect. Site B independently computes the hash and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, as we explain the scheme, the inadequacies can be worked out, and it can be seen how anyone who figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.&lt;br /&gt;
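A rough sketch of the flawed scheme makes the problem tangible (the class and method names are hypothetical; only the ''Hash { username : date }'' recipe comes from the example). Because the MD5 hash contains no secret material, any party who learns the recipe can mint a valid key for any user:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch of the flawed single-sign-on scheme: the "key" is an
// unkeyed MD5 hash of username:date, so an attacker who knows the scheme can
// compute the exact same key that site A would send for any chosen user.
public class FlawedSso {
    public static String key(String username, String date) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(
                    (username + ":" + date).getBytes(StandardCharsets.UTF_8));
            // Render the 128-bit digest as 32 zero-padded hex characters.
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 ships with every JDK
        }
    }

    public static void main(String[] args) {
        // Site B would accept this as proof that the bearer is "admin" today;
        // an attacker computes the identical value with no credentials at all.
        System.out.println(key("admin", "2007-02-09"));
    }
}
```

The fix, of course, is to bind the token to a secret (an HMAC or a server-side session), which is exactly the kind of design question a manual review surfaces and a black-box scanner cannot.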
&amp;lt;br&amp;gt;&lt;br /&gt;
'''A Note about Static Source Code Review Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot understand the context of semantic constructs in code, and is therefore prone to a significant number of false positives. This is particularly true with C and C++. The technology is useful in determining interesting places in the code; however, significant manual effort is required to validate the findings. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example:&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 char szTarget[12];&amp;lt;BR&amp;gt;&lt;br /&gt;
 char *s = &amp;quot;Hello, World&amp;quot;; &amp;lt;BR&amp;gt;&lt;br /&gt;
 size_t cSource = strlen_s(s, 20); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncpy_s(szTarget, sizeof(szTarget), s, cSource); &amp;lt;BR&amp;gt;&lt;br /&gt;
 strncat_s(szTarget, sizeof(szTarget), s, cSource);&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Darrellgrundy</name></author>	</entry>

	</feed>