Review Old, Backup and Unreferenced Files for Sensitive Information (OTG-CONFIG-004)
<hr />
[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br><br />
While most of the files within a web server are directly handled by the server itself, it is not uncommon to find unreferenced and/or forgotten files that can be used to obtain important information about either the infrastructure or the credentials.<br><br><br />
The most common scenarios include the presence of renamed old versions of modified files, include files that are loaded into the language of choice and can be downloaded as source, and automatic or manual backups in the form of compressed archives.<br><br><br />
All these files may grant the pentester access to inner workings, backdoors, administrative interfaces, or even credentials to connect to the administrative interface or the database server.<br />
<br><br />
<br />
==Causes of old, backup and unreferenced files==<br />
<br />
An important source of vulnerability lies in files which have nothing to do with the application, but are created as a consequence of editing application files, of creating on-the-fly backup copies, or of leaving old or unreferenced files in the web tree.<br />
Performing in-place editing or other administrative actions on production web servers may inadvertently leave behind backup copies (either generated automatically by the editor while editing files, or by the administrator who is zipping a set of files to create a spot backup).<br />
<br />
It is particularly easy to forget such files, and this may pose a serious security threat to the application. That happens because backup copies may be generated with file extensions differing from those of the original files. A ''.tar, .zip or .gz'' archive that we generate (and forget...) obviously has a different extension, and the same happens with automatic copies created by many editors (for example, emacs generates a backup copy named ''file~'' when editing ''file''). Making a copy by hand may produce the same effect (think of copying ''file'' to ''file.old'').<br />
<br />
As a result, these activities generate files which a) are not needed by the application, b) may be handled differently than the original file by the web server. For example, if we make a copy of ''login.asp'' named ''login.asp.old'', we are allowing users to download the source code of ''login.asp''; this is because, due to its extension, ''login.asp.old'' will be typically served as text/plain, rather than being executed. In other words, accessing ''login.asp'' causes the execution of the server-side code of ''login.asp'', while accessing ''login.asp.old'' causes the content of ''login.asp.old'' (which is, again, server-side code) to be plainly returned to the user – and displayed in the browser. This may pose security risks, since sensitive information may be revealed. Generally, exposing server side code is a bad idea; not only are you unnecessarily exposing business logic, but you may be unknowingly revealing application-related information which may help an attacker (pathnames, data structures, etc.); not to mention the fact that there are too many scripts with embedded username/password in clear text (which is a careless and very dangerous practice).<br />
<br />
Other causes of unreferenced files are design or configuration choices that allow diverse kinds of application-related files, such as data files, configuration files and log files, to be stored in filesystem directories that can be accessed by the web server. These files normally have no reason to be in a filesystem space that can be accessed via the web, since they should be accessed only at the application level, by the application itself (and not by the casual user browsing around!).<br />
<br />
==Threats==<br />
<br />
Old, backup and unreferenced files present various threats to the security of a web application: <br />
<br />
* Unreferenced files may disclose sensitive information that can facilitate a focused attack against the application; for example include files containing database credentials, configuration files containing references to other hidden content, absolute file paths, etc. <br />
* Unreferenced pages may contain powerful functionality that can be used to attack the application; for example an administration page that is not linked from published content but can be accessed by any user who knows where to find it. <br />
* Old and backup files may contain vulnerabilities that have been fixed in more recent versions; for example ''viewdoc.old.jsp'' may contain a directory traversal vulnerability that has been fixed in ''viewdoc.jsp'' but can still be exploited by anyone who finds the old version. <br />
* Backup files may disclose the source code for pages designed to execute on the server; for example requesting ''viewdoc.bak'' may return the source code for ''viewdoc.jsp'', which can be reviewed for vulnerabilities that may be difficult to find by making blind requests to the executable page. While this threat obviously applies to scripted languages, such as Perl, PHP, ASP, shell scripts, JSP, etc., it is not limited to them, as shown in the example provided in the next bullet.<br />
* Backup archives may contain copies of all files within (or even outside) the webroot. This allows an attacker to quickly enumerate the entire application, including unreferenced pages, source code, include files, etc. For example, if you forget a file named ''myservlets.jar.old'' containing (a backup copy of) your servlet implementation classes, you are exposing a lot of sensitive information which is susceptible to decompilation and reverse engineering.<br />
* In some cases copying or editing a file does not modify the file extension, but modifies the filename. This happens for example in Windows environments, where file copying operations generate filenames prefixed with “Copy of “ or localized versions of this string. Since the file extension is left unchanged, this is not a case where an executable file is returned as plain text by the web server, and therefore not a case of source code disclosure. However, these files too are dangerous because there is a chance that they include obsolete and incorrect logic that, when invoked, could trigger application errors, which might yield valuable information to an attacker, if diagnostic message display is enabled.<br />
* Log files may contain sensitive information about the activities of application users, for example sensitive data passed in URL parameters, session IDs, URLs visited (which may disclose additional unreferenced content), etc. Other log files (e.g. ftp logs) may contain sensitive information about the maintenance of the application by system administrators.<br />
<br />
==Countermeasures==<br />
<br />
To guarantee an effective protection strategy, testing should be compounded by a security policy which clearly forbids dangerous practices, such as:<br />
<br />
* Editing files in-place on the web server / application server filesystems. This is a particularly bad habit, since it is likely to generate unwanted backup files from the editors. It is amazing to see how often this is done, even in large organizations. If you absolutely need to edit files on a production system, do ensure that you don’t leave behind anything which is not explicitly intended, and consider that you are doing it at your own risk.<br />
* Check carefully any other activity performed on filesystems exposed by the web server, such as spot administration activities. For example, if you occasionally need to take a snapshot of a couple of directories (which you shouldn’t, on a production system...), you may be tempted to zip/tar them first. Be careful not to leave those archive files behind!<br />
* Appropriate configuration management policies should help avoid leaving obsolete and unreferenced files around.<br />
* Applications should be designed not to create (or rely on) files stored under the web directory trees served by the web server. Data files, log files, configuration files, etc. should be stored in directories not accessible by the web server, to counter the possibility of information disclosure (not to mention data modification if web directory permissions allow writing...).<br />
<br />
==How to Test==<br />
<br />
==Black Box==<br />
Testing for unreferenced files uses both automated and manual techniques, and typically involves a combination of the following: <br />
<br />
''(i) Inference from the naming scheme used for published content ''<br />
<br />
If not already done, enumerate all of the application’s pages and functionality. This can be done manually using a browser, or using an application spidering tool. Most applications use a recognisable naming scheme, and organise resources into pages and directories using words that describe their function. From the naming scheme used for published content, it is often possible to infer the name and location of unreferenced pages. For example, if a page ''viewuser.asp'' is found, then look also for ''edituser.asp'', ''adduser.asp'' and ''deleteuser.asp''. If a directory ''/app/user'' is found, then look also for ''/app/admin'' and ''/app/manager''. <br />
<br />
''(ii) Other clues in published content ''<br />
<br />
Many web applications leave clues in published content that can lead to the discovery of hidden pages and functionality. These clues often appear in the source code of HTML and JavaScript files. The source code for all published content should be manually reviewed to identify clues about other pages and functionality. For example: <br />
<br />
Programmers’ comments and commented-out sections of source code may refer to hidden content: <br />
<br />
<pre><br />
<!-- <A HREF="uploadfile.jsp">Upload a document to the server</A> --><br />
<!-- Link removed while bugs in uploadfile.jsp are fixed --> <br />
</pre><br />
<br />
JavaScript may contain page links that are only rendered within the user’s GUI under certain circumstances: <br />
<br />
<pre><br />
var adminUser=false;<br />
:<br />
if (adminUser) menu.add (new menuItem ("Maintain users", "/admin/useradmin.jsp")); <br />
</pre><br />
<br />
HTML pages may contain FORMs that have been hidden by disabling the SUBMIT element: <br />
<br />
<pre><br />
<FORM action="forgotPassword.jsp" method="post"><br />
<INPUT type="hidden" name="userID" value="123"><br />
<!-- <INPUT type="submit" value="Forgot Password"> --><br />
</FORM> <br />
</pre><br />
<br />
Another source of clues about unreferenced directories is the ''/robots.txt'' file used to provide instructions to web robots: <br />
<br />
<pre><br />
User-agent: *<br />
Disallow: /Admin<br />
Disallow: /uploads<br />
Disallow: /backup<br />
Disallow: /~jbloggs<br />
Disallow: /include <br />
</pre><br />
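The Disallow entries can be harvested automatically and fed into the guessing attacks described below; a minimal sketch using curl (the target host is hypothetical):

<pre>
# Fetch robots.txt and extract the disallowed paths as candidate targets
curl -s http://www.example.com/robots.txt \
  | grep -i '^Disallow:' \
  | awk '{print $2}' > candidate_dirs.txt
</pre>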
<br />
''(iii) Blind guessing ''<br />
<br />
In its simplest form, this involves running a list of common filenames through a request engine in an attempt to guess files and directories that exist on the server. The following netcat wrapper script will read a wordlist from stdin and perform a basic guessing attack: <br />
<br />
<pre>
#!/bin/bash
# Read candidate resource names from stdin; print each name followed by
# the status line of the server's response.

server=www.targetapp.com
port=80

while read url
do
  echo -ne "$url\t"
  # HTTP/1.0 request; CRLF line endings keep strict servers happy
  printf "GET /%s HTTP/1.0\r\nHost: %s\r\n\r\n" "$url" "$server" \
    | netcat $server $port | head -1
done | tee outputfile
</pre>
<br />
Depending upon the server, GET may be replaced with HEAD for faster results. The outputfile specified can be grepped for “interesting” response codes. The response code 200 (OK) usually indicates that a valid resource has been found (provided the server does not deliver a custom “not found” page using the 200 code). But also look out for 301 (Moved), 302 (Found), 401 (Unauthorized), 403 (Forbidden) and 500 (Internal error), which may also indicate resources or directories that are worthy of further investigation. <br />
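For example, the output file produced by the script above can be filtered for noteworthy status lines:

<pre>
# Keep only candidates whose response status looks interesting
grep -E 'HTTP/1\.[01] (200|301|302|401|403|500)' outputfile
</pre>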
<br />
The basic guessing attack should be run against the webroot, and also against all directories that have been identified through other enumeration techniques. More advanced/effective guessing attacks can be performed as follows: <br />
<br />
* Identify the file extensions in use within known areas of the application (e.g. jsp, aspx, html), and use a basic wordlist appended with each of these extensions (or use a longer list of common extensions if resources permit). <br />
* For each file identified through other enumeration techniques, create a custom wordlist derived from that filename. Get a list of common file extensions (including ~, bak, txt, src, dev, old, inc, orig, copy, tmp, etc.) and use each extension before, after, and instead of, the extension of the actual filename (see the sketch below).<br />
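A minimal sketch of the second technique, deriving candidates from a single discovered filename (the filename shown is just an example):

<pre>
#!/bin/bash
# Derive likely backup/copy names from a known filename, e.g. viewdoc.jsp
file=viewdoc.jsp
base=${file%.*}    # viewdoc
ext=${file##*.}    # jsp

echo "${file}~"                   # editor-style backup: viewdoc.jsp~
for suffix in bak txt src dev old inc orig copy tmp
do
    echo "$file.$suffix"          # after the extension:   viewdoc.jsp.bak
    echo "$base.$suffix.$ext"     # before the extension:  viewdoc.bak.jsp
    echo "$base.$suffix"          # instead of it:         viewdoc.bak
done
</pre>

The resulting list can be fed directly to the netcat wrapper script shown earlier.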
<br />
Note: Windows file copying operations generate filenames prefixed with “Copy of “ or localized versions of this string, hence they do not change file extensions. While “Copy of ” files typically do not disclose source code when accessed, they might yield valuable information in case they cause errors when invoked.<br />
<br />
''(iv) Information obtained through server vulnerabilities and misconfiguration ''<br />
<br />
The most obvious way in which a misconfigured server may disclose unreferenced pages is through directory listing. Request all enumerated directories to identify any which provide a directory listing. <br />
Numerous vulnerabilities have been found in individual web servers which allow an attacker to enumerate unreferenced content, for example: <br />
<br />
* Apache ?M=D directory listing vulnerability.<br />
* Various IIS script source disclosure vulnerabilities. <br />
* IIS WebDAV directory listing vulnerabilities. <br />
<br />
''(v) Use of publicly available information ''<br />
<br />
Pages and functionality in Internet-facing web applications that are not referenced from within the application itself may be referenced from other public domain sources. There are various sources of these references: <br />
* Pages that used to be referenced may still appear in the archives of Internet search engines. For example, ''1998results.asp'' may no longer be linked from a company’s website, but may remain on the server and in search engine databases. This old script may contain vulnerabilities that could be used to compromise the entire site. The ''site:'' Google search operator may be used to run a query only against your domain of choice, such as in: ''site:www.example.com''. (Mis)using search engines in this way has led to a broad array of techniques which you may find useful and that are described in the ''Google Hacking'' section of this Guide. Check it to hone your testing skills via Google. Backup files are not likely to be referenced by any other files and therefore may not have been indexed by Google, but if they lie in browsable directories the search engine might know about them.<br />
* In addition, Google and Yahoo keep cached versions of pages found by their robots. Even if ''1998results.asp'' has been removed from the target server, a version of its output may still be stored by these search engines. The cached version may contain references to, or clues about, additional hidden content that still remains on the server. <br />
* Content that is not referenced from within a target application may be linked to by third-party websites. For example, an application which processes online payments on behalf of third-party traders may contain a variety of bespoke functionality which can (normally) only be found by following links within the web sites of its customers.<br />
<br />
==White Box==<br />
<br />
Performing white box testing against old and backup files requires examining the files contained in the directories belonging to the set of web directories served by the web server(s) of the web application infrastructure. Theoretically, the examination should be done by hand to be thorough; however, since in most cases copies of files or backup files tend to be created using the same naming conventions, the search can easily be scripted (for example, editors leave behind backup copies by naming them with a recognizable extension or ending, and humans tend to leave behind files with a “.old” or similarly predictable extension). A good strategy is to periodically schedule a background job that checks for files with extensions likely to identify them as copy/backup files, and to perform manual checks as well over a longer time frame.<br />
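A minimal sketch of such a scripted check, assuming the web tree lives under /var/www:

<pre>
# Look for likely backup/copy files left in the web tree
find /var/www \( -name '*~'     -o -name '*.bak' -o -name '*.old' \
              -o -name '*.orig' -o -name '*.tmp' -o -name '*.tar' \
              -o -name '*.zip'  -o -name '*.gz'  -o -name 'Copy of *' \) -print
</pre>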
<br />
===Tools===<br />
<br />
* Vulnerability assessment tools tend to include checks to spot web directories having standard names (such as “admin”, “test”, “backup”, etc.), and to report any web directory which allows indexing. If you can’t get any directory listing, you should try to check for likely backup extensions. Check for example Nessus (http://www.nessus.org), Nikto (http://www.cirt.net/code/nikto.shtml) or its new derivative Wikto (http://www.sensepost.com/research/wikto/), which also supports Google hacking-based strategies.<br />
* Web spider tools: wget (http://www.gnu.org/software/wget/, http://www.interlog.com/~tcharron/wgetwin.html); Sam Spade (http://www.samspade.org); Spike proxy includes a web site crawler function (http://www.immunitysec.com/spikeproxy.html); Xenu (http://home.snafu.de/tilman/xenulink.html); curl (http://curl.haxx.se). Some of them are also included in standard Linux distributions.<br />
* Web development tools usually include facilities to identify broken links and unreferenced files.<br />
<br />
{{Category:OWASP Testing Project AoC}}

Testing for Session Management Schema (OTG-SESS-001)
<hr />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br><br />
In order to avoid continuous authentication for each page of a website or service, web applications implement various mechanisms to store and validate credentials for a pre-determined timespan.<br><br><br />
These mechanisms are known as Session Management; while they are important for the ease of use and user-friendliness of the application, they can be exploited by a pentester to gain access to a user account without the need to provide correct credentials.<br />
<br><br />
<br />
== Description of the Issue == <br />
<br><br />
The session management schema should be considered alongside the authentication and authorization schema, and cover at least the questions below from a non-technical point of view:<br />
* Will the application be accessed from shared systems? e.g. Internet Café <br><br />
* Is application security of prime concern to the visiting client/customer? <br><br />
* How many concurrent sessions may a user have? <br><br />
* How long is the inactive timeout on the application?<br> <br />
* How long is the active timeout? <br><br />
* Are sessions transferable from one source IP to another? <br><br />
* Is ‘remember my username’ functionality provided? <br><br />
* Is ‘automatic login’ functionality provided? <br><br />
Having identified the schema in place, the application and its logic must be examined to confirm proper implementation of the schema.<br />
This phase of testing is intrinsically linked with general application security testing. Whilst the first Schema questions (is the schema suitable for the site and does the schema meet the application provider’s requirements?) can be analysed in abstract, the final question (Does the site implement the specified schema?) must be considered alongside other technical testing. <br><br />
<br />
The identified schema should be analysed against best practice within the context of the site during our penetration test.<br />
Where the defined schema deviates from security best practice, the associated risks should be identified and described within the context of the environment. Security risks and issues should be detailed and quantified, but ultimately, the application provider must make decisions based on the security and usability of the application.<br />
For example, if it is determined that the site has been designed without inactive session timeouts the application provider should be advised about risks such as replay attacks, long-term attacks based on stolen or compromised Session IDs and abuse of a shared terminal where the application wasn’t logged out. They must then consider these against other requirements such as convenience of use for clients and disruption of the application by forced re-authentication.<br />
<br><br />
''' Session Management Implementation'''<br><br />
In this Chapter we describe how to analyse a Session Schema and how to test it. Technical security testing of Session Management implementation covers two key areas:<br />
* Integrity of Session ID creation<br />
* Secure management of active sessions and Session IDs<br />
The Session ID should be sufficiently unpredictable and abstracted from any private information, and the Session management should be logically secured to prevent any manipulation or circumvention of application security.<br />
These two key areas are interdependent, but should be considered separately for a number of reasons.<br />
Firstly, the choice of underlying technology to provide the sessions is bewildering and can already include a large number of OTS products and an almost unlimited number of bespoke or proprietary implementations. Whilst the same technical analysis must be performed on each, established vendor solutions may require a slightly different testing approach and existing security research may exist on the implementation.<br />
Secondly, even an unpredictable and abstract Session ID may be rendered completely ineffectual should the Session management be flawed. Similarly, a strong and secure session management implementation may be undermined by a poor Session ID implementation.<br />
Furthermore, the analyst should closely examine how (and if) the application uses the available Session management. It is not uncommon to see Microsoft IIS server ASP Session IDs passed religiously back and forth during interaction with an application, only to discover that these are not used by the application logic at all. It is therefore not correct to say that because an application is built on a ‘proven secure’ platform its Session Management is automatically secure.<br />
<br />
<br />
== Black Box testing and example ==<br />
<br />
''' Session Analysis'''<br><br />
<br />
The Session Tokens (Cookie, SessionID or Hidden Field) themselves should be examined to ensure their quality from a security perspective. They should be tested against criteria such as their randomness, uniqueness, resistance to statistical and cryptographic analysis and information leakage.<br><br />
* Token Structure & Information Leakage<br />
The first stage is to examine the structure and content of a Session ID provided by the application. A common mistake is to include specific data in the Token instead of issuing a generic value and referencing real data at the server side.<br />
If the Session ID is clear-text, the structure and pertinent data may be immediately obvious as in Figure 1.<br />
<pre><br />
192.168.100.1:owaspuser:password:15:58<br />
</pre><br />
Figure 1<br><br />
<br />
If part of the Token, or the entire Token, appears to be encoded or hashed, it should be compared against common encoding and hashing techniques to check for obvious obfuscation.<br />
For example the string “192.168.100.1:owaspuser:password:15:58” is represented in Hex, Base64 and as an MD5 hash in Figure 2.<br />
<pre><br />
Hex 3139322E3136382E3130302E313A6F77617370757365723A70617373776F72643A31353A3538<br />
Base64 MTkyLjE2OC4xMDAuMTpvd2FzcHVzZXI6cGFzc3dvcmQ6MTU6NTg=<br />
MD5 01c2fc4f0a817afd8366689bd29dd40a<br />
</pre><br />
Figure 2 <br><br />
Having identified the type of obfuscation, it may be possible to decode back to the original data. In most cases, however, this is unlikely. Even so, enumerating the encoding in place from the format of the message may still be useful. Furthermore, if both the format and obfuscation technique can be deduced, automated brute force attacks could be devised.<br />
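For example, candidate encodings of a guessed plaintext can be generated on the command line and compared against the captured Token (a sketch; the string is the one from Figure 1):

<pre>
s='192.168.100.1:owaspuser:password:15:58'

echo -n "$s" | xxd -p | tr -d '\n'; echo    # Hex
echo -n "$s" | base64                       # Base64
echo -n "$s" | md5sum | awk '{print $1}'    # MD5
</pre>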
Hybrid Tokens may include information such as IP address or User ID together with an encoded portion, as in Figure 3.<br />
<pre><br />
owaspuser:192.168.100.1: a7656fafe94dae72b1e1487670148412<br />
</pre><br />
Figure 3 <br><br />
Having analysed a single Session Token, the representative sample should be examined.<br />
A simple analysis of the Tokens should immediately reveal any obvious patterns. For example, a 32 bit Token may include 16 bits of static data and 16 bits of variable data. This may indicate that the first 16 bits represents a fixed attribute of the user – e.g. the username or IP address.<br />
If the second 16 bit chunk is incrementing at a regular rate, it may indicate a sequential or even time-based element to the Token generation. See Examples.<br />
If static elements to the Tokens are identified, further samples should be gathered varying one potential input element at a time. For example, login attempts through a different user account or from a different IP address may yield a variance in the previously static portion of the Session Token.<br />
The following areas should be addressed during the single and multiple Session ID structure testing:<br />
* What parts of the Session ID are static? <br />
* What clear-text proprietary information is stored in the Session ID? <br />
e.g. usernames/UID, IP addresses <br />
* What easily decoded proprietary information is stored? <br />
* What information can be deduced from the structure of the Session ID? <br />
* What portions of the Session ID are static for the same login conditions? <br />
* What obvious patterns are present in the Session ID as a whole, or individual portions? <br />
<br />
'''Session ID Predictability & Randomness'''<br><br />
Analysis of the variable areas (if any) of the Session ID should be undertaken to establish if there are any recognizable or predictable patterns.<br />
These analyses may be performed manually and with bespoke or OTS statistical or cryptanalytic tools in order to deduce any patterns in Session ID content.<br />
Manual checks should include comparisons of Session IDs issued for the same login conditions – e.g. the same username, password and IP address. Time is an important factor which must also be controlled. High numbers of simultaneous connections should be made in order to gather samples in the same time window and keep that variable constant. Even a quantization of 50ms or less may be too coarse and a sample taken in this way may reveal time-based components that would otherwise be missed.<br />
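A large sample can be gathered with a scripted loop; a minimal sketch using curl, assuming the application issues its Session ID in a Set-Cookie header (URL and sample size are hypothetical):

<pre>
#!/bin/bash
# Collect 1000 fresh Session IDs for later statistical analysis
for i in $(seq 1 1000)
do
    curl -s -I http://www.example.com/login.jsp \
      | grep -i '^Set-Cookie:' \
      | sed 's/^[Ss]et-[Cc]ookie: //'
done > sessionids.txt
</pre>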
Variable elements should be analysed over time to determine whether they are incremental in nature. Where they are incremental, patterns relating to absolute or elapsed time should be investigated. Many systems use time as a seed for their pseudo random elements.<br />
Where the patterns are seemingly random, one-way hashes of time or other environmental variations should be considered as a possibility. Typically, the result of a cryptographic hash is a decimal or hexadecimal number so should be identifiable.<br />
In analysing Session IDs sequences, patterns or cycles, static elements and client dependencies should all be considered as possible contributing elements to the structure and function of the application.<br />
* Are the Session IDs provably random in nature? e.g. Can the result be reproduced? <br />
* Do the same input conditions produce the same ID on a subsequent run? <br />
* Are the Session IDs provably resistant to statistical or cryptanalysis? <br />
* What elements of the Session IDs are time-linked? <br />
* What portions of the Session IDs are predictable? <br />
* Can the next ID be deduced even given full knowledge of the generation algorithm and previous IDs? <br />
<br />
'''Brute Force Attacks'''<br><br />
Brute force attacks inevitably lead on from questions relating to predictability and randomness.<br />
The variance within the Session IDs must be considered together with application session durations and timeouts. If the variation within the Session IDs is relatively small, and Session ID validity is long, the likelihood of a successful brute-force attack is much higher.<br />
A long session ID (or rather one with a great deal of variance) and a shorter validity period would make it far harder to succeed in a brute force attack; a rough feasibility estimate is sketched after the questions below.<br />
* How long would a brute-force attack on all possible Session IDs take? <br />
* Is the Session ID space large enough to prevent brute forcing? e.g. is the length of the key sufficient when compared to the valid life-span? <br />
* Do delays between connection attempts with different Session IDs mitigate the risk of this attack? <br />
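A back-of-the-envelope feasibility estimate simply divides the effective keyspace by the achievable guess rate; for example, with bc (the figures are illustrative):

<pre>
# Days needed to sweep a 32-bit Session ID space at 1000 guesses/second
echo '2^32 / 1000 / 86400' | bc    # roughly 49 days; halve it for the average case
</pre>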
<br />
<br />
'''Testing for Topic X vulnerabilities:''' <br><br />
...<br><br />
'''Result Expected:'''<br><br />
...<br><br><br />
== Gray Box testing and example == <br />
'''Testing for Topic X vulnerabilities:'''<br><br />
...<br><br />
'''Result Expected:'''<br><br />
...<br><br><br />
== References ==<br />
'''Whitepapers'''<br><br />
...<br><br />
'''Tools'''<br><br />
...<br><br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}<br />
[[OWASP Testing Guide v2 Table of Contents]]<br />
{{Template:Stub}}

Testing for Bypassing Authentication Schema (OTG-AUTHN-004)
<hr />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br><br />
While most applications require authentication to gain access to private information or to execute tasks, not every authentication method is able to provide an adequate level of security.<br><br><br />
Negligence, ignorance or simple understatement of security threats often result in authentication schemes that can be easily bypassed by simply skipping the login page and directly calling an internal page that is supposed to be accessed only after authentication has been performed.<br><br><br />
In addition, it is often possible to bypass compulsory authentication by tampering with requests and tricking the application into thinking that we are already authenticated, either by modifying a given URL parameter, by manipulating forms, or by counterfeiting sessions.<br />
<br><br />
<br />
== Description of the Issue == <br />
<br><br />
...here: Short Description of the Issue: Topic and Explanation<br />
<br><br />
== Black Box testing and example ==<br />
Bypassing authentication schema methods:<br />
<br />
* Direct page request<br />
<br />
In some cases, the web application requests authentication only when the user tries to access the home page; if a resource is requested directly by its URL, the authentication schema can be bypassed.<br />
<br />
* Parameter Modification<br />
In some cases, authentication is based on the values of certain parameters; it is therefore sufficient to modify those values to bypass the authentication schema.<br />
<br />
For example, /webapps/login?validUser=yes&isAuthenticated=yes can be manually entered into the browser in an attempt to bypass the application server's authentication mechanism.<br />
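Both techniques are easy to try from the command line; a sketch using curl (paths and parameters are illustrative):

<pre>
# Direct page request: skip the login page and call an internal page
curl -i http://www.example.com/webapps/private/main.jsp

# Parameter modification: set the authentication flags by hand
curl -i "http://www.example.com/webapps/login?validUser=yes&isAuthenticated=yes"
</pre>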
<br />
* Session Issue<br />
** Session ID Prediction<br />
** Session Fixation<br />
<br />
* SQL Injection (HTML Form Authentication)<br />
<br />
<br><br />
<br />
== Gray Box testing and example == <br />
'''Testing for Topic X vulnerabilities:'''<br><br />
...<br><br />
'''Result Expected:'''<br><br />
...<br><br><br />
== References ==<br />
'''Whitepapers'''<br><br />
...<br><br />
'''Tools'''<br><br />
...<br><br />
<br />
{{Category:OWASP Testing Project AoC}}<br />
[[OWASP Testing Guide v2 Table of Contents]]<br />
{{Template:Stub}}

Testing for Default or Guessable User Account (OWASP-AT-003)
<hr />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br><br />
Today's web application scenario is often populated by common software (be it open source or commercial) that is installed on web servers and then configured or customized. In addition, most of today's hardware appliances offer web-based configuration or administrative interfaces.<br><br><br />
In this pre-configured application and appliance scenario, it is easy to encounter administrative software, interfaces and/or websites that use default credentials for logging in.<br><br />
These default username/password pairs are widely known by pentesters and malicious users, who can use them as a powerful means to gain access to the internal infrastructure and/or to gain privileges and steal data.<br><br />
The same problem applies to software and/or appliances that ship with built-in, non-removable accounts and, in fewer cases, use blank passwords as default credentials.<br />
<br><br />
== Description of the Issue == <br />
<br><br />
The source of this problem is often inexperienced IT personnel, who are unaware of the importance of changing default passwords on installed infrastructure components; programmers, who leave backdoors so they can easily access and test the application and later forget to remove them; application administrators and users, who choose an easy username and password for themselves; and applications with built-in, non-removable default accounts with a pre-set username and password. Another problem is blank passwords, which are simply a result of security unawareness and a willingness to simplify things.<br />
<br><br />
== Black Box testing and example ==<br />
In blackbox testing we know nothing about the application, its underlying infrastructure, and any username and/or password policies. Often this is not the case and some information about the application is provided – simply skip the steps that refer to obtaining information you already have.<br />
<br />
When testing a known application interface, such as a Cisco router web interface, or Weblogic admin access, check the known usernames and passwords for these devices. This can be done either by Google, or using one of the references in the Further Reading section.<br />
<br />
When facing a home-grown application, for which we do not have a list of default and common user accounts, we need to test it manually, following these guidelines (a scripted sketch follows the list):<br />
* Try the following usernames - "admin", "administrator", "root", "system", or "super". These are popular among system administrators and are often used. Additionally you could try "qa", "test", "test1", "testing", and similar names. Attempt any combination of the above in both the username and the password fields. If the application is vulnerable to username enumeration, and you successfully managed to identify any of the above usernames, attempt passwords in a similar manner.<br />
* Application administrative users are often named after the application. This means if you are testing an application named "Obscurity", try using obscurity/obscurity as the username and password.<br />
* When performing a test for a customer, attempt using names of contacts you have received as usernames.<br />
* Attempt using all the above usernames with blank passwords.<br />
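These guesses can easily be scripted; a minimal sketch using curl against a hypothetical POST login form (the URL, field names and failure string must be adapted to the target):

<pre>
#!/bin/bash
# Try a short list of default accounts; report any attempt that does not
# come back with the (hypothetical) "Login failed" page.
url=http://www.example.com/login
for u in admin administrator root system super qa test
do
    for p in "" "$u" password
    do
        if ! curl -s -d "username=$u&password=$p" "$url" | grep -q 'Login failed'
        then
            echo "possible valid account: $u / '$p'"
        fi
    done
done
</pre>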
<br />
'''Result Expected:'''<br><br />
...<br><br><br />
== Gray Box testing and example == <br />
The steps described next rely on an entirely White Box approach. If only some of the information is available to you, refer to black box testing to fill the gaps.<br />
<br />
Talk to the IT personnel to determine which passwords they use for administrative access. <br />
<br />
Check whether these usernames and passwords are complex, difficult to guess, and not related to the application name, person name, or administrative names ("system"). <br />
Note blank passwords.<br />
Check in the user database for default names, application names, and easily guessed names as described in the Black Box testing section. Check for empty password fields.<br />
<br />
Examine the code for hard coded usernames and passwords.<br />
'''Result Expected:'''<br><br />
...<br><br><br />
== References ==<br />
'''Whitepapers'''<br><br />
* http://www.cirt.net/cgi-bin/passwd.pl<br />
* http://phenoelit.darklab.org/cgi-bin/display.pl?SUBF=list&SORT=1<br />
* http://www.governmentsecurity.org/articles/DefaultLoginsandPasswordsforNetworkedDevices.php<br />
* http://www.virus.org/default-password/<br />
'''Tools'''<br><br />
...<br><br />
<br />
{{Category:OWASP Testing Project AoC}}

Test File Extensions Handling for Sensitive Information (OTG-CONFIG-003)
<hr />
[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
...To review and expand...<br />
<br />
== Brief Summary ==<br />
<br />
File extensions are commonly used in web servers to easily determine which technologies / languages / plugins must be used to fulfill the web request.<br><br><br />
While this behavior is consistent with RFCs and Web Standards, using standard file extensions provides the pentester with useful information about the underlying technologies used in a web application and greatly simplifies the task of determining the attack scenario to be used against particular technologies.<br><br><br />
In addition, misconfigurations in web servers can easily reveal confidential information about access credentials.<br />
<br />
==Description of the Issue==<br />
<br />
Determining how web servers handle requests corresponding to files having different extensions may help to understand web server behaviour depending on the kind of files we try to access. For example, it can help understand which file extensions are returned as text/plain versus those which cause execution on the server side. The latter are indicative of technologies / languages / plugins which are used by web servers or application servers, and may provide additional insight on how the web application is engineered. For example, a “.pl” extension is usually associated with server-side Perl support (though the file extension alone may be deceptive and not fully conclusive; for example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related). See also next section on “web server components” for more on identifying server side technologies and components.<br />
<br />
<br />
==Black Box testing and example==<br />
<br />
Submit http[s] requests involving different file extensions and verify how they are handled. These verifications should be done on a per-web-directory basis. Verify directories which allow script execution. Web server directories can be identified by vulnerability scanners, which look for the presence of well-known directories. In addition, mirroring the web site structure allows the tester to reconstruct the tree of web directories served by the application.<br />
If the web application architecture is load-balanced, it is important to assess all of the web servers. This may or may not be easy, depending on the configuration of the balancing infrastructure. In a redundant infrastructure there may be slight variations in the configuration of individual web / application servers; this may happen, for example, if the web architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing configuration, which may introduce slight asymmetric behaviour between themselves, and possibly different vulnerabilities).<br />
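A sketch of such a verification loop, requesting the same base resource with different extensions and noting the status code and Content-Type returned (host and resource name are hypothetical):

<pre>
#!/bin/bash
# Observe how the server handles different extensions for the same resource
for ext in html asp aspx jsp php pl inc old bak
do
    printf '%-6s ' "$ext"
    curl -s -o /dev/null -w '%{http_code} %{content_type}\n' \
         "http://www.example.com/test.$ext"
done
</pre>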
'''Example:'''<br><br />
We have identified the existence of a file named connection.inc. Trying to access it directly gives back its contents, which are:<br />
<br />
<pre><br />
<?<br />
mysql_connect("127.0.0.1", "root", "")<br />
or die("Could not connect");<br />
<br />
?><br />
</pre><br />
<br />
We determine the existence of a MySQL DBMS back end, and the (weak) credentials used by the web application to access it. This example (which occurred in a real assessment) shows how dangerous access to certain kinds of files can be.<br />
<br />
==Gray Box testing and example==<br />
<br />
Performing white box testing against file extensions handling amounts to checking the configurations of web server(s) / application server(s) taking part in the web application architecture, and verifying how they are instructed to serve different file extensions.<br />
If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce different behaviour.<br />
<br />
<br />
<br />
==References==<br />
<br />
<br />
'''Whitepapers'''<br><br />
'''Tools'''<br><br />
<br />
Vulnerability scanners, such as Nessus and Nikto check for the existence of well-known web directories. They may allow as well to download the web site structure, which is helpful when trying to determine the configuration of web directories and how individual file extensions are served. Other tools that can be used for this purpose include wget (http://www.gnu.org/software/wget/) and curl (http://curl.haxx.se), or google for “web mirroring tools”.<br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}

Test Application Platform Configuration (OTG-CONFIG-002)
<hr />
[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br />
Proper configuration of the individual elements that make up an application architecture is important in order to prevent mistakes that might compromise the security of the whole architecture.<br />
<br />
Configuration review and testing is a critical task in creating and maintaining such an architecture, since many different systems will usually be provided with generic configurations which might not be suited to the task they will perform on the specific site they're installed on. <br />
<br />
The typical web and application server installation provides a lot of functionality (like application examples, documentation, test pages) that is not essential and should be removed before deployment, to avoid post-install exploitation. <br />
<br />
==Sample/known files and directories==<br />
<br />
Many web servers and application servers provide, in a default installation, sample applications and files for the benefit of the developer and in order to test that the server is working properly right after installation. However, many default web server applications have later been found to be vulnerable. This was the case, for example, for CVE-1999-0449 (Denial of Service in IIS when the Exair sample site had been installed), CAN-2002-1744 (Directory traversal vulnerability in CodeBrws.asp in Microsoft IIS 5.0), CAN-2002-1630 (Use of sendmail.jsp in Oracle 9iAS), or CAN-2003-1172 (Directory traversal in the view-source sample in Apache’s Cocoon).<br />
<br />
CGI scanners include a detailed list of known files and directory samples that are provided by different web or application servers and might be a fast way to determine if these files are present. However, the only way to be really sure is to do a full review of the contents of the web server and/or application server and determination of whether they are related to the application itself or not.<br />
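Besides running a CGI scanner, a few well-known sample locations can be probed by hand; a sketch (the paths are merely illustrative of the kind of default content to look for):

<pre>
# Probe a handful of typical sample/default directories
for path in /iissamples/ /scripts/ /examples/ /docs/ /manual/
do
    code=$(curl -s -o /dev/null -w '%{http_code}' "http://www.example.com$path")
    echo "$code $path"
done
</pre>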
<br />
==Comment review==<br />
<br />
It is very common, and even recommended, for programmers to include detailed comments in their source code in order to allow other programmers to better understand why a given decision was taken in coding a given function. Programmers usually do it too when developing large web-based applications. However, comments included inline in HTML code might reveal to a potential attacker internal information that should not be available to them. Sometimes, even source code is commented out since a functionality is no longer required, but this comment is unintentionally leaked into the HTML pages returned to the users.<br />
<br />
Comment review should be done in order to determine if any information is being leaked through comments. This review can only be thoroughly done through an analysis of the web server static and dynamic content and through file searches. It can be useful, however, to browse the site either in an automatic or guided fashion and store all the content retrieved. This retrieved content can then be searched in order to analyse the HTML comments available, if any, in the code.<br />
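Once the site content has been mirrored, searching it for comments is straightforward; a sketch using wget and grep (the host is hypothetical):

<pre>
# Mirror the site, then look for HTML comments in the retrieved content
wget -r -np -q http://www.example.com/
grep -rn '<!--' www.example.com/
</pre>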
<br />
==Configuration review==<br />
<br />
The web server or application server configuration takes an important role in protecting the contents of the site and it must be carefully reviewed in order to spot common configuration mistakes. Obviously, the recommended configuration varies depending on the site policy, and the functionality that should be provided by the server software. In most cases, however, configuration guidelines (either provided by the software vendor or external parties) should be followed in order to determine if the server has been properly secured. It is impossible to generically say how a server should be configured, however, some common guidelines should be taken into account:<br />
<br />
* Only enable server modules[1] that are needed for the application. This reduces the attack surface, since the server is reduced in size and complexity as software modules are disabled. It also prevents vulnerabilities that might appear in the vendor software from affecting the site if they are present only in modules that have already been disabled.<br />
* Handle server errors (40x or 50x) with custom-made pages instead of the default web server pages. Specifically, make sure that any application errors will not be returned to the end user and that no code is leaked through these, since it would help an attacker. It is actually very common to forget this point, since developers do need this information in pre-production environments.<br />
* Make sure that the server software runs with minimised privileges in the operating system. This prevents an error in the server software from directly compromising the whole system, although an attacker could still elevate privileges once running code as the web server.<br />
* Make sure the server software logs properly both legitimate access and errors.<br />
* Make sure that the server is configured to properly handle overloads and prevent Denial of Service attacks. Ensure that the server has been performance tuned properly.<br />
<br />
<br />
==Logging==<br />
<br />
Logging is an important asset of the security of an application architecture, since it can be used to detect flaws in applications (users constantly trying to retrieve a file that does not really exist) as well as sustained attacks from rogue users. Logs are typically properly generated by web and other server software, but it is not so common to find applications that properly log their actions; when they do, the main intention of the application logs is to produce debugging output that can be used by the programmer to analyse a particular error.<br />
<br />
In both cases (server and application logs) several issues should be tested and analysed based on the log contents:<br />
<br />
# Do the logs contain sensitive information? <br />
# Are the logs stored in a dedicated server?<br />
# Can log usage generate a Denial of Service condition?<br />
# How are they rotated? Are logs kept for the sufficient time?<br />
# How are logs reviewed? Can administrators use these reviews to detect targeted attacks?<br />
# How are log backups preserved?<br />
# Is the data being logged validated (min/max length, chars, etc.) prior to being logged?<br />
<br />
'''''Sensitive information in logs'''''<br />
<br />
Some applications might, for example, use GET requests to forward form data, which will be viewable in the server logs. This means that server logs might contain sensitive information (such as usernames and passwords, or bank account details). This sensitive information can be misused if the logs were to be obtained by an attacker, for example, through administrative interfaces or known web server vulnerabilities or misconfigurations (like the well-known ''server-status'' misconfiguration in Apache-based HTTP servers).<br />
<br />
Also, in some jurisdictions, storing some sensitive information in log files, such as personal data, might oblige the enterprise to apply the data protection laws that they would apply to their back-end databases to log files too. And failure to do so, even unknowingly, might carry penalties under the data protection laws that apply.<br />
<br />
==Log location==<br />
<br />
Typically, servers will generate local logs of their actions and errors, consuming disk space on the system the server is running on. However, if the server is compromised, its logs can be wiped out by the intruder to clean up all the traces of the attack and its methods. If this were to happen, the system administrator would have no knowledge of how the attack occurred or where the attack source was located. Actually, most attacker toolkits include a ''log zapper'' that is capable of cleaning up any logs that hold given information (like the IP address of the attacker), and such tools are routinely used in attackers’ system-level rootkits.<br />
<br />
Consequently, it is wiser to keep logs in a separate location and not in the web server itself. This also makes it easier to aggregate logs from different sources that refer to the same application (such as those of a web server farm) and it also makes it easier to do log analysis (which can be CPU intensive) without affecting the server itself.<br />
<br />
==Log storage==<br />
<br />
Logs can introduce a Denial of Service condition if they are not properly stored. Obviously, any attacker with sufficient resources could, unless detected and blocked, produce a sufficient number of requests to fill up the space allocated to log files. However, if the server is not properly configured, the log files will be stored in the same disk partition as the one used for the operating system software or the application itself. This means that, if the disk were to be filled up, the operating system or the application might fail because they are unable to write on disk.<br />
<br />
Typically, in UNIX systems logs will be located in /var (although some server installations might reside in /opt or /usr/local), and it is thus important to make sure that the directories in which logs are stored are on a separate partition. In some cases, and in order to prevent the system logs from being affected, the log directory of the server software itself (such as /var/log/apache in the Apache web server) should be stored in a dedicated partition.<br />
<br />
This is not to say that logs should be allowed to grow to fill up the filesystem they reside in. Growth of server logs should be monitored in order to detect this condition since it may be indicative of an attack.<br />
<br />
Testing this condition is as easy as (and as dangerous in production environments as) firing off a sufficient and sustained number of requests to see if these requests are logged and, if so, whether it is possible to fill up the log partition through these requests. In some environments where QUERY_STRING parameters are also logged, regardless of whether they are produced through GET or POST requests, big queries can be simulated that will fill up the logs faster, since, typically, a single request will cause only a small amount of data to be logged (date and time, source IP address, URI request, and server result).<br />
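A sketch of such a test, firing requests with a large QUERY_STRING at a hypothetical target (never run this against a production system without explicit authorisation):

<pre>
#!/bin/bash
# Send requests with a padded query string to inflate the server logs
padding=$(head -c 2000 /dev/zero | tr '\0' 'A')
for i in $(seq 1 10000)
do
    curl -s -o /dev/null "http://www.example.com/index.html?pad=$padding"
done
</pre>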
<br />
==Log rotation==<br />
<br />
Most servers (but few custom applications) will rotate logs in order to prevent them from filling up the filesystem they reside on. The assumption when rotating logs is that the information in them is only necessary for a limited amount of time.<br />
<br />
This feature should be tested in order to ensure that:<br />
<br />
* Logs are kept for the time defined in the security policy, not more and not less.<br />
* Logs are compressed once rotated (this is a convenience, since it will mean that more logs will be stored for the same available disk space)<br />
* Filesystem permissions of rotated log files are the same as (or stricter than) those of the log files themselves. For example, web servers will need to write to the logs they use, but they don’t actually need to write to rotated logs, which means that the permissions of the files can be changed upon rotation to prevent the web server process from modifying these.<br />
<br />
Some servers might rotate logs when they reach a given size. If this happens, it must be ensured that an attacker cannot force logs to rotate in order to hide their tracks.<br />
<br />
==Log review==<br />
<br />
Review of logs can be used for more than the extraction of usage statistics of files in the web servers (which is typically what most log-based applications will focus on); it can also be used to determine if attacks take place at the web server.<br />
<br />
In order to analyse web server attacks the error log files of the server need to be analysed. Review should concentrate on:<br />
<br />
* 40x (not found) error messages; a large amount of these from the same source might be indicative of a CGI scanner tool being used against the web server (see the example below).<br />
* 50x (server error) messages. These can be an indication of an attacker abusing parts of the application which fail unexpectedly. For example, the first phases of a SQL injection attack will produce these error messages when the SQL query is not properly constructed and its execution fails on the backend database.<br />
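For example, on an Apache combined-format access log, the sources generating the most 404 responses can be listed as follows (the log path is an assumption):

<pre>
# Top sources of 404 responses: a burst from one IP suggests a CGI scanner
awk '$9 == 404 {print $1}' /var/log/apache/access.log \
    | sort | uniq -c | sort -rn | head
</pre>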
<br />
Log statistics or analysis should not be generated, nor stored, in the same server that produces the logs. Otherwise, an attacker might, through a web server vulnerability or improper configuration, gain access to them and retrieve similar information as the one that would be disclosed by log files themselves.<br />
<br />
<br />
==References==<br />
<br />
Recommended guides include<br />
<br />
* Generic:<br />
** CERT Security Improvement Modules: Securing Public Web Servers , published at http://www.cert.org/security-improvement/<br />
* Apache<br />
** Apache Security, by Ivan Ristic, O’Reilly, March 2005.<br />
** Apache Security Secrets: Revealed (Again), Mark Cox, November 2003 available at <u>http://www.awe.com/mark/apcon2003/</u><br />
** Apache Security Secrets: Revealed, ApacheCon 2002, Las Vegas, Mark J Cox, October 2002, available at http://www.awe.com/mark/apcon2002<br />
** Apache Security Configuration Document, InterSect Alliance, http://www.intersectalliance.com/projects/ApacheConfig/index.html<br />
** Performance Tuning, <u>http://httpd.apache.org/docs/misc/perf-tuning.html</u><br />
* Lotus Domino<br />
** Lotus Security Handbook, William Tworek et al., April 2004, available in the IBM Redbooks collection<br />
** Lotus Domino Security, an X-force white-paper, Internet Security Systems, December 2002<br />
** Hackproofing Lotus Domino Web Server, David Litchfield, October 2001, <br />
** NGSSoftware Insight Security Research, available at www.nextgenss.com<br />
* Microsoft IIS<br />
** IIS 6.0 Security, by Rohyt Belani, Michael Muckin, available at <u>http://www.securityfocus.com/print/infocus/1765</u><br />
** Securing Your Web Server (Patterns and Practices), Microsoft Corporation, January 2004<br />
** IIS Security and Programming Countermeasures, by Jason Coombs <br />
** From Blueprint to Fortress: A Guide to Securing IIS 5.0, by John Davis, Microsoft Corporation, June 2001 <br />
** Secure Internet Information Services 5 Checklist, by Michael Howard, Microsoft Corporation, June 2000<br />
** “How To: Use IISLockdown.exe”, available at http://msdn.microsoft.com/library/en-us/secmod/html/secmod113.asp<br />
** “INFO: Using URLScan on IIS”, available at <u>http://support.microsoft.com/default.aspx?scid=307608</u>.<br />
* Red Hat’s (formerly Netscape’s) iPlanet<br />
** Guide to the Secure Configuration and Administration of iPlanet Web Server, Enterprise Edition 4.1, by James M Hayes, The Network Applications Team of the Systems and Network Attack Center (SNAC), NSA, January 2001<br />
* WebSphere<br />
** IBM WebSphere V5.0 Security, WebSphere Handbook Series, by Peter Kovari et al., IBM, December 2002.<br />
** IBM WebSphere V4.0 Advanced Edition Security, by Peter Kovari et al., IBM, March 2002.<br />
<br />
==Notes==<br />
[1] ISAPI extensions in the IIS case<br />
<br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}</div>Lkhttps://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&diff=11917Testing for SSL-TLS (OWASP-CM-001)2006-11-06T20:05:28Z<p>Lk: </p>
<hr />
<div>[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br />
Due to historic export restrictions on high-grade cryptography, legacy and even new web servers may support weak cryptographic ciphers.<br />
<br />
Even if high-grade ciphers are normally installed and used, a misconfiguration in the server installation could be exploited to force the use of a weaker cipher and gain access to the supposedly secure communication channel. <br />
<br />
==SSL / TLS cipher specifications and requirements for site==<br />
<br />
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.<br />
<br />
Historically, there have been limitations set in place by the U.S. government to allow crypto systems to be exported only with key sizes of at most 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose a weak cipher.<br />
<br />
Technically, cipher determination is performed as follows. In the initial phase of an SSL connection setup, the client sends the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (the most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which need not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org), which can be used, among other things, to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption protocol (e.g. DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (e.g. SHA, MD5) used for integrity checking. Upon receipt of a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honour. In this way you may control, for example, whether or not to allow conversations with clients supporting only 40-bit encryption.<br />
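<br />
As a quick illustration of this negotiation (a hypothetical probe added here for clarity; the host name is a placeholder), the openssl command line tool can offer a single cipher suite in its Client Hello and observe whether the server honours it:<br />
<pre><br />
# Offer only an export-grade suite: a completed handshake means the server<br />
# accepts it, while a "handshake failure" alert means it is rejected<br />
[root@test]# openssl s_client -connect www.example.com:443 -cipher EXP-RC4-MD5<br />
</pre><br />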
<br />
==How to Test==<br />
<br />
==Black Box==<br />
<br />
In order to detect possible support of weak ciphers, the ports associated with SSL/TLS-wrapped services must be identified. These typically include port 443, the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS-wrapped services related to the web application. In general, a service discovery is required to identify such ports.<br />
<br />
The nmap scanner, via the “-sV” scan option, is able to identify SSL services. Vulnerability scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).<br />
<br />
==White Box==<br />
<br />
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.<br />
<br />
==References==<br />
<br />
==Examples==<br />
<br />
<u>Example 1</u>. SSL service recognition via nmap.<br />
<br />
<pre><br />
[root@test]# nmap -F -sV localhost<br />
<br />
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST<br />
Interesting ports on localhost.localdomain (127.0.0.1):<br />
(The 1205 ports scanned but not shown below are in state: closed)<br />
<br />
PORT STATE SERVICE VERSION<br />
443/tcp open ssl OpenSSL<br />
901/tcp open http Samba SWAT administration server<br />
8080/tcp open http Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)<br />
8081/tcp open http Apache Tomcat/Coyote JSP engine 1.0<br />
<br />
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds<br />
[root@test]# <br />
</pre><br />
<br />
<u>Example 2</u>. Identifying weak ciphers with Nessus.<br />
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).<br />
<br />
'''https (443/tcp)'''<br />
<u>Description</u><br />
Here is the SSLv2 server certificate:<br />
Certificate:<br />
Data:<br />
Version: 3 (0x2)<br />
Serial Number: 1 (0x1)<br />
Signature Algorithm: md5WithRSAEncryption<br />
Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******<br />
Validity<br />
Not Before: Oct 17 07:12:16 2002 GMT<br />
Not After : Oct 16 07:12:16 2004 GMT<br />
Subject: C=**, ST=******, L=******, O=******, CN=******<br />
Subject Public Key Info:<br />
Public Key Algorithm: rsaEncryption<br />
RSA Public Key: (1024 bit)<br />
Modulus (1024 bit):<br />
00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:<br />
6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:<br />
ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:<br />
ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:<br />
72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:<br />
09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:<br />
e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:<br />
2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:<br />
3d:0a:75:26:cf:dc:47:74:29<br />
Exponent: 65537 (0x10001)<br />
X509v3 extensions:<br />
X509v3 Basic Constraints:<br />
CA:FALSE<br />
Netscape Comment:<br />
OpenSSL Generated Certificate<br />
X509v3 Subject Key Identifier:<br />
10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8<br />
X509v3 Authority Key Identifier:<br />
keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE<br />
DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******<br />
serial:00<br />
Signature Algorithm: md5WithRSAEncryption<br />
7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:<br />
8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:<br />
2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:<br />
b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:<br />
40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:<br />
f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:<br />
0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:<br />
82:fc<br />
Here is the list of available SSLv2 ciphers:<br />
RC4-MD5<br />
EXP-RC4-MD5<br />
RC2-CBC-MD5<br />
EXP-RC2-CBC-MD5<br />
DES-CBC-MD5<br />
DES-CBC3-MD5<br />
RC4-64-MD5<br />
<u>The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak "export class" ciphers'''.</u><br />
<u>The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack</u><br />
<u>Solution: disable those ciphers and upgrade your client software if necessary.</u><br />
See http://support.microsoft.com/default.aspx?scid=kben-us216482<br />
or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite<br />
This SSLv2 server also accepts SSLv3 connections.<br />
This SSLv2 server also accepts TLSv1 connections.<br />
<br />
Vulnerable hosts<br />
''(list of vulnerable hosts follows)''<br />
<br />
<u>Example 3</u>. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.<br />
<pre><br />
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443<br />
CONNECTED(00000003)<br />
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
verify error:num=20:unable to get local issuer certificate<br />
verify return:1<br />
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
verify error:num=27:certificate not trusted<br />
verify return:1<br />
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
verify error:num=21:unable to verify the first certificate<br />
verify return:1<br />
---<br />
Server certificate<br />
-----BEGIN CERTIFICATE-----<br />
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB<br />
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ<br />
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE<br />
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh<br />
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl<br />
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow<br />
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v<br />
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n<br />
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk<br />
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO<br />
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE<br />
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG<br />
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo<br />
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm<br />
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/<br />
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3<br />
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3<br />
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc<br />
X/dVk5WRTw==<br />
-----END CERTIFICATE-----<br />
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com<br />
---<br />
No client certificate CA names sent<br />
---<br />
Ciphers common between both SSL endpoints:<br />
RC4-MD5 EXP-RC4-MD5 RC2-CBC-MD5<br />
EXP-RC2-CBC-MD5 DES-CBC-MD5 DES-CBC3-MD5<br />
RC4-64-MD5<br />
---<br />
SSL handshake has read 1023 bytes and written 333 bytes<br />
---<br />
New, SSLv2, Cipher is DES-CBC3-MD5<br />
Server public key is 1024 bit<br />
Compression: NONE<br />
Expansion: NONE<br />
SSL-Session:<br />
Protocol : SSLv2<br />
Cipher : DES-CBC3-MD5<br />
Session-ID: 709F48E4D567C70A2E49886E4C697CDE<br />
Session-ID-ctx:<br />
Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3<br />
Key-Arg : E8CB6FEB9ECF3033<br />
Start Time: 1156977226<br />
Timeout : 300 (sec)<br />
Verify return code: 21 (unable to verify the first certificate)<br />
---<br />
closed<br />
</pre><br />
<br />
==Whitepapers==<br />
<br />
# RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546), <u>http://www.ietf.org/rfc/rfc2246.txt</u><br />
# RFC2817. Upgrading to TLS Within HTTP/1.1, <u>http://www.ietf.org/rfc/rfc2817.txt</u><br />
# RFC3546. Transport Layer Security (TLS) Extensions, <u>http://www.ietf.org/rfc/rfc3546.txt</u><br />
# <u>www.verisign.net</u> features various material on the topic<br />
<br />
==Tools==<br />
<br />
Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).<br />
<br />
You may also rely on specialized tools, such as SSL Digger (http://www.foundstone.com/resources/proddesc/ssldigger.htm), or, for the command-line oriented, experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (it may already be available on *nix boxes; otherwise see www.openssl.org).<br />
<br />
To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the Nessus vulnerability scanner has the capability of identifying SSL-based services on arbitrary ports and of running vulnerability checks on them, regardless of whether they are configured on standard or non-standard ports.<br />
<br />
In case you need to talk to an SSL service but your favourite tool doesn’t support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicating with the SSL service you need to reach.<br />
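<br />
As a minimal sketch (service name, addresses and port are assumptions for illustration), a client-mode stunnel configuration that lets a non-SSL tool reach an SSL service could look like the following:<br />
<pre><br />
; stunnel.conf -- client mode: plain-text traffic accepted locally is<br />
; SSL-wrapped and forwarded to the remote SSL service<br />
client = yes<br />
<br />
[https-tunnel]<br />
accept  = 127.0.0.1:8080<br />
connect = www.example.com:443<br />
</pre><br />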
<br />
Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way a browser performs the check might be influenced by configuration settings that may not always be evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.<br />
<br />
<br />
<br />
<br />
==SSL certificate validity – client and server==<br />
<br />
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server) is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate do match.<br />
<br />
Let’s examine each check more in detail.<br />
<br />
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate-signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This most often happens because a web application relies on a certificate signed by a self-established CA. Whether this is to be considered a concern depends on the context. For example, this may be fine in an Intranet environment (think of corporate web email being provided via https; here, obviously, all users do recognize the internal CA as a trusted CA). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, i.e. on a CA which is recognized by the entire user base (and here we stop with our considerations; we won’t delve deeper into the implications of the trust model being used by digital certificates).<br />
<br />
b) Certificates have an associated period of validity; therefore, they may expire. Again, we are warned by the browser about this. A public service needs a temporally valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but which has expired without being renewed.<br />
<br />
c) Why might the name on the certificate and the name of the server not match? If this happens, it might sound suspicious (i.e.: whom are we talking with?). For a number of reasons, this is not so rare to see. One situation which causes this is when a system hosts a number of name-based virtual hosts, i.e. virtual hosts sharing the same IP address, which are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake (during which the client browser checks the server certificate) takes place before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, IP-based virtual servers must be used. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.<br />
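<br />
These checks can also be performed from the command line; the following sketch (the host name is a placeholder) retrieves the server certificate and prints the fields relevant to checks b) and c):<br />
<pre><br />
# Print the certificate subject (compare the CN against the site name),<br />
# the issuer, and the validity period (check for expiration)<br />
[root@test]# echo | openssl s_client -connect www.example.com:443 2>/dev/null \<br />
               | openssl x509 -noout -subject -issuer -dates<br />
</pre><br />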
<br />
<br />
<br />
==How to Test==<br />
<br />
===Black Box===<br />
<br />
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs (meaning CAs unknown to the browser), or certificates which do not match the name of the site they refer to. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate, including the issuer, period of validity, encryption characteristics, etc.<br />
<br />
If the application requires a client certificate, you have probably installed one in order to access it. Certificate information is available in the browser, by inspecting the relevant certificate(s) in the list of installed certificates.<br />
<br />
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is usually the https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (for example, an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the “-sV” command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.<br />
<br />
===White Box===<br />
<br />
Examine the validity of the certificates used by the application – at server and client level. The usage of certificates is primarily at the web server level, however there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.<br />
<br />
==References==<br />
<br />
===Examples===<br />
<br />
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequent it is to stumble on https sites whose certificates are inaccurate with respect to naming.<br />
<br />
The following screenshots refer to a regional site of a high-profile IT company.<br />
<br />
<u>Warning issued by Microsoft Internet Explorer.</u> We are visiting a ''.it'' site and the certificate was issued to a ''.com ''site! Internet Explorer warns that the name on the certificate does not match the name of the site.<br />
<br />
<br />
[[Image:SSL Certificate Validity Testing IE Warning.gif]]<br />
<br />
<br />
<u>Warning issued by Mozilla Firefox.</u> The message issued by Firefox is different: Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to, since it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.<br />
<br />
<br />
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]<br />
<br />
<br />
===Whitepapers===<br />
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546), <u>http://www.ietf.org/rfc/rfc2246.txt</u><br />
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1, <u>http://www.ietf.org/rfc/rfc2817.txt</u><br />
* [3] RFC3546. Transport Layer Security (TLS) Extensions, <u>http://www.ietf.org/rfc/rfc3546.txt</u><br />
<br />
==Tools==<br />
<br />
Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They also usually report other information, such as the CA which issued the certificate. Remember, however, that there is no unified notion of a “trusted CA”; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize the CA.<br />
<br />
The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin “SSL certificate expiry”, plugin id 15901). This plugin will check certificates ''installed on the server''.<br />
<br />
==Category==<br />
[[Category:Cryptographic Vulnerability]]<br />
[[Category:SSL]]<br />
<br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}</div>Lkhttps://wiki.owasp.org/index.php?title=Test_Application_Platform_Configuration_(OTG-CONFIG-002)&diff=11916Test Application Platform Configuration (OTG-CONFIG-002)2006-11-06T20:01:52Z<p>Lk: Spell check</p>
<hr />
<div>[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br />
Proper configuration of the single elements that make up an application architecture is important in order to prevent mistakes that might compromise the security of the whole architecture.<br />
<br />
Configuration review and testing is a critical task in creating and maintaining such an architecture, since many different systems will usually be provided with generic configurations which might not be suited to the task they will perform on the specific site they're installed on. <br />
<br />
A typical web or application server installation will expose a lot of functionality (like application examples, documentation, and test pages) that is not essential and should be removed before deployment to avoid post-install exploitation. <br />
<br />
==Sample/known files and directories==<br />
<br />
Many web servers and application servers provide, in a default installation, sample applications and files for the benefit of the developer, and in order to test that the server is working properly right after installation. However, many default web server applications have later been found to be vulnerable. This was the case, for example, for CVE-1999-0449 (Denial of Service in IIS when the Exair sample site had been installed), CAN-2002-1744 (Directory traversal vulnerability in CodeBrws.asp in Microsoft IIS 5.0), CAN-2002-1630 (Use of sendmail.jsp in Oracle 9iAS), or CAN-2003-1172 (Directory traversal in the view-source sample in Apache’s Cocoon).<br />
<br />
CGI scanners include a detailed list of known files and directory samples that are provided by different web or application servers, and might be a fast way to determine if these files are present. However, the only way to be really sure is to do a full review of the contents of the web server and/or application server, and to determine whether they are related to the application itself or not.<br />
<br />
==Comment review==<br />
<br />
It is very common, and even recommended, for programmers to include detailed comments in their source code in order to allow other programmers to better understand why a given decision was taken in coding a given function. Programmers usually do it too when developing large web-based applications. However, comments included inline in HTML code might reveal to a potential attacker internal information that should not be available to them. Sometimes even source code is commented out when a functionality is no longer required, but these comments are leaked into the HTML pages returned to the users unintentionally.<br />
<br />
Comment review should be done in order to determine if any information is being leaked through comments. This review can only be done thoroughly through an analysis of the web server's static and dynamic content and through file searches. It can be useful, however, to browse the site either in an automatic or guided fashion and store all the content retrieved. This retrieved content can then be searched in order to analyse any HTML comments available in the code.<br />
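<br />
As a rough sketch of how such a review might be automated (URL, depth and directory name are placeholders), the site can be mirrored and then searched for HTML comments:<br />
<pre><br />
# Mirror two levels of the site, then list every HTML comment found<br />
[root@test]# wget -r -l 2 -P mirror http://www.example.com/<br />
[root@test]# grep -r -n "<!--" mirror/ | less<br />
</pre><br />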
<br />
==Configuration review==<br />
<br />
The web server or application server configuration plays an important role in protecting the contents of the site, and it must be carefully reviewed in order to spot common configuration mistakes. Obviously, the recommended configuration varies depending on the site policy and on the functionality that should be provided by the server software. In most cases, however, configuration guidelines (either provided by the software vendor or by external parties) should be followed in order to determine if the server has been properly secured. It is impossible to say generically how a server should be configured; however, some common guidelines should be taken into account:<br />
<br />
* Only enable server modules[1] that are needed for the application. This reduces the attack surface, since the server is reduced in size and complexity as software modules are disabled. It also prevents vulnerabilities that might appear in the vendor software from affecting the site if they are present only in modules that have already been disabled.<br />
* Handle server errors (40x or 50x) with custom-made pages instead of the default web server pages (see the Apache-flavoured sketch after this list). Specifically, make sure that any application errors will not be returned to the end user and that no code is leaked through these pages, since it would help an attacker. It is actually very common to forget this point, since developers do need this information in pre-production environments.<br />
* Make sure that the server software runs with minimised privileges in the operating system. This prevents an error in the server software from directly compromising the whole system, although an attacker could still try to elevate privileges once running code as the web server.<br />
* Make sure the server software logs properly both legitimate access and errors.<br />
* Make sure that the server is configured to properly handle overloads and prevent Denial of Service attacks. Ensure that the server has been performance tuned properly.<br />
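<br />
By way of illustration only (an Apache-flavoured sketch; the directive values are assumptions, not recommendations from this guide), several of the points above map directly onto configuration directives:<br />
<pre><br />
# httpd.conf excerpt -- hypothetical hardening settings<br />
ErrorDocument 404 /errors/notfound.html      # custom pages instead of defaults<br />
ErrorDocument 500 /errors/servererror.html<br />
User  apache                                 # run with minimised privileges<br />
Group apache<br />
ServerTokens Prod                            # do not leak version details<br />
ServerSignature Off<br />
</pre><br />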
<br />
<br />
==Logging==<br />
<br />
Logging is an important asset of the security of an application architecture, since it can be used to detect flaws in applications (users constantly trying to retrieve a file that does not really exist) as well as sustained attacks from rogue users. Logs are typically properly generated by web and other server software, but it is not so common to find applications that properly log their actions to a log and, when they do, the main intention of the application logs is to produce debugging output that can be used by the programmer to analyse a particular error.<br />
<br />
In both cases (server and application logs) several issues should be tested and analysed based on the log contents:<br />
<br />
# Do the logs contain sensitive information? <br />
# Are the logs stored in a dedicated server?<br />
# Can log usage generate a Denial of Service condition?<br />
# How are they rotated? Are logs kept for the sufficient time?<br />
# How are logs reviewed? Can administrators use these reviews to detect targeted attacks?<br />
# How are log backups preserved?<br />
# Is the data being logged data validated (min/max length, chars etc) prior to being logged?<br />
<br />
'''''Sensitive information in logs'''''<br />
<br />
Some applications might, for example, use GET requests to forward form data, which will then be viewable in the server logs. This means that server logs might contain sensitive information (such as usernames and passwords, or bank account details). This sensitive information can be misused if the logs were to be obtained by an attacker, for example through administrative interfaces or through known web server vulnerabilities or misconfigurations (like the well-known ''server-status'' misconfiguration in Apache-based HTTP servers).<br />
<br />
Also, in some jurisdictions, storing some sensitive information in log files, such as personal data, might oblige the enterprise to apply to log files the same data protection laws that they would apply to their back-end databases. Failure to do so, even unknowingly, might carry penalties under the data protection laws that apply.<br />
<br />
==Log location==<br />
<br />
Typically, servers will generate local logs of their actions and errors, consuming disk space on the system the server is running on. However, if the server is compromised, its logs can be wiped out by the intruder to clean up all the traces of the attack and its methods. If this were to happen, the system administrator would have no knowledge of how the attack occurred or where the attack source was located. Actually, most attacker toolkits include a ''log zapper'' that is capable of cleaning up any logs that hold given information (like the IP address of the attacker), and such tools are routinely used in attackers’ system-level rootkits.<br />
<br />
Consequently, it is wiser to keep logs in a separate location and not in the web server itself. This also makes it easier to aggregate logs from different sources that refer to the same application (such as those of a web server farm) and it also makes it easier to do log analysis (which can be CPU intensive) without affecting the server itself.<br />
<br />
==Log storage==<br />
<br />
Logs can introduce a Denial of Service condition if they are not properly stored. Obviously, any attacker with sufficient resources could, unless detected and blocked, produce a sufficient number of requests to fill up the space allocated to log files. However, if the server is not properly configured, the log files will be stored in the same disk partition as the one used for the operating system software or the application itself. This means that, if the disk were to be filled up, the operating system or the application might fail because they would be unable to write to disk.<br />
<br />
Typically, in UNIX systems logs will be located in /var (although some server installations might reside in /opt or /usr/local), and it is thus important to make sure that the directories in which logs are stored are on a separate partition. In some cases, and in order to prevent the system logs from being affected, the log directory of the server software itself (such as /var/log/apache in the Apache web server) should be stored on a dedicated partition.<br />
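<br />
A quick way to verify this (the path is hypothetical) is to ask the filesystem where the log directory actually resides:<br />
<pre><br />
# If the output shows the root filesystem rather than a dedicated partition,<br />
# log growth can starve the operating system of disk space<br />
[root@test]# df -h /var/log/apache<br />
</pre><br />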
<br />
This is not to say that logs should be allowed to grow to fill up the filesystem they reside in. Growth of server logs should be monitored in order to detect this condition since it may be indicative of an attack.<br />
<br />
Testing this condition is as easy (and, in production environments, as dangerous) as firing off a sufficient and sustained number of requests to see if these requests are logged and, if so, whether it is possible to fill up the log partition through these requests. In some environments, where QUERY_STRING parameters are logged regardless of whether they are produced through GET or POST requests, big queries can be simulated to fill up the logs faster, since, typically, a single request will cause only a small amount of data to be logged: date and time, source IP address, URI request, and server result.<br />
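<br />
A rough sketch of such a test (URL and request volume are placeholders; run it only with authorisation, given the risk noted above) is a sustained loop of requests carrying deliberately long query strings:<br />
<pre><br />
# Fire padded requests, then watch whether the log partition fills up<br />
[root@test]# PAD=$(head -c 512 /dev/zero | tr '\0' 'A')<br />
[root@test]# for i in $(seq 1 100000); do<br />
                 curl -s -o /dev/null "http://www.example.com/no-such-page?pad=$PAD"<br />
             done<br />
</pre><br />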
<br />
==Log rotation==<br />
<br />
Most servers (but few custom applications) will rotate logs in order to prevent them from filling up the filesystem they reside on. The assumption when rotating logs is that the information in them is only necessary for a limited amount of time.<br />
<br />
This feature should be tested in order to ensure that:<br />
<br />
* Logs are kept for the time defined in the security policy, not more and not less.<br />
* Logs are compressed once rotated (this is a convenience, since it will mean that more logs will be stored for the same available disk space)<br />
* Filesystem permissions of rotated log files are the same as (or stricter than) those of the log files themselves. For example, web servers need to write to the logs they use, but they don’t actually need to write to rotated logs, which means that the permissions of the files can be changed upon rotation to prevent the web server process from modifying them.<br />
<br />
Some servers might rotate logs when they reach a given size. If this happens, it must be ensured that an attacker cannot force logs to rotate in order to hide their tracks.<br />
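<br />
The checklist above can be mapped, for instance, onto a logrotate policy. The following is a minimal sketch (paths, retention period and commands are assumptions for illustration):<br />
<pre><br />
# /etc/logrotate.d/httpd -- hypothetical rotation policy<br />
/var/log/apache/*.log {<br />
    weekly<br />
    rotate 52          # retention as defined by the security policy<br />
    compress           # rotated logs are compressed<br />
    sharedscripts<br />
    postrotate<br />
        # tighten permissions: the server no longer needs to write rotated logs<br />
        chmod 0400 /var/log/apache/*.log.[0-9]* 2>/dev/null || true<br />
        /usr/sbin/apachectl graceful >/dev/null 2>&1 || true<br />
    endscript<br />
}<br />
</pre><br />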
<br />
==Log review==<br />
<br />
Log review can be used for more than the extraction of usage statistics of files in the web servers (which is typically what most log-based applications will focus on); it can also be used to determine if attacks are taking place against the web server.<br />
<br />
In order to analyse web server attacks, the error log files of the server need to be reviewed. The review should concentrate on:<br />
<br />
* 40x (not found) error messages; a large number of these coming from the same source might be indicative of a CGI scanner tool being used against the web server (a quick triage sketch follows this list)<br />
* 50x (server error) messages. These can be an indication of an attacker abusing parts of the application which fail unexpectedly. For example, the first phases of a SQL injection attack will produce these error messages when the SQL query is not properly constructed and its execution fails on the backend database.<br />
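<br />
As a small worked example (the log path is hypothetical; field 9 is the status code in Apache’s common/combined log format), the concentration of 404 responses per source can be triaged with standard tools:<br />
<pre><br />
# Count 404 responses per client address; a single address with thousands<br />
# of hits is a typical CGI-scanner signature<br />
[root@test]# awk '$9 == 404 {print $1}' /var/log/apache/access_log \<br />
               | sort | uniq -c | sort -rn | head<br />
</pre><br />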
<br />
Log statistics or analysis should not be generated, nor stored, on the same server that produces the logs. Otherwise, an attacker might, through a web server vulnerability or improper configuration, gain access to them and retrieve information similar to what would be disclosed by the log files themselves.<br />
<br />
<br />
==References==<br />
<br />
Recommended guides include<br />
<br />
* Generic:<br />
** CERT Security Improvement Modules: Securing Public Web Servers, published at http://www.cert.org/security-improvement/<br />
* Apache<br />
** Apache Security, by Ivan Ristic, O’Reilly, March 2005.<br />
** Apache Security Secrets: Revealed (Again), Mark Cox, November 2003 available at <u>http://www.awe.com/mark/apcon2003/</u><br />
** Apache Security Secrets: Revealed, ApacheCon 2002, Las Vegas, Mark J Cox, October 2002, available at http://www.awe.com/mark/apcon2002<br />
** Apache Security Configuration Document, InterSect Alliance, http://www.intersectalliance.com/projects/ApacheConfig/index.html<br />
** Performance Tuning, <u>http://httpd.apache.org/docs/misc/perf-tuning.html</u><br />
* Lotus Domino<br />
** Lotus Security Handbook, William Tworek et al., April 2004, available in the IBM Redbooks collection<br />
** Lotus Domino Security, an X-force white-paper, Internet Security Systems, December 2002<br />
** Hackproofing Lotus Domino Web Server, David Litchfield, NGSSoftware Insight Security Research, October 2001, available at www.nextgenss.com<br />
* Microsoft IIS<br />
** IIS 6.0 Security, by Rohyt Belani, Michael Muckin, available at <u>http://www.securityfocus.com/print/infocus/1765</u><br />
** Securing Your Web Server (Patterns and Practices), Microsoft Corporation, January 2004<br />
** IIS Security and Programming Countermeasures, by Jason Coombs <br />
** From Blueprint to Fortress: A Guide to Securing IIS 5.0, by John Davis, Microsoft Corporation, June 2001 <br />
** Secure Internet Information Services 5 Checklist, by Michael Howard, Microsoft Corporation, June 2000<br />
** “How To: Use IISLockdown.exe”, available at http://msdn.microsoft.com/library/en-us/secmod/html/secmod113.asp<br />
** “INFO: Using URLScan on IIS”, available at <u>http://support.microsoft.com/default.aspx?scid=307608</u>.<br />
* Red Hat’s (formerly Netscape’s) iPlanet<br />
** Guide to the Secure Configuration and Administration of iPlanet Web Server, Enterprise Edition 4.1, by James M Hayes, The Network Applications Team of the Systems and Network Attack Center (SNAC), NSA, January 2001<br />
* WebSphere<br />
** IBM WebSphere V5.0 Security, WebSphere Handbook Series, by Peter Kovari et al., IBM, December 2002.<br />
** IBM WebSphere V4.0 Advanced Edition Security, by Peter Kovari et al., IBM, March 2002.<br />
<br />
==Notes==<br />
[1] ISAPI extensions in the IIS case<br />
<br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}</div>Lkhttps://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&diff=11915Test File Extensions Handling for Sensitive Information (OTG-CONFIG-003)2006-11-06T19:48:12Z<p>Lk: spellcheck</p>
<hr />
<div>[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
...To review and expand...<br />
<br />
== Brief Summary ==<br />
<br />
File extensions are commonly used in web servers to easily determine which technologies / languages / plugins must be used to fulfill the web request.<br><br><br />
While this behavior is consistent with RFCs and Web Standards, using standard file extensions provides the pentester with useful information about the underlying technologies used in a web application, and greatly simplifies the task of determining the attack scenario to be used against particular technologies.<br><br><br />
In addition to this, misconfigurations in web servers could easily reveal confidential information about access credentials.<br />
<br />
==Description of the Issue==<br />
<br />
Determining how web servers handle requests corresponding to files having different extensions may help to understand web server behavior depending on the kind of files we try to access. For example, it can help understand which file extensions are returned as text/plain versus those which cause execution on the server side. The latter are indicative of technologies / languages / plugins which are used by web servers or application servers, and may provide additional insight on how the web application is engineered. For example, a “.pl” extension is usually associated with server-side Perl support (though the file extension alone may be deceptive and not fully conclusive; for example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related). See also next section on “web server components” for more on identifying server side technologies and components.<br />
<br />
<br />
==Black Box testing and example==<br />
<br />
Submit http[s] requests involving different file extensions and verify how they are handled. These verifications should be done on a per-web-directory basis. Verify directories which allow script execution. Web server directories can be identified by vulnerability scanners, which look for the presence of well-known directories. In addition, mirroring the web site structure allows the tester to reconstruct the tree of web directories served by the application.<br />
If the web application architecture is load-balanced, it is important to assess all of the web servers. This may or may not be easy, depending on the configuration of the balancing infrastructure. In a redundant infrastructure there may be slight variations in the configuration of individual web or application servers; this may happen, for example, if the web architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing configuration, which may introduce slightly asymmetric behavior between them, and possibly different vulnerabilities).<br />
'''Example:'''<br><br />
We have identified the existence of a file named connection.inc. Trying to access it directly gives back its contents, which are:<br />
<br />
<pre><br />
<?<br />
mysql_connect("127.0.0.1", "root", "")<br />
or die("Could not connect");<br />
<br />
?><br />
</pre><br />
<br />
We determine the existence of a MySQL DBMS back end, and the (weak) credentials used by the web application to access it. This example (which occurred in a real assessment) shows how dangerous access to some kinds of files can be.<br />
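<br />
A hedged illustration of the probing itself (host, path and extension list are placeholders): request variants of the same resource and compare how each extension is served:<br />
<pre><br />
# Compare status codes and Content-Type across extensions; a text/plain<br />
# answer for .inc or .old files means source code is being disclosed<br />
[root@test]# for ext in inc inc.bak old txt; do<br />
                 curl -s -o /dev/null -w "%{http_code} %{content_type} connection.$ext\n" \<br />
                      "http://www.example.com/inc/connection.$ext"<br />
             done<br />
</pre><br />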
<br />
==Gray Box testing and example==<br />
<br />
Performing gray box testing against file extension handling amounts to checking the configurations of the web server(s) / application server(s) taking part in the web application architecture, and verifying how they are instructed to serve different file extensions.<br />
If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce different behavior.<br />
<br />
<br />
<br />
==References==<br />
<br />
<br />
'''Whitepapers'''<br><br />
'''Tools'''<br><br />
<br />
Vulnerability scanners, such as Nessus and Nikto, check for the existence of well-known web directories. They may also allow the tester to download the web site structure, which is helpful when trying to determine the configuration of web directories and how individual file extensions are served. Other tools that can be used for this purpose include wget (http://www.gnu.org/software/wget/) and curl (http://curl.haxx.se), or google for “web mirroring tools”.<br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}</div>Lkhttps://wiki.owasp.org/index.php?title=Testing_for_Session_Management_Schema_(OTG-SESS-001)&diff=11914Testing for Session Management Schema (OTG-SESS-001)2006-11-06T19:46:44Z<p>Lk: Brief Summary added</p>
<hr />
<div>{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br><br />
In order to avoid continuous authentication for each page of a website or service, web applications implement various mechanisms to store and validate credentials for a pre-determined timespan.<br><br><br />
These mechanisms are known as Session Management, and while they are most important for increasing the ease of use and user-friendliness of the application, they can be exploited by a pentester to gain access to a user account without the need to provide correct credentials.<br />
<br><br />
<br />
== Description of the Issue == <br />
<br><br />
The session management schema should be considered alongside the authentication and authorization schema, and cover at least the questions below from a non technical point of view:<br />
* Will the application be accessed from shared systems? e.g. Internet Café <br><br />
* Is application security of prime concern to the visiting client/customer? <br><br />
* How many concurrent sessions may a user have? <br><br />
* How long is the inactive timeout on the application?<br> <br />
* How long is the active timeout? <br><br />
* Are sessions transferable from one source IP to another? <br><br />
* Is ‘remember my username’ functionality provided? <br><br />
* Is ‘automatic login’ functionality provided? <br><br />
Having identified the schema in place, the application and its logic must be examined to confirm proper implementation of the schema.<br />
This phase of testing is intrinsically linked with general application security testing. Whilst the first Schema questions (is the schema suitable for the site and does the schema meet the application provider’s requirements?) can be analyzed in abstract, the final question (Does the site implement the specified schema?) must be considered alongside other technical testing. <br><br />
<br />
The identified schema should be analyzed against best practice within the context of the site during our penetration test.<br />
Where the defined schema deviates from security best practice, the associated risks should be identified and described within the context of the environment. Security risks and issues should be detailed and quantified, but ultimately, the application provider must make decisions based on the security and usability of the application.<br />
For example, if it is determined that the site has been designed without inactive session timeouts the application provider should be advised about risks such as replay attacks, long-term attacks based on stolen or compromised Session IDs and abuse of a shared terminal where the application wasn’t logged out. They must then consider these against other requirements such as convenience of use for clients and disruption of the application by forced re-authentication.<br />
<br><br />
''' Session Management Implementation'''<br><br />
In this Chapter we describe how to analyze a Session Schema and how to test it. Technical security testing of Session Management implementation covers two key areas:<br />
* Integrity of Session ID creation<br />
* Secure management of active sessions and Session IDs<br />
The Session ID should be sufficiently unpredictable and abstracted from any private information, and the Session management should be logically secured to prevent any manipulation or circumvention of application security.<br />
These two key areas are interdependent, but should be considered separately for a number of reasons.<br />
Firstly, the choice of underlying technology to provide the sessions is bewildering and can already include a large number of OTS products and an almost unlimited number of bespoke or proprietary implementations. Whilst the same technical analysis must be performed on each, established vendor solutions may require a slightly different testing approach and existing security research may exist on the implementation.<br />
Secondly, even an unpredictable and abstract Session ID may be rendered completely ineffectual should the Session management be flawed. Similarly, a strong and secure session management implementation may be undermined by a poor Session ID implementation.<br />
Furthermore, the analyst should closely examine how (and if) the application uses the available Session management. It is not uncommon to see Microsoft IIS server ASP Session IDs passed religiously back and forth during interaction with an application, only to discover that these are not used by the application logic at all. It is therefore not correct to say that, because an application is built on a ‘proven secure’ platform, its Session Management is automatically secure.<br />
<br />
<br />
== Black Box testing and example ==<br />
<br />
''' Session Analysis'''<br><br />
<br />
The Session Tokens (Cookie, SessionID or Hidden Field) themselves should be examined to ensure their quality from a security perspective. They should be tested against criteria such as their randomness, uniqueness, resistance to statistical and cryptographic analysis and information leakage.<br><br />
* Token Structure & Information Leakage<br />
The first stage is to examine the structure and content of a Session ID provided by the application. A common mistake is to include specific data in the Token instead of issuing a generic value and referencing real data at the server side.<br />
If the Session ID is clear-text, the structure and pertinent data may be immediately obvious as in Figure 1.<br />
<pre><br />
192.168.100.1:owaspuser:password:15:58<br />
</pre><br />
Figure 1<br><br />
<br />
If part or all of the Token appears to be encoded or hashed, it should be compared against the output of various common encoding techniques to check for obvious obfuscation.<br />
For example the string “192.168.100.1:owaspuser:password:15:58” is represented in Hex, Base64 and as an MD5 hash in Figure 2.<br />
<pre><br />
Hex 3139322E3136382E3130302E313A6F77617370757365723A70617373776F72643A31353A3538<br />
Base64 MTkyLjE2OC4xMDAuMTpvd2FzcHVzZXI6cGFzc3dvcmQ6MTU6NTg=<br />
MD5 01c2fc4f0a817afd8366689bd29dd40a<br />
</pre><br />
Figure 2 <br><br />
Having identified the type of obfuscation, it may be possible to decode back to the original data. In most cases, however, this is unlikely. Even so, enumerating the encoding in place from the format of the message may still be useful. Furthermore, if both the format and obfuscation technique can be deduced, automated brute force attacks could be devised.<br />
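These candidate encodings are easy to test for. The following commands (a verification sketch using standard Unix tools, not part of the original analysis) reproduce the three representations of Figure 2:<br />
<pre><br />
[root@test]# echo -n "192.168.100.1:owaspuser:password:15:58" | xxd -p    # hex<br />
[root@test]# echo -n "192.168.100.1:owaspuser:password:15:58" | base64<br />
[root@test]# echo -n "192.168.100.1:owaspuser:password:15:58" | md5sum<br />
</pre><br />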
Hybrid Tokens may include information such as IP address or User ID together with an encoded portion, as in Figure 3.<br />
<pre><br />
owaspuser:192.168.100.1: a7656fafe94dae72b1e1487670148412<br />
</pre><br />
Figure 3 <br><br />
Having analysed a single Session Token, a representative sample should then be examined.<br />
A simple analysis of the Tokens should immediately reveal any obvious patterns. For example, a 32 bit Token may include 16 bits of static data and 16 bits of variable data. This may indicate that the first 16 bits represent a fixed attribute of the user – e.g. the username or IP address.<br />
If the second 16 bit chunk is incrementing at a regular rate, it may indicate a sequential or even time-based element to the Token generation. See Examples.<br />
If static elements to the Tokens are identified, further samples should be gathered varying one potential input element at a time. For example, login attempts through a different user account or from a different IP address may yield a variance in the previously static portion of the Session Token.<br />
The following areas should be addressed during the single and multiple Session ID structure testing:<br />
* What parts of the Session ID are static? <br />
* What clear-text proprietary information is stored in the Session ID? <br />
e.g. usernames/UID, IP addresses <br />
* What easily decoded proprietary information is stored? <br />
* What information can be deduced from the structure of the Session ID? <br />
* What portions of the Session ID are static for the same login conditions? <br />
* What obvious patterns are present in the Session ID as a whole, or individual portions? <br />
<br />
'''Session ID Predictability & Randomness'''<br><br />
Analysis of the variable areas (if any) of the Session ID should be undertaken to establish if there are any recognizable or predictable patterns.<br />
These analyses may be performed manually, or with bespoke or OTS statistical or cryptanalytic tools, in order to deduce any patterns in the Session ID content.<br />
Manual checks should include comparisons of Session IDs issued for the same login conditions – e.g. the same username, password and IP address. Time is an important factor which must also be controlled. High numbers of simultaneous connections should be made in order to gather samples in the same time window and keep that variable constant. Even a quantization of 50ms or less may be too coarse and a sample taken in this way may reveal time-based components that would otherwise be missed.<br />
Variable elements should be analyzed over time to determine whether they are incremental in nature. Where they are incremental, patterns relating to absolute or elapsed time should be investigated. Many systems use time as a seed for their pseudo random elements.<br />
Where the patterns are seemingly random, one-way hashes of time or of other environmental variations should be considered as a possibility. Typically, the result of a cryptographic hash is a decimal or hexadecimal number, so it should be identifiable.<br />
In analyzing Session ID sequences, patterns or cycles, static elements and client dependencies should all be considered as possible contributing elements to the structure and function of the application.<br />
* Are the Session IDs provably random in nature? e.g. Can the result be reproduced? <br />
* Do the same input conditions produce the same ID on a subsequent run? <br />
* Are the Session IDs provably resistant to statistical or cryptanalysis? <br />
* What elements of the Session IDs are time-linked? <br />
* What portions of the Session IDs are predictable? <br />
* Can the next ID be deduced even given full knowledge of the generation algorithm and previous IDs? <br />
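Returning to the sampling step described above, a minimal collection sketch (URL and endpoint are assumptions) simply harvests a burst of fresh session identifiers for offline analysis:<br />
<pre><br />
# Grab 20 fresh session cookies in quick succession for randomness analysis<br />
[root@test]# for i in $(seq 1 20); do<br />
                 curl -s -I http://www.example.com/login.jsp | grep -i '^Set-Cookie:'<br />
             done<br />
</pre><br />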
<br />
'''Brute Force Attacks'''<br><br />
Brute force attacks inevitably lead on from questions relating to predictability and randomness.<br />
The variance within the Session IDs must be considered together with application session durations and timeouts. If the variation within the Session IDs is relatively small, and Session ID validity is long, the likelihood of a successful brute-force attack is much higher.<br />
A long session ID (or rather one with a great deal of variance) and a shorter validity period would make it far harder to succeed in a brute force attack.<br />
* How long would a brute-force attack on all possible Session IDs take? <br />
* Is the Session ID space large enough to prevent brute forcing? e.g. is the length of the key sufficient when compared to the valid life-span <br />
* Do delays between connection attempts with different Session IDs mitigate the risk of this attack? <br />
<br />
<br />
'''Testing for Topic X vulnerabilities:''' <br><br />
...<br><br />
'''Result Expected:'''<br><br />
...<br><br><br />
== Gray Box testing and example == <br />
'''Testing for Topic X vulnerabilities:'''<br><br />
...<br><br />
'''Result Expected:'''<br><br />
...<br><br><br />
== References ==<br />
'''Whitepapers'''<br><br />
...<br><br />
'''Tools'''<br><br />
...<br><br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}<br />
[[OWASP Testing Guide v2 Table of Contents]]<br />
{{Template:Stub}}</div>Lkhttps://wiki.owasp.org/index.php?title=Test_Network/Infrastructure_Configuration_(OTG-CONFIG-001)&diff=11912Test Network/Infrastructure Configuration (OTG-CONFIG-001)2006-11-06T19:39:53Z<p>Lk: Brief Summary added</p>
<hr />
<div>[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br><br />
The intrinsic complexity of an interconnected and heterogeneous web server infrastructure, which can include hundreds of web applications, makes configuration management and review a fundamental step in testing and deploying every single application.<br><br><br />
In fact, it takes only a single small vulnerability to undermine the security of the entire infrastructure, and even small and almost unimportant problems may evolve into severe risks for another application on the same server.<br><br><br />
In order to address these problems, it is of the utmost importance to perform an in-depth review of configuration and known security issues.<br />
<br><br />
<br />
== Description of the Issue == <br />
<br />
Proper configuration management of the web server infrastructure is very important in order to preserve the security of the application itself. If elements such as the web server software, the back-end database servers, or the authentication servers are not properly reviewed and secured, they might introduce undesired risks or new vulnerabilities that might compromise the application itself.<br />
<br />
For example, a web server vulnerability that would allow a remote attacker to disclose the source code of the application itself (a vulnerability that has arisen a number of times in both web servers and application servers) could compromise the application, as anonymous users could use the information disclosed in the source code to leverage attacks against the application or its users.<br />
<br />
In order to test the configuration management infrastructure the following steps need to be taken:<br />
<br />
* the different elements that make up the infrastructure need to be determined in order to understand how they interact with the web application and how they affect its security<br />
* all the elements of the infrastructure need to be reviewed in order to make sure that they don’t hold any known vulnerabilities<br />
* a review needs to be done of the administrative tools used to maintain all the different elements<br />
* the authentication systems, if any, need to reviewed in order to assure that they serve the needs of the application and that they cannot be manipulated to leverage access by external users.<br />
* A list of defined ports which are required for the application should be maintained and kept under change control.<br />
<br />
==Review of the application architecture==<br />
<br />
The application architecture needs to be reviewed through the test to determine what different components are used to build the web application. In small setups, such as a simple CGI-based application, a single server might be used that runs the web server, which executes the C, Perl, or shell CGI application, with authentication possibly also based on the web server's own authentication mechanisms. In more complex setups, such as an online banking system, multiple servers might be involved, including a reverse proxy, a front-end web server, an application server, and a database or LDAP server. Each of these servers will be used for different purposes and might even be divided into different networks, with firewalling devices between them creating different DMZs. In this way, access to the web server will not grant a remote user access to the authentication mechanism itself, and compromises of the different elements of the architecture can be isolated so that they will not compromise the whole architecture.<br />
<br />
Getting knowledge of the application architecture can be easy if this information is provided to the testing team by the application developers, in document form or through interviews, but can prove very difficult to determine when doing a blind penetration test.<br />
<br />
In the latter case, a tester will first start with the assumption that there is a simple setup (a single server) and will, through the information retrieved from other tests, derive the different elements, question this assumption, and extend the architecture accordingly. The tester will start by asking simple questions such as: “Is there a firewalling system protecting the web server?”, which will be answered based on the results of network scans targeted at the web server and the analysis of whether the network ports of the web server are being filtered at the network edge (no answer or ICMP unreachables are received) or if the server is directly connected to the Internet (i.e. it returns RST packets for all non-listening ports). This analysis can be enhanced in order to determine the type of firewall system used, based on network packet tests: is it a stateful firewall or is it an access list filter on a router? How is it configured? Can it be bypassed? <br />
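A minimal sketch of such a scan (hypothetical target, output abridged): a port reported as ''closed'' means the host answered with a RST, suggesting a direct connection, while ''filtered'' means no answer or an ICMP unreachable came back, suggesting a filtering device in the path.<br />
<pre><br />
$ nmap -sS -p 22,25,80,443 www.example.com<br />
<br />
PORT    STATE    SERVICE<br />
22/tcp  filtered ssh<br />
25/tcp  filtered smtp<br />
80/tcp  open     http<br />
443/tcp open     https<br />
</pre><br />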
<br />
Detecting a reverse proxy in front of the web server needs to be done through analysis of the web server banner, which might directly disclose the existence of a reverse proxy (for example, if ‘WebSEAL’[1] is returned). It can also be determined by obtaining the answers of the web server to requests and comparing them to the expected answers. For example, some reverse proxies act as “intrusion prevention systems” (or web-shields) by blocking known attacks targeted at the web server. If the web server is known to answer with a 404 message to a request which targets an unavailable page, but returns a different error message for some common web attacks like those done by CGI scanners, it might be an indication of a reverse proxy (or an application-level firewall) which is filtering the requests and returning a different error page than the one expected. Another example: if the web server returns a set of available HTTP methods (including TRACE), but the expected methods return errors, then there probably is something in between blocking them. And, in some cases, even the protection system gives itself away:<br />
<br />
<pre><br />
GET /web-console/ServerInfo.jsp%00 HTTP/1.0<br />
<br />
HTTP/1.0 200<br />
Pragma: no-cache<br />
Cache-Control: no-cache<br />
Content-Type: text/html<br />
Content-Length: 83<br />
<br />
<TITLE>Error</TITLE><br />
<BODY><br />
<H1>Error</H1><br />
FW-1 at XXXXXX: Access denied.</BODY><br />
Example of the security server of Check Point Firewall-1 NG AI “protecting” a web server<br />
</pre><br />
<br />
Reverse proxies can also be introduced as proxy-caches to accelerate the performance of back-end application servers. Detecting these proxies can be done based, again, on the server header, or by timing requests that should be cached by the server and comparing the time taken to serve the first request with that of subsequent requests.<br />
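One way to take such timings is a probe like the following (a sketch; curl and a hypothetical static resource are assumed). A markedly faster repeated fetch may indicate a caching proxy, although network jitter makes a single pair of measurements inconclusive.<br />
<pre><br />
$ curl -s -o /dev/null -w '%{time_total}\n' http://www.example.com/images/logo.png<br />
0.412<br />
$ curl -s -o /dev/null -w '%{time_total}\n' http://www.example.com/images/logo.png<br />
0.087<br />
</pre><br />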
<br />
Other elements that can be detected are network load balancers. Typically, these systems will balance a given TCP/IP port to multiple servers based on different algorithms (round-robin, web server load, number of requests, etc.). Thus, the detection of these architecture elements needs to be done based on multiple requests, comparing results in order to determine if the requests are going to the same or different web servers, for example based on the Date: header if the server clocks are not synchronised. In some cases the network load balancer might inject new information in the headers that will make it stand out distinctively, like the AlteonP cookie introduced by Nortel’s Alteon WebSystems load balancer.<br />
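A quick check along these lines (hypothetical target, illustrative output): repeat a request and compare the Date: header and any injected cookies. Date values that move backwards between consecutive responses suggest unsynchronised clocks on balanced servers.<br />
<pre><br />
$ for i in 1 2 3; do curl -sI http://www.example.com/ | egrep -i '^(Date|Set-Cookie):'; done<br />
Date: Mon, 06 Nov 2006 17:10:02 GMT<br />
Set-Cookie: AlteonP=f2b0...; path=/<br />
Date: Mon, 06 Nov 2006 17:09:47 GMT<br />
Set-Cookie: AlteonP=a81c...; path=/<br />
...<br />
</pre><br />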
<br />
Application web servers are usually easy to detect. Sometimes the request for several resources is handled by the application server itself rather than the web server, and the response headers will vary significantly (including different or additional values). Another way to detect these is to check whether the web server tries to set cookies which are indicative of an application web server being used (such as the JSESSIONID provided by some J2EE servers), or whether it rewrites URLs automatically to do session tracking.<br />
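For example, a single HEAD request may already give the application server away (hypothetical host; the JSESSIONID cookie is typical of J2EE servlet containers):<br />
<pre><br />
$ curl -sI http://www.example.com/ | grep -i '^Set-Cookie:'<br />
Set-Cookie: JSESSIONID=7FA2C3B1D4...; Path=/<br />
</pre><br />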
<br />
Authentication backends (such as LDAP directories, relational databases, or RADIUS servers), however, are not as easy to detect immediately from an external point of view, since they will be hidden by the application itself.<br />
<br />
The use of a database backend can be determined simply by navigating the application. If there is highly dynamic content generated “on the fly”, it is probably being extracted from some sort of database by the application itself. Sometimes even the way information is requested might give insight into the existence of a database back-end; for example, an online shopping application that uses numeric identifiers (‘id’) when browsing the different articles in the shop. However, when doing a blind application test, knowledge of the underlying database is usually only available when some vulnerability surfaces in the application, such as a SQL injection, which indicates that the application is actually talking to a database (or the vulnerability would not be possible otherwise). <br />
<br />
==Known server vulnerabilities==<br />
<br />
Vulnerabilities found in the different elements that make up the application architecture, be it the web server itself or the database backend, can severely compromise the application itself, in some cases even more than a vulnerability found in the application. For example, consider a server vulnerability that allows a remote, unauthenticated user to upload files to the web server, or even to replace existing files. This vulnerability would compromise the application, since a rogue user would be able to replace the application itself or introduce code that would affect the back-end servers, as the rogue application code would be run just like any other application.<br />
<br />
Reviewing server vulnerabilities can be hard to do if the test needs to be done through a blind penetration test. In these cases, vulnerabilities need to be tested from a remote site, typically using an automated tool. However, testing for some vulnerabilities can have unpredictable results on the web server, and testing for others (like those directly involved in denial of service attacks) might not be possible due to the service downtime involved if the test were successful. Also, some automated tools will flag vulnerabilities based on the web server version retrieved. This leads to both false positives and false negatives: on one hand, if the web server version has been removed or obscured by the local site administrator, the scan tool will not flag the server as vulnerable even if it is; on the other hand, if the vendor providing the software does not update the web server version when vulnerabilities are fixed, the scan tool will flag vulnerabilities that do not exist. The latter case is actually very common, since some operating system vendors backport patches of security vulnerabilities to the software they provide in the operating system but do not do a full upgrade to the latest software version. This happens, for example, in most GNU/Linux distributions such as Debian, Red Hat or SuSE. In most cases, vulnerability scanning of an application architecture will only find vulnerabilities associated with the “exposed” elements of the architecture (such as the web server) and will usually be unable to find vulnerabilities associated with elements which are not directly exposed, such as the authentication backends, the database backends, or even reverse proxies in use.<br />
<br />
Finally, not all software vendors disclose vulnerability information publicly; in some cases, information on the vulnerabilities present in their different releases is not published in vulnerability databases[2] but is only disclosed to customers, or published through fixes that do not have accompanying advisories. This reduces the usefulness of vulnerability scanning tools. Typically, vulnerability coverage of these tools will be very good for common products (such as the Apache web server, Microsoft’s Internet Information Server, or IBM’s Lotus Domino) but will be lacking for lesser known products.<br />
<br />
This is why reviewing vulnerabilities is best done when the tester is provided with internal information on the software used, including the versions and releases in use and the patches applied to the software. With this information in hand, the tester can retrieve the information from the vendor itself and analyse what vulnerabilities might be present in the architecture and how they can affect the application itself. When possible, all these vulnerabilities can be tested in order to determine their real effects and to detect whether there might be any external elements (such as intrusion detection or prevention systems) that might reduce or negate the possibility of exploiting them. Testers might even determine, through a configuration review, that a vulnerability is not even present, since it affects a software component that is not in use.<br />
<br />
It is also worth noting that vendors will sometimes silently fix vulnerabilities and make the fixes available in new software releases. Different vendors will also have different release cycles that determine the support they might provide for older releases. A tester with detailed information on the software versions used by the architecture can analyse the risk associated with the use of old software releases that might be unsupported in the short term or are already unsupported. This is critical, since if a vulnerability were to surface in an old software version that is no longer supported, the systems personnel might not be directly aware of it: no patches will ever be made available for it, and advisories might not list that version as vulnerable (as it is unsupported). Even if they are aware that the vulnerability is present and the system is indeed vulnerable, they will need to do a full upgrade to a new software release, which might introduce significant downtime in the application architecture or might force the application to be recoded due to incompatibilities with the latest software version.<br />
<br />
==Administrative tools==<br />
<br />
Any web server infrastructure requires the existence of administrative tools to maintain and update the information used by the application: static content (web pages, graphic files), application source code, user authentication databases, etc. Depending on the site, technology, or software used, administrative tools will differ. For example, some web servers will be managed using administrative interfaces which are, themselves, web servers (such as the iPlanet web server), or will be administered through plain text configuration files (in the Apache case[3]), or via operating-system GUI tools (when using Microsoft’s IIS server or ASP.NET). In most cases, however, the server configuration will be handled using different tools than those used for the maintenance of the files served by the web server, which are managed through FTP servers, WebDAV, network file systems (NFS, CIFS) or other mechanisms. Obviously, the operating systems of the elements that make up the application architecture will also be managed using other tools. Applications may also have administrative interfaces embedded in them that are used to manage the application data itself (users, content, etc.).<br />
<br />
Review of the administrative interfaces used to manage the different parts of the architecture is very important, since if a user gains access to any of them he can compromise or damage the application architecture. Thus, it is important to:<br />
<br />
* list all the possible administrative interfaces.<br />
* determine if administrative interfaces are available only from an internal network or are also available from the Internet.<br />
* if available from the Internet, determine the access control methods used to access these interfaces and whether they are susceptible to attacks.<br />
<br />
Some sites do not fully manage their web server applications directly; they might have other companies manage the content provided by the web server application. These external companies might either provide only parts of the content (news updates or promotions) or might manage the web server completely, including content and code. It is common to find administrative interfaces available from the Internet in these situations, since using the Internet (as the web servers are directly connected to it anyway) is cheaper than providing a dedicated line connecting the external company to the application infrastructure through a management-only interface. In this situation, it is very important to test whether the administrative interfaces are vulnerable to attacks. <br />
<br />
==Authentication back-ends==<br />
<br />
Many applications rely heavily on the authentication methods implemented to provide information only to the authorised user and no other user. In some cases, like in a merchant shop, the information might be the same (the history of items bought in the shop and the user profile) but it should only be viewed by the legitimate user. In other cases, like an internal human resources application, different users will have different roles that determine what actions or functionality is available to them in the application.<br />
<br />
It is important to review and test the security of the authentication back-end to determine that the information it stores cannot be recovered by any means. This means ensuring that the authentication information is stored in encrypted form, especially the passwords, if any, used by users to access the application[4]. Of course, backups of the authentication system should also be kept encrypted to prevent disclosure of this sensitive information in the event of loss.<br />
<br />
Open questions for this review include:<br />
* Are users’ application privileges reviewed?<br />
* Are default users reviewed?<br />
* Do admin-level and user-level access use the same authentication back-end?<br />
<br />
==Notes==<br />
* [1] WebSEAL, also known as Tivoli Access Manager, is a reverse proxy from IBM which is part of the Tivoli framework.<br />
* [2] Such as Symantec’s Bugtraq, ISS’ X-Force, or NIST’s National Vulnerability Database (NVD).<br />
* [3] There are some GUI-based administration tools for Apache (like NetLoony), but they are not in widespread use yet.<br />
* [4] The use of database back-ends for authentication purposes, with user tables that include the password that grants access to users in plain text, is very common.<br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}</div>Lkhttps://wiki.owasp.org/index.php?title=Testing_for_Bypassing_Authentication_Schema_(OTG-AUTHN-004)&diff=11910Testing for Bypassing Authentication Schema (OTG-AUTHN-004)2006-11-06T19:30:38Z<p>Lk: Brief Summary added</p>
<hr />
<div>{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br><br />
While most applications require authentication to gain access to private information or to execute tasks, not every authentication method is able to provide an adequate level of security.<br><br><br />
Negligence, ignorance, or simple understatement of security threats often result in authentication schemes that can be easily bypassed by simply skipping the login page and directly calling an internal page that is supposed to be accessed only after authentication has been performed.<br><br><br />
In addition, it is often possible to bypass compulsory authentication by tampering with requests and tricking the application into thinking that we're already authenticated, either by modifying a given URL parameter, by manipulating forms, or by counterfeiting sessions.<br />
<br><br />
<br />
== Description of the Issue == <br />
<br><br />
...here: Short Description of the Issue: Topic and Explanation<br />
<br><br />
== Black Box testing and example ==<br />
Methods for bypassing the authentication schema include:<br />
<br />
* Direct page request<br />
<br />
In some cases, the web application requests authentication only when the home page is accessed; if a resource is requested directly by calling its URL, the authentication schema can be bypassed.<br />
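A minimal sketch of this test (URLs are hypothetical): request an internal page directly, without passing through the login page, and check whether it is served anyway.<br />
<pre><br />
# The login page lives at http://www.example.com/login.jsp; fetch an<br />
# internal page directly and inspect the status line (a 200 here means<br />
# the check on authentication state is missing or broken):<br />
$ curl -si http://www.example.com/private/orders.jsp | head -1<br />
HTTP/1.1 200 OK<br />
</pre><br />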
<br />
* Parameter Modification<br />
In some cases, authentication is based on the values of certain parameters, so it is sufficient to modify those values to bypass the authentication schema.<br />
<br />
For example, /webapps/login?validUser=yes&isAuthenticated=yes can be manually entered into the browser in an attempt to bypass the application server's authentication mechanism.<br />
<br />
* Session Issue<br />
** Session ID Prediction<br />
** Session Fixation<br />
<br />
* SQL Injection (HTML Form Authentication), illustrated in the sketch below<br />
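A classic illustration of the SQL injection case (a minimal sketch; the endpoint, field names, and the server-side query shown in the comments are assumptions): a crafted username makes the WHERE clause always true, so the attacker is logged in without valid credentials.<br />
<pre><br />
# Assumed vulnerable server-side query (hypothetical):<br />
#   SELECT * FROM users WHERE user='$user' AND pass='$pass'<br />
# Submitting the classic payload as the username turns the WHERE clause<br />
# always true:<br />
$ curl -s --data "username=' OR '1'='1' --&password=x" http://www.example.com/login<br />
</pre><br />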
<br />
<br><br />
<br />
== Gray Box testing and example == <br />
'''Testing for Topic X vulnerabilities:'''<br><br />
...<br><br />
'''Result Expected:'''<br><br />
...<br><br><br />
== References ==<br />
'''Whitepapers'''<br><br />
...<br><br />
'''Tools'''<br><br />
...<br><br />
<br />
{{Category:OWASP Testing Project AoC}}<br />
[[OWASP Testing Guide v2 Table of Contents]]<br />
{{Template:Stub}}</div>Lkhttps://wiki.owasp.org/index.php?title=Testing_for_Default_or_Guessable_User_Account_(OWASP-AT-003)&diff=11906Testing for Default or Guessable User Account (OWASP-AT-003)2006-11-06T19:22:59Z<p>Lk: Brief Summary added</p>
<hr />
<div>{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br><br />
Today's web application scenario is often populated by common software (be it open source or commercial) that is installed on web servers and then configured or customized. In addition, most of today's hardware appliances offer web-based configuration or administrative interfaces.<br><br><br />
In this pre-configured application and appliance scenario, it is easy to encounter administrative software, interfaces, and/or websites that use default credentials for logging in.<br><br />
These default username/password pairs are widely known by pentesters and malicious users, who can use them as a powerful means to gain access to internal infrastructure and/or to gain privileges and steal data.<br><br />
The same problem applies to software and/or appliances that ship with built-in, non-removable accounts and, in fewer cases, use blank passwords as default credentials.<br />
<br><br />
== Description of the Issue == <br />
<br><br />
The source of this problem is often inexperienced IT personnel, who are unaware of the importance of changing default passwords on installed infrastructure components; programmers who leave backdoors so they can easily access and test the application and later forget to remove them; application administrators and users who choose easy usernames and passwords for themselves; and applications with built-in, non-removable default accounts with a preset username and password. Another problem is blank passwords, which are simply a result of security unawareness and a willingness to simplify things.<br />
<br><br />
== Black Box testing and example ==<br />
In black box testing we know nothing about the application, its underlying infrastructure, and any username and/or password policies. Often this is not the case and some information about the application is provided; simply skip the steps that refer to obtaining information you already have.<br />
<br />
When testing a known application interface, such as a Cisco router web interface, or Weblogic admin access, check the known usernames and passwords for these devices. This can be done either by Google, or using one of the references in the Further Reading section.<br />
<br />
When facing a homegrown application, for which we do not have a list of default and common user accounts, we need to test it manually, following these guidelines (a scripted sketch of this check follows the list):<br />
* Try the following usernames - "admin", "administrator", "root", "system", or "super". These are popular among system administrators and are often used. Additionally you could try "qa", "test", "test1", "testing", and similar names. Attempt any combination of the above in both the username and the password fields. If the application is vulnerable to username enumeration, and you successfully managed to identify any of the above usernames, attempt passwords in a similar manner.<br />
* Application administrative users are often named after the application. This means if you are testing an application named "Obscurity", try using obscurity/obscurity as the username and password.<br />
* When performing a test for a customer, attempt using names of contacts you have received as usernames.<br />
* Attempt using all the above usernames with blank passwords.<br />
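The manual checks above can be partly scripted. A minimal sketch (the login endpoint and form field names are hypothetical; adapt them to the application under test):<br />
<pre><br />
#!/bin/sh<br />
# Try a short list of common default credentials against a POST login form<br />
# and report the HTTP status code returned for each attempt.<br />
for cred in admin:admin admin:password root:root test:test system:system; do<br />
  u=${cred%%:*}; p=${cred#*:}<br />
  code=$(curl -s -o /dev/null -w '%{http_code}' \<br />
    --data "username=$u&password=$p" http://www.example.com/login)<br />
  echo "$u / $p -> HTTP $code"<br />
done<br />
</pre><br />
Differences in status code, redirect target, or response length between attempts can hint at valid credentials, though each hit should be confirmed manually.<br />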
<br />
'''Result Expected:'''<br><br />
...<br><br><br />
== Gray Box testing and example == <br />
The steps described next rely on an entirely White Box approach. If only some of the information is available to you, refer to black box testing to fill the gaps.<br />
<br />
Talk to the IT personnel to determine which passwords they use for administrative access. <br />
<br />
Check whether these usernames and passwords are complex, difficult to guess, and not related to the application name, person name, or administrative names ("system"). <br />
Note blank passwords.<br />
Check in the user database for default names, application names, and easily guessed names as described in the Black Box testing section. Check for empty password fields.<br />
<br />
Examine the code for hard coded usernames and passwords.<br />
'''Result Expected:'''<br><br />
...<br><br><br />
== References ==<br />
'''Whitepapers'''<br><br />
* http://www.cirt.net/cgi-bin/passwd.pl<br />
* http://phenoelit.darklab.org/cgi-bin/display.pl?SUBF=list&SORT=1<br />
* http://www.governmentsecurity.org/articles/DefaultLoginsandPasswordsforNetworkedDevices.php<br />
* http://www.virus.org/default-password/<br />
'''Tools'''<br><br />
...<br><br />
<br />
{{Category:OWASP Testing Project AoC}}</div>Lkhttps://wiki.owasp.org/index.php?title=Test_File_Extensions_Handling_for_Sensitive_Information_(OTG-CONFIG-003)&diff=11891Test File Extensions Handling for Sensitive Information (OTG-CONFIG-003)2006-11-06T17:12:30Z<p>Lk: Brief Summary added</p>
<hr />
<div>[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
...To review and expand...<br />
<br />
== Brief Summary ==<br />
<br />
File extensions are commonly used in web servers to easily determine which technologies / languages / plugins must be used to fulfill the web request.<br><br><br />
While this behavior is consistent with RFCs and web standards, using standard file extensions provides the pentester with useful information about the underlying technologies used in a web appliance and greatly simplifies the task of determining the attack scenario to be used against particular technologies.<br><br><br />
In addition, misconfigurations in web servers could easily reveal confidential information such as access credentials.<br />
<br />
==Description of the Issue==<br />
<br />
Determining how web servers handle requests corresponding to files having different extensions may help to understand web server behavior depending on the kind of files we try to access. For example, it can help understand which file extensions are returned as text/plain versus those which cause execution on the server side. The latter are indicative of technologies / languages / plugins which are used by web servers or application servers, and may provide additional insight on how the web application is engineered. For example, a “.pl” extension is usually associated with server-side Perl support (though the file extension alone may be deceptive and not fully conclusive; for example, Perl server-side resources might be renamed to conceal the fact that they are indeed Perl related). See also next section on “web server components” for more on identifying server side technologies and components.<br />
<br />
<br />
==Black Box testing and example==<br />
<br />
Submit http[s] requests involving different file extensions and verify how they are handled. These verifications should be done on a per-web-directory basis. Verify directories which allow script execution. Web server directories can be identified by vulnerability scanners, which look for the presence of well-known directories. In addition, mirroring the web site structure allows the tester to reconstruct the tree of web directories served by the application.<br />
If the web application architecture is load-balanced, it is important to assess all of the web servers. This may or may not be easy, depending on the configuration of the balancing infrastructure. In a redundant infrastructure there may be slight variations in the configuration of individual web or application servers; this may happen, for example, if the web architecture employs heterogeneous technologies (think of a set of IIS and Apache web servers in a load-balancing configuration, which may introduce slight asymmetric behavior between them, and possibly different vulnerabilities).<br />
'''Example:'''<br><br />
We have identified the existence of a file named connection.inc. Trying to access it directly gives back its contents, which are:<br />
<br />
<pre><br />
<?<br />
mysql_connect("127.0.0.1", "root", "")<br />
or die("Could not connect");<br />
<br />
?><br />
</pre><br />
<br />
We determine the existence of a MySQL DBMS back end, and the (weak) credentials used by the web application to access it. This example (which occurred in a real assessment) shows how dangerous access to some kinds of files can be.<br />
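A quick way to probe extension handling by hand is to request the same base resource under several telling extensions and compare the responses (a sketch; host and file names are hypothetical):<br />
<pre><br />
for ext in asp asp.old asp.bak inc txt; do<br />
  echo "== login.$ext =="<br />
  curl -sI "http://www.example.com/login.$ext" | head -3<br />
done<br />
</pre><br />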
<br />
==Gray Box testing and example==<br />
<br />
Performing white box testing against file extensions handling amounts to checking the configurations of the web server(s) / application server(s) taking part in the web application architecture, and verifying how they are instructed to serve different file extensions.<br />
If the web application relies on a load-balanced, heterogeneous infrastructure, determine whether this may introduce different behavior.<br />
<br />
<br />
<br />
==References==<br />
<br />
<br />
'''Whitepapers'''<br><br />
'''Tools'''<br><br />
<br />
Vulnerability scanners, such as Nessus and Nikto, check for the existence of well-known web directories. They may also allow the tester to download the web site structure, which is helpful when trying to determine the configuration of web directories and how individual file extensions are served. Other tools that can be used for this purpose include wget (http://www.gnu.org/software/wget/) and curl (http://curl.haxx.se), or search the web for “web mirroring tools”.<br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}</div>Lkhttps://wiki.owasp.org/index.php?title=Test_Application_Platform_Configuration_(OTG-CONFIG-002)&diff=11890Test Application Platform Configuration (OTG-CONFIG-002)2006-11-06T17:01:43Z<p>Lk: Brief Summary added</p>
<hr />
<div>[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br />
Proper configuration of the individual elements that make up an application architecture is important in order to prevent mistakes that might compromise the security of the whole architecture.<br />
<br />
Configuration review and testing is a critical task in creating and maintaining such an architecture since many different systems will be usually provided with generic configurations which might not be suited to the task they will perform on the specific site they're installed on. <br />
<br />
While the typical web and application server installation will ship with a lot of functionality (like application examples, documentation, and test pages), whatever is not essential should be removed before deployment to avoid post-install exploitation. <br />
<br />
==Sample/known files and directories==<br />
<br />
Many web servers and application servers provide, in a default installation, sample applications and files for the benefit of the developer and in order to test that the server is working properly right after installation. However, many default web server applications have later been found to be vulnerable. This was the case, for example, for CVE-1999-0449 (Denial of Service in IIS when the Exair sample site had been installed), CAN-2002-1744 (directory traversal vulnerability in CodeBrws.asp in Microsoft IIS 5.0), CAN-2002-1630 (use of sendmail.jsp in Oracle 9iAS), and CAN-2003-1172 (directory traversal in the view-source sample in Apache’s Cocoon).<br />
<br />
CGI scanners include a detailed list of known files and directory samples that are provided by different web or application servers, and checking for these might be a fast way to determine if such files are present. However, the only way to be really sure is to do a full review of the contents of the web server and/or application server and determine whether they are related to the application itself or not.<br />
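For instance, a first pass with a CGI scanner can flag many well-known sample files and directories (Nikto is shown here as one example; the target is hypothetical and output is omitted):<br />
<pre><br />
$ nikto -host www.example.com -port 80<br />
</pre><br />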
<br />
==Comment review==<br />
<br />
It is very common, and even recommended, for programmers to include detailed comments in their source code in order to allow other programmers to better understand why a given decision was taken in coding a given function. Programmers usually do this too when developing large web-based applications. However, comments included inline in HTML code might reveal to a potential attacker internal information that should not be available to them. Sometimes even source code is commented out, since a functionality is no longer required, but this comment is leaked out to the HTML pages returned to the users unintentionally.<br />
<br />
Comment review should be done in order to determine if any information is being leaked through comments. This review can only be thoroughly done through an analysis of the web server's static and dynamic content and through file searches. It can be useful, however, to browse the site either in an automatic or guided fashion and store all the content retrieved. This retrieved content can then be searched in order to analyse the HTML comments available, if any, in the code.<br />
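A simple way to implement this is to mirror part of the site and search the retrieved files for comment markers (a sketch using wget and grep; the host and mirroring depth are illustrative):<br />
<pre><br />
$ wget -r -l 2 -q http://www.example.com/<br />
$ grep -rn '<!--' www.example.com/ | head<br />
</pre><br />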
<br />
==Configuration review==<br />
<br />
The web server or application server configuration plays an important role in protecting the contents of the site, and it must be carefully reviewed in order to spot common configuration mistakes. Obviously, the recommended configuration varies depending on the site policy and the functionality that should be provided by the server software. In most cases, however, configuration guidelines (either provided by the software vendor or external parties) should be followed in order to determine if the server has been properly secured. It is impossible to generically say how a server should be configured; however, some common guidelines should be taken into account:<br />
<br />
* Only enable server modules[1] that are needed for the application. This reduces the attack surface, since the server is reduced in size and complexity as software modules are disabled. It also prevents vulnerabilities that might appear in the vendor software from affecting the site when they are only present in modules that have already been disabled.<br />
* Handle server errors (40x or 50x) with custom-made pages instead of with the default web server pages. Specifically, make sure that any application errors will not be returned to the end user and that no code is leaked through these errors, since it will help an attacker. It is actually very common to forget this point, since developers do need this information in pre-production environments.<br />
* Make sure that the server software runs with minimised privileges in the operating system. This prevents an error in the server software from directly compromising the whole system, although an attacker who is already running code as the web server could still attempt to elevate privileges.<br />
* Make sure the server software logs properly both legitimate access and errors.<br />
* Make sure that the server is configured to properly handle overloads and prevent Denial of Service attacks. Ensure that the server has been performance tuned properly.<br />
<br />
<br />
==Logging==<br />
<br />
Logging is an important asset of the security of an application architecture, since it can be used to detect flaws in applications (users constantly trying to retrieve a file that does not really exist) as well as sustained attacks from rogue users. Logs are typically properly generated by web and other server software, but it is not so common to find applications that properly log their actions; when they do, the main intention of the application logs is to produce debugging output that can be used by the programmer to analyse a particular error.<br />
<br />
In both cases (server and application logs) several issues should be tested and analysed based on the log contents:<br />
<br />
# Do the logs contain sensitive information? <br />
# Are the logs stored in a dedicated server?<br />
# Can log usage generate a Denial of Service condition?<br />
# How are they rotated? Are logs kept for the sufficient time?<br />
# How are logs reviewed? Can administrators use these reviews to detect targeted attacks?<br />
# How are log backups preserved?<br />
# Is the data being logged validated (min/max length, chars, etc.) prior to being logged?<br />
<br />
'''''Sensitive information in logs'''''<br />
<br />
Some applications might, for example, use GET requests to forward form data, which will then be viewable in the server logs. This means that server logs might contain sensitive information (such as usernames and passwords, or bank account details). This sensitive information can be misused if the logs were to be obtained by an attacker, for example through administrative interfaces or known web server vulnerabilities or misconfigurations (like the well-known ''server-status'' misconfiguration in Apache-based HTTP servers).<br />
<br />
Also, in some jurisdictions, storing some sensitive information in log files, such as personal data, might oblige the enterprise to apply to the log files the same data protection laws that they would apply to their back-end databases. Failure to do so, even unknowingly, might carry penalties under the applicable data protection laws.<br />
<br />
==Log location==<br />
<br />
Typically, servers will generate local logs of their actions and errors, consuming disk space on the system the server is running on. However, if the server is compromised, its logs can be wiped out by the intruder to clean up all the traces of the attack and its methods. If this were to happen, the system administrator would have no knowledge of how the attack occurred or where the attack source was located. Actually, most attacker toolkits include a ''log zapper'' that is capable of cleaning up any logs that hold given information (like the IP address of the attacker), and such tools are routinely used in attackers’ system-level rootkits.<br />
<br />
Consequently, it is wiser to keep logs in a separate location and not in the web server itself. This also makes it easier to aggregate logs from different sources that refer to the same application (such as those of a web server farm) and it also makes it easier to do log analysis (which can be CPU intensive) without affecting the server itself.<br />
<br />
==Log storage==<br />
<br />
Logs can introduce a Denial of Service condition if they are not properly stored. Any attacker with sufficient resources would be able, unless detected and blocked, to produce a sufficient number of requests to fill up the space allocated to log files. However, if the server is not properly configured, the log files will be stored in the same disk partition as the one used for the operating system software or the application itself. This means that, if the disk were to be filled up, the operating system or the application might fail because they would be unable to write on disk.<br />
<br />
Typically, in UNIX systems logs will be located in /var (although some server installations might reside in /opt or /usr/local), and it is thus important to make sure that the directories in which logs are stored are in a separate partition. In some cases, and in order to prevent the system logs from being affected, the log directory of the server software itself (such as /var/log/apache in the Apache web server) should be stored in a dedicated partition.<br />
<br />
This is not to say that logs should be allowed to grow to fill up the filesystem they reside in. Growth of server logs should be monitored in order to detect this condition since it may be indicative of an attack.<br />
<br />
Testing this condition is as easy as (and as dangerous in production environments as) firing off a sufficient and sustained number of requests to see if these requests are logged and, if so, whether there is a possibility of filling up the log partition through these requests. In some environments where QUERY_STRING parameters are also logged, regardless of whether they are produced through GET or POST requests, big queries can be simulated that will fill up the logs faster, since, typically, a single request will cause only a small amount of data to be logged: date and time, source IP address, URI request, and server result.<br />
<br />
==Log rotation==<br />
<br />
Most servers (but few custom applications) will rotate logs in order to prevent them from filling up the filesystem they reside on. The assumption when rotating logs is that the information in them is only necessary for a limited amount of time.<br />
<br />
This feature should be tested in order to ensure that:<br />
<br />
* Logs are kept for the time defined in the security policy, not more and not less.<br />
* Logs are compressed once rotated (this is a convenience, since it will mean that more logs will be stored for the same available disk space)<br />
* Filesystem permissions of rotated log files are the same as (or stricter than) those of the log files themselves. For example, web servers will need to write to the logs they use, but they don’t actually need to write to rotated logs, which means that the permissions of the files can be changed upon rotation to prevent the web server process from modifying them.<br />
<br />
Some servers might rotate logs when they reach a given size. If this happens, it must be ensured that an attacker cannot force logs to rotate in order to hide their tracks.<br />
<br />
==Log review==<br />
<br />
Review of logs can be used for more than the extraction of usage statistics of files in the web servers (which is typically what most log-based applications will focus on); it can also be used to determine if attacks take place against the web server.<br />
<br />
In order to analyse web server attacks, the server's error log files need to be reviewed. The review should concentrate on the following (a scripted sketch follows the list):<br />
<br />
* 40x (not found) error messages; a large number of these from the same source might be indicative of a CGI scanner tool being used against the web server.<br />
* 50x (server error) messages; these can be an indication of an attacker abusing parts of the application which fail unexpectedly. For example, the first phases of a SQL injection attack will produce these error messages when the SQL query is not properly constructed and its execution fails on the back-end database.<br />
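A scripted sketch of such a review (assuming the Apache common/combined log format and a hypothetical log path): count 404 responses per source address; an outlier at the top of the list is worth a closer look.<br />
<pre><br />
$ awk '$9 == 404 {print $1}' /var/log/apache/access.log \<br />
    | sort | uniq -c | sort -rn | head<br />
</pre><br />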
<br />
Log statistics or analysis should not be generated, nor stored, on the same server that produces the logs. Otherwise, an attacker might, through a web server vulnerability or improper configuration, gain access to them and retrieve information similar to what would be disclosed by the log files themselves.<br />
<br />
<br />
==References==<br />
<br />
Recommended guides include<br />
<br />
* Generic:<br />
** CERT Security Improvement Modules: Securing Public Web Servers , published at http://www.cert.org/security-improvement/<br />
* Apache<br />
** Apache Security, by Ivan Ristic, O’Reilly, March 2005.<br />
** Apache Security Secrets: Revealed (Again), Mark Cox, November 2003 available at <u>http://www.awe.com/mark/apcon2003/</u><br />
** Apache Security Secrets: Revealed, ApacheCon 2002, Las Vegas, Mark J Cox, October 2002, available at http://www.awe.com/mark/apcon2002<br />
** Apache Security Configuration Document, InterSect Alliance, http://www.intersectalliance.com/projects/ApacheConfig/index.html<br />
** Performance Tuning, <u>http://httpd.apache.org/docs/misc/perf-tuning.html</u><br />
* Lotus Domino<br />
** Lotus Security Handbook, William Tworek et al., April 2004, available in the IBM Redbooks collection<br />
** Lotus Domino Security, an X-force white-paper, Internet Security Systems, December 2002<br />
** Hackproofing Lotus Domino Web Server, David Litchfield, October 2001, NGSSoftware Insight Security Research, available at www.nextgenss.com<br />
* Microsoft IIS<br />
** IIS 6.0 Security, by Rohyt Belani, Michael Muckin, available at <u>http://www.securityfocus.com/print/infocus/1765</u><br />
** Securing Your Web Server (Patterns and Practices), Microsoft Corporation, January 2004<br />
** IIS Security and Programming Countermeasures, by Jason Coombs <br />
** From Blueprint to Fortress: A Guide to Securing IIS 5.0, by John Davis, Microsoft Corporation, June 2001 <br />
** Secure Internet Information Services 5 Checklist, by Michael Howard, Microsoft Corporation, June 2000<br />
** “How To: Use IISLockdown.exe”, available at http://msdn.microsoft.com/library/en-us/secmod/html/secmod113.asp<br />
** “INFO: Using URLScan on IIS”, available at <u>http://support.microsoft.com/default.aspx?scid=307608</u>.<br />
* Red Hat’s (formerly Netscape’s) iPlanet<br />
** Guide to the Secure Configuration and Administration of iPlanet Web Server, Enterprise Edition 4.1, by James M Hayes, The Network Applications Team of the Systems and Network Attack Center (SNAC), NSA, January 2001<br />
* WebSphere<br />
** IBM WebSphere V5.0 Security, WebSphere Handbook Series, by Peter Kovari et al., IBM, December 2002.<br />
** IBM WebSphere V4.0 Advanced Edition Security, by Peter Kovari et al., IBM, March 2002.<br />
<br />
==Notes==<br />
[1] ISAPI extensions in the IIS case<br />
<br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}</div>Lkhttps://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&diff=11889Testing for SSL-TLS (OWASP-CM-001)2006-11-06T16:55:59Z<p>Lk: Brief Summary added</p>
<hr />
<div>[[http://www.owasp.org/index.php/Web_Application_Penetration_Testing_AoC Up]]<br><br />
{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
<br />
Due to historical export restrictions on high-grade cryptography, both legacy and new web servers may support weak cryptography.<br />
<br />
Even if high-grade ciphers are normally used and installed, some misconfigurations in the server installation could be exploited to force the use of a weaker cipher in order to gain access to the supposedly secure communication channel. <br />
<br />
==SSL / TLS cipher specifications and requirements for site==<br />
<br />
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.<br />
<br />
Historically, there have been limitations set in place by the U.S. government to allow crypto systems to be exported only for key sizes of at most 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.<br />
<br />
Technically, cipher determination is performed as follows. In the initial phase of a SSL connection setup, the client sends to the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (the most popular SSL client nowadays…), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which need not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org), which can be used, among other things, to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption protocol (DES, RC4, AES), the encryption key length (such as 40, 56, 128 bits) and a hash algorithm (SHA, MD5) used for integrity checking. Upon receipt of a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example by means of configuration directives) to specify which cipher suites the server will honor. In this way you may control whether, for example, to allow conversations with clients supporting only 40-bit encryption.<br />
<br />
==How to Test==<br />
<br />
==Black Box==<br />
<br />
In order to detect possible support of weak ciphers, the ports associated with SSL/TLS-wrapped services must be identified. These typically include port 443, which is the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS-wrapped services related to the web application. In general, a service discovery is required to identify such ports.<br />
<br />
The nmap scanner, via the “-sV” scan option, is able to identify SSL services. Vulnerability scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).<br />
<br />
==White Box==<br />
<br />
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.<br />
<br />
==References==<br />
<br />
==Examples==<br />
<br />
<u>Example 1</u>. SSL service recognition via nmap.<br />
<br />
<pre><br />
[root@test]# nmap -F -sV localhost<br />
<br />
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST<br />
Interesting ports on localhost.localdomain (127.0.0.1):<br />
(The 1205 ports scanned but not shown below are in state: closed)<br />
<br />
PORT STATE SERVICE VERSION<br />
443/tcp open ssl OpenSSL<br />
901/tcp open http Samba SWAT administration server<br />
8080/tcp open http Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)<br />
8081/tcp open http Apache Tomcat/Coyote JSP engine 1.0<br />
<br />
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds<br />
[root@test]# <br />
</pre><br />
<br />
<u>Example 2</u>. Identifying weak ciphers with Nessus.<br />
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).<br />
<br />
'''https (443/tcp)'''<br />
<u>Description</u><br />
Here is the SSLv2 server certificate:<br />
Certificate:<br />
Data:<br />
Version: 3 (0x2)<br />
Serial Number: 1 (0x1)<br />
Signature Algorithm: md5WithRSAEncryption<br />
Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******<br />
Validity<br />
Not Before: Oct 17 07:12:16 2002 GMT<br />
Not After : Oct 16 07:12:16 2004 GMT<br />
Subject: C=**, ST=******, L=******, O=******, CN=******<br />
Subject Public Key Info:<br />
Public Key Algorithm: rsaEncryption<br />
RSA Public Key: (1024 bit)<br />
Modulus (1024 bit):<br />
00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:<br />
6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:<br />
ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:<br />
ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:<br />
72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:<br />
09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:<br />
e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:<br />
2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:<br />
3d:0a:75:26:cf:dc:47:74:29<br />
Exponent: 65537 (0x10001)<br />
X509v3 extensions:<br />
X509v3 Basic Constraints:<br />
CA:FALSE<br />
Netscape Comment:<br />
OpenSSL Generated Certificate<br />
Page 10<br />
Network Vulnerability Assessment Report 25.05.2005<br />
X509v3 Subject Key Identifier:<br />
10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8<br />
X509v3 Authority Key Identifier:<br />
keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE<br />
DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******<br />
serial:00<br />
Signature Algorithm: md5WithRSAEncryption<br />
7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:<br />
8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:<br />
2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:<br />
b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:<br />
40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:<br />
f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:<br />
0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:<br />
82:fc<br />
Here is the list of available SSLv2 ciphers:<br />
RC4-MD5<br />
EXP-RC4-MD5<br />
RC2-CBC-MD5<br />
EXP-RC2-CBC-MD5<br />
DES-CBC-MD5<br />
DES-CBC3-MD5<br />
RC4-64-MD5<br />
<u>The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak "export class" ciphers'''.</u><br />
<u>The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack</u><br />
<u>Solution: disable those ciphers and upgrade your client software if necessary.</u><br />
See http://support.microsoft.com/default.aspx?scid=kben-us216482<br />
or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite<br />
This SSLv2 server also accepts SSLv3 connections.<br />
This SSLv2 server also accepts TLSv1 connections.<br />
<br />
Vulnerable hosts<br />
''(list of vulnerable hosts follows)''<br />
<br />
<u>Example 3</u>. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.<br />
<pre><br />
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443<br />
CONNECTED(00000003)<br />
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
verify error:num=20:unable to get local issuer certificate<br />
verify return:1<br />
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
verify error:num=27:certificate not trusted<br />
verify return:1<br />
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
verify error:num=21:unable to verify the first certificate<br />
verify return:1<br />
---<br />
Server certificate<br />
-----BEGIN CERTIFICATE-----<br />
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB<br />
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ<br />
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE<br />
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh<br />
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl<br />
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow<br />
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v<br />
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n<br />
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk<br />
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO<br />
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE<br />
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG<br />
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo<br />
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm<br />
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/<br />
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3<br />
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3<br />
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc<br />
X/dVk5WRTw==<br />
-----END CERTIFICATE-----<br />
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com<br />
---<br />
No client certificate CA names sent<br />
---<br />
Ciphers common between both SSL endpoints:<br />
RC4-MD5 EXP-RC4-MD5 RC2-CBC-MD5<br />
EXP-RC2-CBC-MD5 DES-CBC-MD5 DES-CBC3-MD5<br />
RC4-64-MD5<br />
---<br />
SSL handshake has read 1023 bytes and written 333 bytes<br />
---<br />
New, SSLv2, Cipher is DES-CBC3-MD5<br />
Server public key is 1024 bit<br />
Compression: NONE<br />
Expansion: NONE<br />
SSL-Session:<br />
Protocol : SSLv2<br />
Cipher : DES-CBC3-MD5<br />
Session-ID: 709F48E4D567C70A2E49886E4C697CDE<br />
Session-ID-ctx:<br />
Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3<br />
Key-Arg : E8CB6FEB9ECF3033<br />
Start Time: 1156977226<br />
Timeout : 300 (sec)<br />
Verify return code: 21 (unable to verify the first certificate)<br />
---<br />
closed<br />
</pre><br />
<br />
==Whitepapers==<br />
<br />
# RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546), <u>http://www.ietf.org/rfc/rfc2246.txt</u><br />
# RFC2817. Upgrading to TLS Within HTTP/1.1, <u>http://www.ietf.org/rfc/rfc2817.txt</u><br />
# RFC3546. Transport Layer Security (TLS) Extensions, <u>http://www.ietf.org/rfc/rfc3546.txt</u><br />
# <u>www.verisign.net</u> features various material on the topic<br />
<br />
==Tools==<br />
<br />
Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).<br />
<br />
You may also rely on specialized tools such as SSL Digger (http://www.foundstone.com/resources/proddesc/ssldigger.htm), or – for the command line oriented – experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (may be already available on *nix boxes, otherwise see www.openssl.org).<br />
<br />
To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the Nessus vulnerability scanner has the capability of identifying SSL-based services on arbitrary ports and of running vulnerability checks on them, regardless of whether they are configured on standard or non-standard ports.<br />
<br />
If you need to talk to an SSL service but your favorite tool doesn’t support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicating with the SSL service you need to reach.<br />
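As a minimal sketch, assuming the stunnel 4.x configuration-file syntax (host and ports are placeholders), a client-mode configuration such as the following accepts plain-text connections locally and forwards them over SSL:<br />
<br />
<pre><br />
; stunnel client-mode configuration (sketch)<br />
client = yes<br />
<br />
[https-tunnel]<br />
; non-SSL-aware tools connect here in the clear...<br />
accept = 127.0.0.1:8080<br />
; ...and stunnel talks SSL to the real service<br />
connect = www.example.com:443<br />
</pre><br />
<br />
Your tool can then be pointed at 127.0.0.1:8080 as if the remote service spoke plain text.<br />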
<br />
Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by several bugs in this area, and the way a browser performs the check might be influenced by configuration settings that are not always evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.<br />
<br />
<br />
<br />
<br />
==SSL certificate validity – client and server==<br />
<br />
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one party (the server) or of both parties (client and server) is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate-based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking whether the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate match.<br />
<br />
Let’s examine each check in more detail.<br />
<br />
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate’s signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This most often happens because the web application relies on a certificate signed by a self-established CA. Whether this is a concern depends on the context. For example, it may be fine in an intranet environment (think of corporate web email being provided via https; here, obviously, all users recognize the internal CA as trusted). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, i.e. one recognized by the entire user base (and here we stop with our considerations; we won’t delve deeper into the implications of the trust model used by digital certificates).<br />
<br />
b) Certificates have an associated period of validity; therefore, they may expire. Again, the browser warns us about this. A public service needs a temporally valid certificate; otherwise, it means we are talking to a server whose certificate was issued by someone we trust, but which has expired and has not been renewed.<br />
<br />
c) Why should the name on the certificate and the name of the server not match? If this happens, it might sound suspicious (i.e.: whom are we talking to?). For a number of reasons, this is not so rare to see. One situation which causes it is when a system hosts a number of name-based virtual hosts, i.e. virtual hosts sharing the same IP address that are identified by means of the HTTP/1.1 Host: header. In this case, since the SSL handshake – during which the client browser checks the server certificate – takes place before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signaled by the browser. To avoid this, IP-based virtual servers must be used. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.<br />
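As a command-line sketch of the three checks above (the host name is a placeholder), you can pull the server certificate with openssl and inspect exactly the fields involved – the issuer for check a), the validity dates for check b), and the subject for check c):<br />
<br />
<pre><br />
# Retrieve the server certificate and print issuer, validity period and subject<br />
$ echo | openssl s_client -connect www.example.com:443 2>/dev/null | \<br />
    openssl x509 -noout -issuer -dates -subject<br />
</pre><br />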
<br />
<br />
<br />
==How to Test==<br />
<br />
===Black Box===<br />
<br />
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs (meaning CAs unknown to the browser), or certificates whose name does not match the name of the site they refer to. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate – including the issuer, the period of validity, the encryption characteristics, etc.<br />
<br />
If the application requires a client certificate, you have probably installed one in order to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of installed certificates.<br />
<br />
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this usually means the https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (for example, an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the “-sV” command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.<br />
<br />
===White Box===<br />
<br />
Examine the validity of the certificates used by the application, at both the server and the client level. Certificates are primarily used at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL-protected channels.<br />
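When you have file-system access to the certificates themselves (a sketch; the file names are assumptions), the openssl verify command lets you validate a server certificate against the CA certificate that is supposed to have signed it:<br />
<br />
<pre><br />
# Verify that server.pem chains correctly to the CA certificate in ca.pem<br />
$ openssl verify -CAfile ca.pem server.pem<br />
server.pem: OK<br />
</pre><br />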
<br />
==References==<br />
<br />
===Examples===<br />
<br />
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles upon https sites whose certificates are inaccurate with respect to naming.<br />
<br />
The following screenshots refer to a regional site of a high-profile IT company.<br />
<br />
<u>Warning issued by Microsoft Internet Explorer.</u> We are visiting a ''.it'' site, yet the certificate was issued to a ''.com'' site! Internet Explorer warns that the name on the certificate does not match the name of the site.<br />
<br />
<br />
[[Image:SSL Certificate Validity Testing IE Warning.gif]]<br />
<br />
<br />
<u>Warning issued by Mozilla Firefox.</u> The message issued by Firefox is different – Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to, because it does not know the CA that signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.<br />
<br />
<br />
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]<br />
<br />
<br />
===Whitepapers===<br />
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546), <u>http://www.ietf.org/rfc/rfc2246.txt</u><br />
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1, <u>http://www.ietf.org/rfc/rfc2817.txt</u><br />
* [3] RFC3546. Transport Layer Security (TLS) Extensions, <u>http://www.ietf.org/rfc/rfc3546.txt</u><br />
<br />
==Tools==<br />
<br />
Vulnerability scanners may include checks regarding certificate validity, including name mismatches and expiration. They also usually report other information, such as the CA which issued the certificate. Remember, however, that there is no unified notion of a “trusted CA”: what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-established CA), you should take into account the process of configuring user browsers to recognize the CA.<br />
<br />
The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin “SSL certificate expiry”, plugin id 15901). This plugin will check certificates ''installed on the server''.<br />
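If you prefer to perform the same check by hand (a sketch; the file name is an assumption), openssl can report whether a certificate expires within a given number of seconds – 60 days being 60*86400 = 5184000 seconds:<br />
<br />
<pre><br />
# Exit status 0 means the certificate is still valid 60 days from now;<br />
# exit status 1 means it will have expired by then.<br />
$ openssl x509 -in server.pem -noout -checkend 5184000<br />
Certificate will not expire<br />
</pre><br />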
<br />
==Category==<br />
[[Category:Cryptographic Vulnerability]]<br />
[[Category:SSL]]<br />
<br />
<br />
<br />
{{Category:OWASP Testing Project AoC}}</div>Lk
https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&diff=11231
Enumerate Applications on Webserver (OTG-INFO-004)
2006-10-29T09:09:44Z
<p>Lk: Brief Summary</p>
<hr />
<div>{{Template:OWASP Testing Guide v2}}<br />
<br />
== Brief Summary ==<br />
A common step in testing a web presence for vulnerabilities is to find out which particular applications are hosted on a web server.<br/><br />
Many different applications, in fact, have known vulnerabilities and known attack strategies that can be exploited in order to gain remote control and/or access to data.<br><br />
In addition, many applications are hosted on a particular web server without being referenced from the main website: this is often true for internal and/or extranet websites, which may be misconfigured or left unpatched due to the perception that they are used only "internally".<br/><br />
Moreover, many applications use common paths for their administrative interfaces, which can be used to guess or brute-force administrative passwords.<br />
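As a minimal sketch of such probing (the host and the path list are purely hypothetical – in practice you would use a much longer wordlist), one can request a handful of common locations and watch the response codes:<br />
<br />
<pre><br />
# Request a few common application/admin paths and print the HTTP status code<br />
for path in admin phpmyadmin webmail manager; do<br />
  curl -s -o /dev/null -w "%{http_code} /$path/\n" http://www.example.com/$path/<br />
done<br />
</pre><br />
A 200 or a 401/403 response (rather than a 404) suggests that something is deployed at that path and deserves a closer look.<br />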
<br />
== Description of the Issue == <br />
<br><br />
...here: Short Description of the Issue: Topic and Explanation<br />
<br><br />
== Black Box testing and example ==<br />
'''Testing for Topic X vulnerabilities:''' <br><br />
...<br><br />
'''Result Expected:'''<br><br />
...<br><br><br />
== Gray Box testing and example == <br />
'''Testing for Topic X vulnerabilities:'''<br><br />
...<br><br />
'''Result Expected:'''<br><br />
...<br><br><br />
== References ==<br />
'''Whitepapers'''<br><br />
...<br><br />
'''Tools'''<br><br />
...<br><br />
<br />
{{Category:OWASP Testing Project AoC}}<br />
[[OWASP Testing Guide v2 Table of Contents]]<br />
{{Template:Stub}}</div>Lk