<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jeffrey+Walton</id>
		<title>OWASP - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Jeffrey+Walton"/>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php/Special:Contributions/Jeffrey_Walton"/>
		<updated>2026-04-09T09:12:18Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.2</generator>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=155299</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=155299"/>
				<updated>2013-07-10T06:30:44Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added Rule - Prefer Ephemeral Key Exchanges&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security-specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted.  The most common options available to corporations are Virtual Private Networks (VPN) or the SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client.  If configured to require client side certificates, TLS can also play a role in client authentication to the server. However, in practice client side certificates are rarely used; username and password based authentication models remain far more common for clients.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked: integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built-in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration of:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over cleartext HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man-in-the-middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP connection and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
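&lt;br /&gt;
The flag itself is carried on the Set-Cookie response header. A minimal sketch (the cookie name and value below are invented for illustration; any server-side framework exposes the same attributes on its cookie API):&lt;br /&gt;

```python
from http.cookies import SimpleCookie

# Build a session cookie with the "Secure" flag so the browser will only
# send it over TLS. "sessionid" and its value are illustrative only.
cookie = SimpleCookie()
cookie["sessionid"] = "d41d8cd98f00b204"
cookie["sessionid"]["secure"] = True    # never sent over plain HTTP
cookie["sessionid"]["httponly"] = True  # also hide it from page scripts

header_value = cookie["sessionid"].OutputString()
print("Set-Cookie:", header_value)
```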
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. Instead, store sensitive data in a server side repository or within the user's session.  When using TLS the URL arguments and values are encrypted during transit. However, there are two ways in which the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit but it does not help with potential data leakage issues at the client or intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0 the response should include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
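&lt;br /&gt;
As a sketch, the anti-caching response headers described above can be assembled as follows (the Expires header is a common extra belt-and-braces measure for legacy caches, not one mandated by this rule):&lt;br /&gt;

```python
# Response headers that ask browsers and intermediary proxies not to
# cache or persist a sensitive response.
no_cache_headers = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",  # HTTP/1.0 compatibility
    "Expires": "0",        # assumption: fallback for legacy caches
}

for name, value in no_cache_headers.items():
    print(f"{name}: {value}")
```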
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header and this instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
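&lt;br /&gt;
The policy is delivered as a single response header. A minimal sketch (the one-year max-age and the includeSubDomains directive are choices for this example, not values required by the rule):&lt;br /&gt;

```python
# Build an illustrative HSTS response header: instruct compatible
# browsers to use HTTPS for this domain for the next year.
ONE_YEAR_SECONDS = 365 * 24 * 60 * 60  # 31536000
hsts_header = (
    f"Strict-Transport-Security: max-age={ONE_YEAR_SECONDS}; includeSubDomains"
)
print(hsts_header)
```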
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prefer Ephemeral Key Exchanges ===&lt;br /&gt;
&lt;br /&gt;
Ephemeral key exchanges are based on Diffie-Hellman and use per-session, temporary keys during the initial SSL/TLS handshake. They provide perfect forward secrecy (PFS), which means a compromise of the server's long term signing key does not compromise the confidentiality of past sessions. When the server uses an ephemeral key, the server will sign the temporary key with its long term key (the long term key is the customary key available in its certificate).&lt;br /&gt;
&lt;br /&gt;
If you have a server farm and are providing forward secrecy, then you might have to disable session resumption. For example, Apache writes the session IDs and master secrets to disk so all servers in the farm can participate in resuming a session (there is currently no in-memory mechanism to achieve the sharing). Writing the session ID and master secret to disk undermines forward secrecy.&lt;br /&gt;
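&lt;br /&gt;
In practice, preferring ephemeral key exchange amounts to listing ECDHE/DHE suites first in the server's cipher configuration. A sketch (the suite names are illustrative; consult your TLS library for the list it actually supports):&lt;br /&gt;

```python
# Assemble an OpenSSL-style cipher string that prefers ephemeral
# (ECDHE/DHE) key exchange so sessions gain forward secrecy.
ephemeral_first = [
    "ECDHE-RSA-AES128-GCM-SHA256",  # ephemeral ECDH, forward secrecy
    "DHE-RSA-AES128-SHA",           # ephemeral DH fallback
    "AES128-SHA",                   # static RSA, no forward secrecy (last)
]
cipher_string = ":".join(ephemeral_first)
print(cipher_string)
```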
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self signed certificates is never acceptable. Self signed certificates negate the benefit of end-point authentication and also significantly decrease an individual's ability to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified with earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to only provide support for the TLS protocols: TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations with lesser security requirements, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC chaining attacks and padding oracle attacks]. SSLv3 and TLSv1.0 should be used only after risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
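&lt;br /&gt;
As a hedged sketch using Python's standard-library ssl module, the broken SSL protocol versions can be switched off explicitly, leaving only the TLS family selectable:&lt;br /&gt;

```python
import ssl

# Start from a context that can negotiate any protocol version, then
# disable SSLv2 and SSLv3 so only TLS 1.0 and later remain.
ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)  # negotiates highest shared version
ctx.options |= ssl.OP_NO_SSLv2
ctx.options |= ssl.OP_NO_SSLv3

print(bool(ctx.options & ssl.OP_NO_SSLv3))
```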
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. In order to ensure that only strong cryptographic ciphers are selected, the server must be configured to disable the use of weak ciphers. It is recommended to configure the server to only support strong ciphers and to use sufficiently large key sizes. In general, the following should be observed when selecting cipher suites:&lt;br /&gt;
&lt;br /&gt;
* Disable cipher suites that do not offer authentication (NULL ciphersuites, aNULL or eNULL)&lt;br /&gt;
* Disable anonymous Diffie-Hellman key exchange (ADH)&lt;br /&gt;
* Disable export level ciphers (EXP, e.g. ciphers containing DES)&lt;br /&gt;
* Disable key sizes smaller than 128 bits for encrypting payload traffic&lt;br /&gt;
* Disable the use of MD5 as a hashing mechanism for payload traffic&lt;br /&gt;
* Use AES or 3-key 3DES in CBC mode for encryption &lt;br /&gt;
* Use stream ciphers which XOR the key stream with the plaintext (such as AES in CTR mode)&lt;br /&gt;
* Use SHA1 or above for digests, prefer SHA2 (or equivalent)&lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
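&lt;br /&gt;
The bullets above can be sketched as an OpenSSL-style cipher string, where the exclusions mirror the disable rules and the ECDHE/DHE groups keep ephemeral key exchange available (the group names are OpenSSL aliases; verify them against your library's documentation):&lt;br /&gt;

```python
# Translate the selection rules into an OpenSSL-style cipher string.
rules = [
    "ECDHE+AES",  # ephemeral ECDH with AES payload encryption
    "DHE+AES",    # ephemeral DH fallback
    "!aNULL",     # no unauthenticated suites
    "!eNULL",     # no NULL encryption
    "!ADH",       # no anonymous Diffie-Hellman
    "!EXP",       # no export-grade ciphers
    "!DES",       # no single DES
    "!MD5",       # no MD5 MACs for payload traffic
]
cipher_string = ":".join(rules)
print(cipher_string)
```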
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of the TLS protocol (e.g. for general hashing). For example, the RSA client authentication signature always uses both SHA-1 and MD5, and that usage is allowed.&lt;br /&gt;
&lt;br /&gt;
Note: Use of ephemeral Diffie-Hellman key exchange will protect the confidentiality of the transmitted plaintext data even if the corresponding RSA or DSS server private key is compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to be able to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and the [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Support TLS-PSK and TLS-SRP for Mutual Authentication ===&lt;br /&gt;
&lt;br /&gt;
When using a shared secret or password, offer TLS-PSK (Pre-Shared Key) or TLS-SRP (Secure Remote Password), which are known as Password Authenticated Key Exchanges (PAKEs). TLS-PSK and TLS-SRP properly bind the channel, which refers to the cryptographic binding between the outer tunnel and the inner authentication protocol. IANA currently reserves [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 79 PSK cipher suites] and [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 9 SRP cipher suites].&lt;br /&gt;
&lt;br /&gt;
Basic authentication places the user's password on the wire in plaintext after a server authenticates itself. Basic authentication only provides unilateral authentication. In contrast, both TLS-PSK and TLS-SRP provide mutual authentication, meaning each party proves it knows the password without placing the password on the wire in plaintext.&lt;br /&gt;
&lt;br /&gt;
Finally, using a PAKE removes the need to trust an outside party, such as a Certification Authority (CA).&lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject a plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can be subsequently used for session hijacking attacks. The issue is mitigated by disabling TLS (and, where applicable, SPDY) compression on the server.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Keys of length 1024 bits are considered obsolete as of 2010.  Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternate Names (SANs) can be used to provide a specific listing of multiple names where the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
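&lt;br /&gt;
A greatly simplified sketch of how a client checks a hostname against such a certificate: SAN entries are consulted first, with the Subject CN only as a fallback. Real code should rely on the TLS library's own checker (this sketch ignores wildcards and other RFC 6125 subtleties):&lt;br /&gt;

```python
def hostname_matches(cert, hostname):
    """Match hostname against DNS SAN entries, falling back to the CN
    only when no SANs are present (simplified RFC 6125 behaviour)."""
    sans = [v for kind, v in cert.get("subjectAltName", ()) if kind == "DNS"]
    if sans:
        return hostname in sans
    return cert.get("commonName") == hostname

# Hand-written certificate dict mirroring the example in the text.
cert = {
    "commonName": "example.com",
    "subjectAltName": (("DNS", "abc.example.com"), ("DNS", "xyz.example.com")),
}
print(hostname_matches(cert, "abc.example.com"))   # matches a SAN
print(hostname_matches(cert, "evil.example.net"))  # no SAN matches
```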
&lt;br /&gt;
=== Rule - Use Fully Qualified Names in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Use fully qualified names in the DNS name field, and do not use unqualified names (e.g., 'www'), local names (e.g., 'localhost'), or private IP addresses (e.g., 192.168.1.1) in the DNS name field. Unqualified names, local names, and private IP addresses violate the certificate specification.&lt;br /&gt;
 &lt;br /&gt;
=== Rule - Do Not Use Wildcard Certificates ===&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it's made much easier when stored on the file system unprotected.&lt;br /&gt;
&lt;br /&gt;
Statistics gathered by Qualys for [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts. Finally, wildcard certificates violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use RFC 1918 Addresses in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Certificates should not use private addresses. RFC 1918 is [http://tools.ietf.org/rfc/rfc1918.txt Address Allocation for Private Internets]. Private addresses are Internet Assigned Numbers Authority (IANA) reserved and include 192.168/16, 172.16/12, and 10/8.&lt;br /&gt;
&lt;br /&gt;
Certificates issued with private addresses violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines]. In addition, Peter Gutmann writes in [http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf Engineering Security]: &amp;quot;This one is particularly troublesome because, in combination with the router-compromise attacks... and ...OCSP-defeating measures, it allows an attacker to spoof any EV-certificate site.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X.509 certificates. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates in between the end-entity (server or host) certificate and the root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all required certificates used in path validation.&lt;br /&gt;
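&lt;br /&gt;
Concretely, the server's configured certificate bundle should contain the end-entity certificate first, then each intermediate, with the root omitted (clients must already hold the root as a trust anchor). A sketch with placeholder certificate bodies:&lt;br /&gt;

```python
# Assemble the PEM bundle the server should present. The certificate
# bodies are placeholders, not real certificates.
server_cert = (
    "-----BEGIN CERTIFICATE-----\n"
    "...end-entity certificate...\n"
    "-----END CERTIFICATE-----\n"
)
intermediates = [
    "-----BEGIN CERTIFICATE-----\n"
    "...intermediate CA certificate...\n"
    "-----END CERTIFICATE-----\n",
]
# Order matters: end-entity first, then intermediates; root omitted.
chain_pem = server_cert + "".join(intermediates)
print(chain_pem.count("BEGIN CERTIFICATE"))
```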
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations, extensive certificate validation checks must occur to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and the expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate cannot be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not been properly validated could be exposed to unauthorized access or modification.&lt;br /&gt;
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV Certificates) subject the requesting party to an enhanced investigation by the issuer, a response to the industry's race to the bottom in ordinary certificate vetting. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers with support for EV certificates distinguish an EV certificate in a variety of ways. Internet Explorer will color a portion of the URL green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence in the certificate. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection. The purpose of the EV certificate is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client-side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide its certificate to the server, in addition to the server providing its certificate to the client. If client certificates are used, ensure that the server performs the same validation of the client certificate as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client-side certificates is currently relatively rare due to the complexities of certificate generation, safe distribution, client-side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client-side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
&lt;br /&gt;
Pinning still requires the customary X.509 checks, such as revocation, because CRLs and OCSP provide real-time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
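&lt;br /&gt;
For example, a certificate's revocation status can be queried out-of-band with &amp;lt;tt&amp;gt;openssl ocsp&amp;lt;/tt&amp;gt;; the file names and responder URL below are placeholders:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ openssl ocsp -issuer issuer.pem -cert server.pem -url http://ocsp.example.com -resp_text&amp;lt;/pre&amp;gt;&lt;br /&gt;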
&lt;br /&gt;
Browser-based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, JavaScript and WebSockets do not expose methods for a web app to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium-based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
&lt;br /&gt;
For more information, please see the [[Pinning Cheat Sheet]].&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement an effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=154641</id>
		<title>C-Based Toolchain Hardening</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=154641"/>
				<updated>2013-06-28T18:41:12Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Fixed OpenSSL configure switches&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening]] is a treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++ and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including auto-configured, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based projects. It's important to address the gaps at configuration and build time because it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
This is a prescriptive article, and it will not debate semantics or speculate on behavior. Some information, such as the C/C++ committee's motivation and pedigree for [https://groups.google.com/a/isocpp.org/forum/?fromgroups=#!topic/std-discussion/ak8e1mzBhGs &amp;quot;program diagnostics&amp;quot;, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;], appears to be lost like a tale in the Lord of the Rings. As such, the article will specify semantics (for example, the philosophy of 'debug' and 'release' build configurations), assign behaviors (for example, what an assert should do in 'debug' and 'release' build configurations), and present a position. If you find the posture is too aggressive, then you should back off as required to suit your taste.&lt;br /&gt;
&lt;br /&gt;
A secure toolchain is not a silver bullet. It is one piece of an overall strategy in the engineering process to help ensure success. It will complement existing processes such as static analysis, dynamic analysis, secure coding, negative test suites, and the like. Tools such as Valgrind and Helgrind will still be needed. And a project will still require solid designs and architectures.&lt;br /&gt;
&lt;br /&gt;
The OWASP [http://code.google.com/p/owasp-esapi-cplusplus/source ESAPI C++] project eats its own dog food. Many of the examples you will see in this article come directly from the ESAPI C++ project.&lt;br /&gt;
&lt;br /&gt;
Finally, a [[Category:Cheat Sheet|cheat sheet]] is available for those who desire a terse treatment of the material. Please visit [[C-Based_Toolchain_Hardening_Cheat_Sheet|C-Based Toolchain Hardening Cheat Sheet]] for the abbreviated version.&lt;br /&gt;
&lt;br /&gt;
== Wisdom ==&lt;br /&gt;
&lt;br /&gt;
Code '''must''' be correct. It '''should''' be secure. It '''can''' be efficient.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Jon_Bentley Dr. Jon Bentley]: ''&amp;quot;If it doesn't have to be correct, I can make it as fast as you'd like it to be&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Gary_McGraw Dr. Gary McGraw]: ''&amp;quot;Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Configuration is the first opportunity to set your project up for success. Not only do you have to configure your project to meet reliability and security goals, you must also configure integrated libraries properly. You typically have three choices. First, you can use auto-configuration utilities if on Linux or Unix. Second, you can write a makefile by hand. This is predominant on Linux, Mac OS X, and Unix, but it applies to Windows as well. Finally, you can use an integrated development environment or IDE.&lt;br /&gt;
&lt;br /&gt;
=== Build Configurations ===&lt;br /&gt;
&lt;br /&gt;
At this stage in the process, you should concentrate on configuring for two builds: Debug and Release. Debug will be used for development and include full instrumentation. Release will be configured for production. The difference between the two settings is usually ''optimization level'' and ''debug level''. A third build configuration is Test, and it is usually a special case of Release.&lt;br /&gt;
&lt;br /&gt;
For debug and release builds, the settings are typically diametrically opposed. Debug configurations have no optimizations and full debug information, while Release builds have optimizations and minimal to moderate debug information. In addition, debug code has full assertions and additional library integration, such as mudflaps and malloc guards like &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Test configuration is often a Release configuration that makes everything public for testing and builds a test harness. For example, all member functions (C++ classes) and all interfaces (libraries or shared objects) should be made available for testing. Many object-oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
[http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8] introduced the &amp;lt;tt&amp;gt;-Og&amp;lt;/tt&amp;gt; optimization level. Note that it is only an optimization level, and a customary debug level via &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt; is still required.&lt;br /&gt;
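&lt;br /&gt;
For example, with GCC 4.8 or later a debug compile could use &amp;lt;tt&amp;gt;-Og&amp;lt;/tt&amp;gt; in place of &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (the file names below are placeholders):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -Og -g3 -ggdb -c foo.c -o foo.o&amp;lt;/pre&amp;gt;&lt;br /&gt;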
&lt;br /&gt;
==== Debug Builds ====&lt;br /&gt;
&lt;br /&gt;
Debug builds are where developers spend most of their time when vetting problems, so this build should concentrate forces and tools or be a 'force multiplier'. Though many do not realize it, debug code is more highly valued than release code because it is adorned with additional instrumentation. The debug instrumentation will cause a program to become nearly &amp;quot;self-debugging&amp;quot;, and help you catch mistakes such as bad parameters, failed API calls, and memory problems.&lt;br /&gt;
&lt;br /&gt;
Self-debugging code reduces your time spent troubleshooting and debugging. Reducing time under the debugger means you have more time for development and feature requests. If code is checked in without debug instrumentation, it should be fixed by adding instrumentation or rejected.&lt;br /&gt;
&lt;br /&gt;
For GCC, optimizations and debug symbolication are controlled through two switches: &amp;lt;tt&amp;gt;-O&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;. You should use the following as part of your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a minimal debug session:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-O0 -g3 -ggdb&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; turns off optimizations and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available. You may need to use &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; so some analysis is performed; otherwise, your debug build will be missing a number of warnings not present in release builds. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debugging information is available for the debug session, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; includes extensions to help with a debug session under GDB. For completeness, Jan Krachtovil stated in a private email that &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; currently has no effect.&lt;br /&gt;
&lt;br /&gt;
Debug builds should also define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is not defined. &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; removes &amp;quot;program diagnostics&amp;quot; and has undesirable behavior and side effects, which are discussed below in more detail. The defines should be present for all code, and not just the program. You use them for all code (your program and included libraries) because you need to know how the libraries fail too (remember, you take the bug report - not the third party library).&lt;br /&gt;
&lt;br /&gt;
In addition, you should use other relevant flags, such as &amp;lt;tt&amp;gt;-fno-omit-frame-pointer&amp;lt;/tt&amp;gt;. Ensuring a frame pointer exists makes it easier to decode stack traces. Since debug builds are not shipped, it's OK to leave symbols in the executable. Programs with debug information do not suffer performance hits. See, for example, [http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]&lt;br /&gt;
&lt;br /&gt;
Finally, you should ensure your project includes additional diagnostic libraries, such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt; and [http://code.google.com/p/address-sanitizer/ Address Sanitizer]. A comparison of some memory checking tools can be found at [http://code.google.com/p/address-sanitizer/wiki/ComparisonOfMemoryTools Comparison Of Memory Tools]. If you don't include additional diagnostics in debug builds, then you should start using them, since it's OK to find errors you are not looking for.&lt;br /&gt;
&lt;br /&gt;
==== Release Builds ====&lt;br /&gt;
&lt;br /&gt;
Release builds are what your customer receives. They are meant to be run on production hardware and servers, and they should be reliable, secure, and efficient. A stable release build is the product of the hard work and effort during development.&lt;br /&gt;
&lt;br /&gt;
For release builds, you should use the following as part of &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-On -g2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O''n''&amp;lt;/tt&amp;gt; sets optimizations for speed or size (for example, &amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;), and &amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt; ensures debugging information is created.&lt;br /&gt;
&lt;br /&gt;
Debugging information should be stripped from the shipped binary and retained for symbolicating crash reports from the field. While not desired, debug information can be left in place without a performance penalty. See ''[http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]'' for details.&lt;br /&gt;
&lt;br /&gt;
Release builds should also define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is not defined. The time for debugging and diagnostics is over, so users get production code with full optimizations, no &amp;quot;program diagnostics&amp;quot;, and other efficiencies. If you can't optimize or you are performing excessive logging, it usually means the program is not ready for production.&lt;br /&gt;
&lt;br /&gt;
If you have been relying on an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and then a subsequent &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;, you have been abusing &amp;quot;program diagnostics&amp;quot; since it has no place in production code. If you want a memory dump, create one so users don't have to worry about secrets and other sensitive information being written to the filesystem and emailed in plain text.&lt;br /&gt;
&lt;br /&gt;
For Windows, you would use &amp;lt;tt&amp;gt;/Od&amp;lt;/tt&amp;gt; for debug builds; and &amp;lt;tt&amp;gt;/Ox&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/O2&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/Os&amp;lt;/tt&amp;gt; for release builds. See Microsoft's [http://msdn.microsoft.com/en-us/library/k1ack8f1.aspx /O Options (Optimize Code)] for details.&lt;br /&gt;
&lt;br /&gt;
==== Test Builds ====&lt;br /&gt;
&lt;br /&gt;
Test builds are used to provide heuristic validation by way of positive and negative test suites. Under a test configuration, all interfaces are tested to ensure they perform to specification and satisfaction. &amp;quot;Satisfaction&amp;quot; is subjective, but it should include no crashing and no trashing of your memory arena, even when faced with negative tests.&lt;br /&gt;
&lt;br /&gt;
Because all interfaces are tested (and not just the public ones), your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; should include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should also change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Nearly everyone gets a positive test right, so no more needs to be said. The negative self tests are much more interesting, and you should concentrate on trying to make your program fail so you can verify it fails gracefully. Remember, a bad guy is not going to be courteous when he attempts to cause your program to fail. And it's your project that takes egg on the face by way of a bug report or guest appearance on [http://www.grok.org.uk/full-disclosure/ Full Disclosure] or [http://www.securityfocus.com/archive Bugtraq] - not ''&amp;lt;nowiki&amp;gt;&amp;lt;some library&amp;gt;&amp;lt;/nowiki&amp;gt;'' you included.&lt;br /&gt;
&lt;br /&gt;
=== Auto Tools ===&lt;br /&gt;
&lt;br /&gt;
Auto configuration tools are popular on many Linux and Unix based systems, and the tools include ''Autoconf'', ''Automake'', ''config'', and ''Configure''. The tools work together to produce project files from scripts and template files. After the process completes, your project should be set up and ready to be made with &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When using auto configuration tools, there are a few files of interest worth mentioning. The files are part of the auto tools chain and include &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; and the various &amp;lt;tt&amp;gt;*.in&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;*.ac&amp;lt;/tt&amp;gt; (autoconf), and &amp;lt;tt&amp;gt;*.am&amp;lt;/tt&amp;gt; (automake) files. At times, you will have to open them, or the resulting makefiles, to tune the &amp;quot;stock&amp;quot; configuration.&lt;br /&gt;
&lt;br /&gt;
There are three downsides to the command line configuration tools in the toolchain: (1) they often ignore user requests, (2) they cannot create configurations, and (3) security is often not a goal.&lt;br /&gt;
&lt;br /&gt;
To demonstrate the first issue, configure your project with the following: &amp;lt;tt&amp;gt;configure CFLAGS=&amp;quot;-Wall -fPIE&amp;quot; CXXFLAGS=&amp;quot;-Wall -fPIE&amp;quot; LDFLAGS=&amp;quot;-pie&amp;quot;&amp;lt;/tt&amp;gt;. You will probably find the auto tools ignored your request, which means the command below will not produce expected results. As a workaround, you will have to open the &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; scripts, &amp;lt;tt&amp;gt;Makefile.in&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Makefile.am&amp;lt;/tt&amp;gt; and fix the configuration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ configure CFLAGS=&amp;quot;-Wall -Wextra -Wconversion -fPIE -Wno-unused-parameter&lt;br /&gt;
    -Wformat=2 -Wformat-security -fstack-protector-all -Wstrict-overflow&amp;quot;&lt;br /&gt;
    LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap -z,relro -z,now&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the second point, you will probably be disappointed to learn [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html Automake does not support the concept of configurations]. It's not entirely Autoconf's or Automake's fault - ''Make'' and its inability to detect changes is the underlying problem. Specifically, ''Make'' only [http://pubs.opengroup.org/onlinepubs/009695399/utilities/make.html checks modification times of prerequisites and targets], and does not check things like &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;. The net effect is you will not receive expected results when you issue &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make test&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, you will probably be disappointed to learn tools such as Autoconf and Automake miss many security related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings that serve the lowest common denominator.&lt;br /&gt;
&lt;br /&gt;
A recent discussion on the Automake mailing list illuminates the issue: ''[https://lists.gnu.org/archive/html/autoconf/2012-12/msg00038.html Enabling compiler warning flags]''. Attempts to improve default configurations were met with resistance and no action was taken. The resistance is often of the form, &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some useful warning&amp;gt;&amp;lt;/nowiki&amp;gt; also produces false positives&amp;quot; or &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some obscure platform&amp;gt;&amp;lt;/nowiki&amp;gt; does not support &amp;lt;nowiki&amp;gt;&amp;lt;established security feature&amp;gt;&amp;lt;/nowiki&amp;gt;&amp;quot;. It's noteworthy that David Wheeler, the author of ''[http://www.dwheeler.com/secure-programs/ Secure Programming for Linux and Unix HOWTO]'', was one of the folks trying to improve the posture.&lt;br /&gt;
&lt;br /&gt;
=== Makefiles ===&lt;br /&gt;
&lt;br /&gt;
Make is one of the earliest build systems, dating back to the 1970s. It's available on Linux, Mac OS X and Unix, so you will frequently encounter projects using it. Unfortunately, Make has a number of shortcomings (''[http://aegis.sourceforge.net/auug97.pdf Recursive Make Considered Harmful]'' and ''[http://www.conifersystems.com/whitepapers/gnu-make/ What’s Wrong With GNU make?]''), and can cause some discomfort. Despite issues with Make, ESAPI C++ uses Make primarily for three reasons: first, it's omnipresent; second, it's easier to manage than the Auto Tools family; and third, &amp;lt;tt&amp;gt;libtool&amp;lt;/tt&amp;gt; was out of the question.&lt;br /&gt;
&lt;br /&gt;
Consider what happens when you type &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;. Each build requires different &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; due to optimizations and level of debug support. In your makefile, you would extract the relevant target and set &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; similar to below (taken from the [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/Makefile ESAPI C++ Makefile]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Makefile&lt;br /&gt;
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)&lt;br /&gt;
ifneq ($(DEBUG_GOALS),)&lt;br /&gt;
  WANT_DEBUG := 1&lt;br /&gt;
  WANT_TEST := 0&lt;br /&gt;
  WANT_RELEASE := 0&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_DEBUG),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
  ESAPI_CXXFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_RELEASE),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
  ESAPI_CXXFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_TEST),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
  ESAPI_CXXFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure &lt;br /&gt;
# user options follow our options, which should give user option's a preference.&lt;br /&gt;
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(ESAPI_LDFLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make will first build the program in a debug configuration for a session under the debugger using a rule similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;%.o: %.cpp&lt;br /&gt;
        $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $&amp;lt; -o $@&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you want the release build, Make will do nothing because it considers everything up to date despite the fact &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; have changed. Hence, your program will actually be in a debug configuration and risk a &amp;lt;tt&amp;gt;SIGABRT&amp;lt;/tt&amp;gt; at runtime because debug instrumentation is present (recall &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; calls &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; when &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined). In essence, you have DoS'd yourself due to &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
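&lt;br /&gt;
Until the makefile accounts for flag changes, a simple workaround is to force a full rebuild when switching configurations:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ make clean&lt;br /&gt;
$ make release&amp;lt;/pre&amp;gt;&lt;br /&gt;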
&lt;br /&gt;
In addition, many projects do not honor the user's command line. ESAPI C++ does its best to ensure a user's flags are honored via &amp;lt;tt&amp;gt;override&amp;lt;/tt&amp;gt; as shown above, but other projects do not. For example, consider a project that should be built with Position Independent Executable (PIE or ASLR) enabled and data execution prevention (DEP) enabled. Dismissing user settings combined with insecure out of the box settings (and not picking them up during auto-setup or auto-configure) means a program built with the following will likely have neither defense:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ make CFLAGS=&amp;quot;-fPIE&amp;quot; CXXFLAGS=&amp;quot;-fPIE&amp;quot; LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Defenses such as ASLR and DEP are especially important on Linux because [http://linux.die.net/man/5/elf Data Execution - not Prevention - is the norm].&lt;br /&gt;
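&lt;br /&gt;
After building, you can check whether these defenses took effect with &amp;lt;tt&amp;gt;readelf&amp;lt;/tt&amp;gt; (the program name below is a placeholder). A type of &amp;lt;tt&amp;gt;DYN&amp;lt;/tt&amp;gt; indicates a position independent executable, and a &amp;lt;tt&amp;gt;GNU_STACK&amp;lt;/tt&amp;gt; segment without the &amp;lt;tt&amp;gt;E&amp;lt;/tt&amp;gt; flag indicates a non-executable stack:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ readelf -h ./program | grep Type&lt;br /&gt;
$ readelf -lW ./program | grep GNU_STACK&amp;lt;/pre&amp;gt;&lt;br /&gt;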
&lt;br /&gt;
=== Integration ===&lt;br /&gt;
&lt;br /&gt;
Project level integration presents opportunities to harden your program or library with domain specific knowledge. For example, if the platform supports Position Independent Executables (PIE or ASLR) and data execution prevention (DEP), then you should integrate with it. The consequences of not doing so could result in exploitation. As a case in point, see KingCope's 0-days for MySQL in December, 2012 (CVE-2012-5579 and CVE-2012-5612, among others). Integration with platform security would have neutered a number of the 0-days.&lt;br /&gt;
&lt;br /&gt;
You also have the opportunity to include helpful libraries that are not needed for business logic support. For example, if you are working on a platform with [http://dmalloc.com DMalloc] or [http://code.google.com/p/address-sanitizer/ Address Sanitizer], you should probably use it in your debug builds. For Ubuntu, DMalloc is available from the package manager and can be installed with &amp;lt;tt&amp;gt;sudo apt-get install libdmalloc5&amp;lt;/tt&amp;gt;. For Apple platforms, it's available as a scheme option (see [[#Clang/Xcode|Clang/Xcode]] below). Address Sanitizer is available in [http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8 and above] for many platforms.&lt;br /&gt;
&lt;br /&gt;
In addition, project level integration is an opportunity to harden third party libraries you chose to include. Because you chose to include them, you and your users are responsible for them. If you or your users endure an SP800-53 audit, third party libraries will be in scope because the supply chain is included (specifically, item SA-12, Supply Chain Protection). The audits are not limited to those in the US Federal arena - financial institutions perform reviews too. A perfect example of violating this guidance is [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525], which was due to [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library].&lt;br /&gt;
&lt;br /&gt;
Another example is including OpenSSL. You know (1) [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 is insecure], (2) [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 is insecure], and (3) [http://arstechnica.com/security/2012/09/crime-hijacks-https-sessions/ compression is insecure] (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ Configure darwin64-x86_64-cc -no-hw -no-engine -no-comp -no-shared -no-dso -no-ssl2 -no-ssl3 --openssldir=…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note Well'': you might want engines, especially on Ivy Bridge microarchitectures (3rd generation Intel Core i5 and i7 processors). To have OpenSSL use the processor's random number generator (via the &amp;lt;tt&amp;gt;rdrand&amp;lt;/tt&amp;gt; instruction), you will need to call OpenSSL's &amp;lt;tt&amp;gt;ENGINE_load_rdrand()&amp;lt;/tt&amp;gt; function and then &amp;lt;tt&amp;gt;ENGINE_set_default&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;ENGINE_METHOD_RAND&amp;lt;/tt&amp;gt;. See [http://wiki.opensslfoundation.com/index.php/Random_Numbers OpenSSL's Random Numbers] for details.&lt;br /&gt;
&lt;br /&gt;
If you configure without the switches, then you will likely have vulnerable code/libraries and risk failing an audit. If the program is a remote server, then the following command will reveal if compression is active on the channel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo &amp;quot;GET / HTTP/1.0&amp;quot; | openssl s_client -connect &amp;lt;nowiki&amp;gt;example.com:443&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;openssl s_client&amp;lt;/tt&amp;gt; will show that compression is enabled in the client. In fact, any symbol guarded by the &amp;lt;tt&amp;gt;OPENSSL_NO_COMP&amp;lt;/tt&amp;gt; preprocessor macro will bear witness since &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; is translated into a &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; define.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ nm /usr/local/ssl/iphoneos/lib/libcrypto.a 2&amp;gt;/dev/null | egrep -i &amp;quot;(COMP_CTX_new|COMP_CTX_free)&amp;quot;&lt;br /&gt;
0000000000000110 T COMP_CTX_free&lt;br /&gt;
0000000000000000 T COMP_CTX_new&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even more egregious is the answer given to auditors who specifically ask about configurations and protocols: &amp;quot;we don't use weak/wounded/broken ciphers&amp;quot; or &amp;quot;we follow best practices.&amp;quot; The use of compression tells the auditor that you are using a wounded protocol in an insecure configuration and that you don't follow best practices. That will likely set off alarm bells, and ensure the auditor dives deeper on more items.&lt;br /&gt;
&lt;br /&gt;
== Preprocessor ==&lt;br /&gt;
&lt;br /&gt;
The preprocessor is crucial to setting up a project for success. The C committee provided one macro - &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; - and the macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing built-in facilities. This section will help you set up your projects to integrate well with other projects and to ensure reliability and security.&lt;br /&gt;
&lt;br /&gt;
There are three topics to discuss when hardening the preprocessor. The first is well defined configurations which produce well defined behaviors, the second is useful behavior from assert, and the third is proper use of macros when integrating vendor code and third party libraries.&lt;br /&gt;
&lt;br /&gt;
=== Configurations ===&lt;br /&gt;
&lt;br /&gt;
To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code on live servers, and its behavior is requested via the C/C++ &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; macro. It's also the only macro observed by the C and C++ Committees and Posix. Diametrically opposed to Release is Debug. While there is a compelling argument for &amp;lt;tt&amp;gt;!defined(NDEBUG)&amp;lt;/tt&amp;gt;, you should have an explicit macro for the configuration, and that macro should be &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;. This is because vendors and outside libraries use a &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (or similar) macro for their configuration. For example, Carnegie Mellon's Mach kernel uses &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, Microsoft's CRT uses [http://msdn.microsoft.com/en-us/library/ww5t02fa%28v=vs.71%29.aspx &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt;], and Wind River Workbench uses &amp;lt;tt&amp;gt;DEBUG_MODE&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (Release) and &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (Debug), you have two additional cross products: both are defined or neither are defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/esapi/EsapiCommon.h ESAPI C++ EsapiCommon.h], which is the configuration file used by all source files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Only one or the other, but not both&lt;br /&gt;
#if (defined(DEBUG) || defined(_DEBUG)) &amp;amp;&amp;amp; (defined(NDEBUG) || defined(_NDEBUG))&lt;br /&gt;
# error Both DEBUG and NDEBUG are defined.&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
// The only time we switch to debug is when asked. NDEBUG or {nothing} results&lt;br /&gt;
// in release build (fewer surprises at runtime).&lt;br /&gt;
#if defined(DEBUG) || defined(_DEBUG)&lt;br /&gt;
# define ESAPI_BUILD_DEBUG 1&lt;br /&gt;
#else&lt;br /&gt;
# define ESAPI_BUILD_RELEASE 1&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is in effect, your code should receive full debug instrumentation, including the full force of assertions.&lt;br /&gt;
&lt;br /&gt;
=== ASSERT ===&lt;br /&gt;
&lt;br /&gt;
Asserts help you create self-debugging code by letting you find the point of first failure quickly and easily. Asserts should be used throughout your program, including parameter validation, return value checking, and program state. The &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; will silently guard your code through its lifetime. It will always be there, even when not debugging a specific component of a module. If you have thorough code coverage, you will spend less time debugging and more time developing because programs will debug themselves.&lt;br /&gt;
&lt;br /&gt;
To use asserts effectively, you should assert everything. That includes parameters upon entering a function, return values from function calls, and any program state. Everywhere you place an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation or checking, you should have an assert. Everywhere you have an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; for validation or checking, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand.&lt;br /&gt;
&lt;br /&gt;
If you are still using &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;'s, then you have an opportunity for improvement. In the time it takes for you to write a &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; statement, you could have written an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;. Unlike the &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; which are often removed when no longer needed, the &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; stays active forever. Remember, this is all about finding the point of first failure quickly so you can spend your time doing other things.&lt;br /&gt;
&lt;br /&gt;
There is one problem with using asserts - [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html Posix states &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; should call &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;] if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined. When debugging, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will never be defined since you want the &amp;quot;program diagnostics&amp;quot; (quote from the Posix description). The behavior makes &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and its accompanying &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; completely useless for development. The result of &amp;quot;program diagnostics&amp;quot; calling &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; due to standard C/C++ behavior is disuse - developers simply don't use them. It's incredibly bad for the development community because self-debugging programs can help eradicate so many stability problems.&lt;br /&gt;
&lt;br /&gt;
Since self-debugging programs are so powerful, you will have to supply your own assert and signal handler with improved behavior. Your assert will exchange auto-aborting behavior for auto-debugging behavior. The auto-debugging facility will ensure the debugger snaps when a problem is detected, and you will find the point of first failure quickly and easily.&lt;br /&gt;
&lt;br /&gt;
ESAPI C++ supplies its own assert with the behavior described above. In the code below, &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt; raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; when in effect or it evaluates to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; in other cases.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// A debug assert which should be sprinkled liberally. This assert fires and then continues rather&lt;br /&gt;
// than calling abort(). Useful when examining negative test cases from the command line.&lt;br /&gt;
#if (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_STARNIX))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp) {                                    \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; std::endl;                                           \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) {                               \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; &amp;quot;: \&amp;quot;&amp;quot; &amp;lt;&amp;lt; (msg) &amp;lt;&amp;lt; &amp;quot;\&amp;quot;&amp;quot; &amp;lt;&amp;lt; std::endl;                \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#elif (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_WINDOWS))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      assert(exp)&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) assert(exp)&lt;br /&gt;
#else&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      ((void)(exp))&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) ((void)(exp))&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#if !defined(ASSERT)&lt;br /&gt;
#  define ASSERT(exp)     ESAPI_ASSERT1(exp)&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At program startup, a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; handler will be installed if one is not provided by another component:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct DebugTrapHandler&lt;br /&gt;
{&lt;br /&gt;
  DebugTrapHandler()&lt;br /&gt;
  {&lt;br /&gt;
    struct sigaction new_handler, old_handler;&lt;br /&gt;
&lt;br /&gt;
    do&lt;br /&gt;
      {&lt;br /&gt;
        int ret = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, NULL, &amp;amp;old_handler);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        // Don't step on another's handler&lt;br /&gt;
        if (old_handler.sa_handler != NULL) break;&lt;br /&gt;
&lt;br /&gt;
        new_handler.sa_handler = &amp;amp;DebugTrapHandler::NullHandler;&lt;br /&gt;
        new_handler.sa_flags = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigemptyset (&amp;amp;new_handler.sa_mask);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, &amp;amp;new_handler, NULL);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
      } while(0);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  static void NullHandler(int /*unused*/) { }&lt;br /&gt;
&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
// We specify a relatively low priority, to make sure we run before other CTORs&lt;br /&gt;
// http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes&lt;br /&gt;
static const DebugTrapHandler g_dummyHandler __attribute__ ((init_priority (110)));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a Windows platform, you would call &amp;lt;tt&amp;gt;_set_invalid_parameter_handler&amp;lt;/tt&amp;gt; (and possibly &amp;lt;tt&amp;gt;set_unexpected&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;set_terminate&amp;lt;/tt&amp;gt;) to install a new handler.&lt;br /&gt;
&lt;br /&gt;
Live hosts running production code should always define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (i.e., release configuration), which means they do not assert or auto-abort. Auto-abortion is not acceptable behavior, and anyone who asks for the behavior is completely abusing the functionality of &amp;quot;program diagnostics&amp;quot;. If a program wants a core dump, then it should create the dump rather than crashing.&lt;br /&gt;
&lt;br /&gt;
For more reading on asserting effectively, please see one of John Robbins's books, such as ''[http://www.amazon.com/dp/0735608865 Debugging Applications]''. John is a legendary bug slayer in Windows circles, and he will show you how to do nearly everything, from debugging a simple program to bug slaying in multithreaded programs.&lt;br /&gt;
&lt;br /&gt;
=== Additional Macros ===&lt;br /&gt;
&lt;br /&gt;
Additional macros include any macros needed to integrate properly and securely. It includes integrating the program with the platform (for example MFC or Cocoa/CocoaTouch) and libraries (for example, Crypto++ or OpenSSL). It can be a challenge because you have to have proficiency with your platform and all included libraries and frameworks. The list below illustrates the level of detail you will need when integrating.&lt;br /&gt;
&lt;br /&gt;
Boost is missing from the list because it appears to lack recommendations, additional debug diagnostics, and a hardening guide. See ''[http://stackoverflow.com/questions/14927033/boost-hardening-guide-preprocessor-macros BOOST Hardening Guide (Preprocessor Macros)]'' for details. In addition, Tim Day points to ''[http://boost.2283326.n4.nabble.com/boost-build-should-we-not-define-SECURE-SCL-0-by-default-for-all-msvc-toolsets-td2654710.html &amp;lt;nowiki&amp;gt;[boost.build] should we not define _SECURE_SCL=0 by default for all msvc toolsets&amp;lt;/nowiki&amp;gt;]'' for a recent discussion related to hardening (or lack thereof).&lt;br /&gt;
&lt;br /&gt;
In addition to knowing what you should define, you should treat defining some macros and undefining others as a security related defect. Examples include &amp;lt;tt&amp;gt;-U_FORTIFY_SOURCE&amp;lt;/tt&amp;gt; on Linux and &amp;lt;tt&amp;gt;_CRT_SECURE_NO_WARNINGS=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_SCL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_ATL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;STRSAFE_NO_DEPRECATE&amp;lt;/tt&amp;gt; on Windows.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Platform/Library!!Debug!!Release&lt;br /&gt;
|+ Table 1: Additional Platform/Library Macros&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;175pt&amp;quot;|All&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|DEBUG=1&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|NDEBUG=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Linux&lt;br /&gt;
|_GLIBCXX_DEBUG=1&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
_GLIBCXX_CONCEPT_CHECKS=1&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
|_FORTIFY_SOURCE=2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Android&lt;br /&gt;
|NDK_DEBUG=1&lt;br /&gt;
|_FORTIFY_SOURCE=1 (4.2 and above)&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define LOGI(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt logging)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Cocoa/CocoaTouch&lt;br /&gt;
|&lt;br /&gt;
|NS_BLOCK_ASSERTIONS=1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define NSLog(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt ASL)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SafeInt&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft&lt;br /&gt;
|_DEBUG=1, STRICT,&amp;lt;br&amp;gt;&lt;br /&gt;
_SECURE_SCL=1, _HAS_ITERATOR_DEBUGGING=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
|STRICT&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft ATL &amp;amp; MFC&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|STLPort&lt;br /&gt;
|_STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1&amp;lt;br&amp;gt;&lt;br /&gt;
_STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite&lt;br /&gt;
|SQLITE_DEBUG, SQLITE_MEMDEBUG&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLCipher&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;e&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;e&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Be careful with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; when using pre-compiled libraries such as Boost from a distribution. There are ABI incompatibilities, and the result will likely be a crash. You will have to compile Boost with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; or omit &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; See [http://gcc.gnu.org/onlinedocs/libstdc++/manual/concept_checking.html Chapter 5, Diagnostics] of the libstdc++ manual for details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt; SQLite secure deletion zeroizes memory on destruction. Define as required, and always define in US Federal since zeroization is required for FIPS 140-2, Level 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt; ''N'' is 0644 by default, which means everyone has some access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;e&amp;lt;/sup&amp;gt; Force temporary tables into memory (no unencrypted data to disk).&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
##########################################&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Compiler and Linker ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities to help find mistakes early in the development process. The built in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and catch a number of mistakes such as using an uninitialized variable or comparing a negative signed int and a positive unsigned int.&lt;br /&gt;
&lt;br /&gt;
As a concrete example, (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared because a side effect is &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! GCC and Visual Studio will not currently catch, for example, SQL injections and other tainted data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.&lt;br /&gt;
&lt;br /&gt;
Some in the development community resist static analysis or refute its results. For example, when static analysis warned the Linux kernel's &amp;lt;tt&amp;gt;sys_prctl&amp;lt;/tt&amp;gt; was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see ''[http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-11/msg08325.html &amp;lt;nowiki&amp;gt;[PATCH] Don't compare unsigned variable for &amp;lt;0 in sys_prctl()&amp;lt;/nowiki&amp;gt;]'' from the Linux Kernel mailing list.&lt;br /&gt;
&lt;br /&gt;
The following sections will detail steps for three platforms. First is a typical GNU Linux based distribution offering GCC and Binutils, second is Clang and Xcode, and third is modern Windows platforms.&lt;br /&gt;
&lt;br /&gt;
=== Distribution Hardening ===&lt;br /&gt;
&lt;br /&gt;
Before discussing GCC and Binutils, it is a good time to point out that some of the defenses discussed below are already present in a distribution. Unfortunately, it's design by committee, so what is present is usually only a mild variation of what is available (this way, everyone is mildly offended). For those who are purely worried about performance, you might be surprised to learn you have already taken the small performance hit without even knowing it.&lt;br /&gt;
&lt;br /&gt;
Linux and BSD distributions often apply some hardening without intervention via ''[http://gcc.gnu.org/onlinedocs/gcc/Spec-Files.html GCC Spec Files]''. If you are using Debian, Ubuntu, Linux Mint and family, see ''[http://wiki.debian.org/Hardening Debian Hardening]''. For Red Hat and Fedora systems, see ''[http://lists.fedoraproject.org/pipermail/devel-announce/2011-August/000821.html New hardened build support (coming) in F16]''. Gentoo users should visit ''[http://www.gentoo.org/proj/en/hardened/ Hardened Gentoo]''.&lt;br /&gt;
&lt;br /&gt;
You can see the settings being used by a distribution via &amp;lt;tt&amp;gt;gcc -dumpspecs&amp;lt;/tt&amp;gt;. In the Linux Mint 12 output below, -fstack-protector (but not -fstack-protector-all) is used by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -dumpspecs&lt;br /&gt;
…&lt;br /&gt;
*link_ssp: %{fstack-protector:}&lt;br /&gt;
&lt;br /&gt;
*ssp_default: %{!fno-stack-protector:%{!fstack-protector-all: %{!ffreestanding:%{!nostdlib:-fstack-protector}}}}&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The “SSP” above stands for Stack Smashing Protector. SSP is a reimplementation of Hiroaki Etoh's ProPolice work at IBM. See Hiroaki Etoh's patch ''[http://gcc.gnu.org/ml/gcc-patches/2001-06/msg01753.html gcc stack-smashing protector]'' and IBM's ''[http://www.research.ibm.com/trl/projects/security/ssp/ GCC extension for protecting applications from stack-smashing attacks]'' for details.&lt;br /&gt;
&lt;br /&gt;
=== GCC/Binutils ===&lt;br /&gt;
&lt;br /&gt;
GCC (the compiler collection) and Binutils (the assemblers, linkers, and other tools) are separate projects that work together to produce a final executable. Both the compiler and linker offer options to help you write safer and more secure code. The linker will produce code which takes advantage of platform security features offered by the kernel and PaX, such as no-exec stacks and heaps (NX) and Position Independent Executable (PIE).&lt;br /&gt;
&lt;br /&gt;
The table below offers a set of compiler options to build your program. Static analysis warnings help catch mistakes early, while the linker options harden the executable at runtime. In the table below, “GCC” should be loosely taken as “non-ancient distributions.” While the GCC team considers 4.2 ancient, you will still encounter it on Apple and BSD platforms due to changes in GPL licensing around 2007. Refer to ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Option Summary]'', ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html Options to Request or Suppress Warnings]'' and ''[http://sourceware.org/binutils/docs-2.21/ld/Options.html Binutils (LD) Command Line Options]'' for usage details.&lt;br /&gt;
&lt;br /&gt;
Worthy of special mention are &amp;lt;tt&amp;gt;-fno-strict-overflow&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fwrapv&amp;lt;/tt&amp;gt;&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;. The flags ensure the compiler does not remove statements that result in overflow or wrap. If your program only runs correctly using the flags, it is likely violating C/C++ rules on overflow and is likely illegal. If the program is illegal due to overflow or wrap checking, you should consider using [http://code.google.com/p/safe-iop/ safe-iop] for C or David LeBlanc's [http://safeint.codeplex.com SafeInt] in C++.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, some of those settings can be verified with the [http://www.trapkit.de/tools/checksec.html Checksec] tool written by Tobias Klein. The &amp;lt;tt&amp;gt;checksec.sh&amp;lt;/tt&amp;gt; script is designed to test standard Linux OS and PaX security features being used by an application. See the [http://www.trapkit.de/tools/checksec.html Trapkit] web page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 2: GCC C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wall -Wextra&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;75pt&amp;quot;|GCC&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Enables many warnings (despite their names, all and extra do not turn on all warnings).&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wconversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may alter a value (includes -Wsign-conversion).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-conversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may change the sign of an integer value, such as assigning a signed integer to an unsigned integer (&amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion!).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wcast-align&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for a pointer cast to a type which has a different size, causing an invalid alignment and subsequent bus error on ARM processors.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wformat=2 -Wformat-security&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Increases warnings related to possible security defects, including incorrect format specifiers.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-common&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Prevent global variables being simultaneously defined in different object files.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fstack-protector or -fstack-protector-all&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Stack Smashing Protector (SSP). Improves stack layout and adds a guard to detect stack based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-omit-frame-pointer&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Improves backtraces for post-mortem analysis&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wmissing-prototypes and -Wmissing-declarations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a global function is defined without a prototype or declaration.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-prototypes&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a function is declared or defined without specifying the argument types.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-overflow&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.2&lt;br /&gt;
|Warn about optimizations taken due to &amp;lt;nowiki&amp;gt;[undefined]&amp;lt;/nowiki&amp;gt; signed integer overflow assumptions.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wtrampolines&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.3&lt;br /&gt;
|Warn about trampolines generated for pointers to nested functions. Trampolines require executable stacks.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=address&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/address-sanitizer/ AddressSanitizer], a fast memory error detector. Memory access instructions will be instrumented to help detect heap, stack, and global buffer overflows; as well as use-after-free bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=thread&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/data-race-test/wiki/ThreadSanitizer ThreadSanitizer], a fast data race detector. Memory access instructions will be instrumented to detect data race bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,nodlopen and -Wl,-z,nodump&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.10&lt;br /&gt;
|Reduces the ability of an attacker to load, manipulate, and dump shared objects.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,noexecstack and -Wl,-z,noexecheap&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.14&lt;br /&gt;
|Data Execution Prevention (DEP). ELF headers are marked with PT_GNU_STACK and PT_GNU_HEAP.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,relro&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Global Offset Table (GOT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,now&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Procedure Linkage Table (PLT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIC&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils&lt;br /&gt;
|Position Independent Code. Used for libraries and shared objects. Both -fPIC (compiler) and -shared (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIE&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.16&lt;br /&gt;
|Position Independent Executable (ASLR). Used for programs. Both -fPIE (compiler) and -pie (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Unlike Clang and -Weverything, GCC does not provide a switch to truly enable all warnings.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; -fstack-protector guards functions with high risk objects such as C strings, while -fstack-protector-all guards all objects.&lt;br /&gt;
&lt;br /&gt;
Additional C++ warnings which can be used are shown in Table 3 below. See ''[http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html GCC's Options Controlling C++ Dialect]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 3: GCC C++ Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Woverloaded-virtual&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn when a function declaration hides virtual functions from a base class. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wreorder&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when the order of member initializers given in the code does not match the order in which they must be executed.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-promo&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wnon-virtual-dtor&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when a class has virtual functions and an accessible non-virtual destructor.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Weffc++&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn about violations of style guidelines from Scott Meyers' ''[http://www.aristeia.com/books.html Effective C++, Second Edition]''.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Additional Objective-C warnings which are often useful are shown in Table 4 below. See ''[http://gcc.gnu.org/onlinedocs/gcc/Objective_002dC-and-Objective_002dC_002b_002b-Dialect-Options.html Options Controlling Objective-C and Objective-C++ Dialects]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 4: GCC Objective C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wstrict-selector-match&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type id or Class.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wundeclared-selector&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn if a &amp;lt;tt&amp;gt;@selector(…)&amp;lt;/tt&amp;gt; expression referring to an undeclared selector is found. &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The use of aggressive warnings will produce spurious noise. The noise is a tradeoff: you learn of potential problems at the cost of wading through some chaff. The following switches help reduce spurious noise from the warning system:&lt;br /&gt;
&lt;br /&gt;
* -Wno-unused-parameter (GCC)&lt;br /&gt;
* -Wno-type-limits (GCC 4.3)&lt;br /&gt;
* -Wno-tautological-compare (Clang)&lt;br /&gt;
&lt;br /&gt;
Finally, a simple version-based Makefile example is shown below. This is different from the feature-based makefiles produced by autotools (which test for a particular feature and then define a symbol or configure a template file). Not all platforms use all options and flags. To address the issue, you can pursue one of two strategies: ship with a weakened posture by servicing the lowest common denominator, or ship with everything in force. In the latter case, those who don't have a feature available will edit the makefile to accommodate their installation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;CXX=g++&lt;br /&gt;
EGREP = egrep&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GCC_COMPILER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version')&lt;br /&gt;
GCC41_OR_LATER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version (4\.[1-9]|[5-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GNU_LD210_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[0-9]|2\.[2-9])')&lt;br /&gt;
GNU_LD214_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[4-9]|2\.[2-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC_COMPILER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wall -Wextra -Wconversion&lt;br /&gt;
  MY_CC_FLAGS += -Wformat=2 -Wformat-security&lt;br /&gt;
  MY_CC_FLAGS += -Wno-unused-parameter&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC41_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fstack-protector-all&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC42_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wstrict-overflow&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC43_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wtrampolines&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD210_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,nodlopen -z,nodump&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD214_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,noexecstack -z,noexecheap&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD215_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,relro -z,now&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD216_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fPIE&lt;br /&gt;
  MY_LD_FLAGS += -pie&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
# Use 'override' to honor the user's command line&lt;br /&gt;
override CFLAGS := $(MY_CC_FLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(MY_CC_FLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(MY_LD_FLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang/Xcode ===&lt;br /&gt;
&lt;br /&gt;
[http://clang.llvm.org Clang] and [http://llvm.org LLVM] have been aggressively developed since Apple lost its GPL compiler back in 2007 (due to Tivoization, which resulted in GPLv3). Since that time, a number of developers and Google have joined the effort. While Clang will consume most (all?) GCC/Binutils flags and switches, the project supports a number of its own options, including a static analyzer. In addition, Clang is relatively easy to build with additional diagnostics, such as Dr. John Regehr and Peng Li's [http://embed.cs.utah.edu/ioc/ Integer Overflow Checker (IOC)].&lt;br /&gt;
&lt;br /&gt;
IOC is incredibly useful, and has found bugs in a number of projects, from the Linux kernel (&amp;lt;tt&amp;gt;include/linux/bitops.h&amp;lt;/tt&amp;gt;, still unfixed) to SQLite, PHP, Firefox (many still unfixed), LLVM, and Python. Future versions of Clang (Clang 3.3 and above) will allow you to enable the checks out of the box with &amp;lt;tt&amp;gt;-fsanitize=integer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=shift&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Clang options can be found in the [http://clang.llvm.org/docs/UsersManual.html Clang Compiler User's Manual]. Clang does include an option to turn on all warnings - &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt;. Use it with care, but use it regularly: you will get back a lot of noise, along with issues you would otherwise have missed. For example, add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; for production builds and make non-spurious issues a quality gate. Under Xcode, simply add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to compiler warnings, both static analysis and additional security checks can be performed. Details of Clang's static analysis capabilities can be found at [http://clang-analyzer.llvm.org Clang Static Analyzer]. Figure 1 below shows some of the security checks utilized by Xcode.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-11.png|thumb|450px|Figure 1: Clang/LLVM and Xcode options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a convenient Integrated Development Environment (IDE) for managing solutions and their settings. The section called “Visual Studio Options” discusses options which should be used with Visual Studio, and the section called “Project Properties” demonstrates incorporating those options into a solution's project.&lt;br /&gt;
&lt;br /&gt;
The table below lists the compiler and linker switches which should be used under Visual Studio. Refer to Howard and LeBlanc's Writing Secure Code (Microsoft Press) for a detailed discussion; or ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]'' in Security Briefs by Michael Howard. In the table below, “Visual Studio” refers to nearly all versions of the development environment, including Visual Studio 5.0 and 6.0.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, those settings can be verified with BinScope. BinScope is a verification tool from Microsoft that analyzes binaries to ensure that they have been built in compliance with Microsoft's Security Development Lifecycle (SDLC) requirements and recommendations. See the ''[https://www.microsoft.com/download/en/details.aspx?id=11910 BinScope Binary Analyzer]'' download page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 5: Visual Studio Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;150pt&amp;quot;|&amp;lt;nowiki&amp;gt;/W4&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;100pt&amp;quot;|Visual Studio&lt;br /&gt;
|width=&amp;quot;350pt&amp;quot;|Warning level 4, which includes most warnings.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/Wall&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Enable all warnings, including those off by default.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/GS&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Adds a security cookie (guard or canary) on the stack before the return address to help detect stack based buffer overflows.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/SAFESEH&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Safe structured exception handling to remediate SEH overwrites.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/analyze&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Enterprise code analysis (freely available with Windows SDK for Windows Server 2008 and .NET Framework 3.5).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/NXCOMPAT&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Data Execution Prevention (DEP).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/dynamicbase&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Address Space Layout Randomization (ASLR).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;strict_gs_check&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Aggressively applies stack protections to a source file to help detect some categories of stack based buffer overruns.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;See Jon Sturgeon's discussion of the switch at ''[https://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx Off By Default Compiler Warnings in Visual C++]''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;When using /GS, there are a number of circumstances which affect the inclusion of a security cookie. For example, the guard is not used if there is no buffer in the stack frame, optimizations are disabled, or the function is declared naked or contains inline assembly.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;tt&amp;gt;#pragma strict_gs_check(on)&amp;lt;/tt&amp;gt; should be used sparingly, but is recommend in high risk situations, such as when a source file parses input from the internet.&lt;br /&gt;
&lt;br /&gt;
=== Warn Suppression ===&lt;br /&gt;
&lt;br /&gt;
From the tables above, a lot of warnings have been enabled to help detect possible programming mistakes. The potential mistakes are detected by the compiler, which carries around a lot of contextual information during its code analysis phase. At times, you will receive spurious warnings because the compiler is not ''that'' smart. It's understandable and even a good thing (how would you like to be out of a job because a program writes its own programs?). At times you will have to learn how to work with the compiler's warning system to suppress warnings. Notice what was not said: turn off the warnings.&lt;br /&gt;
&lt;br /&gt;
Suppressing warnings placates the compiler for spurious noise so you can get to the issues that matter (you are separating the wheat from the chaff). This section will offer some hints and point out some potential minefields. First is an unused parameter (for example, &amp;lt;tt&amp;gt;argc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;argv&amp;lt;/tt&amp;gt;). Suppressing unused parameter warnings is especially helpful for C++ and interface programming, where parameters are often unused. For this warning, simply define an &amp;quot;UNUSED&amp;quot; macro and wrap the parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#define UNUSED_PARAMETER(x) ((void)x)&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    UNUSED_PARAMETER(argc);&lt;br /&gt;
    UNUSED_PARAMETER(argv);&lt;br /&gt;
    …&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A potential minefield lies near &amp;quot;comparing unsigned and signed&amp;quot; values, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will catch it for you. This is because the C/C++ conversion rules state the signed value will be converted to an unsigned value and then compared. That means &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after conversion! To fix this, you cannot blindly cast - you must first range test the value:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int x = GetX();&lt;br /&gt;
unsigned int y = GetY();&lt;br /&gt;
&lt;br /&gt;
ASSERT(x &amp;gt;= 0);&lt;br /&gt;
if(!(x &amp;gt;= 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? X is negative.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
if(static_cast&amp;lt;unsigned int&amp;gt;(x) &amp;gt; y)&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is greater than y&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
else&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is not greater than y&amp;quot; &amp;lt;&amp;lt; endl;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;x&amp;lt;/tt&amp;gt;. Just run the program and wait for it to tell you there is a problem. If there is a problem, the program will snap the debugger (and, more importantly, not call a useless &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; as specified by POSIX). It beats the snot out of &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;s that are removed when no longer needed or pollute outputs.&lt;br /&gt;
&lt;br /&gt;
Another problem you will encounter is conversion between types, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will also catch it for you. The following will always have an opportunity to fail, and should light up like a Christmas tree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct sockaddr_in addr;&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
addr.sin_port = htons(atoi(argv[2]));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following would probably serve you much better. Notice &amp;lt;tt&amp;gt;atoi&amp;lt;/tt&amp;gt; and friends are not used because they can silently fail. In addition, the code is instrumented so you don't need to waste a lot of time debugging potential problems:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;const char* cstr = GetPortString();&lt;br /&gt;
&lt;br /&gt;
ASSERT(cstr != NULL);&lt;br /&gt;
if(!(cstr != NULL))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port string is not valid.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
istringstream iss(cstr);&lt;br /&gt;
long long t = 0;&lt;br /&gt;
iss &amp;gt;&amp;gt; t;&lt;br /&gt;
&lt;br /&gt;
ASSERT(!(iss.fail()));&lt;br /&gt;
if(iss.fail())&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Failed to read port.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// Should this be a port above the reserved range ([0-1023] on Unix)?&lt;br /&gt;
ASSERT(t &amp;gt; 0);&lt;br /&gt;
if(!(t &amp;gt; 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too small&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
ASSERT(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max()));&lt;br /&gt;
if(!(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max())))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too large&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use port&lt;br /&gt;
unsigned short port = static_cast&amp;lt;unsigned short&amp;gt;(t);&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;. This code will continue checking conditions years after being instrumented (assuming you wrote the code to read a config file early in the project). There's no need to remove the &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt;s as with &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; since they are silent guardians.&lt;br /&gt;
&lt;br /&gt;
Another useful suppression trick is to avoid ignoring return values. Not only is it useful to suppress the warning, it's required for correct code. For example, &amp;lt;tt&amp;gt;snprintf&amp;lt;/tt&amp;gt; will alert you to truncations through its return value. You should not make them silent truncations by ignoring the warning or casting to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;char path[PATH_MAX];&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int ret = snprintf(path, sizeof(path), &amp;quot;%s/%s&amp;quot;, GetDirectory(), GetObjectName());&lt;br /&gt;
ASSERT(ret != -1);&lt;br /&gt;
ASSERT(!(ret &amp;gt;= sizeof(path)));&lt;br /&gt;
&lt;br /&gt;
if(ret == -1 || ret &amp;gt;= sizeof(path))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Unable to build full object name&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use path&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem is pandemic, and not just in boring user land programs. Projects which offer high integrity code, such as SELinux, suffer silent truncations. The following is from an approved SELinux patch, even though a comment was made that it [http://permalink.gmane.org/gmane.comp.security.selinux/16845 suffered silent truncations in its &amp;lt;tt&amp;gt;security_compute_create_name&amp;lt;/tt&amp;gt; function] in &amp;lt;tt&amp;gt;compute_create.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12  int security_compute_create_raw(security_context_t scon,&lt;br /&gt;
13                                  security_context_t tcon,&lt;br /&gt;
14                                  security_class_t   tclass,&lt;br /&gt;
15                                  security_context_t * newcon)&lt;br /&gt;
16  {&lt;br /&gt;
17    char path[PATH_MAX];&lt;br /&gt;
18    char *buf;&lt;br /&gt;
19    size_t size;&lt;br /&gt;
20    int fd, ret;&lt;br /&gt;
21 	&lt;br /&gt;
22    if (!selinux_mnt) {&lt;br /&gt;
23      errno = ENOENT;&lt;br /&gt;
24      return -1;&lt;br /&gt;
25    }&lt;br /&gt;
26 	&lt;br /&gt;
27    snprintf(path, sizeof path, &amp;quot;%s/create&amp;quot;, selinux_mnt);&lt;br /&gt;
28    fd = open(path, O_RDWR);&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike other examples, the above code will not debug itself, and you will have to set breakpoints and trace calls to determine the point of first failure. (And the code above gambles that the truncated file does not exist or is not under an adversary's control by blindly performing the &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
== Runtime ==&lt;br /&gt;
&lt;br /&gt;
The previous sections concentrated on setting up your project for success. This section will examine additional hints for running with increased diagnostics and defenses. Not all platforms are created equal: on GNU/Linux it is [http://sourceware.org/ml/binutils/2012-03/msg00309.html difficult or impossible to add hardening to a program after compiling and static linking], while Windows allows post-build hardening through a download. Remember, the goal is to find the point of first failure quickly so you can improve the reliability and security of the code.&lt;br /&gt;
&lt;br /&gt;
=== Xcode ===&lt;br /&gt;
&lt;br /&gt;
Xcode offers additional [http://developer.apple.com/library/mac/#recipes/xcode_help-scheme_editor/Articles/SchemeDiagnostics.html Application Diagnostics] that can help find memory errors and object use problems. Schemes can be managed through the ''Product'' menu, ''Scheme'' submenu, and then ''Edit Scheme''. From the editor, navigate to the ''Diagnostics'' tab. In the figure below, four additional instruments are enabled for the debugging cycle: Scribble guards, Edge guards, Malloc guards, and Zombies.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-1.png|thumb|450px|Figure 2: Xcode Memory Diagnostics]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is one caveat with using some of the guards: Apple only provides them for the simulator, and not a device. In the past, the guards were available for both devices and simulators.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a number of debugging aids for use during development. The aids are called [http://msdn.microsoft.com/en-us/library/d21c150d.aspx Managed Debugging Assistants (MDAs)]. You can find the MDAs on the ''Debug'' menu, then the ''Exceptions'' submenu. MDAs allow you to tune your debugging experience by, for example, filtering the exceptions for which the debugger should snap. For more details, see Stephen Toub's ''[http://msdn.microsoft.com/en-us/magazine/cc163606.aspx Let The CLR Find Bugs For You With Managed Debugging Assistants]''.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-2.png|thumb|450px|Figure 3: Managed Debugging Assistants]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Finally, for runtime hardening, Microsoft has a helpful tool called EMET, the [http://support.microsoft.com/kb/2458544 Enhanced Mitigation Experience Toolkit]. EMET allows you to apply runtime hardening to an executable which was built without it. It's very useful for utilities and other programs that were built without an SDLC.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-3.png|thumb|450px|Figure 4: Windows and EMET]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Baltimore&amp;diff=152248</id>
		<title>Baltimore</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Baltimore&amp;diff=152248"/>
				<updated>2013-05-26T22:28:51Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added meeting heading&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''OWASP Baltimore Local Chapter''' meetings are FREE and OPEN to anyone interested in learning more about application security. We encourage individuals to provide knowledge transfer via hands-on training, presentations of specific OWASP projects and research topics, and sharing of SDLC knowledge. &lt;br /&gt;
&lt;br /&gt;
We encourage vendor-agnostic presentations to utilize the OWASP PowerPoint template when applicable, and individual volunteerism to enable perpetual growth. As a 501(c)(3) non-profit association, we encourage donations of meeting space or refreshment sponsorship; simply contact the local chapter leaders listed on this page to discuss. Prior to participating with OWASP, please review the Chapter Rules. &lt;br /&gt;
&lt;br /&gt;
The chapter is committed to providing an engaging experience for a variety of audience types, ranging from local students and those beginning in app-sec to experienced and accomplished professionals looking for competent collaborators on OWASP-related projects. To this end, we will continue to conduct both monthly chapter meetings and out-of-band curricula on application security topics. &lt;br /&gt;
&lt;br /&gt;
{{Chapter Template|chaptername=Baltimore|extra =Come see us at a chapter meeting, join the mailing list, or email us directly.&lt;br /&gt;
&lt;br /&gt;
The chapter leaders are [mailto:rajiv.t.mathew@gmail.com Rajiv Mathew] and [mailto:lattera@gmail.com Shawn Webb]. Please feel free to email us. The chapter leaders and mailing list welcome your participation and thoughts.&lt;br /&gt;
&lt;br /&gt;
OWASP Baltimore uses Meetup to schedule its meetings. Please see [http://www.meetup.com/OWASP-Baltimore-Chapter/ OWASP Baltimore Chapter] for more information and breaking news.&lt;br /&gt;
&lt;br /&gt;
The group's mailing list is [http://lists.owasp.org/mailman/listinfo/owasp-baltimore OWASP Baltimore], and its archive can be found at [http://lists.owasp.org/pipermail/owasp-baltimore OWASP Baltimore Archives].|mailinglistsite=http://lists.owasp.org/mailman/listinfo/owasp-baltimore|emailarchives=http://lists.owasp.org/pipermail/owasp-baltimore OWASP Baltimore Archives}}&lt;br /&gt;
&lt;br /&gt;
== Meetings ==&lt;br /&gt;
&lt;br /&gt;
Meetings are held the first Thursday of each month. Announcements are made on the [http://lists.owasp.org/mailman/listinfo/owasp-baltimore OWASP Baltimore mailing list] and [http://www.meetup.com/OWASP-Baltimore-Chapter/ OWASP Baltimore Chapter Meetup]. Offline questions are answered by [mailto:rajiv.t.mathew@gmail.com Rajiv Mathew] and [mailto:lattera@gmail.com Shawn Webb].&lt;br /&gt;
&lt;br /&gt;
== Local News ==&lt;br /&gt;
&lt;br /&gt;
OWASP Baltimore is currently seeking corporate sponsors to host meetings. Please contact [mailto:rajiv.t.mathew@gmail.com Rajiv Mathew] and [mailto:lattera@gmail.com Shawn Webb] if you have resources available for OWASP.&lt;br /&gt;
&lt;br /&gt;
We are also in a drive for membership. If you know someone with an interest in application security, be sure to pass on the chapter contacts! Everyone is welcome to join us at our chapter meetings.&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Chapter]]&lt;br /&gt;
[[Category:Maryland]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Baltimore&amp;diff=152244</id>
		<title>Baltimore</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Baltimore&amp;diff=152244"/>
				<updated>2013-05-26T16:58:18Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Still fiddling with bits&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''OWASP Baltimore Local Chapter''' meetings are FREE and OPEN to anyone interested in learning more about application security. We encourage individuals to provide knowledge transfer via hands-on training, presentations of specific OWASP projects and research topics, and sharing of SDLC knowledge. &lt;br /&gt;
&lt;br /&gt;
We encourage vendor-agnostic presentations to utilize the OWASP PowerPoint template when applicable, and individual volunteerism to enable perpetual growth. As a 501(c)(3) non-profit association, we encourage donations of meeting space or refreshment sponsorship; simply contact the local chapter leaders listed on this page to discuss. Prior to participating with OWASP, please review the Chapter Rules. &lt;br /&gt;
&lt;br /&gt;
The chapter is committed to providing an engaging experience for a variety of audience types, ranging from local students and those beginning in app-sec to experienced and accomplished professionals looking for competent collaborators on OWASP-related projects. To this end, we will continue to conduct both monthly chapter meetings and out-of-band curricula on application security topics. &lt;br /&gt;
&lt;br /&gt;
{{Chapter Template|chaptername=Baltimore|extra =Come see us at a chapter meeting, join the mailing list, or email us directly.&lt;br /&gt;
&lt;br /&gt;
The chapter leaders are [mailto:rajiv.t.mathew@gmail.com Rajiv Mathew] and [mailto:lattera@gmail.com Shawn Webb]. Please feel free to email us. The chapter leaders and mailing list welcome your participation and thoughts.&lt;br /&gt;
&lt;br /&gt;
OWASP Baltimore uses Meetup to schedule its meetings. Please see [http://www.meetup.com/OWASP-Baltimore-Chapter/ OWASP Baltimore Chapter] for more information and breaking news.&lt;br /&gt;
&lt;br /&gt;
The group's mailing list is [http://lists.owasp.org/mailman/listinfo/owasp-baltimore OWASP Baltimore], and its archive can be found at [http://lists.owasp.org/pipermail/owasp-baltimore OWASP Baltimore Archives].|mailinglistsite=http://lists.owasp.org/mailman/listinfo/owasp-baltimore|emailarchives=http://lists.owasp.org/pipermail/owasp-baltimore OWASP Baltimore Archives}}&lt;br /&gt;
&lt;br /&gt;
== Local News ==&lt;br /&gt;
&lt;br /&gt;
OWASP Baltimore is currently seeking corporate sponsors to host meetings. Please contact [mailto:rajiv.t.mathew@gmail.com Rajiv Mathew] and [mailto:lattera@gmail.com Shawn Webb] if you have resources available for OWASP.&lt;br /&gt;
&lt;br /&gt;
We are also in a drive for membership. If you know someone with an interest in application security, be sure to pass on the chapter contacts! Everyone is welcome to join us at our chapter meetings.&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Chapter]]&lt;br /&gt;
[[Category:Maryland]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Baltimore&amp;diff=152243</id>
		<title>Baltimore</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Baltimore&amp;diff=152243"/>
				<updated>2013-05-26T16:56:19Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Fixed that Chapter Template (its a bit tricky!)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''OWASP Baltimore Local Chapter''' meetings are FREE and OPEN to anyone interested in learning more about application security. We encourage individuals to provide knowledge transfer via hands-on training, presentations on specific OWASP projects and research topics, and sharing of SDLC knowledge. &lt;br /&gt;
&lt;br /&gt;
We encourage vendor-agnostic presentations that use the OWASP PowerPoint template when applicable, and individual volunteerism to enable perpetual growth. As a 501(c)(3) non-profit association, we welcome donations of meeting space or refreshment sponsorship; simply contact the local chapter leaders listed on this page to discuss. Prior to participating with OWASP, please review the Chapter Rules. &lt;br /&gt;
&lt;br /&gt;
The chapter is committed to providing an engaging experience for a variety of audiences, ranging from local students and those beginning in app-sec to experienced and accomplished professionals looking for competent collaborators for OWASP-related projects. To this end, we will continue to conduct both monthly chapter meetings and out-of-band curricula on application security topics. &lt;br /&gt;
&lt;br /&gt;
{{Chapter Template|chaptername=Baltimore|extra =Come see us at a chapter meeting, join the mailing list, or email us directly.&lt;br /&gt;
&lt;br /&gt;
The chapter leaders are [mailto:rajiv.t.mathew@gmail.com Rajiv Mathew] and [mailto:lattera@gmail.com Shawn Webb]. Please feel free to email us. The group's mailing list is [http://lists.owasp.org/mailman/listinfo/owasp-baltimore OWASP Baltimore], and its archive can be found at [http://lists.owasp.org/pipermail/owasp-baltimore OWASP Baltimore Archives]. The chapter leaders and mailing list welcome your participation and thoughts.&lt;br /&gt;
&lt;br /&gt;
OWASP Baltimore uses Meetup to schedule its meetings. Please see [http://www.meetup.com/OWASP-Baltimore-Chapter/ OWASP Baltimore Chapter] for more information and breaking news.&lt;br /&gt;
&lt;br /&gt;
|mailinglistsite=http://lists.owasp.org/mailman/listinfo/owasp-baltimore|emailarchives=http://lists.owasp.org/pipermail/owasp-baltimore OWASP Baltimore Archives}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Local News ==&lt;br /&gt;
&lt;br /&gt;
OWASP Baltimore is currently seeking corporate sponsors to host meetings. Please contact [mailto:rajiv.t.mathew@gmail.com Rajiv Mathew] and [mailto:lattera@gmail.com Shawn Webb] if you have resources available for OWASP.&lt;br /&gt;
&lt;br /&gt;
We are also in a drive for membership. If you know someone with an interest in application security, be sure to pass on the chapter contacts! Everyone is welcome to join us at our chapter meetings.&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Chapter]]&lt;br /&gt;
[[Category:Maryland]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Baltimore&amp;diff=152242</id>
		<title>Baltimore</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Baltimore&amp;diff=152242"/>
				<updated>2013-05-26T16:53:08Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The '''OWASP Baltimore Local Chapter''' meetings are FREE and OPEN to anyone interested in learning more about application security. We encourage individuals to provide knowledge transfer via hands-on training, presentations on specific OWASP projects and research topics, and sharing of SDLC knowledge. &lt;br /&gt;
&lt;br /&gt;
We encourage vendor-agnostic presentations that use the OWASP PowerPoint template when applicable, and individual volunteerism to enable perpetual growth. As a 501(c)(3) non-profit association, we welcome donations of meeting space or refreshment sponsorship; simply contact the local chapter leaders listed on this page to discuss. Prior to participating with OWASP, please review the Chapter Rules. &lt;br /&gt;
&lt;br /&gt;
The chapter is committed to providing an engaging experience for a variety of audiences, ranging from local students and those beginning in app-sec to experienced and accomplished professionals looking for competent collaborators for OWASP-related projects. To this end, we will continue to conduct both monthly chapter meetings and out-of-band curricula on application security topics. &lt;br /&gt;
&lt;br /&gt;
{{Chapter Template|chaptername=Baltimore|extra =Come see us at a chapter meeting, join the mailing list, or email us directly.&lt;br /&gt;
&lt;br /&gt;
== Communications and Contacts ==&lt;br /&gt;
&lt;br /&gt;
The chapter leaders are [mailto:rajiv.t.mathew@gmail.com Rajiv Mathew] and [mailto:lattera@gmail.com Shawn Webb]. Please feel free to email us. The group's mailing list is [http://lists.owasp.org/mailman/listinfo/owasp-baltimore OWASP Baltimore], and its archive can be found at [http://lists.owasp.org/pipermail/owasp-baltimore OWASP Baltimore Archives]. The chapter leaders and mailing list welcome your participation and thoughts.&lt;br /&gt;
&lt;br /&gt;
OWASP Baltimore uses Meetup to schedule its meetings. Please see [http://www.meetup.com/OWASP-Baltimore-Chapter/ OWASP Baltimore Chapter] for more information and breaking news.&lt;br /&gt;
&lt;br /&gt;
== Local News ==&lt;br /&gt;
&lt;br /&gt;
OWASP Baltimore is currently seeking corporate sponsors to host meetings. Please contact [mailto:rajiv.t.mathew@gmail.com Rajiv Mathew] and [mailto:lattera@gmail.com Shawn Webb] if you have resources available for OWASP.&lt;br /&gt;
&lt;br /&gt;
We are also in a drive for membership. If you know someone with an interest in application security, be sure to pass on the chapter contacts! Everyone is welcome to join us at our chapter meetings.&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Chapter]]&lt;br /&gt;
[[Category:Maryland]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=File:Securing-Wireless-Channels-in-the-Mobile-Space.ppt&amp;diff=151030</id>
		<title>File:Securing-Wireless-Channels-in-the-Mobile-Space.ppt</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=File:Securing-Wireless-Channels-in-the-Mobile-Space.ppt&amp;diff=151030"/>
				<updated>2013-05-05T13:41:32Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: uploaded a new version of &amp;amp;quot;File:Securing-Wireless-Channels-in-the-Mobile-Space.ppt&amp;amp;quot;: Added slide for TLS-PSK and &amp;quot;Does it Work&amp;quot;. The &amp;quot;Does it Work&amp;quot; slide features the Google Group posting by Alibo where he asked about Chrome complaining abou&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Presentation of &amp;quot;Securing Wireless Channels in the Mobile Space&amp;quot; given in Northern Virginia on February 7, 2013.&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=File:Pubkey-pin-ios.zip&amp;diff=150987</id>
		<title>File:Pubkey-pin-ios.zip</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=File:Pubkey-pin-ios.zip&amp;diff=150987"/>
				<updated>2013-05-03T17:46:14Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: uploaded a new version of &amp;amp;quot;File:Pubkey-pin-ios.zip&amp;amp;quot;: Updated source code comments to include both certificate and public key pinning&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;iOS program code for &amp;quot;Securing Wireless Channels in the Mobile Space&amp;quot; presentation (Northern Virginia, 2013-02-07)&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Certificate_and_Public_Key_Pinning&amp;diff=150904</id>
		<title>Certificate and Public Key Pinning</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Certificate_and_Public_Key_Pinning&amp;diff=150904"/>
				<updated>2013-05-03T02:26:54Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Cleaned up PSK&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[[Certificate and Public Key Pinning]] is a technical guide to implementing certificate and public key pinning as discussed at the ''[https://www.owasp.org/index.php/Virginia Virginia chapter's]'' presentation [[Media:Securing-Wireless-Channels-in-the-Mobile-Space.ppt|Securing Wireless Channels in the Mobile Space]]. This guide is focused on providing clear, simple, actionable guidance for securing the channel in a hostile environment where actors could be malicious and the conference of trust a liability. Additional presentation material included [[Media:pubkey-pin-supplement.pdf|supplement with code excerpts]], [[Media:pubkey-pin-android.zip|Android sample program]], [[Media:pubkey-pin-ios.zip|iOS sample program]], [[Media:pubkey-pin-dotnet.zip|.Net sample program]], and [[Media:pubkey-pin-openssl.zip|OpenSSL sample program]].&lt;br /&gt;
&lt;br /&gt;
A cheat sheet is available at [[Pinning_Cheat_Sheet|Pinning Cheat Sheet]].&lt;br /&gt;
&lt;br /&gt;
== Introduction == &lt;br /&gt;
&lt;br /&gt;
Secure channels are a cornerstone for users and employees working remotely and on the go. Users and developers expect end-to-end security when sending and receiving data - especially sensitive data - on channels protected by VPN, SSL, or TLS. While organizations which control their own DNS and CA have likely reduced risk to trivial levels under most threat models, users and developers subject to others' DNS and a public CA hierarchy are exposed to non-trivial amounts of risk. In fact, history has shown those relying on outside services have suffered chronic breaches in their secure channels.&lt;br /&gt;
&lt;br /&gt;
The pandemic abuse of trust has resulted in users, developers and applications making security related decisions on untrusted input. The situation is somewhat of a paradox: entities such as DNS and CAs are trusted and supposed to supply trusted input; yet their input cannot be trusted. Relying on untrusted input for security related decisions is not only bad karma, it violates a number of secure coding principles (see, for example, OWASP's [[Injection Theory]] and [[Data Validation]]).&lt;br /&gt;
&lt;br /&gt;
Pinning effectively removes the &amp;quot;conference of trust&amp;quot;. An application which pins a certificate or public key no longer needs to depend on others - such as DNS or CAs - when making security decisions relating to a peer's identity. For those familiar with SSH, public key pinning is nearly identical to SSH's &amp;lt;tt&amp;gt;StrictHostKeyChecking&amp;lt;/tt&amp;gt; option. SSH had it right the entire time, and the rest of the world is beginning to realize the virtues of directly identifying a host or service by its public key.&lt;br /&gt;
&lt;br /&gt;
Others who actively engage in pinning include Google and its browser Chrome. Chrome was successful in detecting the DigiNotar compromise which uncovered suspected interception by the Iranian government on its citizens. The initial report of the compromise can be found at ''[https://productforums.google.com/d/topic/gmail/3J3r2JqFNTw/discussion Is This MITM Attack to Gmail's SSL?]''; and Google Security's immediate response at ''[https://googleonlinesecurity.blogspot.com/2011/08/update-on-attempted-man-in-middle.html An update on attempted man-in-the-middle attacks]''.&lt;br /&gt;
&lt;br /&gt;
== What's the problem? ==&lt;br /&gt;
&lt;br /&gt;
Users, developers, and applications expect end-to-end security on their secure channels, but some secure channels are not meeting the expectation. Specifically, channels built using well known protocols such as VPN, SSL, and TLS can be vulnerable to a number of attacks.&lt;br /&gt;
&lt;br /&gt;
Examples of past failures are listed on the discussion tab for this article. This cheat sheet does not attempt to catalogue the failures in the industry, investigate the design flaws in the scaffolding, justify the lack of accountability or liability with the providers, explain the race to the bottom in services, or demystify the collusion between, for example, Browsers and CAs. For additional reading, please visit ''[http://www.cs.auckland.ac.nz/~pgut001/pubs/pkitutorial.pdf PKI is Broken]'' and ''[http://blog.cryptographyengineering.com/2012/02/how-to-fix-internet.html The Internet is Broken]''.&lt;br /&gt;
&lt;br /&gt;
=== Patient 0 ===&lt;br /&gt;
&lt;br /&gt;
The original problem was the ''Key Distribution Problem''. An insecure communication problem can be transformed into a secure communication problem with encryption. An encrypted communication problem can be transformed into an identity problem with signatures. The identity problem terminates at the key distribution problem. They are the same problem.&lt;br /&gt;
&lt;br /&gt;
=== The Cures ===&lt;br /&gt;
&lt;br /&gt;
There are a few cures for the key distribution problem. The first is to have first-hand knowledge of your partner or peer (i.e., a peer, server or service). This could be solved with SneakerNet. Unfortunately, SneakerNet does not scale and cannot be used to solve the key distribution problem.&lt;br /&gt;
&lt;br /&gt;
The second is to rely on others, and it has two variants: (1) web of trust, and (2) hierarchy of trust. Web of Trust and Hierarchy of Trust solve the key distribution problem in a sterile environment. However, Web of Trust and Hierarchy of Trust each requires us to rely on others - or '''confer trust'''. In practice, trusting others is proving to be problematic.&lt;br /&gt;
&lt;br /&gt;
== What Is Pinning? ==&lt;br /&gt;
&lt;br /&gt;
Pinning is the process of associating a host with its ''expected'' X509 certificate or public key. Once a certificate or public key is known or seen for a host, the certificate or public key is associated or 'pinned' to the host. If more than one certificate or public key is acceptable, then the program holds a ''pinset'' (a term taken from the [https://developers.google.com/events/io/sessions/gooio2012/107/ Jon Larimer and Kenny Root Google I/O talk]). In this case, the advertised identity must match one of the elements in the pinset.&lt;br /&gt;
&lt;br /&gt;
A host or service's certificate or public key can be added to an application at development time, or it can be added upon first encountering the certificate or public key. The former - adding at development time - is preferred since ''preloading'' the certificate or public key ''out of band'' usually means the attacker cannot taint the pin. If the certificate or public key is added upon first encounter, you will be using ''key continuity''. Key continuity can fail if the attacker has a privileged position during the first encounter.&lt;br /&gt;
&lt;br /&gt;
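A preloaded pinset check can be sketched in Java as follows. This is a minimal illustration, not part of the sample programs; the pin value and the &lt;tt&gt;PinsetCheck&lt;/tt&gt; class and method names are assumptions.&lt;br /&gt;

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of a preloaded pinset: SHA-256 digests (hex) of the acceptable
// subjectPublicKeyInfo blobs, embedded at development time. The pin value
// below is hypothetical.
public class PinsetCheck {
    static final Set<String> PINSET = new HashSet<String>(Arrays.asList(
        "d968f3f1f2e5c36e0d0b6a4b2e2f6a1c0000000000000000000000000000abcd"
    ));

    // Hex-encode the SHA-256 digest of a DER-encoded object.
    static String sha256Hex(byte[] spki) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(spki);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // The advertised identity must match one of the elements in the pinset.
    static boolean checkPin(byte[] spki) {
        return PINSET.contains(sha256Hex(spki));
    }
}
```

&lt;br /&gt;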
Pinning leverages knowledge of the pre-existing relationship between the user and an organization or service to help make better security related decisions. Because you already have information on the server or service, you don't need to rely on generalized mechanisms meant to solve the ''key distribution'' problem. That is, you don't need to turn to DNS for name/address mappings or CAs for bindings and status. One exception is revocation, which is discussed below in [[#Pinning_Gaps|Pinning Gaps]].&lt;br /&gt;
&lt;br /&gt;
It is also worth mentioning that Pinning is not Stapling. Stapling sends both the certificate and the OCSP responder's response in the same handshake, avoiding the additional fetches the client would otherwise perform during path validation.&lt;br /&gt;
&lt;br /&gt;
=== When Do You Pin? ===&lt;br /&gt;
&lt;br /&gt;
You should pin anytime you want to be relatively certain of the remote host's identity or when operating in a hostile environment. Since one or both are almost always true, you should probably pin all the time.&lt;br /&gt;
&lt;br /&gt;
A perfect case in point: during the two weeks or so of preparation for the presentation and cheat sheet, we've observed three relevant and related failures. First was [http://gaurangkp.wordpress.com/2013/01/09/nokia-https-mitm/ Nokia/Opera willfully breaking the secure channel]; second was [http://blog.malwarebytes.org/intelligence/2013/02/digital-certificates-and-malware-a-dangerous-mix/ DigiCert issuing a code signing certificate for malware]; and third was [http://krebsonsecurity.com/2013/02/security-firm-bit9-hacked-used-to-spread-malware/ Bit9's loss of its root signing key]. The environment is not only hostile, it's toxic.&lt;br /&gt;
&lt;br /&gt;
=== When Do You Whitelist? ===&lt;br /&gt;
&lt;br /&gt;
If you are working for an organization which practices &amp;quot;egress filtering&amp;quot; as part of a Data Loss Prevention (DLP) strategy, you will likely encounter ''Interception Proxies''. I like to refer to these things as '''&amp;quot;good&amp;quot; bad guys''' (as opposed to '''&amp;quot;bad&amp;quot; bad guys''') since both break end-to-end security and we can't tell them apart. In this case, '''do not''' offer to whitelist the interception proxy since it defeats your security goals. Add the interception proxy's public key to your pinset after being '''instructed''' to do so by the folks in Risk Acceptance.&lt;br /&gt;
&lt;br /&gt;
Note: if you whitelist a certificate or public key for a different host (for example, to accommodate an interception proxy), you are no longer pinning the expected certificates and keys for the host. Security and integrity on the channel could suffer, and it surely breaks end-to-end security expectations of users and organizations.&lt;br /&gt;
&lt;br /&gt;
For more reading on interception proxies, the additional risk they bestow, and how they fail, see Dr. Matthew Green's ''[http://blog.cryptographyengineering.com/2012/03/how-do-interception-proxies-fail.html How do Interception Proxies fail?]'' and Jeff Jarmoc's BlackHat talk ''[https://www.blackhat.com/html/bh-eu-12/bh-eu-12-archives.html#jarmoc SSL/TLS Interception Proxies and Transitive Trust]''.&lt;br /&gt;
&lt;br /&gt;
=== How Do You Pin? ===&lt;br /&gt;
&lt;br /&gt;
The idea is to re-use the existing protocols and infrastructure, but use them in a hardened manner. For re-use, a program would keep doing the things it used to do when establishing a secure connection.&lt;br /&gt;
&lt;br /&gt;
To harden the channel, the program would take advantage of the &amp;lt;tt&amp;gt;OnConnect&amp;lt;/tt&amp;gt; callback offered by a library, framework or platform. In the callback, the program would verify the remote host's identity by validating its certificate or public key. While pinning does not have to occur in an &amp;lt;tt&amp;gt;OnConnect&amp;lt;/tt&amp;gt; callback, it's often most convenient because the underlying connection information is readily available.&lt;br /&gt;
&lt;br /&gt;
== What Should Be Pinned? ==&lt;br /&gt;
&lt;br /&gt;
The first thing to decide is what should be pinned. For this choice, you have two options: you can (1) pin  the certificate; or (2) pin the public key. If you choose public keys, you have two additional choices: (a) pin the &amp;lt;tt&amp;gt;subjectPublicKeyInfo&amp;lt;/tt&amp;gt;; or (b) pin one of the concrete types such as &amp;lt;tt&amp;gt;RSAPublicKey&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;DSAPublicKey&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The three choices are explained below in more detail. I would encourage you to pin the &amp;lt;tt&amp;gt;subjectPublicKeyInfo&amp;lt;/tt&amp;gt; because it has the public parameters (such as &amp;lt;tt&amp;gt;{e,n}&amp;lt;/tt&amp;gt; for an RSA public key) '''and''' contextual information such as an algorithm and OID. The context will help you keep your bearings at times, and Figure 1 below shows the additional information available.&lt;br /&gt;
&lt;br /&gt;
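In Java, for example, the DER &lt;tt&gt;subjectPublicKeyInfo&lt;/tt&gt; is directly available from a public key object, which makes it a natural pinning target. The sketch below is illustrative and not part of the sample programs; a freshly generated key stands in for a real server's key.&lt;br /&gt;

```java
import java.security.KeyPairGenerator;
import java.security.PublicKey;

// In Java, getEncoded() on a public key returns the DER-encoded
// subjectPublicKeyInfo, and getFormat() reports "X.509" - so the
// bytes to pin are available without any ASN.1 parsing.
public class SpkiDemo {
    static byte[] spkiOf(PublicKey key) {
        return key.getEncoded(); // DER subjectPublicKeyInfo
    }

    // Generate a throwaway RSA key so the demo is self-contained.
    static PublicKey freshRsaKey() {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            return kpg.generateKeyPair().getPublic();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        PublicKey pub = freshRsaKey();
        System.out.println(pub.getFormat());        // X.509
        System.out.println(spkiOf(pub)[0] == 0x30); // true: DER SEQUENCE tag
    }
}
```

&lt;br /&gt;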
=== Encodings/Formats ===&lt;br /&gt;
&lt;br /&gt;
For the purposes of this article, the objects are in X509-compatible presentation format (PKCS#1 defers to X509, both of which use ASN.1). If you have a PEM encoded object (for example, &amp;lt;tt&amp;gt;-----BEGIN CERTIFICATE-----&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-----END CERTIFICATE-----&amp;lt;/tt&amp;gt;), then convert the object to DER encoding. Conversion using OpenSSL is offered below in [[#Format_Conversions|Format Conversions]].&lt;br /&gt;
&lt;br /&gt;
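PEM is simply Base64-wrapped DER between BEGIN/END markers, so the conversion can also be sketched in Java. The &lt;tt&gt;PemToDer&lt;/tt&gt; class below is an illustrative assumption; the article's own conversions use OpenSSL and appear in Format Conversions.&lt;br /&gt;

```java
import java.util.Base64;

// PEM is Base64-wrapped DER between "-----BEGIN ...-----" and
// "-----END ...-----" markers; stripping the armor and decoding the
// Base64 body recovers the DER bytes.
public class PemToDer {
    static byte[] pemToDer(String pem) {
        String body = pem
            .replaceAll("-----BEGIN [A-Z0-9 ]+-----", "")
            .replaceAll("-----END [A-Z0-9 ]+-----", "")
            .replaceAll("\\s", "");
        return Base64.getDecoder().decode(body);
    }

    static String derToPem(byte[] der, String label) {
        return "-----BEGIN " + label + "-----\n"
            + Base64.getMimeEncoder(64, "\n".getBytes()).encodeToString(der)
            + "\n-----END " + label + "-----\n";
    }
}
```

&lt;br /&gt;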
A certificate is an object which binds an entity (such as a person or organization) to a public key via a signature. The certificate is DER encoded, and has associated data or attributes such as ''Subject'' (who is identified or bound), ''Issuer'' (who signed it), ''Validity'' (''NotBefore'' and ''NotAfter''), and a ''Public Key''.&lt;br /&gt;
&lt;br /&gt;
A certificate has a ''subjectPublicKeyInfo''. The subjectPublicKeyInfo is a key with additional information. The ASN.1 type includes an ''Algorithm ID'', a ''Version'', and an extensible format to hold a concrete public key. Figures 1 and 2 below show different views of the same RSA key, which is the subjectPublicKeyInfo. The key is for the site [https://www.random.org random.org], and it is used in the sample programs and listings below.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:random-org-der-dump.png|thumb|375px|Figure 1: subjectPublicKeyInfo dumped with dumpasn1]]&lt;br /&gt;
| [[File:random-org-der-hex.png|thumb|375px|Figure 2: subjectPublicKeyInfo under a hex editor]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The concrete public key is an encoded public key. The key format will usually be specified elsewhere - for example, PKCS#1 in the case of RSA Public Keys. In the case of an RSA public key, the type is ''RSAPublicKey'' and the parameters &amp;lt;tt&amp;gt;{e,n}&amp;lt;/tt&amp;gt; will be ASN.1 encoded. Figures 1 and 2 above clearly show the modulus (''n'' at line 28) and exponent (''e'' at line 289). For DSA, the concrete type is DSAPublicKey and the ASN.1 encoded parameters would be &amp;lt;tt&amp;gt;{p,q,g,y}&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
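In Java, for example, the RSA parameters are available directly from the concrete key type. This is a small sketch, not part of the sample programs; a freshly generated key stands in for random.org's real key.&lt;br /&gt;

```java
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

// The concrete RSA public key carries the parameters {e,n}; Java's
// RSAPublicKey interface exposes them without touching the ASN.1.
public class RsaParams {
    // Generate a throwaway key of the requested size and report its
    // modulus length.
    static int freshModulusBits(int bits) {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(bits);
            RSAPublicKey pub = (RSAPublicKey) kpg.generateKeyPair().getPublic();
            System.out.println("e = " + pub.getPublicExponent()); // typically 65537
            return pub.getModulus().bitLength();                  // n's size
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

&lt;br /&gt;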
Final takeaways: (1) a certificate binds an entity to a public key; (2) a certificate has a subjectPublicKeyInfo; and (3) a subjectPublicKeyInfo has a concrete public key. For those who want to learn more, a more in-depth discussion from a programmer's perspective can be found at the Code Project's article ''[http://www.codeproject.com/Articles/25487/Cryptographic-Interoperability-Keys Cryptographic Interoperability: Keys]''.&lt;br /&gt;
&lt;br /&gt;
=== Certificate ===&lt;br /&gt;
&lt;br /&gt;
[[File:pin-cert.png|thumb|right|100px|Certificate]] The certificate is easiest to pin. You can fetch the certificate out of band for the website, have the IT folks email your company certificate to you, use &amp;lt;tt&amp;gt;openssl s_client&amp;lt;/tt&amp;gt; to retrieve the certificate etc. When the certificate expires, you would update your application. Assuming your application has no bugs or security defects, the application would be updated every year or two.&lt;br /&gt;
&lt;br /&gt;
At runtime, you retrieve the website or server's certificate in the callback. Within the callback, you compare the retrieved certificate with the certificate embedded within the program. If the comparison fails, then fail the method or function. &lt;br /&gt;
&lt;br /&gt;
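The comparison itself can be sketched as a byte-for-byte check of the DER certificates. The &lt;tt&gt;CertPin&lt;/tt&gt; class and method names below are illustrative; in practice the presented bytes would come from &lt;tt&gt;chain[0].getEncoded()&lt;/tt&gt; inside the callback.&lt;br /&gt;

```java
import java.security.MessageDigest;

// Sketch of the pinning comparison: the certificate presented at runtime
// versus the copy embedded in the program, both as DER bytes.
public class CertPin {
    static boolean matchesPin(byte[] presentedDer, byte[] embeddedDer) {
        // MessageDigest.isEqual compares the byte arrays without
        // short-circuiting on the first mismatch.
        return MessageDigest.isEqual(presentedDer, embeddedDer);
    }
}
```

If the comparison returns false, the connection attempt should fail, just as the surrounding text describes.&lt;br /&gt;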
There is a downside to pinning a certificate. If the site rotates its certificate on a regular basis, then your application would need to be updated regularly. For example, Google rotates its certificates, so you will need to update your application about once a month (if it depended on Google services). Even though Google rotates its certificates, the underlying public keys (within the certificate) remain static.&lt;br /&gt;
&lt;br /&gt;
=== Public Key ===&lt;br /&gt;
&lt;br /&gt;
[[File:pin-pubkey.png|thumb|right|100px|Public Key]] Public key pinning is more flexible but a little trickier due to the extra steps necessary to extract the public key from a certificate. As with a certificate, the program checks the extracted public key with its embedded copy of the public key.&lt;br /&gt;
&lt;br /&gt;
There are two downsides to public key pinning. First, it's harder to work with keys (versus certificates) since you usually must extract the key from the certificate. Extraction is a minor inconvenience in Java and .Net, but it's uncomfortable in Cocoa/CocoaTouch and OpenSSL. Second, the key is static and may violate key rotation policies.&lt;br /&gt;
&lt;br /&gt;
=== Hashing ===&lt;br /&gt;
&lt;br /&gt;
While the three choices above used DER encoding, it's also acceptable to use a hash of the information (or other transforms). In fact, the original sample programs were written using digested certificates and public keys. The samples were changed to allow a programmer to inspect the objects with tools like &amp;lt;tt&amp;gt;dumpasn1&amp;lt;/tt&amp;gt; and other ASN.1 decoders.&lt;br /&gt;
&lt;br /&gt;
Hashing also provides three additional benefits. First, hashing allows you to anonymize a certificate or public key. This might be important if your application is concerned about leaking information during decompilation and re-engineering.&lt;br /&gt;
&lt;br /&gt;
Second, a digested certificate fingerprint is often available as a native API for many libraries, so it's convenient to use.&lt;br /&gt;
&lt;br /&gt;
Finally, an organization might want to supply a reserve (or back-up) identity in case the primary identity is compromised. Hashing ensures your adversaries do not see the reserved certificate or public key in advance of its use. In fact, Google's IETF draft ''websec-key-pinning'' uses the technique.&lt;br /&gt;
&lt;br /&gt;
== What About X509? ==&lt;br /&gt;
&lt;br /&gt;
PKI{X} and the Internet form an intersection. What Internet users expect and what they receive from CAs could vary wildly. For example, an Internet user has security goals, while a CA has revenue goals and legal goals. Many are surprised to learn that the user is often required to perform host identity verification even though the CA issued the certificate (the details are buried in CA warranties on their certificates and their Certification Practice Statement (CPS)).&lt;br /&gt;
&lt;br /&gt;
There are a number of PKI profiles available. For the Internet, &amp;quot;Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL)&amp;quot;, also known as [http://tools.ietf.org/rfc/rfc5280.txt RFC 5280], is of interest. Since a certificate is specified in the ITU's X509 standard, there are lots of mandatory and optional fields available for validation from both bodies. Because of the disjoint goals among groups, the next section provides guidance.&lt;br /&gt;
&lt;br /&gt;
=== Mandatory Checks ===&lt;br /&gt;
&lt;br /&gt;
All X509 verifications must include:&lt;br /&gt;
&lt;br /&gt;
* A path validation check. The check verifies all the signatures on certificates in the chain are valid under a given PKI. The check begins at the server or service's certificate (the leaf), and proceeds back to a trusted root certificate (the root).&lt;br /&gt;
&lt;br /&gt;
* A validity check, or the &amp;lt;tt&amp;gt;notBefore&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;notAfter&amp;lt;/tt&amp;gt; fields. The &amp;lt;tt&amp;gt;notAfter&amp;lt;/tt&amp;gt; field is especially important since a CA will not warrant the certificate after the date, and it does not have to provide CRL/OCSP updates after the date.&lt;br /&gt;
&lt;br /&gt;
* Revocation status. As with &amp;lt;tt&amp;gt;notAfter&amp;lt;/tt&amp;gt;, revocation is important because the CA will not warrant a certificate once it is listed as revoked. The IETF approved way of checking a certificate's revocation is OCSP and specified in [http://tools.ietf.org/rfc/rfc2560.txt RFC 2560].&lt;br /&gt;
&lt;br /&gt;
=== Optional Checks ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;nowiki&amp;gt;[Mulling over what else to present, and the best way to present it. Subject name? DNS lookups? Key Usage? Algorithms? Geolocation based on IP? Check back soon.]&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Public Key Checks ===&lt;br /&gt;
&lt;br /&gt;
''Quod vide'' (''q.v.''). Verifying the identity of a host with knowledge of its associated/expected public key is pinning.&lt;br /&gt;
&lt;br /&gt;
== Examples of Pinning ==&lt;br /&gt;
&lt;br /&gt;
This section demonstrates certificate and public key pinning in Android Java, iOS, .Net, and OpenSSL. All programs attempt to connect to [https://www.random.org random.org] and fetch bytes (Dr. Mads Haahr participates in AOSP's pinning program, so the site should have a static key). The programs enjoy a pre-existing relationship with the site (more correctly, ''a priori'' knowledge), so they include a copy of the site's public key and pin the identity on the key.&lt;br /&gt;
&lt;br /&gt;
Parameter validation, return value checking, and error checking have been omitted in the code below, but are present in the sample programs, so the sample programs are ready for copy/paste. By far, the most uncomfortable languages are C-based: iOS and OpenSSL.&lt;br /&gt;
&lt;br /&gt;
=== Android ===&lt;br /&gt;
&lt;br /&gt;
Pinning in Android is accomplished through a custom &amp;lt;tt&amp;gt;X509TrustManager&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;X509TrustManager&amp;lt;/tt&amp;gt; should perform the customary X509 checks in addition to performing the pin.&lt;br /&gt;
&lt;br /&gt;
Download: [[Media:pubkey-pin-android.zip|Android sample program]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;public final class PubKeyManager implements X509TrustManager&lt;br /&gt;
{&lt;br /&gt;
  private static String PUB_KEY = &amp;quot;30820122300d06092a864886f70d0101&amp;quot; +&lt;br /&gt;
    &amp;quot;0105000382010f003082010a0282010100b35ea8adaf4cb6db86068a836f3c85&amp;quot; +&lt;br /&gt;
    &amp;quot;5a545b1f0cc8afb19e38213bac4d55c3f2f19df6dee82ead67f70a990131b6bc&amp;quot; +&lt;br /&gt;
    &amp;quot;ac1a9116acc883862f00593199df19ce027c8eaaae8e3121f7f329219464e657&amp;quot; +&lt;br /&gt;
    &amp;quot;2cbf66e8e229eac2992dd795c4f23df0fe72b6ceef457eba0b9029619e0395b8&amp;quot; +&lt;br /&gt;
    &amp;quot;609851849dd6214589a2ceba4f7a7dcceb7ab2a6b60c27c69317bd7ab2135f50&amp;quot; +&lt;br /&gt;
    &amp;quot;c6317e5dbfb9d1e55936e4109b7b911450c746fe0d5d07165b6b23ada7700b00&amp;quot; +&lt;br /&gt;
    &amp;quot;33238c858ad179a82459c4718019c111b4ef7be53e5972e06ca68a112406da38&amp;quot; +&lt;br /&gt;
    &amp;quot;cf60d2f4fda4d1cd52f1da9fd6104d91a34455cd7b328b02525320a35253147b&amp;quot; +&lt;br /&gt;
    &amp;quot;e0b7a5bc860966dc84f10d723ce7eed5430203010001&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
  public void checkServerTrusted(X509Certificate[] chain, String authType) throws CertificateException&lt;br /&gt;
  {&lt;br /&gt;
    if (chain == null) {&lt;br /&gt;
      throw new IllegalArgumentException(&amp;quot;checkServerTrusted: X509Certificate array is null&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    if (!(chain.length &amp;gt; 0)) {&lt;br /&gt;
      throw new IllegalArgumentException(&amp;quot;checkServerTrusted: X509Certificate is empty&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    if (!(null != authType &amp;amp;&amp;amp; authType.equalsIgnoreCase(&amp;quot;RSA&amp;quot;))) {&lt;br /&gt;
      throw new CertificateException(&amp;quot;checkServerTrusted: AuthType is not RSA&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Perform customary SSL/TLS checks&lt;br /&gt;
    try {&lt;br /&gt;
      TrustManagerFactory tmf = TrustManagerFactory.getInstance(&amp;quot;X509&amp;quot;);&lt;br /&gt;
      tmf.init((KeyStore) null);&lt;br /&gt;
      &lt;br /&gt;
      for (TrustManager trustManager : tmf.getTrustManagers()) {&lt;br /&gt;
        ((X509TrustManager) trustManager).checkServerTrusted(chain, authType);&lt;br /&gt;
      }&lt;br /&gt;
    } catch (Exception e) {&lt;br /&gt;
      throw new CertificateException(e);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    // Hack ahead: BigInteger and toString(). We know a DER encoded Public Key begins&lt;br /&gt;
    // with 0x30 (ASN.1 SEQUENCE and CONSTRUCTED), so there is no leading 0x00 to drop.&lt;br /&gt;
    RSAPublicKey pubkey = (RSAPublicKey) chain[0].getPublicKey();&lt;br /&gt;
    String encoded = new BigInteger(1 /* positive */, pubkey.getEncoded()).toString(16);&lt;br /&gt;
&lt;br /&gt;
    // Pin it!&lt;br /&gt;
    final boolean expected = PUB_KEY.equalsIgnoreCase(encoded);&lt;br /&gt;
    if (!expected) {&lt;br /&gt;
      throw new CertificateException(&amp;quot;checkServerTrusted: Expected public key: &amp;quot;&lt;br /&gt;
                + PUB_KEY + &amp;quot;, got public key: &amp;quot; + encoded);&lt;br /&gt;
    }&lt;br /&gt;
  }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;PubKeyManager&amp;lt;/tt&amp;gt; would be used in code similar to the following.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;TrustManager tm[] = { new PubKeyManager() };&lt;br /&gt;
&lt;br /&gt;
SSLContext context = SSLContext.getInstance(&amp;quot;TLS&amp;quot;);&lt;br /&gt;
context.init(null, tm, null);&lt;br /&gt;
&lt;br /&gt;
URL url = new URL( &amp;quot;https://www.random.org/integers/?&amp;quot; +&lt;br /&gt;
                   &amp;quot;num=16&amp;amp;min=0&amp;amp;max=255&amp;amp;col=16&amp;amp;base=10&amp;amp;format=plain&amp;amp;rnd=new&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
HttpsURLConnection connection = (HttpsURLConnection) url.openConnection();&lt;br /&gt;
connection.setSSLSocketFactory(context.getSocketFactory());&lt;br /&gt;
&lt;br /&gt;
InputStreamReader instream = new InputStreamReader(connection.getInputStream());&lt;br /&gt;
StreamTokenizer tokenizer = new StreamTokenizer(instream);&lt;br /&gt;
...&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== iOS ===&lt;br /&gt;
&lt;br /&gt;
iOS pinning is performed through a &amp;lt;tt&amp;gt;NSURLConnectionDelegate&amp;lt;/tt&amp;gt;. The delegate must implement &amp;lt;tt&amp;gt;connection:canAuthenticateAgainstProtectionSpace:&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;connection:didReceiveAuthenticationChallenge:&amp;lt;/tt&amp;gt;. Within &amp;lt;tt&amp;gt;connection:didReceiveAuthenticationChallenge:&amp;lt;/tt&amp;gt;, the delegate must call &amp;lt;tt&amp;gt;SecTrustEvaluate&amp;lt;/tt&amp;gt; to perform customary X509 checks.&lt;br /&gt;
&lt;br /&gt;
Download: [[Media:pubkey-pin-ios.zip|iOS sample program]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-(IBAction)fetchButtonTapped:(id)sender&lt;br /&gt;
{&lt;br /&gt;
    NSString* requestString = @&amp;quot;https://www.random.org/integers/?&amp;quot;&lt;br /&gt;
        @&amp;quot;num=16&amp;amp;min=0&amp;amp;max=255&amp;amp;col=16&amp;amp;base=16&amp;amp;format=plain&amp;amp;rnd=new&amp;quot;;&lt;br /&gt;
    NSURL* requestUrl = [NSURL URLWithString:requestString];&lt;br /&gt;
&lt;br /&gt;
    NSURLRequest* request = [NSURLRequest requestWithURL:requestUrl&lt;br /&gt;
                                             cachePolicy:NSURLRequestReloadIgnoringLocalCacheData&lt;br /&gt;
                                         timeoutInterval:10.0f];&lt;br /&gt;
&lt;br /&gt;
    NSURLConnection* connection = [[NSURLConnection alloc] initWithRequest:request delegate:self];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
-(BOOL)connection:(NSURLConnection *)connection canAuthenticateAgainstProtectionSpace:&lt;br /&gt;
                  (NSURLProtectionSpace*)space&lt;br /&gt;
{&lt;br /&gt;
    return [[space authenticationMethod] isEqualToString: NSURLAuthenticationMethodServerTrust];&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
- (void)connection:(NSURLConnection *)connection didReceiveAuthenticationChallenge:&lt;br /&gt;
                   (NSURLAuthenticationChallenge *)challenge&lt;br /&gt;
{&lt;br /&gt;
  if ([[[challenge protectionSpace] authenticationMethod] isEqualToString: NSURLAuthenticationMethodServerTrust])&lt;br /&gt;
  {&lt;br /&gt;
    do&lt;br /&gt;
    {&lt;br /&gt;
      SecTrustRef serverTrust = [[challenge protectionSpace] serverTrust];&lt;br /&gt;
      if(nil == serverTrust)&lt;br /&gt;
        break; /* failed */&lt;br /&gt;
&lt;br /&gt;
      OSStatus status = SecTrustEvaluate(serverTrust, NULL);&lt;br /&gt;
      if(!(errSecSuccess == status))&lt;br /&gt;
        break; /* failed */&lt;br /&gt;
&lt;br /&gt;
      SecCertificateRef serverCertificate = SecTrustGetCertificateAtIndex(serverTrust, 0);&lt;br /&gt;
      if(nil == serverCertificate)&lt;br /&gt;
        break; /* failed */&lt;br /&gt;
&lt;br /&gt;
      CFDataRef serverCertificateData = SecCertificateCopyData(serverCertificate);&lt;br /&gt;
      [(id)serverCertificateData autorelease];&lt;br /&gt;
      if(nil == serverCertificateData)&lt;br /&gt;
        break; /* failed */&lt;br /&gt;
&lt;br /&gt;
      const UInt8* const data = CFDataGetBytePtr(serverCertificateData);&lt;br /&gt;
      const CFIndex size = CFDataGetLength(serverCertificateData);&lt;br /&gt;
      NSData* cert1 = [NSData dataWithBytes:data length:(NSUInteger)size];&lt;br /&gt;
&lt;br /&gt;
      NSString *file = [[NSBundle mainBundle] pathForResource:@&amp;quot;random-org&amp;quot; ofType:@&amp;quot;der&amp;quot;];&lt;br /&gt;
      NSData* cert2 = [NSData dataWithContentsOfFile:file];&lt;br /&gt;
&lt;br /&gt;
      if(nil == cert1 || nil == cert2)&lt;br /&gt;
        break; /* failed */&lt;br /&gt;
&lt;br /&gt;
      const BOOL equal = [cert1 isEqualToData:cert2];&lt;br /&gt;
      if(!equal)&lt;br /&gt;
        break; /* failed */&lt;br /&gt;
&lt;br /&gt;
      // The only good exit point&lt;br /&gt;
      return [[challenge sender] useCredential: [NSURLCredential credentialForTrust: serverTrust]&lt;br /&gt;
                    forAuthenticationChallenge: challenge];&lt;br /&gt;
    } while(0);&lt;br /&gt;
&lt;br /&gt;
    // Bad dog&lt;br /&gt;
    return [[challenge sender] cancelAuthenticationChallenge: challenge];&lt;br /&gt;
  }&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== .Net ===&lt;br /&gt;
&lt;br /&gt;
.Net pinning can be achieved by using &amp;lt;tt&amp;gt;ServicePointManager&amp;lt;/tt&amp;gt; as shown below.&lt;br /&gt;
&lt;br /&gt;
Download: [[Media:pubkey-pin-dotnet.zip|.Net sample program]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Encoded RSAPublicKey&lt;br /&gt;
private static String PUB_KEY = &amp;quot;30818902818100C4A06B7B52F8D17DC1CCB47362&amp;quot; +&lt;br /&gt;
    &amp;quot;C64AB799AAE19E245A7559E9CEEC7D8AA4DF07CB0B21FDFD763C63A313A668FE9D764E&amp;quot; +&lt;br /&gt;
    &amp;quot;D913C51A676788DB62AF624F422C2F112C1316922AA5D37823CD9F43D1FC54513D14B2&amp;quot; +&lt;br /&gt;
    &amp;quot;9E36991F08A042C42EAAEEE5FE8E2CB10167174A359CEBF6FACC2C9CA933AD403137EE&amp;quot; +&lt;br /&gt;
    &amp;quot;2C3F4CBED9460129C72B0203010001&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
public static void Main(string[] args)&lt;br /&gt;
{&lt;br /&gt;
  ServicePointManager.ServerCertificateValidationCallback = PinPublicKey;&lt;br /&gt;
  WebRequest wr = WebRequest.Create(&amp;quot;https://encrypted.google.com/&amp;quot;);&lt;br /&gt;
  wr.GetResponse();&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
public static bool PinPublicKey(object sender, X509Certificate certificate, X509Chain chain,&lt;br /&gt;
                                SslPolicyErrors sslPolicyErrors)&lt;br /&gt;
{&lt;br /&gt;
  if (null == certificate)&lt;br /&gt;
    return false;&lt;br /&gt;
&lt;br /&gt;
  String pk = certificate.GetPublicKeyString();&lt;br /&gt;
  if (pk.Equals(PUB_KEY))&lt;br /&gt;
    return true;&lt;br /&gt;
&lt;br /&gt;
  // Bad dog&lt;br /&gt;
  return false;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== OpenSSL ===&lt;br /&gt;
&lt;br /&gt;
Pinning can occur at one of two places with OpenSSL: in the user-supplied &amp;lt;tt&amp;gt;verify_callback&amp;lt;/tt&amp;gt;, or after the connection is established, via &amp;lt;tt&amp;gt;SSL_get_peer_certificate&amp;lt;/tt&amp;gt;. Either method allows you to access the peer's certificate.&lt;br /&gt;
&lt;br /&gt;
Though OpenSSL performs the X509 checks, you must fail the connection and tear down the socket on error. By design, a server that does not supply a certificate will result in &amp;lt;tt&amp;gt;X509_V_OK&amp;lt;/tt&amp;gt; with a '''NULL''' certificate. To check the result of the customary verification: (1) you must call &amp;lt;tt&amp;gt;SSL_get_verify_result&amp;lt;/tt&amp;gt; and verify the return code is &amp;lt;tt&amp;gt;X509_V_OK&amp;lt;/tt&amp;gt;; and (2) you must call &amp;lt;tt&amp;gt;SSL_get_peer_certificate&amp;lt;/tt&amp;gt; and verify the certificate is '''non-NULL'''.&lt;br /&gt;
&lt;br /&gt;
Download: [[Media:pubkey-pin-openssl.zip|OpenSSL sample program]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int pkp_pin_peer_pubkey(SSL* ssl)&lt;br /&gt;
{&lt;br /&gt;
    if(NULL == ssl) return FALSE;&lt;br /&gt;
    &lt;br /&gt;
    X509* cert = NULL;&lt;br /&gt;
    FILE* fp = NULL;&lt;br /&gt;
    &lt;br /&gt;
    /* Scratch */&lt;br /&gt;
    int len1 = 0, len2 = 0;&lt;br /&gt;
    unsigned char *buff1 = NULL, *buff2 = NULL;&lt;br /&gt;
    &lt;br /&gt;
    /* Result is returned to caller */&lt;br /&gt;
    int ret = 0, result = FALSE;&lt;br /&gt;
    &lt;br /&gt;
    do&lt;br /&gt;
    {&lt;br /&gt;
        /* http://www.openssl.org/docs/ssl/SSL_get_peer_certificate.html */&lt;br /&gt;
        cert = SSL_get_peer_certificate(ssl);&lt;br /&gt;
        if(!(cert != NULL))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* Begin Gyrations to get the subjectPublicKeyInfo       */&lt;br /&gt;
        /* Thanks to Viktor Dukhovni on the OpenSSL mailing list */&lt;br /&gt;
        &lt;br /&gt;
        /* http://groups.google.com/group/mailing.openssl.users/browse_thread/thread/d61858dae102c6c7 */&lt;br /&gt;
        len1 = i2d_X509_PUBKEY(X509_get_X509_PUBKEY(cert), NULL);&lt;br /&gt;
        if(!(len1 &amp;gt; 0))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* scratch */&lt;br /&gt;
        unsigned char* temp = NULL;&lt;br /&gt;
        &lt;br /&gt;
        /* http://www.openssl.org/docs/crypto/buffer.html */&lt;br /&gt;
        buff1 = temp = OPENSSL_malloc(len1);&lt;br /&gt;
        if(!(buff1 != NULL))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* http://www.openssl.org/docs/crypto/d2i_X509.html */&lt;br /&gt;
        len2 = i2d_X509_PUBKEY(X509_get_X509_PUBKEY(cert), &amp;amp;temp);&lt;br /&gt;
&lt;br /&gt;
        /* These checks are verifying we got back the same values as when we sized the buffer.      */&lt;br /&gt;
        /* It's pretty weak, since they should always be the same. But it gives us something to test. */&lt;br /&gt;
        if(!((len1 == len2) &amp;amp;&amp;amp; (temp != NULL) &amp;amp;&amp;amp; ((temp - buff1) == len1)))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* End Gyrations */&lt;br /&gt;
        &lt;br /&gt;
        /* See the warning above!!!                                            */&lt;br /&gt;
        /* http://pubs.opengroup.org/onlinepubs/009696699/functions/fopen.html */&lt;br /&gt;
        fp = fopen(&amp;quot;random-org.der&amp;quot;, &amp;quot;rx&amp;quot;);&lt;br /&gt;
        if(NULL == fp) {&lt;br /&gt;
            fp = fopen(&amp;quot;random-org.der&amp;quot;, &amp;quot;r&amp;quot;);&lt;br /&gt;
        }&lt;br /&gt;
        &lt;br /&gt;
        if(!(NULL != fp))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* Seek to eof to determine the file's size                            */&lt;br /&gt;
        /* http://pubs.opengroup.org/onlinepubs/009696699/functions/fseek.html */&lt;br /&gt;
        ret = fseek(fp, 0, SEEK_END);&lt;br /&gt;
        if(!(0 == ret))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* Fetch the file's size                                               */&lt;br /&gt;
        /* http://pubs.opengroup.org/onlinepubs/009696699/functions/ftell.html */&lt;br /&gt;
        long size = ftell(fp);&lt;br /&gt;
&lt;br /&gt;
        /* Arbitrary size, but should be relatively small (less than 1K or 2K) */&lt;br /&gt;
        if(!(size != -1 &amp;amp;&amp;amp; size &amp;gt; 0 &amp;amp;&amp;amp; size &amp;lt; 2048))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* Rewind to beginning to perform the read                             */&lt;br /&gt;
        /* http://pubs.opengroup.org/onlinepubs/009696699/functions/fseek.html */&lt;br /&gt;
        ret = fseek(fp, 0, SEEK_SET);&lt;br /&gt;
        if(!(0 == ret))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* Re-use buff2 and len2 */&lt;br /&gt;
        buff2 = NULL; len2 = (int)size;&lt;br /&gt;
        &lt;br /&gt;
        /* http://www.openssl.org/docs/crypto/buffer.html */&lt;br /&gt;
        buff2 = OPENSSL_malloc(len2);&lt;br /&gt;
        if(!(buff2 != NULL))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* http://pubs.opengroup.org/onlinepubs/009696699/functions/fread.html */&lt;br /&gt;
        /* Returns number of elements read, which should be 1 */&lt;br /&gt;
        ret = (int)fread(buff2, (size_t)len2, 1, fp);&lt;br /&gt;
        if(!(ret == 1))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* Re-use size. MIN and MAX macro below... */&lt;br /&gt;
        size = len1 &amp;lt; len2 ? len1 : len2;&lt;br /&gt;
        &lt;br /&gt;
        /*************************/&lt;br /&gt;
        /*****    PAYDIRT    *****/&lt;br /&gt;
        /*************************/&lt;br /&gt;
        if(len1 != (int)size || len2 != (int)size || 0 != memcmp(buff1, buff2, (size_t)size))&lt;br /&gt;
            break; /* failed */&lt;br /&gt;
        &lt;br /&gt;
        /* The one good exit point */&lt;br /&gt;
        result = TRUE;&lt;br /&gt;
        &lt;br /&gt;
    } while(0);&lt;br /&gt;
    &lt;br /&gt;
    if(fp != NULL)&lt;br /&gt;
        fclose(fp);&lt;br /&gt;
    &lt;br /&gt;
    /* http://www.openssl.org/docs/crypto/buffer.html */&lt;br /&gt;
    if(NULL != buff2)&lt;br /&gt;
        OPENSSL_free(buff2);&lt;br /&gt;
    &lt;br /&gt;
    /* http://www.openssl.org/docs/crypto/buffer.html */&lt;br /&gt;
    if(NULL != buff1)&lt;br /&gt;
        OPENSSL_free(buff1);&lt;br /&gt;
    &lt;br /&gt;
    /* http://www.openssl.org/docs/crypto/X509_new.html */&lt;br /&gt;
    if(NULL != cert)&lt;br /&gt;
        X509_free(cert);&lt;br /&gt;
    &lt;br /&gt;
    return result;&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Pinning Alternatives ==&lt;br /&gt;
&lt;br /&gt;
Not all applications use split key cryptography. Fortunately, there are protocols that allow you to set up a secure channel based on knowledge of a password or pre-shared secret, rather than putting the secret on the wire in a basic authentication scheme. Two are listed below - SRP and PSK. SRP and PSK have [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 88 cipher suites assigned to them by IANA for TLS], so there's no shortage of choices.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:pin-iana-assigned.png|thumb|450px|Figure 3: IANA reserved cipher suites for SRP and PSK]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== SRP ===&lt;br /&gt;
&lt;br /&gt;
Secure Remote Password (SRP) is a Password Authenticated Key Exchange (PAKE) by Thomas Wu, based upon Diffie-Hellman. The protocol is standardized in [https://tools.ietf.org/rfc/rfc5054.txt RFC 5054] and available in the OpenSSL library (among others). In the SRP scheme, the server uses a verifier that consists of a &amp;lt;tt&amp;gt;{salt, hash(password)}&amp;lt;/tt&amp;gt; pair. The user has the password and receives the salt from the server. With lots of hand waving, both parties select per-instance random values (nonces) and execute the protocol using ''g&amp;lt;sup&amp;gt;{(salt + password)|verifier} + nonces&amp;lt;/sup&amp;gt;'' rather than traditional Diffie-Hellman using ''g&amp;lt;sup&amp;gt;ab&amp;lt;/sup&amp;gt;''.&lt;br /&gt;
&lt;br /&gt;
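As a sketch of the server-side setup, the verifier can be computed directly from the RFC 5054 formulas: x is a hash derived from the salt, username, and password, and the verifier is ''v = g&amp;lt;sup&amp;gt;x&amp;lt;/sup&amp;gt; mod N''. The group parameters and salt below are toy values chosen for illustration, not the standardized RFC 5054 groups.&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib

def srp_x(salt: bytes, username: str, password: str) -> int:
    # x = SHA1(salt | SHA1(username ":" password)), per RFC 5054, section 2.5.3
    inner = hashlib.sha1((username + ":" + password).encode()).digest()
    return int.from_bytes(hashlib.sha1(salt + inner).digest(), "big")

def srp_verifier(N: int, g: int, salt: bytes, username: str, password: str) -> int:
    # v = g^x mod N; the server stores {salt, v} and never the password itself
    return pow(g, srp_x(salt, username, password), N)

# Toy parameters for illustration only (use the RFC 5054 groups in practice)
N, g = 2**127 - 1, 2
salt = b"\x0a\x0b\x0c\x0d"
v = srp_verifier(N, g, salt, "alice", "password123")
```
&lt;br /&gt;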
[[File:homer-p-np.jpg|thumb|right|150px|P=NP!!!]]Diffie-Hellman based schemes are part of a family of problems based on Discrete Logs (DL), which are logarithms over a finite field. DL schemes are appealing because they are known to be hard (unless ''P=NP'', which would cause computational number theorists to have a cow).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== PSK ===&lt;br /&gt;
&lt;br /&gt;
PSK is Pre-Shared Key and specified in [https://tools.ietf.org/rfc/rfc4279.txt RFC 4279] and [https://tools.ietf.org/rfc/rfc4764.txt RFC 4764]. The shared secret is used as a pre-master secret in TLS-PSK for SSL/TLS; or used to key a block cipher in EAP-PSK. EAP-PSK is designed for authentication over insecure networks such as IEEE 802.11.&lt;br /&gt;
&lt;br /&gt;
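As a sketch, the premaster secret for the plain PSK key exchange is assembled as described in RFC 4279, section 2: a two-octet length N, then N zero octets, then the two-octet length again, then the key itself. The key below is a placeholder.&lt;br /&gt;
&lt;br /&gt;
```python
import struct

def psk_premaster_secret(psk: bytes) -> bytes:
    # RFC 4279, section 2 (plain PSK): uint16 N, N zero octets, uint16 N, psk
    n = len(psk)
    return struct.pack("!H", n) + b"\x00" * n + struct.pack("!H", n) + psk

# Placeholder key for illustration
pms = psk_premaster_secret(b"sixteen byte key")
```
&lt;br /&gt;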
== Miscellaneous ==&lt;br /&gt;
&lt;br /&gt;
This section covers administrivia and miscellaneous items related to pinning.&lt;br /&gt;
&lt;br /&gt;
=== Ephemeral Keys ===&lt;br /&gt;
&lt;br /&gt;
Ephemeral keys are temporary keys used for one instance of a protocol execution and then thrown away. An ephemeral key has the benefit of providing forward secrecy: a compromise of the site or service's long-term (static) signing key does not allow past messages to be decrypted, because the ephemeral keys were discarded once each session terminated.&lt;br /&gt;
&lt;br /&gt;
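The forward secrecy property can be seen in a toy finite-field Diffie-Hellman exchange: both sides derive the same secret from per-session exponents, and the exponents are then discarded. The modulus and generator below are illustrative toys, not a real TLS group.&lt;br /&gt;
&lt;br /&gt;
```python
import secrets

# Toy group parameters for illustration (not a real TLS group)
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 2   # client's ephemeral exponent
b = secrets.randbelow(p - 2) + 2   # server's ephemeral exponent

A, B = pow(g, a, p), pow(g, b, p)  # public values exchanged in the handshake

client_secret = pow(B, a, p)       # both sides arrive at the same secret
server_secret = pow(A, b, p)

# a and b are now discarded; a later compromise of the server's long-term
# signing key does not recover this session's secret
```
&lt;br /&gt;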
Ephemeral keys do not affect pinning because the Ephemeral key is delivered in a separate &amp;lt;tt&amp;gt;ServerKeyExchange&amp;lt;/tt&amp;gt; message. In addition, the ephemeral key is a key and not a certificate, so it does not change the construction of the certificate chain. That is, the certificate of interest will still be located at &amp;lt;tt&amp;gt;certificates[0]&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=== Pinning Gaps ===&lt;br /&gt;
&lt;br /&gt;
There are two gaps when pinning due to reuse of the existing infrastructure and protocols. First, an explicit challenge is '''not''' sent by the program to the peer server based on the server's public information, so the program never learns whether the peer can actually decrypt messages. However, the shortcoming is usually academic in practice, since an adversary who lacks the private key will only receive messages it cannot decrypt.&lt;br /&gt;
&lt;br /&gt;
Second is revocation. Clients don't usually engage in revocation checking, so it could be possible to use a known bad certificate or key in a pinset. Even if revocation is active, Certificate Revocation Lists (CRLs) and Online Certificate Status Protocol (OCSP) can be defeated in a hostile environment. An application can take steps to remediate, with the primary means being freshness. That is, an application should be updated and distributed immediately when a critical security parameter changes.&lt;br /&gt;
&lt;br /&gt;
=== No Relationship ^@$! ===&lt;br /&gt;
&lt;br /&gt;
If you don't have a pre-existing relationship, all is not lost. First, you can pin a host or server's certificate or public key the first time you encounter it (sometimes called trust on first use, or TOFU). If the bad guy was not active when you first encountered the certificate or public key, he or she will not be successful with future funny business.&lt;br /&gt;
&lt;br /&gt;
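A minimal sketch of pin-on-first-encounter, assuming some persistent store keyed by host name (an in-memory dict stands in for it here), might look like the following. A real implementation would typically hash the subjectPublicKeyInfo rather than the whole certificate when pinning keys rather than certificates.&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib

_pins = {}  # host -> hex digest; stands in for persistent storage

def check_pin(host, cert_der):
    digest = hashlib.sha256(cert_der).hexdigest()
    if host not in _pins:
        _pins[host] = digest        # first encounter: remember the identity
        return True
    return _pins[host] == digest    # later encounters must match the pin

first = check_pin("example.net", b"dummy-der-bytes")
same = check_pin("example.net", b"dummy-der-bytes")
changed = check_pin("example.net", b"different-der-bytes")
```
&lt;br /&gt;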
Second, bad certificates are being spotted quicker in the field due to projects like [http://www.chromium.org Chromium] and [https://addons.mozilla.org/en-us/firefox/addon/certificate-patrol/ Certificate Patrol], and initiatives like the EFF's [https://www.eff.org/observatory SSL Observatory].&lt;br /&gt;
&lt;br /&gt;
Third, help is on its way, and there are a number of futures that will assist with the endeavors:&lt;br /&gt;
&lt;br /&gt;
* Public Key Pinning (http://www.ietf.org/id/draft-ietf-websec-key-pinning-04.txt) – an extension to the HTTP protocol allowing web host operators to instruct user agents (UAs) to remember (&amp;quot;pin&amp;quot;) the hosts' cryptographic identities for a given period of time.&lt;br /&gt;
* DNS-based Authentication of Named Entities (DANE) (https://datatracker.ietf.org/doc/rfc6698/) - uses secure DNS (DNSSEC with TLSA records) to associate certificates with domain names, for uses such as TLS, SMTP with TLS, and S/MIME.&lt;br /&gt;
* Sovereign Keys (http://www.eff.org/sovereign-keys) - operates by providing an optional and secure way of associating domain names with public keys via DNSSEC. PKI (hierarchical) is still used. Semi-centralized with append only logging.&lt;br /&gt;
* Convergence (http://convergence.io) – different [geographical] views of a site and its associated data (certificates and public keys). Web of Trust is used. Semi-centralized.&lt;br /&gt;
&lt;br /&gt;
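For example, the Public Key Pinning draft has a host declare its pins through an HTTP response header roughly like the one below; the pin value is a placeholder, and the exact directive syntax varied across draft revisions.&lt;br /&gt;
&lt;br /&gt;
```
Public-Key-Pins: pin-sha256="base64EncodedSPKIHash="; max-age=2592000; includeSubDomains
```
&lt;br /&gt;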
While Sovereign Keys and Convergence still require us to confer trust to outside parties, the parties involved do not serve shareholders or covet revenue streams. Their interests are industry transparency and user security.&lt;br /&gt;
&lt;br /&gt;
=== More Information? ===&lt;br /&gt;
&lt;br /&gt;
Pinning is an ''old new thing'' that has been shaken, stirred, and repackaged. While &amp;quot;pinning&amp;quot; and &amp;quot;pinsets&amp;quot; are relatively new terms for old things, Jon Larimer and Kenny Root spent time on the subject at Google I/O 2012 with their talk ''[https://developers.google.com/events/io/sessions/gooio2012/107/ Security and Privacy in Android Apps]''.&lt;br /&gt;
&lt;br /&gt;
=== Format Conversions ===&lt;br /&gt;
&lt;br /&gt;
As a convenience to readers, the following commands will convert between PEM and DER format using OpenSSL.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Public key, X509&lt;br /&gt;
$ openssl genrsa -out rsa-openssl.pem 3072&lt;br /&gt;
$ openssl rsa -in rsa-openssl.pem -pubout -outform DER -out rsa-openssl.der&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Private key, PKCS#8&lt;br /&gt;
$ openssl genrsa -out rsa-openssl.pem 3072&lt;br /&gt;
$ openssl pkcs8 -nocrypt -in rsa-openssl.pem -inform PEM -topk8 -outform DER -out rsa-openssl.der&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&lt;br /&gt;
* OWASP [[Injection_Theory|Injection Theory]]&lt;br /&gt;
* OWASP [[Data_Validation|Data Validation]]&lt;br /&gt;
* OWASP [[Transport_Layer_Protection_Cheat_Sheet|Transport Layer Protection Cheat Sheet]]&lt;br /&gt;
* IETF [http://www.ietf.org/id/draft-ietf-websec-key-pinning-04.txt Public Key Pinning]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc5054.txt RFC 5054 (SRP)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc4764.txt RFC 4764 (EAP-PSK)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc1421.txt RFC 1421 (PEM Encoding)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 (Internet X.509, PKIX)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc4648.txt RFC 4648 (Base16, Base32, and Base64 Encodings)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc3279.txt RFC 3279 (PKI, X509 Algorithms and CRL Profiles)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc4055.txt RFC 4055 (PKI, X509 Additional Algorithms and CRL Profiles)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 (TLS 1.0)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 (TLS 1.1)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 (TLS 1.2)]&lt;br /&gt;
* IETF [http://www.ietf.org/rfc/rfc6698.txt RFC 6698, Draft (DANE)]&lt;br /&gt;
* EFF [http://www.eff.org/sovereign-keys Sovereign Keys]&lt;br /&gt;
* Thoughtcrime Labs [http://convergence.io/ Convergence]&lt;br /&gt;
* RSA Laboratories [http://www.rsa.com/rsalabs/node.asp?id=2125 PKCS#1, RSA Encryption Standard]&lt;br /&gt;
* RSA Laboratories [http://www.rsa.com/rsalabs/node.asp?id=2128 PKCS#6, Extended-Certificate Syntax Standard]&lt;br /&gt;
* ITU [http://www.itu.int/rec/T-REC-X.690-200811-I/en Specification of Basic Encoding Rules (BER), Canonical Encoding Rules (CER) and Distinguished Encoding Rules (DER)]&lt;br /&gt;
* TOR Project [https://blog.torproject.org/blog/detecting-certificate-authority-compromises-and-web-browser-collusion Detecting Certificate Authority Compromises and Web Browser Collusion]&lt;br /&gt;
* Code Project [http://www.codeproject.com/Articles/25487/Cryptographic-Interoperability-Keys Cryptographic Interoperability: Keys]&lt;br /&gt;
* Google I/O [https://developers.google.com/events/io/sessions/gooio2012/107/ Security and Privacy in Android Apps]&lt;br /&gt;
* Trevor Perrin [https://crypto.stanford.edu/RealWorldCrypto/slides/perrin.pdf Transparency, Trust Agility, Pinning (Recent Developments in Server Authentication)]&lt;br /&gt;
* Dr. Peter Gutmann's [http://www.cs.auckland.ac.nz/~pgut001/pubs/pkitutorial.pdf PKI is Broken]&lt;br /&gt;
* Dr. Matthew Green's [http://blog.cryptographyengineering.com/2012/02/how-to-fix-internet.html The Internet is Broken]&lt;br /&gt;
* Dr. Matthew Green's [http://blog.cryptographyengineering.com/2012/03/how-do-interception-proxies-fail.html How do Interception Proxies fail?]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* JohnSteven - john, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=File:Securing-Wireless-Channels-in-the-Mobile-Space.ppt&amp;diff=150903</id>
		<title>File:Securing-Wireless-Channels-in-the-Mobile-Space.ppt</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=File:Securing-Wireless-Channels-in-the-Mobile-Space.ppt&amp;diff=150903"/>
				<updated>2013-05-03T01:51:51Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: uploaded a new version of &amp;amp;quot;File:Securing-Wireless-Channels-in-the-Mobile-Space.ppt&amp;amp;quot;: Updated slide deck. Used at Baltimore, MD OWASP meeting 02-MAY-2013.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Presentation of &amp;quot;Securing Wireless Channels in the Mobile Space&amp;quot; given in Northern Virginia on February 7, 2013.&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=File:Securing_Wireless_Channels_in_the_Mobile_Space.ppt&amp;diff=150902</id>
		<title>File:Securing Wireless Channels in the Mobile Space.ppt</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=File:Securing_Wireless_Channels_in_the_Mobile_Space.ppt&amp;diff=150902"/>
				<updated>2013-05-03T01:49:42Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Updated slide deck. Used at Baltimore, MD OWASP meeting 02-MAY-2013.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Updated slide deck. Used at Baltimore, MD OWASP meeting 02-MAY-2013.&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=File:Pubkey-pin-ios.zip&amp;diff=150901</id>
		<title>File:Pubkey-pin-ios.zip</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=File:Pubkey-pin-ios.zip&amp;diff=150901"/>
				<updated>2013-05-03T01:47:17Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: uploaded a new version of &amp;amp;quot;File:Pubkey-pin-ios.zip&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;iOS program code for &amp;quot;Securing Wireless Channels in the Mobile Space&amp;quot; presentation (Northern Virginia, 2013-02-07)&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=150647</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=150647"/>
				<updated>2013-04-28T19:56:41Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added &amp;quot;shared secret or password&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted. The most common options available to corporations are Virtual Private Networks (VPN) or an SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet-facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client. If configured to require client side certificates, TLS can also play a role in client authentication to the server. However, in practice client side certificates are seldom used; username and password based authentication models remain far more common for clients.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked: integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built-in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore, appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration of:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over clear-text HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man in the middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] addresses this issue by providing a server-side control to instruct supporting browsers that the site should only be accessed over HTTPS.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
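Setting the flag is a one-line change in most frameworks. As a minimal illustration (not tied to any particular framework), Python's standard ''http.cookies'' module can emit a Set-Cookie value carrying the Secure and HttpOnly attributes; the cookie name and value below are placeholders:&lt;br /&gt;

```python
from http.cookies import SimpleCookie

# Build a session cookie with the Secure flag so the browser will only
# send it over TLS; HttpOnly additionally hides it from script access.
# "SESSIONID" and "abc123" are placeholder values.
cookie = SimpleCookie()
cookie["SESSIONID"] = "abc123"
cookie["SESSIONID"]["secure"] = True
cookie["SESSIONID"]["httponly"] = True
header_value = cookie["SESSIONID"].OutputString()
```

The resulting Set-Cookie value contains the ''Secure'' attribute alongside the cookie itself, so a compliant browser will never attach the cookie to a plain HTTP request.&lt;br /&gt;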
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. It is more appropriate to store sensitive data in a server-side repository or within the user's session. When using TLS the URL arguments and values are encrypted during transit. However, there are two ways in which the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit but it does not help with potential data leakage issues at the client or intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0 the response should also include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
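As a minimal sketch of the two headers described above (Python used purely for illustration):&lt;br /&gt;

```python
# Response headers that instruct browsers and intermediary proxies not to
# cache or persist a sensitive response; Pragma covers HTTP/1.0 caches.
no_store_headers = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",
}
```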
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header and this instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
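Because HSTS is just a response header, it can be added by any server or middleware layer. The following is a hypothetical WSGI middleware sketch (not a specific product's API); the one-year max-age and the includeSubDomains directive are illustrative choices to adapt per site:&lt;br /&gt;

```python
def add_hsts(app):
    """Wrap a WSGI application so every response carries the HSTS header."""
    def middleware(environ, start_response):
        def hsts_start_response(status, headers, exc_info=None):
            # Append the Strict-Transport-Security header to the response.
            headers = list(headers) + [("Strict-Transport-Security",
                                        "max-age=31536000; includeSubDomains")]
            return start_response(status, headers, exc_info)
        return app(environ, hsts_start_response)
    return middleware
```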
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self-signed certificates is never acceptable. Self-signed certificates negate the benefit of end-point authentication and also significantly decrease an individual's ability to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified with earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to provide support only for the TLS protocols: TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations with lesser security requirements, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC chaining attacks and padding oracle attacks]. SSLv3 and TLSv1.0 should be used only after risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. In order to ensure that only strong cryptographic ciphers are selected the server must be modified to disable the use of weak ciphers. It is recommended to configure the server to only support strong ciphers and to use sufficiently large key sizes. In general, the following should be observed when selecting CipherSuites:&lt;br /&gt;
&lt;br /&gt;
* Use AES or 3-key 3DES for encryption, operated in CBC mode &lt;br /&gt;
* Stream ciphers which XOR the key stream with the plaintext (such as AES in CTR mode) are also acceptable&lt;br /&gt;
* Use SHA1 or above for digests, prefer SHA2 (or equivalent)&lt;br /&gt;
* MD5 should not be used except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for NULL ciphersuites (aNULL or eNULL)&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman &lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of the TLS protocol (e.g., for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of ephemeral Diffie-Hellman key exchange will protect the confidentiality of the transmitted plaintext even if the corresponding RSA or DSS server private key is compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to be able to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG]&lt;br /&gt;
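The guidance above maps fairly directly onto an OpenSSL cipher string. A minimal sketch with Python's ''ssl'' module follows; the exact string is an illustrative starting point, not a canonical list:&lt;br /&gt;

```python
import ssl

# Prefer ephemeral (EC)DHE key exchange with AES, and explicitly exclude
# NULL, anonymous, and MD5-based suites, per the guidance above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AES:DHE+AES:!aNULL:!eNULL:!MD5")
```

If the string resolves to an empty suite list, ''set_ciphers'' raises an error rather than silently leaving the context unusable.&lt;br /&gt;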
&lt;br /&gt;
=== Rule - Support TLS-PSK and TLS-SRP for Mutual Authentication ===&lt;br /&gt;
&lt;br /&gt;
When using a shared secret or password, offer TLS-PSK (Pre-Shared Key) or TLS-SRP (Secure Remote Password), which are Password Authenticated Key Exchanges (PAKEs). TLS-PSK and TLS-SRP properly bind the channel, which refers to the cryptographic binding between the outer tunnel and the inner authentication protocol. IANA currently reserves [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 79 PSK cipher suites] and [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 9 SRP cipher suites].&lt;br /&gt;
&lt;br /&gt;
Basic authentication places the user's password on the wire in plain text after the server authenticates itself, and it provides only unilateral authentication. In contrast, both TLS-PSK and TLS-SRP provide mutual authentication: each party proves it knows the password without placing the password on the wire in plain text.&lt;br /&gt;
&lt;br /&gt;
Finally, using a PAKE removes the need to trust an outside party, such as a Certification Authority (CA).&lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject a plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can be subsequently used for session hijacking attacks. To mitigate CRIME, disable TLS compression (and SPDY compression where applicable).&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits; keys of length 1024 bits are now considered obsolete. Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com'', and a second certificate for the host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, Subject Alternative Names (SANs) can be used to provide a specific listing of multiple names for which the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
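For illustration, an OpenSSL request configuration for such a multiple domain certificate might look like the following sketch; the section names and file layout are arbitrary choices, and the host names match the example above:&lt;br /&gt;

```ini
[ req ]
distinguished_name = dn
req_extensions     = san_ext

[ dn ]
# Subject's Common Name
CN = example.com

[ san_ext ]
# The certificate is valid for both hosts via Subject Alternative Names
subjectAltName = DNS:abc.example.com, DNS:xyz.example.com
```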
&lt;br /&gt;
=== Rule - Use Fully Qualified Names in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Use fully qualified names in the DNS name field, and do not use unqualified names (e.g., 'www'), local names (e.g., 'localhost'), or private IP addresses (e.g., 192.168.1.1) in the DNS name field. Unqualified names, local names, or private IP addresses violate the certificate specification.&lt;br /&gt;
 &lt;br /&gt;
=== Rule - Do Not Use Wildcard Certificates ===&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby, and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it's made much easier when the key is stored on the file system unprotected.&lt;br /&gt;
&lt;br /&gt;
Statistics gathered by Qualys for [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts. Finally, wildcard certificates violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use RFC 1918 Addresses in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Certificates should not use private addresses. RFC 1918 is [http://tools.ietf.org/rfc/rfc1918.txt Address Allocation for Private Internets]. Private addresses are Internet Assigned Numbers Authority (IANA) reserved and include 192.168/16, 172.16/12, and 10/8.&lt;br /&gt;
&lt;br /&gt;
Certificates issued with private addresses violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines]. In addition, Peter Gutmann writes in [http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf Engineering Security]: &amp;quot;This one is particularly troublesome because, in combination with the router-compromise attacks... and ...OSCP-defeating measures, it allows an attacker to spoof any EV-certificate site.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X.509 certificates. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates in between the end-entity (server or host) certificate and root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all the certificates required for path validation.&lt;br /&gt;
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate cannot be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not been properly validated could be exposed to unauthorized access or modification.&lt;br /&gt;
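For non-browser clients, a minimal sketch with Python's standard ''ssl'' module shows a context that already enforces both chain and hostname validation, so the handshake fails (dropping the connection) whenever either check does:&lt;br /&gt;

```python
import ssl

# create_default_context() loads the platform's trusted CA store and
# enables certificate chain verification plus hostname checking by
# default; a failed check aborts the TLS handshake with an exception.
ctx = ssl.create_default_context()
```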
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV Certificates) proffer an enhanced investigation by the issuer into the requesting party, a response to the industry's race to the bottom in validation practices. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers with support for EV certificates distinguish an EV certificate in a variety of ways. Internet Explorer will color a portion of the URL in green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence in the certificate. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection; their purpose is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide its certificate to the server, in addition to the server providing its certificate to the client. If client certificates are used, ensure that the server performs the same validation of the client certificate as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client side certificates is relatively rare currently due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
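A sketch of that runtime check, assuming the application ships with a known pin and obtains the server's DER-encoded public key from the live connection (the placeholder key bytes below stand in for real DER data):&lt;br /&gt;

```python
import hashlib

# Pin = SHA-256 digest of the server's DER-encoded public key, computed
# ahead of time and shipped inside the application.  The byte string
# here is a placeholder for real DER data.
EXPECTED_PIN = hashlib.sha256(b"placeholder-der-public-key").hexdigest()

def pin_matches(der_public_key: bytes) -> bool:
    # Compare the digest of the key seen on the wire against the pin;
    # on a mismatch the caller must close the connection.
    return hashlib.sha256(der_public_key).hexdigest() == EXPECTED_PIN
```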
&lt;br /&gt;
Pinning still requires customary X.509 checks, such as revocation, since CRLs and OCSP provide real-time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
&lt;br /&gt;
Browser based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, JavaScript and WebSockets do not expose methods for a web application to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=150639</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=150639"/>
				<updated>2013-04-28T00:34:45Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added IANA info on PSK and SRP&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted. The most common options available to corporations are Virtual Private Networks (VPN) or an SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client. If configured to require client side certificates, TLS can also play a role in client authentication to the server. However, in practice client side certificates are seldom used; username and password based authentication models remain far more common for clients.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked: integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built-in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Socket Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration of:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over clear text HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man in the middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
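As an illustration, Python's standard `http.cookies` module can emit a cookie carrying the Secure flag (the session value below is a placeholder, not a real token format):

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "opaque-session-token"  # placeholder value
cookie["session_id"]["secure"] = True    # browser sends it over TLS only
cookie["session_id"]["httponly"] = True  # also hide it from script access

# Render the Set-Cookie header line for the HTTP response.
header_line = cookie.output()
```

With the Secure attribute present, the browser withholds the cookie from any unencrypted request, which is exactly what defeats the forced-plaintext-request trick described above.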
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. It is more appropriate to store such data in a server side repository or within the user's session. When using TLS the URL arguments and values are encrypted during transit. However, there are two ways the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit, but it does not help with potential data leakage issues at the client or intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0 the response should also include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
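A small helper along the lines of the guidance above (the function name is illustrative; any web framework's response object can carry these headers):

```python
def no_store_headers() -> dict:
    """Response headers that instruct browsers and intermediary proxies
    not to cache or persist sensitive content."""
    return {
        # HTTP/1.1 cache directives (RFC 2616 section 14.9)
        "Cache-Control": "no-cache, no-store, must-revalidate",
        # Fallback for HTTP/1.0 caches
        "Pragma": "no-cache",
    }
```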
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header and this instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
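The header itself is a single line. A sketch, with an illustrative one-year max-age (the helper name and defaults are assumptions, not part of the HSTS specification):

```python
def hsts_header(max_age: int = 31536000, include_subdomains: bool = True):
    """Build the Strict-Transport-Security response header.
    31536000 seconds (one year) is a common, illustrative max-age;
    the optional includeSubDomains token widens the policy to all
    subdomains of the issuing host."""
    value = "max-age=%d" % max_age
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)
```

The header must be sent on HTTPS responses; compliant browsers ignore it when received over plain HTTP.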
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self-signed certificates is never acceptable. Self-signed certificates negate the benefit of end-point authentication and also significantly decrease the ability of an individual to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified in earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to provide support only for the TLS protocols: TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations with lesser security requirements, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC chaining attacks and padding oracle attacks]. SSLv3 and TLSv1.0 should be used only after risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
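With Python's `ssl` module, the protocol floor can be pinned so that SSLv2 and SSLv3 can never be negotiated; a minimal sketch (`ssl.TLSVersion.TLSv1` would match this cheat sheet's TLS 1.0 minimum, while `TLSv1_2`, shown here, is the usual floor on current stacks):

```python
import ssl

# Build a client context whose version floor excludes SSLv2/SSLv3
# outright; no cipher or option juggling is needed.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```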
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. In order to ensure that only strong cryptographic ciphers are selected, the server must be configured to disable the use of weak ciphers. It is recommended to configure the server to support only strong ciphers and to use sufficiently large key sizes. In general, the following should be observed when selecting cipher suites:&lt;br /&gt;
&lt;br /&gt;
* Use AES or 3-key 3DES for encryption, operated in CBC mode &lt;br /&gt;
* Prefer ciphers which XOR a key stream with the plaintext (such as AES in CTR mode)&lt;br /&gt;
* Use SHA1 or above for digests, prefer SHA2 (or equivalent)&lt;br /&gt;
* MD5 should not be used except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for eNULL (no encryption) or aNULL (no authentication) cipher suites&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman &lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of TLS protocol (e.g. for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of ephemeral Diffie-Hellman key exchange will protect the confidentiality of the transmitted plaintext even if the corresponding RSA or DSS server private key is compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to be able to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG]&lt;br /&gt;
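Under OpenSSL-based stacks the guidance above translates into a cipher string; a sketch using Python's `ssl` wrapper (the exact string is one reasonable choice under these rules, not the only valid one):

```python
import ssl

ctx = ssl.create_default_context()
# Prefer ephemeral (ECDHE/DHE) AEAD suites, then ephemeral AES-CBC;
# exclude anonymous (aNULL), unencrypted (eNULL), and MD5-based suites.
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES:DHE+AES:!aNULL:!eNULL:!MD5")

# Names of the suites the context will actually offer.
enabled = [c["name"] for c in ctx.get_ciphers()]
```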
&lt;br /&gt;
=== Rule - Support TLS-PSK and TLS-SRP for Mutual Authentication ===&lt;br /&gt;
&lt;br /&gt;
Use TLS-PSK (Pre-Shared Key) or TLS-SRP (Secure Remote Password), which are known as Password Authenticated Key Exchanges (PAKEs). TLS-PSK and TLS-SRP properly bind the channel, which refers to the cryptographic binding between the outer tunnel and the inner authentication protocol. IANA currently reserves [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 79 PSK cipher suites] and [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 9 SRP cipher suites].&lt;br /&gt;
&lt;br /&gt;
Basic authentication places the user's password on the wire in plain text after the server authenticates itself, and it provides only unilateral authentication. In contrast, both TLS-PSK and TLS-SRP provide mutual authentication, meaning each party proves it knows the password without placing the password on the wire in plain text.&lt;br /&gt;
&lt;br /&gt;
Finally, using a PAKE removes the need to trust an outside party, such as a Certification Authority (CA).&lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject a plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can subsequently be used for session hijacking attacks. Disabling TLS (and SPDY) compression on the server removes this attack vector.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key underlying the server's certificate must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Keys of length 1024 bits are considered obsolete and should no longer be used. Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternative Name (SAN) extension can be used to provide a specific listing of the multiple names for which the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use Fully Qualified Names in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Use fully qualified names in the DNS name field, and do not use unqualified names (e.g., 'www'), local names (e.g., 'localhost'), or private IP addresses (e.g., 192.168.1.1) in the DNS name field. Unqualified names, local names, and private IP addresses violate the certificate specification.&lt;br /&gt;
 &lt;br /&gt;
=== Rule - Do Not Use Wildcard Certificates ===&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby, and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it is made much easier when the key is stored on the file system unprotected.&lt;br /&gt;
&lt;br /&gt;
Statistics gathered by Qualys for [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts. Finally, wildcard certificates violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use RFC 1918 Addresses in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Certificates should not use private addresses. RFC 1918 is [http://tools.ietf.org/rfc/rfc1918.txt Address Allocation for Private Internets]. Private addresses are Internet Assigned Numbers Authority (IANA) reserved and include 192.168/16, 172.16/12, and 10/8.&lt;br /&gt;
&lt;br /&gt;
Certificates issued with private addresses violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines]. In addition, Peter Gutmann writes in [http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf Engineering Security]: &amp;quot;This one is particularly troublesome because, in combination with the router-compromise attacks... and ...OCSP-defeating measures, it allows an attacker to spoof any EV-certificate site.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X.509 certificates. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates between the end-entity (server or host) certificate and the root certificate. In addition to validating both endpoints, the user must also validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all certificates required for path validation.&lt;br /&gt;
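&lt;br /&gt;
In practice, providing all needed certificates usually means configuring the server with a chain file containing the end-entity certificate first, followed by each intermediate (the root is typically omitted). The sketch below is illustrative only; the PEM blobs are placeholders:&lt;br /&gt;
&lt;br /&gt;
```python
def build_chain_pem(server_cert_pem, intermediate_pems):
    """Concatenate the server certificate and intermediates, leaf first.

    Order matters: the end-entity certificate comes first, followed by
    each intermediate up toward (but usually not including) the root.
    """
    parts = [server_cert_pem.strip()]
    for pem in intermediate_pems:
        parts.append(pem.strip())
    return "\n".join(parts) + "\n"

# Placeholder PEM blobs (hypothetical, for illustration only):
leaf = "-----BEGIN CERTIFICATE-----\nleaf\n-----END CERTIFICATE-----"
inter = "-----BEGIN CERTIFICATE-----\nintermediate\n-----END CERTIFICATE-----"
chain = build_chain_pem(leaf, [inter])
```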
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The procedures to ensure that a certificate is valid are complex and difficult to perform correctly.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate can not be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not properly been validated could be exposed to unauthorized access or modification.&lt;br /&gt;
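&lt;br /&gt;
For a thick client or server-to-server connection written in Python, most of the above comes down to enabling certificate verification and hostname checking in the TLS library rather than re-implementing path validation by hand. A hedged sketch of a strict client-side configuration:&lt;br /&gt;
&lt;br /&gt;
```python
import ssl

def make_strict_client_context(ca_file=None):
    """Create a client SSLContext that refuses unvalidated certificates.

    ca_file: optional path to a PEM bundle of trusted CAs; if omitted,
    the platform's default trust store is used.
    """
    ctx = ssl.create_default_context(cafile=ca_file)
    # Both settings are the defaults for create_default_context, but
    # making them explicit documents the intent: drop the connection
    # if the certificate cannot be validated or the hostname mismatches.
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    return ctx
```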
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV Certificates) were introduced in response to the industry's race to the bottom in certificate vetting: they require an enhanced investigation by the issuer into the requesting party before issuance. The purpose of EV certificates is to give the user greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers with support for EV certificates distinguish an EV certificate in a variety of ways. Internet Explorer will color a portion of the URL in green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence in the certificate. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection. The purpose of the EV certificate is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client-side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide its certificate to the server, in addition to the server providing its certificate to the client. If client certificates are used, ensure that the server performs the same validation of the client certificate as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client side certificates is relatively rare currently due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
&lt;br /&gt;
Pinning still requires the customary X.509 checks, such as revocation, since CRLs and OCSP provide real-time status information. Otherwise, an application could possibly (1) accept a known-bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
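&lt;br /&gt;
As a rough illustration of the runtime check (this sketch is not from the original cheat sheet, and the pinned value is hypothetical), an application can compare a digest of the received DER-encoded certificate against a value shipped with the application:&lt;br /&gt;
&lt;br /&gt;
```python
import hashlib
import hmac

# Hypothetical pin, computed ahead of time from the server's known
# DER-encoded certificate and embedded in the application.
PINNED_SHA256 = hashlib.sha256(b"known-server-cert-der").hexdigest()

def certificate_matches_pin(der_cert_bytes, pinned_hex):
    """Return True only when the received certificate hashes to the pin.

    In a real client, der_cert_bytes would come from the TLS layer
    (e.g. the peer certificate in binary form) after the customary
    X.509 checks have already passed.
    """
    received = hashlib.sha256(der_cert_bytes).hexdigest()
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(received, pinned_hex)
```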
&lt;br /&gt;
Browser-based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, JavaScript and WebSockets do not expose methods for a web app to query the underlying secure-connection information (such as the certificate or public key). It is noteworthy that Chromium-based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=150624</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=150624"/>
				<updated>2013-04-27T22:57:22Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Password Based Authentication -&amp;gt; Mutual Authentication&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security-specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, for back-end and other non-browser-based connections.&lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted.  The most common options available to corporations are Virtual Private Networks (VPN) or the SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet-facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client.  If configured to require client-side certificates, TLS can also play a role in client authentication to the server. However, in practice client-side certificates are seldom used; username and password based authentication remains the norm for clients.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked: integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built-in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration of:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application, then the response, and any sensitive data, would be returned to the user over clear-text HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man in the middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
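&lt;br /&gt;
In frameworks that expose raw headers, enforcing the flag amounts to always emitting Secure (and, as a separate control, HttpOnly) on session cookies. A minimal sketch, with a hypothetical cookie name and value:&lt;br /&gt;
&lt;br /&gt;
```python
def session_cookie_header(name, value):
    """Build a Set-Cookie value with the Secure and HttpOnly flags.

    Secure keeps the browser from ever sending the cookie over plain
    HTTP; HttpOnly (a separate control) hides it from script access.
    """
    return "{0}={1}; Path=/; Secure; HttpOnly".format(name, value)

# Sent to the browser as, for example:
#   Set-Cookie: sessionid=abc123; Path=/; Secure; HttpOnly
```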
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. Instead, store sensitive data in a server-side repository or within the user's session.  When using TLS, URL arguments and values are encrypted during transit. However, there are two ways the URL arguments and values could be exposed:&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit; it does not help with potential data leakage at the client or at intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0, the response should also include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
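&lt;br /&gt;
The two headers above can be emitted together for any response carrying sensitive data; a minimal sketch:&lt;br /&gt;
&lt;br /&gt;
```python
def no_store_headers():
    """Headers that ask clients and proxies not to cache the response.

    Cache-Control covers HTTP/1.1 caches; Pragma covers HTTP/1.0.
    """
    return {
        "Cache-Control": "no-cache, no-store, must-revalidate",
        "Pragma": "no-cache",
    }
```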
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header that instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
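&lt;br /&gt;
Emitting the HSTS header itself is a one-liner; the max-age of one year and the includeSubDomains directive below are illustrative choices, not requirements:&lt;br /&gt;
&lt;br /&gt;
```python
def hsts_header(max_age_seconds=31536000, include_subdomains=True):
    """Build a Strict-Transport-Security header value.

    max_age_seconds: how long (here, one year) browsers should force
    HTTPS for the domain after seeing the header.
    """
    value = "max-age={0}".format(max_age_seconds)
    if include_subdomains:
        value += "; includeSubDomains"
    return value

# Sent over HTTPS responses as, for example:
#   Strict-Transport-Security: max-age=31536000; includeSubDomains
```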
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self-signed certificates is never acceptable. Self-signed certificates negate the benefit of end-point authentication and significantly decrease an individual's ability to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified in earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to provide support only for the TLS protocols: TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations where lesser security requirements apply, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC chaining attacks and padding oracle attacks]. SSLv3 and TLSv1.0 should be used only after risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
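&lt;br /&gt;
With OpenSSL-based stacks, one common approach is to keep the version-flexible protocol method and mask off the broken SSL versions. A hedged sketch using Python's ssl module (which wraps OpenSSL); on modern builds these options are typically already set by default:&lt;br /&gt;
&lt;br /&gt;
```python
import ssl

def make_tls_only_context():
    """Server context that refuses SSLv2 and SSLv3 entirely.

    PROTOCOL_TLS_SERVER negotiates the highest version both sides
    support; the OP_NO_* options then forbid the broken SSL versions.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.options |= ssl.OP_NO_SSLv2
    ctx.options |= ssl.OP_NO_SSLv3
    return ctx
```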
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the cipher negotiated between the server and the browser. To ensure that only strong cryptographic ciphers are selected, the server must be configured to disable the use of weak ciphers. It is recommended to configure the server to support only strong ciphers and to use sufficiently large key sizes. In general, the following should be observed when selecting cipher suites:&lt;br /&gt;
&lt;br /&gt;
* Use AES or 3-key 3DES for encryption, operated in CBC mode &lt;br /&gt;
* Stream-cipher constructions which XOR a key stream with the plaintext (such as AES in CTR mode) are also acceptable&lt;br /&gt;
* Use SHA1 or above for digests, prefer SHA2 (or equivalent)&lt;br /&gt;
* MD5 should not be used except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for NULL ciphersuites (aNULL or eNULL)&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman &lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of the TLS protocol (e.g. for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of ephemeral Diffie-Hellman key exchange protects the confidentiality of the transmitted plaintext even if the corresponding RSA or DSS server private key is compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG]&lt;br /&gt;
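&lt;br /&gt;
With OpenSSL-syntax cipher strings, the guidance above translates into a whitelist plus explicit exclusions. The exact string below is one hedged example, not the only correct choice:&lt;br /&gt;
&lt;br /&gt;
```python
import ssl

# HIGH selects strong ciphers; the ! entries forbid NULL encryption,
# anonymous (unauthenticated) Diffie-Hellman, and MD5-based MACs.
STRONG_CIPHERS = "HIGH:!aNULL:!eNULL:!ADH:!MD5"

def make_strong_cipher_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.set_ciphers(STRONG_CIPHERS)
    return ctx
```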
&lt;br /&gt;
=== Rule - Support TLS-PSK and TLS-SRP for Mutual Authentication ===&lt;br /&gt;
&lt;br /&gt;
Use TLS-PSK (Pre-Shared Key) or TLS-SRP (Secure Remote Password), which are Password Authenticated Key Exchanges (PAKEs). TLS-PSK and TLS-SRP properly bind the channel: they cryptographically bind the outer tunnel to the inner authentication protocol.&lt;br /&gt;
&lt;br /&gt;
Basic authentication places the user's password on the wire in plaintext after the server authenticates itself, and it provides only unilateral authentication. In contrast, both TLS-PSK and TLS-SRP provide mutual authentication: each party proves it knows the password without placing the password on the wire in plaintext.&lt;br /&gt;
&lt;br /&gt;
Finally, using a PAKE removes the need to trust an outside party, such as a Certification Authority (CA).&lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
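One of the two mitigations above, disabling renegotiation entirely, can be expressed in Python's ssl module (a sketch, not a mandated configuration; requires Python 3.7+ built against OpenSSL 1.1.0h or later):&lt;br /&gt;

```python
import ssl

# Disable TLS renegotiation outright (one of the two mitigations for
# CVE-2009-3555). Servers that need renegotiation should instead ensure
# the library supports the RFC 5746 secure renegotiation extension.
ctx = ssl.create_default_context()
ctx.options |= ssl.OP_NO_RENEGOTIATION
```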
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can be subsequently used for session hijacking attacks.&lt;br /&gt;
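Disabling TLS-level compression, the usual defense against CRIME, might look like this with Python's ssl module (a hedged sketch; recent versions of create_default_context() already set this option by default):&lt;br /&gt;

```python
import ssl

# Refuse TLS-level compression to defend against CRIME. Setting the
# option explicitly documents the intent even where it is the default.
ctx = ssl.create_default_context()
ctx.options |= ssl.OP_NO_COMPRESSION
```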
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Keys of length 1024 bits have been considered obsolete since 2010. Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternate Names (SANs) can be used to provide a specific listing of multiple names where the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
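To illustrate how SAN matching works, here is a deliberately simplified Python sketch of checking a hostname against a certificate's SAN entries (the helper name and the cert dict are hypothetical; real code should rely on the TLS library's own hostname verification rather than reimplement it):&lt;br /&gt;

```python
# Simplified illustration only: check a hostname against the DNS entries
# of subjectAltName, using the dict shape ssl.SSLSocket.getpeercert()
# returns. No wildcard handling; production code should let the TLS
# library verify hostnames (check_hostname=True).
def names_cover_host(cert, hostname):
    sans = [value for (kind, value) in cert.get("subjectAltName", ())
            if kind == "DNS"]
    return hostname in sans

cert = {"subjectAltName": (("DNS", "abc.example.com"),
                           ("DNS", "xyz.example.com"))}
```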
&lt;br /&gt;
=== Rule - Use Fully Qualified Names in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Use fully qualified names in the DNS name field, and do not use unqualified names (e.g., 'www'), local names (e.g., 'localhost'), or private IP addresses (e.g., 192.168.1.1) in the DNS name field. Unqualified names, local names, and private IP addresses violate the certificate specification.&lt;br /&gt;
 &lt;br /&gt;
=== Rule - Do Not Use Wildcard Certificates ===&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developer machines, the secretary's machine in the lobby, and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it is made much easier when the key is stored unprotected on the file system.&lt;br /&gt;
&lt;br /&gt;
Statistics gathered by Qualys for [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts. Finally, wildcard certificates violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use RFC 1918 Addresses in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Certificates should not use private addresses. RFC 1918 is [http://tools.ietf.org/rfc/rfc1918.txt Address Allocation for Private Internets]. Private addresses are Internet Assigned Numbers Authority (IANA) reserved and include 192.168/16, 172.16/12, and 10/8.&lt;br /&gt;
&lt;br /&gt;
Certificates issued with private addresses violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines]. In addition, Peter Gutmann writes in [http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf Engineering Security]: &amp;quot;This one is particularly troublesome because, in combination with the router-compromise attacks... and ...OCSP-defeating measures, it allows an attacker to spoof any EV-certificate site.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X509 certificates. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates in between the end-entity (server or host) certificate and root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all required certificates used in a path validation.&lt;br /&gt;
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate can not be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not properly been validated could be exposed to unauthorized access or modification.&lt;br /&gt;
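A client-side Python context enforcing the guidance above (a sketch; these settings are in fact the defaults of create_default_context(), shown explicitly here to document the intent):&lt;br /&gt;

```python
import ssl

# Require a certificate from the peer, validate its chain, and match the
# hostname; on any failure the handshake raises and the connection is
# never used for application data.
ctx = ssl.create_default_context()
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED
# ctx.wrap_socket(sock, server_hostname="example.com") would then raise
# ssl.SSLCertVerificationError if validation fails.
```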
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV Certificates) subject the requesting party to an enhanced investigation by the issuer, a measure introduced in response to the industry's race to the bottom on validation practices. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers with support for EV certificates distinguish an EV certificate in a variety of ways. Internet Explorer will color a portion of the URL in green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection. The purpose of the EV certificate is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide their certificate to the server, in addition to the server providing theirs to the client. If client certificates are used, ensure that the server performs the same validation of the client certificate as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client side certificates is relatively rare currently due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
&lt;br /&gt;
Pinning still requires the customary X509 checks, such as revocation, since CRLs and OCSP provide real-time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
&lt;br /&gt;
Browser based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, JavaScript and WebSockets do not expose methods for a web app to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
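As an illustrative sketch of the pinning idea described above (whole-certificate pinning; the helper names and the pin encoding are assumptions, not from the article):&lt;br /&gt;

```python
import base64
import hashlib

# Whole-certificate pinning sketch: compare a SHA-256 digest of the
# DER-encoded certificate (as returned by getpeercert(binary_form=True))
# against a pin shipped with the application. Helper names are
# hypothetical; a mismatch means the channel must be closed.
def pin_of(der_cert):
    return base64.b64encode(hashlib.sha256(der_cert).digest()).decode()

def pin_matches(der_cert, expected_pin):
    return pin_of(der_cert) == expected_pin

sample = b"stand-in for DER certificate bytes"
```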
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement an effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=150623</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=150623"/>
				<updated>2013-04-27T22:56:24Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added &amp;quot;Rule - Support TLS-PSK and TLS-SRP for Password Based Authentication&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security-specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted.  The most common options available to corporations are Virtual Private Networks (VPN) or an SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client.  If configured to require client side certificates, TLS can also play a role in client authentication to the server. In practice, however, client side certificates are rarely used; username and password based authentication models are employed instead.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked: integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Socket Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of the SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration of:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over clear text HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man in the middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
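Setting the flag with Python's standard http.cookies module, as a sketch (HttpOnly is a commonly paired flag, though it addresses script access rather than transport):&lt;br /&gt;

```python
from http import cookies

# Mark the session cookie Secure so the browser only sends it over TLS;
# HttpOnly additionally keeps it away from script access.
jar = cookies.SimpleCookie()
jar["session"] = "opaque-session-id"
jar["session"]["secure"] = True
jar["session"]["httponly"] = True
header = jar.output()
```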
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. A more appropriate approach is to store sensitive data in a server side repository or within the user's session. When using TLS, the URL arguments and values are encrypted during transit. However, there are two ways that the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit, but it does not help with potential data leakage issues at the client or intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0 the response should include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
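The suggested headers, collected as a plain Python dict that a framework response object might carry (framework APIs vary, so this is only a sketch):&lt;br /&gt;

```python
# The response headers suggested by the rule above; the goal is that
# neither the browser nor intermediary proxies cache the sensitive body.
no_store_headers = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",  # HTTP/1.0 compatibility
}
```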
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header and this instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
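&lt;br /&gt;
All three controls are switched on by a single response header. A minimal sketch follows; the max-age value (one year) is illustrative, not a recommendation:&lt;br /&gt;

```python
# The single response header that enables HSTS in compatible browsers.
HSTS_HEADER = ("Strict-Transport-Security",
               "max-age=31536000; includeSubDomains")

def parse_hsts(value):
    """Split an HSTS header value into a dict of directives."""
    directives = {}
    for part in value.split(";"):
        name, _, arg = part.strip().partition("=")
        directives[name] = arg or True   # valueless directives map to True
    return directives
```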
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self-signed certificates is never acceptable. Self-signed certificates negate the benefit of end-point authentication and also significantly decrease the ability of an individual to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified with earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to provide support only for the TLS protocols: TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations with less stringent security requirements, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC chaining attacks and padding oracle attacks]. SSLv3 and TLS 1.0 should be used only after risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
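&lt;br /&gt;
With Python's standard ssl module, for example, a server context can be restricted to TLS only. The TLS 1.2 floor below is an illustrative choice, not part of the rule above (the rule permits TLS 1.0 and up):&lt;br /&gt;

```python
import ssl

# PROTOCOL_TLS_SERVER already excludes SSLv2/SSLv3 in modern Python;
# setting an explicit floor documents the intent.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```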
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. To ensure that only strong cryptographic ciphers are selected, the server must be configured to disable weak ciphers, support only strong ciphers, and use sufficiently large key sizes. In general, the following should be observed when selecting cipher suites:&lt;br /&gt;
&lt;br /&gt;
* Use AES or 3-key 3DES for encryption, operated in CBC mode&lt;br /&gt;
* Use stream ciphers which XOR the key stream with the plaintext (such as AES in CTR mode)&lt;br /&gt;
* Use SHA-1 or above for digests; prefer SHA-2 (or equivalent)&lt;br /&gt;
* Do not use MD5 except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for NULL cipher suites (aNULL or eNULL)&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman&lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of the TLS protocol (e.g. for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of ephemeral Diffie-Hellman key exchange will protect the confidentiality of the transmitted plaintext even if the corresponding RSA or DSS server private key is compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG]&lt;br /&gt;
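&lt;br /&gt;
The selections above can be approximated with an OpenSSL-style cipher string. The exact string below is illustrative; it should be verified against the suites your TLS library actually supports:&lt;br /&gt;

```python
import ssl

# Prefer ephemeral (ECDHE/DHE) suites with AES; exclude NULL suites,
# anonymous Diffie-Hellman, and MD5-based suites per the guidance above.
CIPHERS = "ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES:DHE+AES:!aNULL:!eNULL:!MD5:!ADH"

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers(CIPHERS)
enabled = [suite["name"] for suite in ctx.get_ciphers()]
```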
&lt;br /&gt;
=== Rule - Support TLS-PSK and TLS-SRP for Password Based Authentication ===&lt;br /&gt;
&lt;br /&gt;
Use TLS-PSK (Pre-Shared Key) or TLS-SRP (Secure Remote Password), which are known as Password Authenticated Key Exchanges (PAKEs). TLS-PSK and TLS-SRP properly bind the channel; that is, they cryptographically bind the outer tunnel to the inner authentication protocol.&lt;br /&gt;
&lt;br /&gt;
Basic authentication places the user's password on the wire in plaintext after the server authenticates itself, and it provides only unilateral authentication. In contrast, both TLS-PSK and TLS-SRP provide mutual authentication: each party proves it knows the password without placing the password on the wire in plaintext.&lt;br /&gt;
&lt;br /&gt;
Finally, using a PAKE removes the need to trust an outside party, such as a Certification Authority (CA).&lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject a plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS, and the recovered cookie can subsequently be used for session hijacking attacks. To mitigate CRIME, disable TLS compression (and SPDY compression, where applicable) on the server.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The server's private key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits; 1024-bit keys are deprecated and disallowed by NIST after 2013. Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternate Names (SANs) can be used to provide a specific listing of multiple names where the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
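&lt;br /&gt;
The selection logic a client applies can be illustrated with a simplified matcher. This is a sketch only; real hostname verification, per RFC 6125, also handles wildcards, IDNA, and other edge cases:&lt;br /&gt;

```python
def cert_names(common_name, sans=()):
    """Names a certificate is valid for: when SANs are present they
    are authoritative and the CN is ignored (per RFC 6125)."""
    return list(sans) if sans else [common_name]

def hostname_matches(hostname, common_name, sans=()):
    """Simplified exact-match check; no wildcard handling."""
    names = {n.lower() for n in cert_names(common_name, sans)}
    return hostname.lower() in names
```

For the multiple domain certificate above, a request for ''abc.example.com'' matches a SAN, while a request for the bare ''example.com'' fails because the SAN list does not include it, even though it is the CN.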
&lt;br /&gt;
=== Rule - Use Fully Qualified Names in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Use fully qualified names in the DNS name field, and do not use unqualified names (e.g., 'www'), local names (e.g., 'localhost'), or private IP addresses (e.g., 192.168.1.1) in the DNS name field. Unqualified names, local names, and private IP addresses violate the certificate specification.&lt;br /&gt;
 &lt;br /&gt;
=== Rule - Do Not Use Wildcard Certificates ===&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it is made much easier when the key is stored unprotected on the file system.&lt;br /&gt;
&lt;br /&gt;
Statistics gathered by Qualys for [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts. Finally, wildcard certificates violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use RFC 1918 Addresses in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Certificates should not use private addresses. RFC 1918 is [http://tools.ietf.org/rfc/rfc1918.txt Address Allocation for Private Internets]. Private addresses are Internet Assigned Numbers Authority (IANA) reserved and include 192.168/16, 172.16/12, and 10/8.&lt;br /&gt;
&lt;br /&gt;
Certificates issued with private addresses violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines]. In addition, Peter Gutmann writes in [http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf Engineering Security]: &amp;quot;This one is particularly troublesome because, in combination with the router-compromise attacks... and ...OCSP-defeating measures, it allows an attacker to spoof any EV-certificate site.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X.509 certificates. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates in between the end-entity (server or host) certificate and root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all required certificates used in a path validation.&lt;br /&gt;
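&lt;br /&gt;
A sketch of the server-side fix: serve a PEM bundle with the end-entity certificate first, followed by each intermediate in order. The trusted root is omitted because clients must already hold it:&lt;br /&gt;

```python
def build_served_chain(leaf_pem, intermediate_pems):
    """Concatenate PEM blocks in the order TLS expects: the server
    (end-entity) certificate first, then each intermediate, working
    toward -- but not including -- the trusted root."""
    return "".join([leaf_pem] + list(intermediate_pems))
```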
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate cannot be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not been properly validated could be exposed to unauthorized access or modification.&lt;br /&gt;
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV certificates) were introduced in response to weakening validation practices among issuers (a race to the bottom); they require an enhanced investigation by the issuer into the requesting party. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers that support EV certificates distinguish them in a variety of ways: Internet Explorer colors a portion of the URL bar green, while Mozilla adds a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection; their purpose is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide its certificate to the server, in addition to the server providing its certificate to the client. If client certificates are used, ensure that the server performs the same validation of the client certificate as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client side certificates is relatively rare currently due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
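&lt;br /&gt;
A minimal sketch of the runtime check, assuming the pinned value is a SHA-256 hex digest baked into the application at build time. At runtime the DER bytes would come from, e.g., ssl.SSLSocket.getpeercert(binary_form=True):&lt;br /&gt;

```python
import hashlib
import hmac

def pin_matches(peer_der, pinned_sha256_hex):
    """Compare the SHA-256 of the received certificate (or public key)
    DER against the pinned digest, in constant time. On a mismatch,
    the application should stop using the channel and close it."""
    digest = hashlib.sha256(peer_der).hexdigest()
    return hmac.compare_digest(digest, pinned_sha256_hex)
```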
&lt;br /&gt;
Pinning still requires customary X.509 checks, such as revocation, since CRLs and OCSP provide real-time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
&lt;br /&gt;
Browser based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, JavaScript and WebSockets do not expose methods for a web app to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement an effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149173</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149173"/>
				<updated>2013-04-04T17:44:47Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added rule for &amp;quot;Use Fully Qualified Names in Certificates&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted. The most common options available to corporations are Virtual Private Networks (VPN) or an SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client. If configured to require client side certificates, TLS can also play a role in client authentication to the server. However, in practice client side certificates are seldom used; username and password based authentication models are used for clients instead.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked: integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built-in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration of:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to a HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over the clear text HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man-in-the-middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
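For illustration, Python's standard http.cookies module can emit a session cookie carrying the Secure flag; the cookie name and value below are placeholders, and any framework with cookie support offers an equivalent setting:&lt;br /&gt;

```python
from http.cookies import SimpleCookie

# Illustrative session cookie with the Secure and HttpOnly flags set;
# the cookie name and value are placeholders.
cookie = SimpleCookie()
cookie["SESSIONID"] = "opaque-random-value"
cookie["SESSIONID"]["secure"] = True    # only ever sent over TLS
cookie["SESSIONID"]["httponly"] = True  # not readable by script
cookie["SESSIONID"]["path"] = "/"

# The Set-Cookie header value the server would emit:
header_value = cookie["SESSIONID"].OutputString()
```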
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. Instead, store sensitive data in a server side repository or within the user's session. When using TLS the URL arguments and values are encrypted during transit. However, there are two ways in which the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit; it does not help with potential data leakage at the client or at intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0 the response should also include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
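A sketch of the anti-caching response headers described above; the dictionary is illustrative (any web framework can emit these headers), and the Expires header is an optional belt-and-braces addition for legacy caches:&lt;br /&gt;

```python
# Response headers that tell browsers and intermediary proxies not to
# cache or persist the response body. Illustrative only; emit these
# through whatever response API the framework provides.
no_cache_headers = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",   # HTTP/1.0 compatibility
    "Expires": "0",         # optional extra for legacy caches
}
```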
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header and this instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
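A sketch of emitting the HSTS response header; the one-year max-age and the includeSubDomains directive are illustrative choices, not requirements of the header:&lt;br /&gt;

```python
# Build the Strict-Transport-Security header for an HTTPS response.
# One year and includeSubDomains are example policy choices.
ONE_YEAR = 31536000  # seconds

def hsts_header(max_age=ONE_YEAR, include_subdomains=True):
    value = "max-age={}".format(max_age)
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

name, value = hsts_header()
```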
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule, disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self signed certificates is never acceptable. Self signed certificates negate the benefit of end-point authentication and also significantly decrease the ability for an individual to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified with earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to only provide support for the TLS protocols - TLS1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations where lesser security requirements are necessary, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC Chaining attacks and Padding Oracle attacks]. SSLv3 and TLSv1.0 should be used only after risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
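As one illustration of this rule using Python's ssl module (an assumption; the same policy applies to any TLS stack), a context can be restricted so that the SSL protocols are never negotiated:&lt;br /&gt;

```python
import ssl

# Client context that refuses SSLv2/SSLv3 outright. Pinning the floor
# at TLS 1.2 is shown as one possible policy; per the text, TLS 1.0 or
# 1.1 could be permitted after risk analysis by lowering the floor.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```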
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. In order to ensure that only strong cryptographic ciphers are selected, the server must be configured to disable the use of weak ciphers. It is recommended to configure the server to support only strong ciphers and to use sufficiently large key sizes. In general, the following should be observed when selecting cipher suites:&lt;br /&gt;
&lt;br /&gt;
* Use AES or 3-key 3DES for encryption, operated in CBC mode&lt;br /&gt;
* Use stream ciphers, which XOR the key stream with the plaintext (AES in CTR mode operates this way)&lt;br /&gt;
* Use SHA1 or above for digests, prefer SHA2 (or equivalent)&lt;br /&gt;
* MD5 should not be used except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for NULL ciphersuites (aNULL or eNULL)&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman &lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see the FIPS 140-2 IG). However, MD5 must never be used outside of the TLS protocol (e.g. for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of ephemeral Diffie-Hellman key exchange will protect the confidentiality of the transmitted plaintext even if the corresponding RSA or DSS server private key is later compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG]&lt;br /&gt;
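The bullets above can be approximated with an OpenSSL-style cipher string. The example below uses Python's ssl module, and the particular string is an illustrative starting point under those assumptions, not an authoritative list:&lt;br /&gt;

```python
import ssl

# Prefer ephemeral (ECDHE/DHE) key exchange with AES; exclude NULL,
# anonymous, MD5-based, and RC4 suites, per the guidance above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AES:DHE+AES:!aNULL:!eNULL:!MD5:!RC4")
```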
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject a plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can be subsequently used for session hijacking attacks.&lt;br /&gt;
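With Python's ssl module (shown as an illustration; recent OpenSSL builds already disable TLS compression by default, so this is defense in depth), compression can be switched off explicitly:&lt;br /&gt;

```python
import ssl

# Disable TLS-level compression to mitigate CRIME.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.options |= ssl.OP_NO_COMPRESSION
```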
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Keys of length 1024 bits are considered obsolete and should no longer be used. Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternate Names (SANs) can be used to provide a specific listing of multiple names where the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use Fully Qualified Names in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Use fully qualified names in the DNS name field, and do not use unqualified names (e.g., 'www'), local names (e.g., 'localhost'), or private IP addresses (e.g., 192.168.1.1) in the DNS name field. Unqualified names, local names, and private IP addresses violate the certificate specification.&lt;br /&gt;
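A small illustrative helper (not an exhaustive validator, and the function name is an invention for this sketch) that applies these checks to a candidate name for the DNS field:&lt;br /&gt;

```python
import ipaddress

# Reject names that must not appear in a certificate's DNS name field:
# unqualified hostnames, "localhost", and private (RFC 1918) addresses.
def acceptable_san(name):
    if name == "localhost":
        return False
    try:
        addr = ipaddress.ip_address(name)
        return not (addr.is_private or addr.is_loopback)
    except ValueError:
        pass  # not an IP address; treat it as a DNS name
    # A fully qualified name contains at least one dot.
    return "." in name
```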
 &lt;br /&gt;
=== Rule - Do Not Use Wildcard Certificates ===&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby, and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it is made much easier when the key is stored on the file system unprotected.&lt;br /&gt;
&lt;br /&gt;
Statistics gathered by Qualys for [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts. Finally, wildcard certificates violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use RFC 1918 Addresses in Certificates ===&lt;br /&gt;
&lt;br /&gt;
Certificates should not use private addresses. RFC 1918 is [http://tools.ietf.org/rfc/rfc1918.txt Address Allocation for Private Internets]. Private addresses are Internet Assigned Numbers Authority (IANA) reserved and include 192.168/16, 172.16/12, and 10/8.&lt;br /&gt;
&lt;br /&gt;
Certificates issued with private addresses violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines]. In addition, Peter Gutmann writes in [http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf Engineering Security]: &amp;quot;This one is particularly troublesome because, in combination with the router-compromise attacks... and ...OCSP-defeating measures, it allows an attacker to spoof any EV-certificate site.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X.509 certificates. When a user receives a server's or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates in between the end-entity (server or host) certificate and root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all required certificates used in path validation.&lt;br /&gt;
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate can not be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not properly been validated could be exposed to unauthorized access or modification.&lt;br /&gt;
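In Python, for example (an illustration of the requirement, assuming the standard ssl module; other platforms have equivalent settings), a client context that enforces full validation and aborts the connection on any failure looks like:&lt;br /&gt;

```python
import ssl

# Full certificate validation: path validation to a trusted root plus
# hostname matching. If either check fails, the handshake raises an
# error and the connection is dropped before any data is exchanged.
ctx = ssl.create_default_context()
ctx.check_hostname = True            # already the default here
ctx.verify_mode = ssl.CERT_REQUIRED  # already the default here
```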
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV certificates) offer an enhanced level of investigation by the issuer into the requesting party, in response to the erosion of validation standards for ordinary certificates. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers with support for EV certificates distinguish an EV certificate in a variety of ways: Internet Explorer will color a portion of the URL in green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence in the certificate. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection. The purpose of the EV certificate is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide its certificate to the server, in addition to the server providing its certificate to the client. If client certificates are used, ensure that the server performs the same validation of the client certificate as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
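A server-side sketch of this configuration using Python's ssl module; the certificate, key, and CA file names are placeholders:&lt;br /&gt;

```python
import ssl

# Two-way TLS: the server demands a verifiable client certificate and
# the handshake fails if one is absent or cannot be validated.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED
# Placeholder paths; in a real deployment these files must exist:
# ctx.load_cert_chain("server-cert.pem", "server-key.pem")
# ctx.load_verify_locations(cafile="client-ca.pem")
```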
&lt;br /&gt;
The use of client side certificates is relatively rare currently due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
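A minimal sketch of the runtime check described above; the pinned digest below is a placeholder computed from dummy bytes, not a real certificate, and the function name is an invention for this sketch:&lt;br /&gt;

```python
import hashlib
import hmac

# In a real application PINNED_SHA256 would be the hex SHA-256 digest
# of the server's DER-encoded certificate or public key, embedded in
# the application at build time.
PINNED_SHA256 = hashlib.sha256(b"placeholder-der-bytes").hexdigest()

def pin_matches(der_bytes, pinned=PINNED_SHA256):
    digest = hashlib.sha256(der_bytes).hexdigest()
    # Timing-safe comparison; on mismatch, close the connection.
    return hmac.compare_digest(digest, pinned)
```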
&lt;br /&gt;
Pinning still requires customary X.509 checks, such as revocation, since CRLs and OCSP provide real-time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
&lt;br /&gt;
Browser based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, JavaScript and WebSockets do not expose methods for a web app to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement an effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly -treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149158</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149158"/>
				<updated>2013-04-04T13:15:17Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Wikified link to Gutmann's book&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted. The most common options available to corporations are Virtual Private Networks (VPN) or an SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client.  If configured to require client side certificates, TLS can also play a role in client authentication to the server. However, in practice client side certificates are not often used in lieu of username and password based authentication models for clients.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked; integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built-in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration of:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over clear-text HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the guidance below only provides user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man-in-the-middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'.&lt;br /&gt;
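&lt;br /&gt;
A minimal sketch of a mixed-content check in Python (the function name and regular expression are illustrative, not part of the original guidance, and not a substitute for a real scanner):&lt;br /&gt;

```python
import re

def find_insecure_references(markup):
    """Return src/href attribute values that would load over plain HTTP,
    i.e. mixed content on a TLS-served page. Illustrative only; a real
    scanner should parse the markup rather than use a regular expression."""
    pattern = re.compile(r'(?:src|href)\s*=\s*["\'](http://[^"\']+)["\']',
                         re.IGNORECASE)
    return pattern.findall(markup)

# Sample attribute snippets: only the http:// image is flagged.
page = ('link href="https://example.com/style.css" '
        'img src="http://cdn.example.com/logo.png"')
insecure = find_insecure_references(page)
```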
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
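&lt;br /&gt;
For illustration, building such a cookie header might be sketched as follows (the cookie name, value and path are hypothetical; HttpOnly is an additional common hardening flag, not part of this rule):&lt;br /&gt;

```python
def session_cookie_header(name, value):
    """Build a Set-Cookie header marked Secure (sent over TLS only) and
    HttpOnly (hidden from scripts). Cookie name and path are illustrative."""
    return "Set-Cookie: {0}={1}; Secure; HttpOnly; Path=/".format(name, value)

header = session_cookie_header("SESSIONID", "d41d8cd98f00")
```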
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. It is more appropriate to store sensitive data in a server side repository or within the user's session.  When using TLS the URL arguments and values are encrypted during transit. However, there are two ways the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit but it does not help with potential data leakage issues at the client or intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0 the response should include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
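&lt;br /&gt;
A sketch of the header set in Python (the Expires header is a common belt-and-braces addition for legacy caches, not something this rule requires):&lt;br /&gt;

```python
def no_store_headers():
    """Response headers asking browsers and intermediary proxies not to
    cache sensitive content, per RFC 2616 section 14.9."""
    return {
        "Cache-Control": "no-cache, no-store, must-revalidate",
        "Pragma": "no-cache",   # HTTP/1.0 compatibility
        "Expires": "0",         # extra hint for legacy caches (assumption)
    }

headers = no_store_headers()
```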
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header and this instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
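&lt;br /&gt;
As an illustrative sketch, the response header enabling HSTS might be built like this (the one-year max-age and includeSubDomains are common choices, not values mandated by the specification):&lt;br /&gt;

```python
def hsts_header(max_age_seconds=31536000, include_subdomains=True):
    """Build the Strict-Transport-Security response header. A one-year
    max-age and includeSubDomains are common choices, not mandated values."""
    value = "max-age={0}".format(max_age_seconds)
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

name, value = hsts_header()
```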
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self-signed certificates is never acceptable. Self-signed certificates negate the benefit of end-point authentication and also significantly decrease the ability of an individual to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified with earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to only provide support for the TLS protocols: TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations where security requirements are less stringent, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC Chaining attacks and Padding Oracle attacks]. SSLv3 and TLSv1.0 should be used only after risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. In order to ensure that only strong cryptographic ciphers are selected the server must be modified to disable the use of weak ciphers. It is recommended to configure the server to only support strong ciphers and to use sufficiently large key sizes. In general, the following should be observed when selecting CipherSuites:&lt;br /&gt;
&lt;br /&gt;
* Use AES or 3-key 3DES for encryption, operated in CBC mode &lt;br /&gt;
* Prefer constructions which XOR the key stream with plaintext (such as AES in CTR mode)&lt;br /&gt;
* Use SHA1 or above for digests, prefer SHA2 (or equivalent)&lt;br /&gt;
* MD5 should not be used except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for NULL ciphersuites (aNULL or eNULL)&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman &lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of the TLS protocol (e.g. for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of ephemeral Diffie-Hellman key exchange protects the confidentiality of the transmitted plaintext even if the corresponding RSA or DSS server private key is later compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to be able to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG]&lt;br /&gt;
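&lt;br /&gt;
A sketch of such a restriction using Python's standard ssl module (the OpenSSL-style selection string is one reasonable reading of the rules above, not a canonical OWASP cipher list):&lt;br /&gt;

```python
import ssl

# Sketch: restrict the server to strong suites using an OpenSSL-style
# selection string that excludes NULL, anonymous DH and MD5-based suites.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("HIGH:!aNULL:!eNULL:!ADH:!MD5")

# Inspect what actually remains negotiable on this OpenSSL build.
enabled = [suite["name"] for suite in ctx.get_ciphers()]
```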
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject a plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can be subsequently used for session hijacking attacks.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Keys of length 1024 bits are considered obsolete as of 2010.  Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternative Names (SANs) can be used to provide a specific listing of multiple names where the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use Wildcard Certificates ===&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it's made much easier when the key is stored on the file system unprotected.&lt;br /&gt;
&lt;br /&gt;
Statistics gathered by Qualys for [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts.&lt;br /&gt;
&lt;br /&gt;
Finally, wildcard certificates violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use RFC 1918 Addresses ===&lt;br /&gt;
&lt;br /&gt;
RFC 1918 is [http://tools.ietf.org/rfc/rfc1918.txt Address Allocation for Private Internets]. Private addresses are Internet Assigned Numbers Authority (IANA) reserved and include 192.168/16, 172.16/12, and 10/8. Certificates should not use private addresses.&lt;br /&gt;
&lt;br /&gt;
Certificates issued with private addresses violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines]. In addition, Peter Gutmann writes in [http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf Engineering Security]: &amp;quot;This one is particularly troublesome because, in combination with the router-compromise attacks... and ...OCSP-defeating measures, it allows an attacker to spoof any EV-certificate site.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X.509 certificates. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates in between the end-entity (server or host) certificate and root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all required certificates used in a path validation.&lt;br /&gt;
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate cannot be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not properly been validated could be exposed to unauthorized access or modification.&lt;br /&gt;
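&lt;br /&gt;
As a sketch of a client that enforces this, Python's standard ssl module can be used; its default client context requires a chain to a trusted root and a matching hostname, and aborts the handshake (dropping the connection) when either check fails. The hostname in the commented usage is illustrative:&lt;br /&gt;

```python
import ssl

# Sketch: a client context with full certificate validation enabled.
# create_default_context() requires path validation to a trusted root
# and a matching hostname; a failed check aborts the handshake.
ctx = ssl.create_default_context()

# Wrapping a socket would then look like (hostname illustrative):
#   raw = socket.create_connection(("example.com", 443))
#   tls = ctx.wrap_socket(raw, server_hostname="example.com")
```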
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV Certificates) proffer an enhanced investigation by the issuer into the requesting party, a response to the industry's race to the bottom on validation practices. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers that support EV certificates distinguish them in a variety of ways: Internet Explorer will color a portion of the URL in green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High-value websites should consider the use of EV certificates to enhance customer confidence in the certificate. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection. The purpose of the EV certificate is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide its certificate to the server, in addition to the server providing its certificate to the client. If client certificates are used, ensure that the same validation of the client certificate is performed by the server, as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client side certificates is relatively rare currently due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
&lt;br /&gt;
Pinning still requires the customary X.509 checks, such as revocation, since CRLs and OCSP provide real-time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
&lt;br /&gt;
Browser-based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, Javascript and Websockets do not expose methods for a web app to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149157</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149157"/>
				<updated>2013-04-04T13:14:15Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added rule for &amp;quot;Do Not Use RFC 1918 Addresses&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security-specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted. The most common options available to corporations are Virtual Private Networks (VPN) or an SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client. If configured to require client side certificates, TLS can also play a role in client authentication to the server. In practice, however, client side certificates are rarely used; username and password based authentication remains the dominant model for clients.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked: integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Socket Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes considerations for:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over clear text HTTP.&lt;br /&gt;
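The common remediation is to answer any plain-HTTP request for secure content with a permanent redirect to the HTTPS equivalent, rather than serving the content. Below is a minimal sketch using Python's standard library; the handler class name, the loopback address, and the ephemeral port are all illustrative, not part of any real deployment.

```python
import http.client
import http.server
import threading

class RedirectToHTTPS(http.server.BaseHTTPRequestHandler):
    """Answer every plain-HTTP GET with a 301 to the HTTPS equivalent
    instead of serving the (potentially sensitive) content."""

    def do_GET(self):
        # Reuse the client's Host header (minus any port) for the target.
        host = self.headers.get("Host", "example.com").split(":")[0]
        self.send_response(301)
        self.send_header("Location", "https://" + host + self.path)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demonstration quiet

# Bind to an ephemeral port purely for demonstration.
server = http.server.HTTPServer(("127.0.0.1", 0), RedirectToHTTPS)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate a user following an http:// bookmark.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/myaccount")
resp = conn.getresponse()
location = resp.getheader("Location")
conn.close()
server.shutdown()
```

Note the handler never touches application data: the sensitive response is only ever produced on the TLS side.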
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man-in-the-middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set; see the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
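As a small illustration, Python's standard-library `http.cookies` module can emit a Set-Cookie header with the Secure attribute (the cookie name and value below are placeholders). Adding HttpOnly as well is a common companion hardening step, though it is outside this rule.

```python
from http.cookies import SimpleCookie

# Build a session cookie with the "Secure" attribute set so the browser
# will only ever send it back over TLS.
cookie = SimpleCookie()
cookie["session"] = "opaque-session-id"   # illustrative value
cookie["session"]["secure"] = True        # never sent over plain HTTP
cookie["session"]["httponly"] = True      # bonus: hidden from JavaScript
header = cookie.output(header="Set-Cookie:")
```

The resulting `header` string carries both attributes and can be written directly into an HTTP response.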
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. Instead, store sensitive data in a server side repository or within the user's session. When using TLS the URL arguments and values are encrypted during transit. However, there are two ways in which the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit but it does not help with potential data leakage issues at the client or intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0 the response should also include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
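The two anti-caching headers can be captured in a small helper; this is a generic sketch (the function and constant names are invented for illustration) that merges them into whatever header mapping a framework uses.

```python
# Response headers that tell browsers and intermediary proxies not to
# cache or persist the response body.
NO_STORE_HEADERS = {
    "Cache-Control": "no-cache, no-store, must-revalidate",  # HTTP/1.1
    "Pragma": "no-cache",                                    # HTTP/1.0 fallback
}

def add_no_store(headers):
    """Return a copy of an existing header dict with the anti-caching
    directives merged in (existing caching headers are overridden)."""
    merged = dict(headers)
    merged.update(NO_STORE_HEADERS)
    return merged
```

A response builder would call `add_no_store` only on responses that actually contain sensitive data, since disabling caching has a performance cost.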
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header, which instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
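The special response header is named Strict-Transport-Security. A minimal helper for producing it is sketched below; the one-year max-age and the helper's name are illustrative choices, not requirements of the standard.

```python
# One year in seconds is a commonly used HSTS policy lifetime.
HSTS_MAX_AGE = 31536000

def hsts_header(include_subdomains=True):
    """Return the (name, value) pair for the Strict-Transport-Security
    header to attach to every HTTPS response."""
    value = "max-age=" + str(HSTS_MAX_AGE)
    if include_subdomains:
        # Extend the HTTPS-only policy to all subdomains as well.
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)
```

Note the header is only honored when received over HTTPS; browsers ignore it on plain-HTTP responses, so it must be emitted from the TLS side of the site.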
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self-signed certificates is never acceptable. Self-signed certificates negate the benefit of end-point authentication and also significantly decrease an individual's ability to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified with earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to provide support only for the TLS protocols: TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations with lesser security requirements, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC chaining attacks and padding oracle attacks]. SSLv3 and TLSv1.0 should be used only after a risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
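With Python's standard-library `ssl` module, protocol selection is expressed as a floor on the negotiated version. The sketch below pins TLS 1.2 as the minimum, which is stricter than the TLS 1.0 floor this cheat sheet permits; treat the chosen floor as an example to adapt after your own risk analysis. SSLv2 and SSLv3 fall below any such floor and cannot be negotiated.

```python
import ssl

# A server-side context; PROTOCOL_TLS_SERVER negotiates the highest
# TLS version both peers support.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Refuse anything older than TLS 1.2 (an example floor; the cheat sheet
# allows TLS 1.0/1.1 only after risk analysis and acceptance).
context.minimum_version = ssl.TLSVersion.TLSv1_2
```

Handshakes from clients offering only older protocol versions will now fail during negotiation rather than falling back to a weak protocol.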
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides a set of cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. To ensure that only strong cryptographic ciphers are selected, the server must be configured to disable the use of weak ciphers, support only strong ciphers, and use sufficiently large key sizes. In general, the following should be observed when selecting cipher suites:&lt;br /&gt;
&lt;br /&gt;
* Use AES or 3-key 3DES for encryption, operated in CBC mode &lt;br /&gt;
* Prefer cipher modes which XOR a key stream with the plaintext, such as AES in CTR mode&lt;br /&gt;
* Use SHA1 or above for digests, prefer SHA2 (or equivalent)&lt;br /&gt;
* MD5 should not be used except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for NULL ciphersuites (aNULL or eNULL)&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman &lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
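The list above can be expressed as an OpenSSL cipher string via Python's `ssl` module. The string below is one plausible rendering, not a canonical one: it prefers ephemeral (EC)DHE key exchange with AES, and excludes NULL, anonymous DH, and MD5 suites; the AES-GCM preference is a modern addition beyond the CBC/CTR modes the list names. Tune the string against your own OpenSSL build (e.g. with `openssl ciphers -v`).

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Ephemeral key exchange first, AES throughout; "!aNULL:!eNULL" drops the
# NULL and anonymous-DH suites, "!MD5" drops MD5-based suites.
context.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES:DHE+AES:!aNULL:!eNULL:!MD5")

# The suites the context will actually offer after filtering.
enabled = [c["name"] for c in context.get_ciphers()]
```

`set_ciphers` raises `ssl.SSLError` if the string leaves no usable suites, so a typo in the policy fails loudly at startup rather than silently weakening the server.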
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of the TLS protocol (e.g. for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of an ephemeral Diffie-Hellman key exchange will protect the confidentiality of the transmitted plaintext data even if the corresponding RSA or DSS server private key is later compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to be able to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject a plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can be subsequently used for session hijacking attacks.&lt;br /&gt;
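With Python's `ssl` module, TLS-level compression is refused by setting a single context option (recent default contexts already set it, but doing so explicitly documents the intent):

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Refuse TLS-level compression to close the CRIME side channel: with no
# compression, response length no longer leaks how much of an attacker's
# guess matches the secret cookie.
context.options |= ssl.OP_NO_COMPRESSION
```

Note that CRIME's HTTP-level siblings (e.g. attacks on response-body compression) are not addressed by this flag and need separate mitigation.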
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Keys of length 1024 bits are considered obsolete and should no longer be used. Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternate Names (SANs) can be used to provide a specific listing of multiple names where the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
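The matching behavior described above can be illustrated against the dictionary shape that Python's `ssl.SSLSocket.getpeercert()` returns. This helper is purely didactic (exact matching only, no wildcards, SANs taking precedence over the CN); real code should let the TLS library perform hostname verification rather than reimplement it.

```python
def cert_matches_host(cert, hostname):
    """Illustrative exact-match hostname check against the dict shape
    returned by ssl.SSLSocket.getpeercert()."""
    sans = [value for (kind, value) in cert.get("subjectAltName", ())
            if kind == "DNS"]
    if sans:
        # When SANs are present they take precedence over the CN.
        return hostname in sans
    # Fall back to the Subject Common Name only if there are no SANs.
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value == hostname
    return False

# A hypothetical "multiple domain certificate" shaped like the example
# above: CN example.com plus two DNS SANs.
multi_domain_cert = {
    "subject": ((("commonName", "example.com"),),),
    "subjectAltName": (("DNS", "abc.example.com"), ("DNS", "xyz.example.com")),
}
```

With SANs present, a request for a name listed only in the CN does not match, which mirrors how modern clients treat multi-domain certificates.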
&lt;br /&gt;
=== Rule - Do Not Use Wildcard Certificates ===&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it is made much easier when the key is stored on the file system unprotected.&lt;br /&gt;
&lt;br /&gt;
Statistics gathered by Qualys for [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts.&lt;br /&gt;
&lt;br /&gt;
Finally, wildcard certificates violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use RFC 1918 Addresses ===&lt;br /&gt;
&lt;br /&gt;
RFC 1918 is [http://tools.ietf.org/rfc/rfc1918.txt Address Allocation for Private Internets]. Private addresses are Internet Assigned Numbers Authority (IANA) reserved and include 192.168/16, 172.16/12, and 10/8. Certificates should not use private addresses.&lt;br /&gt;
&lt;br /&gt;
Certificates issued with private addresses violate [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines]. In addition, Peter Gutmann writes in [http://www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf Engineering Security]: &amp;quot;This one is particularly troublesome because, in combination with the router-compromise attacks... and ...OCSP-defeating measures, it allows an attacker to spoof any EV-certificate site.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X509 certificate. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates in between the end-entity (server or host) certificate and the root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all required certificates used in a path validation.&lt;br /&gt;
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate cannot be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not been properly validated could be exposed to unauthorized access or modification.&lt;br /&gt;
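For non-browser clients written in Python, the standard-library `ssl` module implements this behavior out of the box: `create_default_context()` requires a valid, trusted certificate and a matching hostname, and a failed check aborts the handshake before any data is exchanged. The helper function name and its arguments below are illustrative.

```python
import socket
import ssl

# create_default_context() enables certificate verification against the
# system trust store and hostname checking; a failed validation raises
# ssl.SSLCertVerificationError and the connection is never established.
context = ssl.create_default_context()

def open_verified(hostname, port=443):
    """Return a TLS-wrapped socket only if the server's certificate
    validates; otherwise the wrap raises and no channel is opened."""
    sock = socket.create_connection((hostname, port))
    return context.wrap_socket(sock, server_hostname=hostname)
```

The important design point is that the failure mode is a refused connection, never a warning the caller can ignore.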
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV Certificates) require an enhanced investigation by the issuer into the requesting party, a response to the industry's race to the bottom in standard certificate vetting. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers with support for EV certificates distinguish an EV certificate in a variety of ways. Internet Explorer will color a portion of the URL in green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence. It should be noted, however, that EV certificates do not provide any greater technical security for the TLS connection. The purpose of the EV certificate is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide its certificate to the server, in addition to the server providing its certificate to the client. If client certificates are used, ensure that the server performs the same validation of the client certificate as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client side certificates is currently relatively rare due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
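As a sketch of the server-side configuration described above, using Python's standard ''ssl'' module (the file paths are hypothetical placeholders, not prescribed names):&lt;br /&gt;

```python
import ssl

def make_mutual_tls_context(certfile, keyfile, client_ca_file):
    """Build a server-side context for "two-way TLS" (hypothetical file paths)."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile, keyfile)      # the server's own certificate
    context.load_verify_locations(client_ca_file)   # CA that issued client certificates
    # Require a valid client certificate; the handshake (and therefore the
    # connection) fails if one is not provided or cannot be verified.
    context.verify_mode = ssl.CERT_REQUIRED
    return context
```

With CERT_REQUIRED, the TLS library itself drops the connection when no verifiable client certificate is presented, satisfying the rule above.&lt;br /&gt;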
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
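A minimal sketch of that runtime inspection, using Python's standard ''ssl'' and ''hashlib'' modules and pinning the SHA-256 digest of the server's DER-encoded certificate; the pinned value itself would be captured out-of-band when the relationship was established and is an assumption here:&lt;br /&gt;

```python
import hashlib
import socket
import ssl

def certificate_matches_pin(der_cert, pinned_sha256_hex):
    """Compare the received certificate against the pre-established identity."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_sha256_hex

def connect_with_pin(host, port, pinned_sha256_hex):
    context = ssl.create_default_context()  # customary X.509 validation still runs
    sock = socket.create_connection((host, port))
    tls = context.wrap_socket(sock, server_hostname=host)
    der_cert = tls.getpeercert(binary_form=True)
    if not certificate_matches_pin(der_cert, pinned_sha256_hex):
        tls.close()  # unexpected identity: stop using the channel
        raise ssl.SSLError("certificate does not match the pinned identity")
    return tls  # expected identity: proceed as normal
```

Pinning the public key rather than the whole certificate is a common variant; it survives certificate renewal as long as the key pair is reused.&lt;br /&gt;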
&lt;br /&gt;
Pinning still requires the customary X.509 checks, such as revocation, since CRLs and OCSP provide real time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
&lt;br /&gt;
Browser based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, JavaScript and WebSockets do not expose methods for a web app to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement an effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149156</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149156"/>
				<updated>2013-04-04T12:59:43Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added rule for &amp;quot;Do Not Use Wildcard Certificates&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted. The most common options available to corporations are Virtual Private Networks (VPN) or an SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client. If configured to require client side certificates, TLS can also play a role in client authentication to the server. However, in practice client side certificates are rarely used; username and password based authentication models remain far more common for clients.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked: integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built-in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Socket Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration of:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over clear text HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man in the middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
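For illustration, Python's standard ''http.cookies'' module can emit a Set-Cookie header with the flag set; the cookie name and value below are placeholders:&lt;br /&gt;

```python
from http import cookies

jar = cookies.SimpleCookie()
jar["SESSIONID"] = "opaque-random-value"   # placeholder session identifier
jar["SESSIONID"]["secure"] = True          # never send over unencrypted HTTP
jar["SESSIONID"]["httponly"] = True        # also keep it away from script access
header = jar.output()  # a "Set-Cookie:" line including the Secure attribute
```

The HttpOnly attribute is an extra hardening step beyond this rule; the essential part here is ''secure''.&lt;br /&gt;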
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. Instead, store sensitive data in a server side repository or within the user's session. When using TLS the URL arguments and values are encrypted during transit. However, there are two methods by which the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit, but it does not help with potential data leakage issues at the client or intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0 the response should also include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
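A sketch of the headers described above, e.g. for use with a ''BaseHTTPRequestHandler''-style object from Python's standard ''http.server''; the values are taken from the rule and should be adjusted per application:&lt;br /&gt;

```python
# Headers that instruct browsers and intermediary proxies not to cache
# or persist a sensitive response.
SENSITIVE_RESPONSE_HEADERS = [
    ("Cache-Control", "no-cache, no-store, must-revalidate"),
    ("Pragma", "no-cache"),  # HTTP/1.0 compatibility
]

def send_no_cache_headers(handler):
    """Apply the headers via a handler exposing send_header(name, value)."""
    for name, value in SENSITIVE_RESPONSE_HEADERS:
        handler.send_header(name, value)
```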
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header and this instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
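The mechanism itself is a single response header. As a sketch, the max-age value (one year here) and the includeSubDomains directive are assumptions to be tuned per site:&lt;br /&gt;

```python
# HSTS instructs compatible browsers to access the site only over HTTPS
# for max-age seconds after the header is seen.
HSTS_HEADER_NAME = "Strict-Transport-Security"
HSTS_HEADER_VALUE = "max-age=31536000; includeSubDomains"  # 1 year; directives per site policy
```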
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self signed certificates is never acceptable. Self signed certificates negate the benefit of end-point authentication and also significantly decrease the ability for an individual to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified with earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to provide support only for the TLS protocols - TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations where lesser security requirements are necessary, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC chaining attacks and padding oracle attacks]. SSLv3 and TLSv1.0 should be used only after a risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
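With Python's standard ''ssl'' module, for example, a server context can be configured so the broken SSL protocols are never negotiated; a sketch, since the exact knobs vary by library and OpenSSL build:&lt;br /&gt;

```python
import ssl

# PROTOCOL_TLS_SERVER negotiates the highest TLS version both sides support.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Explicitly refuse the broken SSL protocols regardless of build defaults.
context.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3
```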
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. In order to ensure that only strong cryptographic ciphers are selected the server must be modified to disable the use of weak ciphers. It is recommended to configure the server to only support strong ciphers and to use sufficiently large key sizes. In general, the following should be observed when selecting cipher suites:&lt;br /&gt;
&lt;br /&gt;
* Use AES, 3-key 3DES for encryption operated in CBC mode &lt;br /&gt;
* Use stream ciphers which XOR the key stream with plaintext (such as AES/CTR mode)&lt;br /&gt;
* Use SHA1 or above for digests, prefer SHA2 (or equivalent)&lt;br /&gt;
* MD5 should not be used except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for NULL ciphersuites (aNULL or eNULL)&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman &lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of the TLS protocol (e.g. for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of an ephemeral Diffie-Hellman key exchange will protect the confidentiality of the transmitted plaintext data even if the corresponding RSA or DSS server private key is later compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to be able to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG]&lt;br /&gt;
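Several of the bullets above map directly onto OpenSSL cipher-string keywords. As a sketch with Python's standard ''ssl'' module (the exact string is an illustration of the rule, not a definitive recommendation):&lt;br /&gt;

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Strong suites only: exclude NULL suites (aNULL/eNULL), anonymous
# Diffie-Hellman (ADH), and MD5-based suites.
context.set_ciphers("HIGH:!aNULL:!eNULL:!ADH:!MD5")
```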
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject plaintext of his choice into the TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can be subsequently used for session hijacking attacks.&lt;br /&gt;
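Where the TLS library exposes it, compression can be switched off directly. A sketch with Python's standard ''ssl'' module; the OP_NO_COMPRESSION option assumes a sufficiently recent OpenSSL:&lt;br /&gt;

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Refuse TLS-level compression so CRIME-style compression-ratio attacks
# have nothing to measure.
context.options |= ssl.OP_NO_COMPRESSION
```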
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Keys of length 1024 bits are considered obsolete as of 2010. Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternate Names (SANs) can be used to provide a specific listing of multiple names where the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Use Wildcard Certificates ===&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it's made much easier when the key is stored on the file system unprotected.&lt;br /&gt;
&lt;br /&gt;
Statistics gathered by Qualys for [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts.&lt;br /&gt;
&lt;br /&gt;
Finally, wildcard certificates violate the [https://www.cabforum.org/EV_Certificate_Guidelines.pdf EV Certificate Guidelines].&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X.509 certificates. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates in between the end-entity (server or host) certificate and the root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all required certificates used in a path validation.&lt;br /&gt;
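&lt;br /&gt;
As a sketch of server-side provisioning, Python's ''ssl'' module lets a server load the end-entity certificate together with its intermediates from a single file, so clients receive the complete chain. The file paths here are hypothetical placeholders:&lt;br /&gt;

```python
import ssl

def make_server_context(fullchain_path, key_path):
    """Build a server context that presents the full certificate chain.

    fullchain_path (a hypothetical placeholder) must contain the
    end-entity certificate first, followed by each intermediate up to,
    but not including, the trusted root, so clients can complete path
    validation without hunting for intermediates themselves.
    """
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile=fullchain_path, keyfile=key_path)
    return context
```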
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate cannot be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not been properly validated could be exposed to unauthorized access or modification.&lt;br /&gt;
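&lt;br /&gt;
A minimal sketch of such strict validation for a non-browser client, using Python's ''ssl'' module; the default context already performs chain validation and hostname checking, and any failure aborts the connection before data is exchanged:&lt;br /&gt;

```python
import socket
import ssl

# The default context verifies the chain against the system trust
# store and checks the certificate against the requested hostname.
context = ssl.create_default_context()

def fetch_peer_certificate(host, port=443):
    """Connect over TLS; any validation failure drops the connection."""
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()
    except ssl.SSLCertVerificationError:
        # Per the guidance above: do not exchange any data.
        raise
```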
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV certificates) involve an enhanced investigation by the issuer into the requesting party, a response to the race to the bottom in the industry's vetting practices. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers with support for EV certificates distinguish an EV certificate in a variety of ways. Internet Explorer will color a portion of the URL in green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence in the certificate. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection. The purpose of the EV certificate is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide their certificate to the server, in addition to the server providing theirs to the client. If client certificates are used, ensure that the same validation of the client certificate is performed by the server, as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client side certificates is relatively rare currently due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
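&lt;br /&gt;
For a server written against Python's ''ssl'' module, requiring and validating client certificates can be sketched as follows; the certificate and CA file paths are hypothetical placeholders:&lt;br /&gt;

```python
import ssl

# Server-side context for mutual ("two-way") TLS.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# CERT_REQUIRED drops the handshake if the client presents no
# certificate, or one that fails validation, as the rule prescribes.
context.verify_mode = ssl.CERT_REQUIRED

def load_credentials(ctx, server_cert, server_key, client_ca):
    """Load the server's own chain and the CA trusted for client
    certificates (all paths are hypothetical placeholders)."""
    ctx.load_cert_chain(server_cert, server_key)
    ctx.load_verify_locations(cafile=client_ca)
```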
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
&lt;br /&gt;
Pinning still requires customary X509 checks, such as revocation, since CRLs and OCSP provide real-time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
&lt;br /&gt;
Browser based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, JavaScript and WebSockets do not expose methods for a web app to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
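&lt;br /&gt;
The runtime check described above can be sketched in Python. Here the pin is the SHA-256 digest of the expected DER-encoded certificate; the pinned value shown is a placeholder that a real application would embed at build time:&lt;br /&gt;

```python
import hashlib
import ssl

# Placeholder pin: the hex SHA-256 digest of the expected certificate.
PINNED_SHA256 = "0" * 64  # hypothetical value, not a real pin

def certificate_matches_pin(der_cert: bytes, pinned_hex: str) -> bool:
    """True if the received certificate matches the embedded pin."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex

def verify_pin(tls_socket, pinned_hex=PINNED_SHA256):
    """Inspect the peer certificate after the handshake; abandon the
    channel if it is not the one the application expects."""
    der = tls_socket.getpeercert(binary_form=True)
    if not certificate_matches_pin(der, pinned_hex):
        tls_socket.close()  # an adversary may control the channel
        raise ssl.SSLError("peer certificate does not match pin")
```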
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement an effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149109</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149109"/>
				<updated>2013-04-03T16:51:28Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Improved flow&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but that we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted.  The most common options available to corporations are Virtual Private Networks (VPN) or a SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet facing enterprise web application would likely be best served by a SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client.  If configured to require client side certificates, TLS can also play a role in client authentication to the server. However, in practice client side certificates are not often used in lieu of username and password based authentication models for clients.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked; integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built-in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depend on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration for:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over cleartext HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man-in-the-middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;Secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content, since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
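&lt;br /&gt;
Using Python's standard ''http.cookies'' module, setting the flag can be sketched as follows; the cookie name and value are placeholders:&lt;br /&gt;

```python
from http.cookies import SimpleCookie

# Mark the session cookie Secure so the browser only sends it over
# TLS; HttpOnly additionally keeps it away from script access.
cookie = SimpleCookie()
cookie["sessionid"] = "opaque-session-token"  # hypothetical value
cookie["sessionid"]["secure"] = True
cookie["sessionid"]["httponly"] = True

# Renders the full Set-Cookie header line for the HTTP response.
header = cookie.output()
```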
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. A more appropriate approach is to store sensitive data in a server-side repository or within the user's session. When using TLS the URL arguments and values are encrypted during transit. However, there are two ways in which the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit, but it does not help with potential data leakage issues at the client or intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0, the response should include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
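&lt;br /&gt;
A minimal sketch of these anti-caching response headers in Python:&lt;br /&gt;

```python
# Headers that tell browsers and intermediary proxies not to cache
# sensitive responses: HTTP/1.1 directives plus the HTTP/1.0
# "Pragma" fallback described above.
NO_STORE_HEADERS = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",
}

def apply_no_store(headers: dict) -> dict:
    """Merge the anti-caching headers into an outgoing response."""
    headers.update(NO_STORE_HEADERS)
    return headers
```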
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header and this instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
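&lt;br /&gt;
Emitting the HSTS response header can be sketched as follows; the one-year ''max-age'' shown is an example policy, not a mandated value:&lt;br /&gt;

```python
# Hypothetical policy: one-year max-age, extended to all subdomains.
HSTS_VALUE = "max-age=31536000; includeSubDomains"

def add_hsts(headers: dict) -> dict:
    """Attach the HSTS header to an HTTPS response so compatible
    browsers enforce HTTPS-only access to the domain."""
    headers["Strict-Transport-Security"] = HSTS_VALUE
    return headers
```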
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self signed certificates is never acceptable. Self signed certificates negate the benefit of end-point authentication and also significantly decrease the ability for an individual to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified with earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to only provide support for the TLS protocols - TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations where security requirements are less stringent, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC chaining attacks and padding oracle attacks]. SSLv3 and TLSv1.0 should be used only after risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
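&lt;br /&gt;
With Python's ''ssl'' module, restricting the protocol floor can be sketched as follows. This example requires TLS 1.2 outright, which is stricter than the minimum described above; relax the floor only after the risk analysis described for the older protocols:&lt;br /&gt;

```python
import ssl

# PROTOCOL_TLS_SERVER already refuses SSLv2/SSLv3 on modern builds;
# raising minimum_version pins the protocol floor explicitly.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
```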
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. In order to ensure that only strong cryptographic ciphers are selected, the server must be modified to disable the use of weak ciphers. It is recommended to configure the server to only support strong ciphers and to use sufficiently large key sizes. In general, the following should be observed when selecting cipher suites:&lt;br /&gt;
&lt;br /&gt;
* Use AES or 3-key 3DES for encryption, operated in CBC mode &lt;br /&gt;
* Stream ciphers which XOR the key stream with the plaintext (such as AES in CTR mode)&lt;br /&gt;
* Use SHA1 or above for digests, prefer SHA2 (or equivalent)&lt;br /&gt;
* MD5 should not be used except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for NULL ciphersuites (aNULL or eNULL)&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman &lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of TLS protocol (e.g. for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of ephemeral Diffie-Hellman key exchange will protect the confidentiality of the transmitted plaintext even if the corresponding RSA or DSS server private key is compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to be able to extract the transmitted plaintext. All modern browsers support this key exchange, with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be obtained within the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG]&lt;br /&gt;
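&lt;br /&gt;
The bullets above can be sketched as an OpenSSL-style cipher string via Python's ''ssl'' module. The exact string here is an illustrative starting point, not a mandated configuration: it prefers ephemeral (EC)DHE key exchange with AES and excludes the anonymous, NULL, and MD5-based suites:&lt;br /&gt;

```python
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# Prefer ephemeral key exchange; forbid anonymous (aNULL), NULL
# encryption (eNULL), and MD5-based suites, per the rules above.
context.set_ciphers("ECDHE+AESGCM:ECDHE+AES:DHE+AES:!aNULL:!eNULL:!MD5")

# The suites the context will actually offer after filtering.
enabled = [c["name"] for c in context.get_ciphers()]
```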
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject a plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can be subsequently used for session hijacking attacks. Disabling TLS compression mitigates the attack.&lt;br /&gt;
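Where the TLS stack is configurable, compression can be switched off explicitly; a minimal sketch with Python's ssl module:&lt;br /&gt;

```python
import ssl

# Disable TLS-level compression to mitigate CRIME (OP_NO_COMPRESSION is
# available in Python 3.3+ with a sufficiently recent OpenSSL).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.options |= ssl.OP_NO_COMPRESSION

print(ssl.OP_NO_COMPRESSION in ssl.Options(ctx.options))
```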
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Keys of length 1024 bits are considered obsolete as of 2010.  Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names  ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternative Name (SAN) extension can be used to list multiple names for which the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby and the sign-in kiosk. Obtaining access to the private key is left as an exercise for the attacker, but it is made much easier when the key is stored on the file system unprotected. Finally, statistics gathered by Qualys for their [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey 2010] indicate wildcard certificates have a 4.4% share, so the practice is not standard for public facing hosts.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X.509 certificates. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates between the end-entity (server or host) certificate and the root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all required certificates used in a path validation.&lt;br /&gt;
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate cannot be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not been properly validated could be exposed to unauthorized access or modification.&lt;br /&gt;
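As a sketch of this behavior in a non-browser client, Python's ssl module (3.4+) enables strict validation by default when a context is created with create_default_context; a failed validation aborts the handshake, which satisfies the drop-the-connection requirement:&lt;br /&gt;

```python
import ssl

# A client context with full path validation against the system trust
# store and hostname checking enabled. If either check fails, wrapping
# the socket raises an exception and no application data is exchanged.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)
print(ctx.check_hostname)
```

Never relax verify_mode to CERT_NONE for convenience; doing so disables exactly the checks this section requires.&lt;br /&gt;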
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV certificates) proffer an enhanced investigation by the issuer into the requesting party, a response to the industry's race to the bottom in standard certificate vetting. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers with support for EV certificates distinguish an EV certificate in a variety of ways. Internet Explorer will color a portion of the URL in green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence in the certificate. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection. The purpose of the EV certificate is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide its certificate to the server, in addition to the server providing its certificate to the client. If client certificates are used, ensure that the server performs the same validation of the client certificate as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client side certificates is relatively rare currently due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
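A hedged sketch of the server side of two-way TLS using Python's ssl module; the file names are hypothetical placeholders:&lt;br /&gt;

```python
import ssl

# Server context for mutual (two-way) TLS: require and verify a client
# certificate, or drop the connection during the handshake.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert

# Hypothetical file names -- supply your own server identity and the CA
# that issues client certificates:
# ctx.load_cert_chain("server-cert.pem", "server-key.pem")
# ctx.load_verify_locations("client-ca.pem")

print(ctx.verify_mode == ssl.CERT_REQUIRED)
```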
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
&lt;br /&gt;
Pinning still requires customary X.509 checks, such as revocation, since CRLs and OCSP provide real time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
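A minimal sketch of certificate pinning in Python, assuming the pin is a SHA-256 digest of the DER-encoded certificate shipped with the application (the pin value below is a demonstration value derived from placeholder bytes, not a real certificate digest):&lt;br /&gt;

```python
import hashlib

# Demonstration pin: SHA-256 of the placeholder bytes b"test", standing in
# for the digest of a real DER-encoded certificate distributed with the app.
EXPECTED_PIN = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

def certificate_matches_pin(der_bytes, expected_hex):
    # der_bytes would come from sock.getpeercert(binary_form=True) after
    # the customary X.509 checks have passed; close the channel on mismatch.
    return hashlib.sha256(der_bytes).hexdigest() == expected_hex

print(certificate_matches_pin(b"test", EXPECTED_PIN))      # matching pin
print(certificate_matches_pin(b"tampered", EXPECTED_PIN))  # mismatch: reject
```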
&lt;br /&gt;
Browser based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, JavaScript and WebSockets do not expose methods for a web app to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149108</id>
		<title>Transport Layer Protection Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Transport_Layer_Protection_Cheat_Sheet&amp;diff=149108"/>
				<updated>2013-04-03T16:48:30Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added statistic on wildcard certifcate market share&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Introduction  =&lt;br /&gt;
&lt;br /&gt;
This article provides a simple model to follow when implementing transport layer protection for an application. Although the concept of SSL is known to many, the actual details and security-specific decisions of implementation are often poorly understood and frequently result in insecure deployments. This article establishes clear rules which provide guidance on securely designing and configuring transport layer security for an application. This article is focused on the use of SSL/TLS between a web application and a web browser, but we also encourage the use of SSL/TLS or other network encryption technologies, such as VPN, on back end and other non-browser based connections.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Architectural Decision  ==&lt;br /&gt;
&lt;br /&gt;
An architectural decision must be made to determine the appropriate method to protect data when it is being transmitted.  The most common options available to corporations are Virtual Private Networks (VPN) or the SSL/TLS model commonly used by web applications. The selected model is determined by the business needs of the particular organization. For example, a VPN connection may be the best design for a partnership between two companies that includes mutual access to a shared server over a variety of protocols. Conversely, an Internet facing enterprise web application would likely be best served by an SSL/TLS model. &lt;br /&gt;
&lt;br /&gt;
This cheat sheet will focus on security considerations when the SSL/TLS model is selected. This is a frequently used model for publicly accessible web applications.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection with SSL/TLS  =&lt;br /&gt;
&lt;br /&gt;
== Benefits  ==&lt;br /&gt;
&lt;br /&gt;
The primary benefit of transport layer security is the protection of web application data from unauthorized disclosure and modification when it is transmitted between clients (web browsers) and the web application server, and between the web application server and back end and other non-browser based enterprise components. &lt;br /&gt;
&lt;br /&gt;
The server validation component of TLS provides authentication of the server to the client.  If configured to require client side certificates, TLS can also play a role in client authentication to the server. However, in practice client side certificates are rarely used; username and password based authentication remains the usual model for clients.&lt;br /&gt;
&lt;br /&gt;
TLS also provides two additional benefits that are commonly overlooked: integrity guarantees and replay prevention. A TLS stream of communication contains built-in controls to prevent tampering with any portion of the encrypted data. In addition, controls are also built-in to prevent a captured stream of TLS data from being replayed at a later time.&lt;br /&gt;
&lt;br /&gt;
It should be noted that TLS provides the above guarantees to data during transmission. TLS does not offer any of these security benefits to data that is at rest. Therefore appropriate security controls must be added to protect data while at rest within the application or within data stores.&lt;br /&gt;
&lt;br /&gt;
== Basic Requirements ==&lt;br /&gt;
&lt;br /&gt;
The basic requirements for using TLS are: access to a Public Key Infrastructure (PKI) in order to obtain certificates, access to a directory or an Online Certificate Status Protocol (OCSP) responder in order to check certificate revocation status, and agreement/ability to support a minimum configuration of protocol versions and protocol options for each version.&lt;br /&gt;
&lt;br /&gt;
== SSL vs. TLS  ==&lt;br /&gt;
&lt;br /&gt;
The terms Secure Socket Layer (SSL) and Transport Layer Security (TLS) are often used interchangeably. In fact, SSL v3.1 is equivalent to TLS v1.0. However, different versions of SSL and TLS are supported by modern web browsers and by most modern web frameworks and platforms. For the purposes of this cheat sheet we will refer to the technology generically as TLS. Recommendations regarding the use of SSL and TLS protocols, as well as browser support for TLS, can be found in the rule below titled [[Transport_Layer_Protection_Cheat_Sheet#Rule_-_Only_Support_Strong_Protocols| &amp;quot;Only Support Strong Protocols&amp;quot;]].&lt;br /&gt;
&lt;br /&gt;
[[Image:Asvs_cryptomodule.gif|thumb|350px|right|Cryptomodule Parts and Operation]]&lt;br /&gt;
&lt;br /&gt;
== When to Use a FIPS 140-2 Validated Cryptomodule ==&lt;br /&gt;
&lt;br /&gt;
If the web application may be the target of determined attackers (a common threat model for Internet accessible applications handling sensitive data), it is strongly advised to use TLS services that are provided by [http://csrc.nist.gov/groups/STM/cmvp/validation.html FIPS 140-2 validated cryptomodules]. &lt;br /&gt;
&lt;br /&gt;
A cryptomodule, whether it is a software library or a hardware device, basically consists of three parts:&lt;br /&gt;
&lt;br /&gt;
* Components that implement cryptographic algorithms (symmetric and asymmetric algorithms, hash algorithms, random number generator algorithms, and message authentication code algorithms) &lt;br /&gt;
* Components that call and manage cryptographic functions (inputs and outputs include cryptographic keys and so-called critical security parameters) &lt;br /&gt;
* A physical container around the components that implement cryptographic algorithms and the components that call and manage cryptographic functions&lt;br /&gt;
&lt;br /&gt;
The security of a cryptomodule and its services (and the web applications that call the cryptomodule) depends on the correct implementation and integration of each of these three parts. In addition, the cryptomodule must be used and accessed securely. This includes consideration of:&lt;br /&gt;
&lt;br /&gt;
* Calling and managing cryptographic functions&lt;br /&gt;
* Securely handling inputs and outputs&lt;br /&gt;
* Ensuring the secure construction of the physical container around the components&lt;br /&gt;
&lt;br /&gt;
In order to leverage the benefits of TLS it is important to use a TLS service (e.g. library, web framework, web application server) which has been FIPS 140-2 validated. In addition, the cryptomodule must be installed, configured and operated in either an approved or an allowed mode to provide a high degree of certainty that the FIPS 140-2 validated cryptomodule is providing the expected security services in the expected manner.&lt;br /&gt;
&lt;br /&gt;
If the system is legally required to use FIPS 140-2 encryption (e.g., owned or operated by or on behalf of the U.S. Government) then TLS must be used and SSL disabled. Details on why SSL is unacceptable are described in Section 7.1 of [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program].&lt;br /&gt;
&lt;br /&gt;
Further reading on the use of TLS to protect highly sensitive data against determined attackers can be viewed in [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP800-52 Guidelines for the Selection and Use of Transport Layer Security (TLS) Implementations]&lt;br /&gt;
&lt;br /&gt;
== Secure Server Design  ==&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS for All Login Pages and All Authenticated Pages  ===&lt;br /&gt;
&lt;br /&gt;
The login page and all subsequent authenticated pages must be exclusively accessed over TLS. The initial login page, referred to as the &amp;quot;login landing page&amp;quot;, must be served over TLS. Failure to utilize TLS for the login landing page allows an attacker to modify the login form action, causing the user's credentials to be posted to an arbitrary location. Failure to utilize TLS for authenticated pages after the login enables an attacker to view the unencrypted session ID and compromise the user's authenticated session. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Use TLS on Any Networks (External and Internal) Transmitting Sensitive Data  ===&lt;br /&gt;
&lt;br /&gt;
All networks, both external and internal, which transmit sensitive data must utilize TLS or an equivalent transport layer security mechanism. It is not sufficient to claim that access to the internal network is &amp;quot;restricted to employees&amp;quot;. Numerous recent data compromises have shown that the internal network can be breached by attackers. In these attacks, sniffers have been installed to access unencrypted sensitive data sent on the internal network. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Provide Non-TLS Pages for Secure Content  ===&lt;br /&gt;
&lt;br /&gt;
All pages which are available over TLS must not be available over a non-TLS connection. A user may inadvertently bookmark or manually type a URL to an HTTP page (e.g. http://example.com/myaccount) within the authenticated portion of the application. If this request is processed by the application then the response, and any sensitive data, would be returned to the user over clear text HTTP.&lt;br /&gt;
&lt;br /&gt;
=== Rule - REMOVED - Do Not Perform Redirects from Non-TLS Page to TLS Login Page  ===&lt;br /&gt;
&lt;br /&gt;
This recommendation has been removed. Ultimately, the below guidance will only provide user education and cannot provide any technical controls to protect the user against a man-in-the-middle attack.  &lt;br /&gt;
&lt;br /&gt;
--&lt;br /&gt;
&lt;br /&gt;
A common practice is to redirect users that have requested a non-TLS version of the login page to the TLS version (e.g. http://example.com/login redirects to https://example.com/login). This practice creates an additional attack vector for a man in the middle attack. In addition, redirecting from non-TLS versions to the TLS version reinforces to the user that the practice of requesting the non-TLS page is acceptable and secure.&lt;br /&gt;
&lt;br /&gt;
In this scenario, the man-in-the-middle attack is used by the attacker to intercept the non-TLS to TLS redirect message. The attacker then injects the HTML of the actual login page and changes the form to post over unencrypted HTTP. This allows the attacker to view the user's credentials as they are transmitted in the clear.&lt;br /&gt;
&lt;br /&gt;
It is recommended to display a security warning message to the user whenever the non-TLS login page is requested. This security warning should urge the user to always type &amp;quot;HTTPS&amp;quot; into the browser or bookmark the secure login page.  This approach will help educate users on the correct and most secure method of accessing the application.&lt;br /&gt;
&lt;br /&gt;
Currently there are no controls that an application can enforce to entirely mitigate this risk. Ultimately, this issue is the responsibility of the user since the application cannot prevent the user from initially typing [http://owasp.org http://example.com/login] (versus HTTPS). &lt;br /&gt;
&lt;br /&gt;
Note: [http://www.w3.org/Security/wiki/Strict_Transport_Security Strict Transport Security] will address this issue and will provide a server side control to instruct supporting browsers that the site should only be accessed over HTTPS&lt;br /&gt;
&lt;br /&gt;
=== Rule - Do Not Mix TLS and Non-TLS Content  ===&lt;br /&gt;
&lt;br /&gt;
A page that is available over TLS must be comprised completely of content which is transmitted over TLS. The page must not contain any content that is transmitted over unencrypted HTTP. This includes content from unrelated third party sites. &lt;br /&gt;
&lt;br /&gt;
An attacker could intercept any of the data transmitted over the unencrypted HTTP and inject malicious content into the user's page. This malicious content would be included in the page even if the overall page is served over TLS. In addition, an attacker could steal the user's session cookie that is transmitted with any non-TLS requests. This is possible if the cookie's 'secure' flag is not set. See the rule 'Use &amp;quot;Secure&amp;quot; Cookie Flag'&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use &amp;quot;Secure&amp;quot; Cookie Flag  ===&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;Secure&amp;quot; flag must be set for all user cookies. Failure to use the &amp;quot;secure&amp;quot; flag enables an attacker to access the session cookie by tricking the user's browser into submitting a request to an unencrypted page on the site. This attack is possible even if the server is not configured to offer HTTP content since the attacker is monitoring the requests and does not care if the server responds with a 404 or doesn't respond at all.&lt;br /&gt;
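A minimal sketch using Python's standard http.cookies module (the cookie name and value are illustrative):&lt;br /&gt;

```python
from http.cookies import SimpleCookie

# Emit a session cookie with the Secure flag (sent over TLS only) and, as
# a common companion hardening measure, HttpOnly (not readable by script).
cookie = SimpleCookie()
cookie["session"] = "opaque-session-id"
cookie["session"]["secure"] = True
cookie["session"]["httponly"] = True

print(cookie.output())
```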
&lt;br /&gt;
=== Rule - Keep Sensitive Data Out of the URL ===&lt;br /&gt;
&lt;br /&gt;
Sensitive data must not be transmitted via URL arguments. It is more appropriate to store sensitive data in a server side repository or within the user's session.  When using TLS the URL arguments and values are encrypted during transit. However, there are two ways in which the URL arguments and values could be exposed.&lt;br /&gt;
&lt;br /&gt;
1. The entire URL is cached within the local user's browser history. This may expose sensitive data to any other user of the workstation.&lt;br /&gt;
&lt;br /&gt;
2. The entire URL is exposed if the user clicks on a link to another HTTPS site. This may expose sensitive data within the referral field to the third party site. This exposure occurs in most browsers and will only occur on transitions between two TLS sites. &lt;br /&gt;
&lt;br /&gt;
For example, a user following a link on [http://owasp.org https://example.com] which leads to [http://owasp.org https://someOtherexample.com] would expose the full URL of [http://owasp.org https://example.com] (including URL arguments) in the referral header (within most browsers). This would not be the case if the user followed a link on [http://owasp.org https://example.com] to [http://owasp.org http://someHTTPexample.com]&lt;br /&gt;
&lt;br /&gt;
=== Rule - Prevent Caching of Sensitive Data ===&lt;br /&gt;
&lt;br /&gt;
The TLS protocol provides confidentiality only for data in transit; it does not help with potential data leakage at the client or intermediary proxies. As a result, it is frequently prudent to instruct these nodes not to cache or persist sensitive data. One option is to add a suitable Cache-Control header to relevant HTTP responses, for example &amp;quot;Cache-Control: no-cache, no-store, must-revalidate&amp;quot;. For compatibility with HTTP/1.0 the response should also include the header &amp;quot;Pragma: no-cache&amp;quot;. More information is available in [http://www.ietf.org/rfc/rfc2616.txt HTTP 1.1 RFC 2616], section 14.9.&lt;br /&gt;
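The headers described above can be expressed framework-agnostically; a small sketch in Python:&lt;br /&gt;

```python
# Response headers that instruct browsers and intermediary proxies not to
# cache or persist a sensitive response (per RFC 2616 section 14.9).
SENSITIVE_RESPONSE_HEADERS = {
    "Cache-Control": "no-cache, no-store, must-revalidate",
    "Pragma": "no-cache",  # HTTP/1.0 compatibility
}

for name, value in SENSITIVE_RESPONSE_HEADERS.items():
    print(name + ": " + value)
```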
&lt;br /&gt;
=== Rule - Use HTTP Strict Transport Security ===&lt;br /&gt;
&lt;br /&gt;
A new browser security setting called HTTP Strict Transport Security (HSTS) will significantly enhance the implementation of TLS for a domain. HSTS is enabled via a special response header and this instructs [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security#Browser_Support compatible browsers] to enforce the following security controls:&lt;br /&gt;
&lt;br /&gt;
* All requests to the domain will be sent over HTTPS&lt;br /&gt;
* Any attempt to send an HTTP request to the domain will be automatically upgraded by the browser to HTTPS before the request is sent&lt;br /&gt;
* If a user encounters a bad SSL certificate, the user will receive an error message and will not be allowed to override the warning message&lt;br /&gt;
&lt;br /&gt;
Additional information on HSTS can be found at [https://www.owasp.org/index.php/HTTP_Strict_Transport_Security https://www.owasp.org/index.php/HTTP_Strict_Transport_Security] and also on the OWASP [http://www.youtube.com/watch?v=zEV3HOuM_Vw&amp;amp;feature=youtube_gdata AppSecTutorial Series - Episode 4]&lt;br /&gt;
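Enabling HSTS amounts to adding one response header on every HTTPS response; a sketch (the one-year max-age and includeSubDomains directive are illustrative policy choices):&lt;br /&gt;

```python
# HSTS response header: browsers that see this over HTTPS will refuse
# plain-HTTP access to the domain for max-age seconds (one year here).
HSTS_HEADER = ("Strict-Transport-Security", "max-age=31536000; includeSubDomains")

print(HSTS_HEADER[0] + ": " + HSTS_HEADER[1])
```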
&lt;br /&gt;
== Server Certificate and Protocol Configuration  ==&lt;br /&gt;
&lt;br /&gt;
Note: If using a FIPS 140-2 cryptomodule disregard the following rules and defer to the recommended configuration for the particular cryptomodule.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use an Appropriate Certification Authority for the Application's User Base  ===&lt;br /&gt;
&lt;br /&gt;
An application user must never be presented with a warning that the certificate was signed by an unknown or untrusted authority. The application's user population must have access to the public certificate of the certification authority which issued the server's certificate. For Internet accessible websites, the most effective method of achieving this goal is to purchase the TLS certificate from a recognized certification authority. Popular Internet browsers already contain the public certificates of these recognized certification authorities. &lt;br /&gt;
&lt;br /&gt;
Internal applications with a limited user population can use an internal certification authority provided its public certificate is securely distributed to all users. However, remember that all certificates issued by this certification authority will be trusted by the users. Therefore, utilize controls to protect the private key and ensure that only authorized individuals have the ability to sign certificates. &lt;br /&gt;
&lt;br /&gt;
The use of self signed certificates is never acceptable. Self signed certificates negate the benefit of end-point authentication and also significantly decrease the ability for an individual to detect a man-in-the-middle attack. &lt;br /&gt;
&lt;br /&gt;
=== Rule - Only Support Strong Protocols ===&lt;br /&gt;
&lt;br /&gt;
SSL/TLS is a collection of protocols. Weaknesses have been identified in earlier SSL protocols, including [http://www.schneier.com/paper-ssl-revised.pdf SSLv2] and [http://www.yaksman.org/~lweith/ssl.pdf SSLv3]. The best practice for transport layer protection is to provide support only for the TLS protocols - TLS 1.0, TLS 1.1 and TLS 1.2. This configuration will provide maximum protection against skilled and determined attackers and is appropriate for applications handling sensitive data or performing critical operations.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers Nearly all modern browsers support at least TLS 1.0]. As of February 2013, contemporary browsers (Chrome v20+, IE v8+, Opera v10+, and Safari v5+) [http://en.wikipedia.org/wiki/Transport_Layer_Security#Web_browsers support TLS 1.1 and TLS 1.2]. You should provide support for TLS 1.1 and TLS 1.2 to accommodate clients which support the protocols.&lt;br /&gt;
&lt;br /&gt;
In situations with lower security requirements, it may be acceptable to also provide support for SSL 3.0 and TLS 1.0. [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 has known weaknesses] which severely compromise the channel's security. TLS 1.0 suffers from [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html CBC chaining attacks and padding oracle attacks]. SSLv3 and TLSv1.0 should be used only after risk analysis and acceptance.&lt;br /&gt;
&lt;br /&gt;
Under no circumstances should SSLv2 be enabled as a protocol selection. The [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 protocol is broken] and does not provide adequate transport layer protection.&lt;br /&gt;
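As a concrete sketch, this protocol restriction could be expressed in an Apache httpd ''mod_ssl'' configuration as follows (the directives shown are mod_ssl's; adjust the protocol list to your own risk analysis):

```apache
# Enable TLS and allow only the TLS protocol family;
# explicitly disable the broken SSLv2 and the weak SSLv3.
SSLEngine on
SSLProtocol all -SSLv2 -SSLv3
```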
&lt;br /&gt;
=== Rule - Only Support Strong Cryptographic Ciphers  ===&lt;br /&gt;
&lt;br /&gt;
Each protocol (SSLv3, TLSv1.0, etc.) provides cipher suites. As of TLS 1.2, [http://www.iana.org/assignments/tls-parameters/tls-parameters.xml#tls-parameters-3 there is support for over 300 suites (320+ and counting)], including [http://www.mail-archive.com/cryptography@randombit.net/msg03785.html national vanity cipher suites]. The strength of the encryption used within a TLS session is determined by the encryption cipher negotiated between the server and the browser. In order to ensure that only strong cryptographic ciphers are selected, the server must be configured to disable the use of weak ciphers. It is recommended to configure the server to only support strong ciphers and to use sufficiently large key sizes. In general, the following should be observed when selecting cipher suites:&lt;br /&gt;
&lt;br /&gt;
* Use AES or 3-key 3DES for encryption, operated in CBC mode &lt;br /&gt;
* Stream ciphers that XOR the key stream with the plaintext (such as AES in CTR mode) are acceptable&lt;br /&gt;
* Use SHA1 or above for digests; prefer SHA2 (or equivalent)&lt;br /&gt;
* MD5 should not be used except as a PRF (no signing, no MACs)&lt;br /&gt;
* Do not provide support for NULL ciphersuites (aNULL or eNULL)&lt;br /&gt;
* Do not provide support for anonymous Diffie-Hellman &lt;br /&gt;
* Support ephemeral Diffie-Hellman key exchange&lt;br /&gt;
&lt;br /&gt;
Note: The TLS usage of MD5 does not expose the TLS protocol to any of the weaknesses of the MD5 algorithm (see FIPS 140-2 IG). However, MD5 must never be used outside of the TLS protocol (e.g. for general hashing).&lt;br /&gt;
&lt;br /&gt;
Note: Use of ephemeral Diffie-Hellman key exchange will protect the confidentiality of the transmitted plaintext data even if the corresponding RSA or DSS server private key is compromised. An attacker would have to perform an active man-in-the-middle attack at the time of the key exchange to be able to extract the transmitted plaintext. All modern browsers support this key exchange with the notable exception of Internet Explorer prior to Windows Vista.&lt;br /&gt;
&lt;br /&gt;
Additional information can be found in the [http://www.ietf.org/rfc/rfc4346.txt TLS 1.1 RFC 4346] and the [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf FIPS 140-2 IG].&lt;br /&gt;
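A candidate cipher string can be inspected locally before it is deployed. The following is a sketch assuming the OpenSSL command line tool is installed; the cipher string itself is illustrative, not a recommendation:

```shell
# Expand a candidate cipher string and inspect what it actually selects.
# '!aNULL' and '!eNULL' remove anonymous and NULL suites, '!MD5' drops
# MD5-based MACs, and '@STRENGTH' sorts the result by key length.
openssl ciphers -v 'HIGH:!aNULL:!eNULL:!MD5:@STRENGTH'
```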
&lt;br /&gt;
=== Rule - Only Support Secure Renegotiations  ===&lt;br /&gt;
&lt;br /&gt;
A design weakness in TLS, identified as [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2009-3555 CVE-2009-3555], allows an attacker to inject a plaintext of his choice into a TLS session of a victim. In the HTTPS context the attacker might be able to inject his own HTTP requests on behalf of the victim. The issue can be mitigated either by disabling support for TLS renegotiations or by supporting only renegotiations compliant with [http://www.ietf.org/rfc/rfc5746.txt RFC 5746]. All modern browsers have been updated to comply with this RFC.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Disable Compression ===&lt;br /&gt;
&lt;br /&gt;
Compression Ratio Info-leak Made Easy (CRIME) is an exploit against the data compression scheme used by the TLS and SPDY protocols. The exploit allows an adversary to recover user authentication cookies from HTTPS. The recovered cookie can be subsequently used for session hijacking attacks. To mitigate CRIME, disable TLS compression (and SPDY compression, where applicable).&lt;br /&gt;
&lt;br /&gt;
=== Rule - Use Strong Keys &amp;amp; Protect Them ===&lt;br /&gt;
&lt;br /&gt;
The private key used to generate the cipher key must be sufficiently strong for the anticipated lifetime of the private key and corresponding certificate. The current best practice is to select a key size of at least 2048 bits. Keys of length 1024 bits are considered obsolete as of 2010. Additional information on key lifetimes and comparable key strengths can be found in [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57_part1_rev3_general.pdf NIST SP 800-57]. In addition, the private key must be stored in a location that is protected from unauthorized access.&lt;br /&gt;
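The key-size and key-protection advice can be combined in a short sketch using the OpenSSL command line tool (assumed installed; the file name is illustrative):

```shell
# Generate a 2048-bit RSA private key. umask 077 prevents group and
# world permission bits at creation time; chmod 400 then restricts the
# key file to read-only access by its owner.
umask 077
openssl genrsa -out server.key 2048
chmod 400 server.key
```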
&lt;br /&gt;
=== Rule - Use a Certificate That Supports Required Domain Names  ===&lt;br /&gt;
&lt;br /&gt;
A user should never be presented with a certificate error, including prompts to reconcile domain or hostname mismatches, or expired certificates. If the application is available at both [https://owasp.org https://www.example.com] and [https://owasp.org https://example.com] then an appropriate certificate, or certificates, must be presented to accommodate the situation. The presence of certificate errors desensitizes users to TLS error messages and increases the possibility an attacker could launch a convincing phishing or man-in-the-middle attack.&lt;br /&gt;
&lt;br /&gt;
For example, consider a web application accessible at [https://owasp.org https://abc.example.com] and [https://owasp.org https://xyz.example.com]. One certificate should be acquired for the host or server ''abc.example.com''; and a second certificate for host or server ''xyz.example.com''. In both cases, the hostname would be present in the Subject's Common Name (CN).&lt;br /&gt;
&lt;br /&gt;
Alternatively, the Subject Alternate Names (SANs) can be used to provide a specific listing of multiple names where the certificate is valid. In the example above, the certificate could list the Subject's CN as ''example.com'', and list two SANs: ''abc.example.com'' and ''xyz.example.com''. These certificates are sometimes referred to as &amp;quot;multiple domain certificates&amp;quot;.&lt;br /&gt;
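For the example above, a single request covering both hosts can be produced with the OpenSSL command line tool. This is a sketch: the &quot;-addext&quot; option requires OpenSSL 1.1.1 or later, and the file names are illustrative:

```shell
# Request one certificate whose SANs cover both hosts; the CN carries
# the base domain and the SAN list carries the specific host names.
openssl req -new -key server.key -out server.csr \
    -subj "/CN=example.com" \
    -addext "subjectAltName=DNS:abc.example.com,DNS:xyz.example.com"
```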
&lt;br /&gt;
You should refrain from using wildcard certificates. Though they are expedient at circumventing annoying user prompts, they also [[Least_privilege|violate the principle of least privilege]] and ask the user to trust all machines, including developers' machines, the secretary's machine in the lobby and the sign-in kiosk. Statistics gathered by Qualys for their [http://media.blackhat.com/bh-us-10/presentations/Ristic/BlackHat-USA-2010-Ristic-Qualys-SSL-Survey-HTTP-Rating-Guide-slides.pdf Internet SSL Survey] indicate wildcard certificates have a 4.4% share, so the practice is clearly not standard. Obtaining access to the private key is left as an exercise for the attacker, but it's made much easier when the key is stored on the file system unprotected.&lt;br /&gt;
&lt;br /&gt;
=== Rule - Always Provide All Needed Certificates ===&lt;br /&gt;
&lt;br /&gt;
Clients attempt to solve the problem of identifying a server or host using PKI and X.509 certificates. When a user receives a server or host's certificate, the certificate must be validated back to a trusted root certification authority. This is known as path validation.&lt;br /&gt;
&lt;br /&gt;
There can be one or more intermediate certificates in between the end-entity (server or host) certificate and the root certificate. In addition to validating both endpoints, the user will also have to validate all intermediate certificates. Validating all intermediate certificates can be tricky because the user may not have them locally. This is a well-known PKI issue called the &amp;quot;Which Directory?&amp;quot; problem.&lt;br /&gt;
&lt;br /&gt;
To avoid the &amp;quot;Which Directory?&amp;quot; problem, a server should provide the user with all required certificates used in a path validation.&lt;br /&gt;
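In practice this means serving a chain file that bundles the intermediates with the end-entity certificate. A sketch using the OpenSSL command line tool (the file names are hypothetical; most servers accept such a concatenated chain file):

```shell
# Bundle the server certificate with its intermediate(s), leaf first.
cat server.crt intermediate.crt > chain.crt

# Sanity check: the bundle should validate back to the trusted root,
# with the intermediate supplied as an untrusted helper certificate.
openssl verify -CAfile root.crt -untrusted intermediate.crt server.crt
```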
&lt;br /&gt;
== Client (Browser) Configuration  ==&lt;br /&gt;
&lt;br /&gt;
The validation procedures to ensure that a certificate is valid are complex and difficult to correctly perform.  In a typical web application model, these checks will be performed by the client's web browser in accordance with local browser settings and are out of the control of the application. However, these items do need to be addressed in the following scenarios:&lt;br /&gt;
&lt;br /&gt;
* The application server establishes connections to other applications over TLS for purposes such as web services or any exchange of data&lt;br /&gt;
* A thick client application is connecting to a server via TLS&lt;br /&gt;
&lt;br /&gt;
In these situations extensive certificate validation checks must occur in order to establish the validity of the certificate. Consult the following resources to assist in the design and testing of this functionality. The NIST PKI testing site includes a full test suite of certificates and expected outcomes of the test cases.&lt;br /&gt;
* [http://csrc.nist.gov/groups/ST/crypto_apps_infra/pki/pkitesting.html NIST PKI Testing]&lt;br /&gt;
* [http://www.ietf.org/rfc/rfc5280.txt IETF RFC 5280]&lt;br /&gt;
&lt;br /&gt;
As specified in the above guidance, if the certificate cannot be validated for any reason then the connection between the client and server must be dropped. Any data exchanged over a connection where the certificate has not been properly validated could be exposed to unauthorized access or modification.&lt;br /&gt;
&lt;br /&gt;
== Additional Controls  ==&lt;br /&gt;
&lt;br /&gt;
=== Extended Validation Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Extended validation certificates (EV certificates) require an enhanced investigation by the issuer into the requesting party, a response to the industry's race to the bottom in certificate vetting. The purpose of EV certificates is to provide the user with greater assurance that the owner of the certificate is a verified legal entity for the site. Browsers with support for EV certificates distinguish an EV certificate in a variety of ways. Internet Explorer will color a portion of the URL in green, while Mozilla will add a green portion to the left of the URL indicating the company name. &lt;br /&gt;
&lt;br /&gt;
High value websites should consider the use of EV certificates to enhance customer confidence in the certificate. It should also be noted that EV certificates do not provide any greater technical security for the TLS connection. The purpose of the EV certificate is to increase user confidence that the target site is indeed who it claims to be.&lt;br /&gt;
&lt;br /&gt;
=== Client-Side Certificates  ===&lt;br /&gt;
&lt;br /&gt;
Client side certificates can be used with TLS to prove the identity of the client to the server. Referred to as &amp;quot;two-way TLS&amp;quot;, this configuration requires the client to provide their certificate to the server, in addition to the server providing theirs to the client. If client certificates are used, ensure that the same validation of the client certificate is performed by the server, as indicated for the validation of server certificates above. In addition, the server should be configured to drop the TLS connection if the client certificate cannot be verified or is not provided. &lt;br /&gt;
&lt;br /&gt;
The use of client side certificates is relatively rare currently due to the complexities of certificate generation, safe distribution, client side configuration, certificate revocation and reissuance, and the fact that clients can only authenticate on machines where their client side certificate is installed. Such certificates are typically used for very high value connections that have small user populations.&lt;br /&gt;
&lt;br /&gt;
=== Certificate and Public Key Pinning ===&lt;br /&gt;
&lt;br /&gt;
Hybrid and native applications can take advantage of [[Certificate_and_Public_Key_Pinning|certificate and public key pinning]]. Pinning associates a host (for example, server) with an identity (for example, certificate or public key), and allows an application to leverage knowledge of the pre-existing relationship. At runtime, the application would inspect the certificate or public key received after connecting to the server. If the certificate or public key is expected, then the application would proceed as normal. If unexpected, the application would stop using the channel and close the connection since an adversary could control the channel or server.&lt;br /&gt;
&lt;br /&gt;
Pinning still requires customary X.509 checks, such as revocation, since CRLs and OCSP provide real-time status information. Otherwise, an application could possibly (1) accept a known bad certificate; or (2) require an out-of-band update, which could result in a lengthy App Store approval.&lt;br /&gt;
&lt;br /&gt;
Browser based applications are at a disadvantage since most browsers do not allow the user to leverage pre-existing relationships and ''a priori'' knowledge. In addition, Javascript and Websockets do not expose methods for a web app to query the underlying secure connection information (such as the certificate or public key). It is noteworthy that Chromium based browsers perform pinning on selected sites, but the list is currently maintained by the vendor.&lt;br /&gt;
&lt;br /&gt;
= Providing Transport Layer Protection for Back End and Other Connections  =&lt;br /&gt;
&lt;br /&gt;
Although not the focus of this cheat sheet, it should be stressed that transport layer protection is necessary for back-end connections and any other connection where sensitive data is exchanged or where user identity is established. Failure to implement effective and robust transport layer security will expose sensitive data and undermine the effectiveness of any authentication or access control mechanism. &lt;br /&gt;
&lt;br /&gt;
== Secure Internal Network Fallacy  ==&lt;br /&gt;
&lt;br /&gt;
The internal network of a corporation is not immune to attacks. Many recent high profile intrusions, where thousands of sensitive customer records were compromised, have been perpetrated by attackers that have gained internal network access and then used sniffers to capture unencrypted data as it traversed the internal network.&lt;br /&gt;
&lt;br /&gt;
= Related Articles  =&lt;br /&gt;
&lt;br /&gt;
* OWASP – [[Testing for SSL-TLS (OWASP-CM-001)|Testing for SSL-TLS]], and OWASP [[Guide to Cryptography]] &lt;br /&gt;
* OWASP – [http://www.owasp.org/index.php/ASVS Application Security Verification Standard (ASVS) – Communication Security Verification Requirements (V10)]&lt;br /&gt;
* OWASP – ASVS Article on [[Why you need to use a FIPS 140-2 validated cryptomodule]]&lt;br /&gt;
* SSL Labs – [http://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide]&lt;br /&gt;
* yaSSL – [http://www.yassl.com/yaSSL/Blog/Entries/2010/10/7_Differences_between_SSL_and_TLS_Protocol_Versions.html Differences between SSL and TLS Protocol Versions]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf SP 800-52 Guidelines for the selection and use of transport layer security (TLS) Implementations]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/fips/fips140-2/fips1402.pdf FIPS 140-2 Security Requirements for Cryptographic Modules]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/groups/STM/cmvp/documents/fips140-2/FIPS1402IG.pdf Implementation Guidance for FIPS PUB 140-2 and the Cryptographic Module Validation Program]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/nistpubs/800-57/sp800-57-Part1-revised2_Mar08-2007.pdf SP 800-57 Recommendation for Key Management, Revision 2]&lt;br /&gt;
* NIST – [http://csrc.nist.gov/publications/drafts.html#sp800-95 SP 800-95 Guide to Secure Web Services] &lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5280.txt RFC 5280 Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc2246.txt RFC 2246 The Transport Layer Security (TLS) Protocol Version 1.0 (JAN 1999)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc4346.txt RFC 4346 The Transport Layer Security (TLS) Protocol Version 1.1 (APR 2006)]&lt;br /&gt;
* IETF – [http://www.ietf.org/rfc/rfc5246.txt RFC 5246 The Transport Layer Security (TLS) Protocol Version 1.2 (AUG 2008)]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
Michael Coates - michael.coates[at]owasp.org &amp;lt;br/&amp;gt;&lt;br /&gt;
Dave Wichers - dave.wichers[at]aspectsecurity.com &amp;lt;br/&amp;gt;&lt;br /&gt;
Michael Boberski - boberski_michael[at]bah.com&amp;lt;br/&amp;gt;&lt;br /&gt;
Tyler Reguly - treguly[at]sslfail.com&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=148720</id>
		<title>C-Based Toolchain Hardening</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=148720"/>
				<updated>2013-03-28T03:00:18Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening]] is a treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++ and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects including Auto-configured projects, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based. It's important to address the gaps at configuration and build time because it's difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
This is a prescriptive article, and it will not debate semantics or speculate on behavior. Some information, such as the C/C++ committee's motivation and pedigree for [https://groups.google.com/a/isocpp.org/forum/?fromgroups=#!topic/std-discussion/ak8e1mzBhGs &amp;quot;program diagnostics&amp;quot;, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;], appears to be lost like a tale in the Lord of the Rings. As such, the article will specify semantics (for example, the philosophy of 'debug' and 'release' build configurations), assign behaviors (for example, what an assert should do in 'debug' and 'release' build configurations), and present a position. If you find the posture is too aggressive, then you should back off as required to suit your taste.&lt;br /&gt;
&lt;br /&gt;
A secure toolchain is not a silver bullet. It is one piece of an overall strategy in the engineering process to help ensure success. It will complement existing processes such as static analysis, dynamic analysis, secure coding, negative test suites, and the like. Tools such as Valgrind and Helgrind will still be needed. And a project will still require solid designs and architectures.&lt;br /&gt;
&lt;br /&gt;
The OWASP [http://code.google.com/p/owasp-esapi-cplusplus/source ESAPI C++] project eats its own dog food. Many of the examples you will see in this article come directly from the ESAPI C++ project.&lt;br /&gt;
&lt;br /&gt;
Finally, a [[Category:Cheat Sheet|cheat sheet]] is available for those who desire a terse treatment of the material. Please visit [[C-Based_Toolchain_Hardening_Cheat_Sheet|C-Based Toolchain Hardening Cheat Sheet]] for the abbreviated version.&lt;br /&gt;
&lt;br /&gt;
== Wisdom ==&lt;br /&gt;
&lt;br /&gt;
Code '''must''' be correct. It '''should''' be secure. It '''can''' be efficient.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Jon_Bentley Dr. Jon Bentley]: ''&amp;quot;If it doesn't have to be correct, I can make it as fast as you'd like it to be&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Gary_McGraw Dr. Gary McGraw]: ''&amp;quot;Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Configuration is the first opportunity to set your project up for success. Not only do you have to configure your project to meet reliability and security goals, you must also configure integrated libraries properly. You typically have three choices. First, you can use auto-configuration utilities if on Linux or Unix. Second, you can write a makefile by hand. This is predominant on Linux, Mac OS X, and Unix, but it applies to Windows as well. Finally, you can use an integrated development environment or IDE.&lt;br /&gt;
&lt;br /&gt;
=== Build Configurations ===&lt;br /&gt;
&lt;br /&gt;
At this stage in the process, you should concentrate on configuring for two builds: Debug and Release. Debug will be used for development and include full instrumentation. Release will be configured for production. The difference between the two settings is usually ''optimization level'' and ''debug level''. A third build configuration is Test, and it's usually a special case of Release.&lt;br /&gt;
&lt;br /&gt;
For debug and release builds, the settings are typically diametrically opposed. Debug configurations have no optimizations and full debug information, while Release builds have optimizations and minimal to moderate debug information. In addition, debug code has full assertions and additional library integration, such as mudflap and malloc guards like &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Test configuration is often a Release configuration that makes everything public for testing and builds a test harness. For example, all member functions (C++ classes) and all interfaces (libraries or shared objects) should be made available for testing. Many object-oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
[http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8] introduced the &amp;lt;tt&amp;gt;-Og&amp;lt;/tt&amp;gt; optimization level. Note that it is only an optimization level, and a customary debug level via &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt; is still required.&lt;br /&gt;
&lt;br /&gt;
==== Debug Builds ====&lt;br /&gt;
&lt;br /&gt;
Debug builds are where developers spend most of their time when vetting problems, so this build should concentrate forces and tools or be a 'force multiplier'. Though many do not realize it, debug code is more highly valued than release code because it's adorned with additional instrumentation. The debug instrumentation will cause a program to become nearly &amp;quot;self-debugging&amp;quot;, and help you catch mistakes such as bad parameters, failed API calls, and memory problems.&lt;br /&gt;
&lt;br /&gt;
Self-debugging code reduces your time spent troubleshooting and debugging. Reducing time under the debugger means you have more time for development and feature requests. If code is checked in without debug instrumentation, it should be fixed by adding instrumentation or rejected.&lt;br /&gt;
&lt;br /&gt;
For GCC, optimizations and debug symbolication are controlled through two switches: &amp;lt;tt&amp;gt;-O&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;. You should use the following as part of your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a minimal debug session:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-O0 -g3 -ggdb&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; turns off optimizations and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available. You may need to use &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; so some analysis is performed; otherwise, your debug build will be missing a number of warnings not present in release builds. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debugging information is available for the debug session, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; includes extensions to help with a debug session under GDB. For completeness, Jan Krachtovil stated in a private email that &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; currently has no effect.&lt;br /&gt;
&lt;br /&gt;
Debug builds should also define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is not defined. &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; removes &amp;quot;program diagnostics&amp;quot; and has undesirable behaviors and side effects, which are discussed below in more detail. The defines should be present for all code, and not just the program. You use them for all code (your program and included libraries) because you need to know how the libraries fail too (remember, you take the bug report - not the third party library).&lt;br /&gt;
&lt;br /&gt;
In addition, you should use other relevant flags, such as &amp;lt;tt&amp;gt;-fno-omit-frame-pointer&amp;lt;/tt&amp;gt;. Ensuring a frame pointer exists makes it easier to decode stack traces. Since debug builds are not shipped, it's OK to leave symbols in the executable. Programs with debug information do not suffer performance hits. See, for example, [http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]&lt;br /&gt;
&lt;br /&gt;
Finally, you should ensure your project includes additional diagnostic libraries, such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt; and [http://code.google.com/p/address-sanitizer/ Address Sanitizer]. A comparison of some memory checking tools can be found at [http://code.google.com/p/address-sanitizer/wiki/ComparisonOfMemoryTools Comparison Of Memory Tools]. If you don't include additional diagnostics in debug builds, then you should start using them, since it's OK to find errors you are not looking for.&lt;br /&gt;
&lt;br /&gt;
==== Release Builds ====&lt;br /&gt;
&lt;br /&gt;
Release builds are what your customer receives. They are meant to be run on production hardware and servers, and they should be reliable, secure, and efficient. A stable release build is the product of the hard work and effort during development.&lt;br /&gt;
&lt;br /&gt;
For release builds, you should use the following as part of &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for release builds:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-On -g2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O''n''&amp;lt;/tt&amp;gt; sets optimizations for speed or size (for example, &amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;), and &amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt; ensures debugging information is created.&lt;br /&gt;
&lt;br /&gt;
Debugging information should be stripped from the shipped binary and retained for symbolicating crash reports from the field. While not desired, debug information can be left in place without a performance penalty. See ''[http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]'' for details.&lt;br /&gt;
&lt;br /&gt;
Release builds should also define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is not defined. The time for debugging and diagnostics is over, so users get production code with full optimizations, no &amp;quot;program diagnostics&amp;quot;, and other efficiencies. If you can't optimize or you are performing excessive logging, it usually means the program is not ready for production.&lt;br /&gt;
&lt;br /&gt;
If you have been relying on an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and then a subsequent &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;, you have been abusing &amp;quot;program diagnostics&amp;quot; since it has no place in production code. If you want a memory dump, create one so users don't have to worry about secrets and other sensitive information being written to the filesystem and emailed in plain text.&lt;br /&gt;
&lt;br /&gt;
For Windows, you would use &amp;lt;tt&amp;gt;/Od&amp;lt;/tt&amp;gt; for debug builds; and &amp;lt;tt&amp;gt;/Ox&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/O2&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/Os&amp;lt;/tt&amp;gt; for release builds. See Microsoft's [http://msdn.microsoft.com/en-us/library/k1ack8f1.aspx /O Options (Optimize Code)] for details.&lt;br /&gt;
&lt;br /&gt;
==== Test Builds ====&lt;br /&gt;
&lt;br /&gt;
Test builds are used to provide heuristic validation by way of positive and negative test suites. Under a test configuration, all interfaces are tested to ensure they perform to specification and satisfaction. &amp;quot;Satisfaction&amp;quot; is subjective, but it should include no crashing and no trashing of your memory arena, even when faced with negative tests.&lt;br /&gt;
&lt;br /&gt;
Because all interfaces are tested (and not just the public ones), your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; should include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should also change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Nearly everyone gets a positive test right, so no more needs to be said. The negative self tests are much more interesting, and you should concentrate on trying to make your program fail so you can verify it fails gracefully. Remember, a bad guy is not going to be courteous when he attempts to cause your program to fail. And it's your project that takes egg on the face by way of a bug report or guest appearance on [http://www.grok.org.uk/full-disclosure/ Full Disclosure] or [http://www.securityfocus.com/archive Bugtraq] - not ''&amp;lt;nowiki&amp;gt;&amp;lt;some library&amp;gt;&amp;lt;/nowiki&amp;gt;'' you included.&lt;br /&gt;
&lt;br /&gt;
=== Auto Tools ===&lt;br /&gt;
&lt;br /&gt;
Auto configuration tools are popular on many Linux and Unix based systems, and the tools include ''Autoconf'', ''Automake'', ''config'', and ''Configure''. The tools work together to produce project files from scripts and template files. After the process completes, your project should be set up and ready to be made with &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When using auto configuration tools, there are a few files of interest worth mentioning. The files are part of the auto tools chain and include &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; and the various &amp;lt;tt&amp;gt;*.in&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;*.ac&amp;lt;/tt&amp;gt; (autoconf), and &amp;lt;tt&amp;gt;*.am&amp;lt;/tt&amp;gt; (automake) files. At times, you will have to open them, or the resulting makefiles, to tune the &amp;quot;stock&amp;quot; configuration.&lt;br /&gt;
&lt;br /&gt;
There are three downsides to the command line configuration tools in the toolchain: (1) they often ignore user requests, (2) they cannot create configurations, and (3) security is often not a goal.&lt;br /&gt;
&lt;br /&gt;
To demonstrate the first issue, configure your project with the following: &amp;lt;tt&amp;gt;configure CFLAGS=&amp;quot;-Wall -fPIE&amp;quot; CXXFLAGS=&amp;quot;-Wall -fPIE&amp;quot; LDFLAGS=&amp;quot;-pie&amp;quot;&amp;lt;/tt&amp;gt;. You will probably find the auto tools ignored your request, which means the command below will not produce the expected results. As a workaround, you will have to open an &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; script, &amp;lt;tt&amp;gt;Makefile.in&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;Makefile.am&amp;lt;/tt&amp;gt; and fix the configuration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ configure CFLAGS=&amp;quot;-Wall -Wextra -Wconversion -fPIE -Wno-unused-parameter&lt;br /&gt;
    -Wformat=2 -Wformat-security -fstack-protector-all -Wstrict-overflow&amp;quot;&lt;br /&gt;
    LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap -z,relro -z,now&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
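If editing the generated makefiles is too fragile, one workaround (assuming an Automake-based project; the flag set is illustrative) is to carry the hardening flags in &amp;lt;tt&amp;gt;Makefile.am&amp;lt;/tt&amp;gt; itself via the &amp;lt;tt&amp;gt;AM_*&amp;lt;/tt&amp;gt; variables, which Automake keeps separate from the user's &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
```make
# Makefile.am (fragment, illustrative) -- hardening flags that configure
# tends to drop when they are supplied only on the command line.
AM_CFLAGS   = -Wall -Wextra -Wformat=2 -Wformat-security -fPIE -fstack-protector-all
AM_CXXFLAGS = $(AM_CFLAGS)
AM_LDFLAGS  = -pie -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack
```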
&lt;br /&gt;
For the second point, you will probably be disappointed to learn [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html Automake does not support the concept of configurations]. It's not entirely Autoconf's or Automake's fault - ''Make'' and its inability to detect changes is the underlying problem. Specifically, ''Make'' only [http://pubs.opengroup.org/onlinepubs/009695399/utilities/make.html checks modification times of prerequisites and targets], and does not check things like &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;. The net effect is you will not receive expected results when you issue &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make test&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;.&lt;br /&gt;
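One workaround is to fingerprint the flags into a stamp file and make objects depend on it, so a change in &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; invalidates every object. A sketch (GNU Make assumed, names illustrative):&lt;br /&gt;
&lt;br /&gt;
```make
# Fingerprint the flags; a change produces a new stamp name, the stale
# stamps are removed, and every object is considered out of date.
FLAG_STAMP := .flags.$(shell printf '%s' '$(CFLAGS) $(CXXFLAGS)' | cksum | cut -d' ' -f1)

$(FLAG_STAMP):
	rm -f .flags.*
	touch $@

%.o: %.cpp $(FLAG_STAMP)
	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $< -o $@
```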
&lt;br /&gt;
Finally, you will probably be disappointed to learn tools such as Autoconf and Automake miss many security related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings that serve the lowest common denominator.&lt;br /&gt;
&lt;br /&gt;
A recent discussion on the Automake mailing list illuminates the issue: ''[https://lists.gnu.org/archive/html/autoconf/2012-12/msg00038.html Enabling compiler warning flags]''. Attempts to improve default configurations were met with resistance and no action was taken. The resistance is often of the form, &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some useful warning&amp;gt;&amp;lt;/nowiki&amp;gt; also produces false positives&amp;quot; or &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some obscure platform&amp;gt;&amp;lt;/nowiki&amp;gt; does not support &amp;lt;nowiki&amp;gt;&amp;lt;established security feature&amp;gt;&amp;lt;/nowiki&amp;gt;&amp;quot;. It's noteworthy that David Wheeler, the author of ''[http://www.dwheeler.com/secure-programs/ Secure Programming for Linux and Unix HOWTO]'', was one of the folks trying to improve the posture.&lt;br /&gt;
&lt;br /&gt;
=== Makefiles ===&lt;br /&gt;
&lt;br /&gt;
Make is one of the earliest build systems, dating back to the 1970s. It's available on Linux, Mac OS X and Unix, so you will frequently encounter projects using it. Unfortunately, Make has a number of shortcomings (''[http://aegis.sourceforge.net/auug97.pdf Recursive Make Considered Harmful]'' and ''[http://www.conifersystems.com/whitepapers/gnu-make/ What’s Wrong With GNU make?]''), and can cause some discomfort. Despite the issues with Make, ESAPI C++ uses Make primarily for three reasons: first, it's omnipresent; second, it's easier to manage than the Auto Tools family; and third, &amp;lt;tt&amp;gt;libtool&amp;lt;/tt&amp;gt; was out of the question.&lt;br /&gt;
&lt;br /&gt;
Consider what happens when you type &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt;, and then type &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;. Each build requires different &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; due to optimizations and the level of debug support. In your makefile, you would extract the relevant target and set &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; similar to below (taken from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/Makefile ESAPI C++ Makefile]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Makefile&lt;br /&gt;
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)&lt;br /&gt;
ifneq ($(DEBUG_GOALS),)&lt;br /&gt;
  WANT_DEBUG := 1&lt;br /&gt;
  WANT_TEST := 0&lt;br /&gt;
  WANT_RELEASE := 0&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_DEBUG),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
  ESAPI_CXXFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_RELEASE),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
  ESAPI_CXXFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_TEST),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
  ESAPI_CXXFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure &lt;br /&gt;
# user options follow our options, which should give the user's options preference.&lt;br /&gt;
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(ESAPI_LDFLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make will first build the program in a debug configuration for a session under the debugger using a rule similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;%.o : %.cpp&lt;br /&gt;
        $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $&amp;lt; -o $@&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you want the release build, Make will do nothing because it considers everything up to date despite the fact &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; have changed. Hence, your program will actually be in a debug configuration and risk a &amp;lt;tt&amp;gt;SIGABRT&amp;lt;/tt&amp;gt; at runtime because debug instrumentation is present (recall &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; calls &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; when &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined). In essence, you have DoS'd yourself due to &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, many projects do not honor the user's command line. ESAPI C++ does its best to ensure a user's flags are honored via &amp;lt;tt&amp;gt;override&amp;lt;/tt&amp;gt; as shown above, but other projects do not. For example, consider a project that should be built with Position Independent Executable (PIE or ASLR) enabled and data execution prevention (DEP) enabled. Dismissing user settings combined with insecure out of the box settings (and not picking them up during auto-setup or auto-configure) means a program built with the following will likely have neither defense:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ make CFLAGS=&amp;quot;-fPIE&amp;quot; CXXFLAGS=&amp;quot;-fPIE&amp;quot; LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Defenses such as ASLR and DEP are especially important on Linux because [http://linux.die.net/man/5/elf Data Execution - not Prevention - is the norm].&lt;br /&gt;
&lt;br /&gt;
=== Integration ===&lt;br /&gt;
&lt;br /&gt;
Project level integration presents opportunities to harden your program or library with domain specific knowledge. For example, if the platform supports Position Independent Executables (PIE or ASLR) and data execution prevention (DEP), then you should integrate with it. Not doing so could leave your users open to exploitation. As a case in point, see KingCope's 0-days for MySQL in December, 2012 (CVE-2012-5579 and CVE-2012-5612, among others). Integration with platform security would have neutered a number of the 0-days.&lt;br /&gt;
&lt;br /&gt;
You also have the opportunity to include helpful libraries that are not needed for business logic support. For example, if you are working on a platform with [http://dmalloc.com DMalloc] or [http://code.google.com/p/address-sanitizer/ Address Sanitizer], you should probably use it in your debug builds. For Ubuntu, DMalloc is available from the package manager and can be installed with &amp;lt;tt&amp;gt;sudo apt-get install libdmalloc5&amp;lt;/tt&amp;gt;. For Apple platforms, it's available as a scheme option (see [[#Clang/Xcode|Clang/Xcode]] below). Address Sanitizer is available in [http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8 and above] for many platforms.&lt;br /&gt;
&lt;br /&gt;
In addition, project level integration is an opportunity to harden third party libraries you chose to include. Because you chose to include them, you and your users are responsible for them. If you or your users endure an SP800-53 audit, third party libraries will be in scope because the supply chain is included (specifically, item SA-12, Supply Chain Protection). The audits are not limited to those in the US Federal arena - financial institutions perform reviews too. A perfect example of violating this guidance is [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525], which was due to [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library].&lt;br /&gt;
&lt;br /&gt;
Another example is including OpenSSL. You know (1) [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 is insecure], (2) [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 is insecure], and (3) [http://arstechnica.com/security/2012/09/crime-hijacks-https-sessions/ compression is insecure] (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ Configure darwin64-x86_64-cc -no-hw -no-engines -no-comp -no-shared -no-dso -no-sslv2 -no-sslv3 --openssldir=…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note Well'': you might want engines, especially on Ivy Bridge microarchitectures (3rd generation Intel Core i5 and i7 processors). To have OpenSSL use the processor's random number generator (via the &amp;lt;tt&amp;gt;rdrand&amp;lt;/tt&amp;gt; instruction), you will need to call OpenSSL's &amp;lt;tt&amp;gt;ENGINE_load_rdrand()&amp;lt;/tt&amp;gt; function and then &amp;lt;tt&amp;gt;ENGINE_set_default&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;ENGINE_METHOD_RAND&amp;lt;/tt&amp;gt;. See [http://wiki.opensslfoundation.com/index.php/Random_Numbers OpenSSL's Random Numbers] for details.&lt;br /&gt;
&lt;br /&gt;
If you configure without the switches, then you will likely have vulnerable code/libraries and risk failing an audit. If the program is a remote server, then the following command will reveal if compression is active on the channel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo &amp;quot;GET / HTTP/1.0&amp;quot; | openssl s_client -connect &amp;lt;nowiki&amp;gt;example.com:443&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;openssl s_client&amp;lt;/tt&amp;gt; will also show whether compression is enabled in the client library. In fact, the presence of any symbol guarded by the &amp;lt;tt&amp;gt;OPENSSL_NO_COMP&amp;lt;/tt&amp;gt; preprocessor macro will bear witness, since &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; is translated into a &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; define.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ nm /usr/local/ssl/iphoneos/lib/libcrypto.a 2&amp;gt;/dev/null | egrep -i &amp;quot;(COMP_CTX_new|COMP_CTX_free)&amp;quot;&lt;br /&gt;
0000000000000110 T COMP_CTX_free&lt;br /&gt;
0000000000000000 T COMP_CTX_new&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even more egregious is the answer given to auditors who specifically ask about configurations and protocols: &amp;quot;we don't use weak/wounded/broken ciphers&amp;quot; or &amp;quot;we follow best practices.&amp;quot; The use of compression tells the auditor that you are using a wounded protocol in an insecure configuration and that you don't follow best practices. That will likely set off alarm bells, and ensure the auditor digs deeper into other items.&lt;br /&gt;
&lt;br /&gt;
== Preprocessor ==&lt;br /&gt;
&lt;br /&gt;
The preprocessor is crucial to setting up a project for success. The C committee provided one macro - &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; - and the macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing builtin facilities. This section will help you set up your projects to integrate well with other projects and ensure reliability and security.&lt;br /&gt;
&lt;br /&gt;
There are three topics to discuss when hardening the preprocessor. The first is well defined configurations which produce well defined behaviors, the second is useful behavior from assert, and the third is proper use of macros when integrating vendor code and third party libraries.&lt;br /&gt;
&lt;br /&gt;
=== Configurations ===&lt;br /&gt;
&lt;br /&gt;
To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code on live servers, and its behavior is requested via the C/C++ &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; macro. It's also the only macro observed by the C and C++ Committees and Posix. Diametrically opposed to release is Debug. While there is a compelling argument for &amp;lt;tt&amp;gt;!defined(NDEBUG)&amp;lt;/tt&amp;gt;, you should have an explicit macro for the configuration and that macro should be &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;. This is because vendors and outside libraries use a &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (or similar) macro for their configurations. For example, Carnegie Mellon's Mach kernel uses &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, Microsoft's CRT uses [http://msdn.microsoft.com/en-us/library/ww5t02fa%28v=vs.71%29.aspx &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt;], and Wind River Workbench uses &amp;lt;tt&amp;gt;DEBUG_MODE&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (Release) and &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (Debug), you have two additional cross products: both are defined or neither are defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/esapi/EsapiCommon.h ESAPI C++ EsapiCommon.h], which is the configuration file used by all source files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Only one or the other, but not both&lt;br /&gt;
#if (defined(DEBUG) || defined(_DEBUG)) &amp;amp;&amp;amp; (defined(NDEBUG) || defined(_NDEBUG))&lt;br /&gt;
# error Both DEBUG and NDEBUG are defined.&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
// The only time we switch to debug is when asked. NDEBUG or {nothing} results&lt;br /&gt;
// in release build (fewer surprises at runtime).&lt;br /&gt;
#if defined(DEBUG) || defined(_DEBUG)&lt;br /&gt;
# define ESAPI_BUILD_DEBUG 1&lt;br /&gt;
#else&lt;br /&gt;
# define ESAPI_BUILD_RELEASE 1&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is in effect, your code should receive full debug instrumentation, including the full force of assertions.&lt;br /&gt;
&lt;br /&gt;
=== ASSERT ===&lt;br /&gt;
&lt;br /&gt;
Asserts will help you create self-debugging code by helping you find the point of first failure quickly and easily. Asserts should be used throughout your program, including parameter validation, return value checking and program state. The &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; will silently guard your code through its lifetime. It will always be there, even when not debugging a specific component of a module. If you have thorough code coverage, you will spend less time debugging and more time developing because programs will debug themselves.&lt;br /&gt;
&lt;br /&gt;
To use asserts effectively, you should assert everything. That includes parameters upon entering a function, return values from function calls, and any program state. Everywhere you place an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation or checking, you should have an assert. Everywhere you have an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; for validation or checking, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand.&lt;br /&gt;
&lt;br /&gt;
If you are still using &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;'s, then you have an opportunity for improvement. In the time it takes for you to write a &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; statement, you could have written an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;. Unlike the &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; which are often removed when no longer needed, the &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; stays active forever. Remember, this is all about finding the point of first failure quickly so you can spend your time doing other things.&lt;br /&gt;
&lt;br /&gt;
There is one problem with using asserts - [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html Posix states &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; should call &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;] if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined. When debugging, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will never be defined since you want the &amp;quot;program diagnostics&amp;quot; (quote from the Posix description). The behavior makes &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and its accompanying &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; completely useless for development. The result of &amp;quot;program diagnostics&amp;quot; calling &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; due to standard C/C++ behavior is disuse - developers simply don't use them. It's incredibly bad for the development community because self-debugging programs can help eradicate so many stability problems.&lt;br /&gt;
&lt;br /&gt;
Since self-debugging programs are so powerful, you will have to supply your own assert and signal handler with improved behavior. Your assert will exchange auto-aborting behavior for auto-debugging behavior. The auto-debugging facility will ensure the debugger snaps when a problem is detected, and you will find the point of first failure quickly and easily.&lt;br /&gt;
&lt;br /&gt;
ESAPI C++ supplies its own assert with the behavior described above. In the code below, &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt; raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; when in effect or it evaluates to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; in other cases.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// A debug assert which should be sprinkled liberally. This assert fires and then continues rather&lt;br /&gt;
// than calling abort(). Useful when examining negative test cases from the command line.&lt;br /&gt;
#if (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_STARNIX))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp) {                                    \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; std::endl;                                           \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) {                               \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; &amp;quot;: \&amp;quot;&amp;quot; &amp;lt;&amp;lt; (msg) &amp;lt;&amp;lt; &amp;quot;\&amp;quot;&amp;quot; &amp;lt;&amp;lt; std::endl;                \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#elif (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_WINDOWS))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      assert(exp)&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) assert(exp)&lt;br /&gt;
#else&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      ((void)(exp))&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) ((void)(exp))&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#if !defined(ASSERT)&lt;br /&gt;
#  define ASSERT(exp)     ESAPI_ASSERT1(exp)&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At program startup, a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; handler will be installed if one is not provided by another component:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct DebugTrapHandler&lt;br /&gt;
{&lt;br /&gt;
  DebugTrapHandler()&lt;br /&gt;
  {&lt;br /&gt;
    struct sigaction new_handler, old_handler;&lt;br /&gt;
&lt;br /&gt;
    do&lt;br /&gt;
      {&lt;br /&gt;
        int ret = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, NULL, &amp;amp;old_handler);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        // Don't step on another's handler&lt;br /&gt;
        if (old_handler.sa_handler != NULL) break;&lt;br /&gt;
&lt;br /&gt;
        new_handler.sa_handler = &amp;amp;DebugTrapHandler::NullHandler;&lt;br /&gt;
        new_handler.sa_flags = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigemptyset (&amp;amp;new_handler.sa_mask);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, &amp;amp;new_handler, NULL);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
      } while(0);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  static void NullHandler(int /*unused*/) { }&lt;br /&gt;
&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
// We specify a relatively low priority, to make sure we run before other CTORs&lt;br /&gt;
// http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes&lt;br /&gt;
static const DebugTrapHandler g_dummyHandler __attribute__ ((init_priority (110)));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a Windows platform, you would call &amp;lt;tt&amp;gt;_set_invalid_parameter_handler&amp;lt;/tt&amp;gt; (and possibly &amp;lt;tt&amp;gt;set_unexpected&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;set_terminate&amp;lt;/tt&amp;gt;) to install a new handler.&lt;br /&gt;
&lt;br /&gt;
Live hosts running production code should always define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (i.e., release configuration), which means they do not assert or auto-abort. Auto-abortion is not acceptable behavior, and anyone who asks for the behavior is completely abusing the functionality of &amp;quot;program diagnostics&amp;quot;. If a program wants a core dump, then it should create the dump rather than crashing.&lt;br /&gt;
&lt;br /&gt;
For more reading on asserting effectively, please see one of John Robbins' books, such as ''[http://www.amazon.com/dp/0735608865 Debugging Applications]''. John is a legendary bug slayer in Windows circles, and he will show you how to do nearly everything, from debugging a simple program to bug slaying in multithreaded programs.&lt;br /&gt;
&lt;br /&gt;
=== Additional Macros ===&lt;br /&gt;
&lt;br /&gt;
Additional macros include any macros needed to integrate properly and securely. It includes integrating the program with the platform (for example MFC or Cocoa/CocoaTouch) and libraries (for example, Crypto++ or OpenSSL). It can be a challenge because you have to have proficiency with your platform and all included libraries and frameworks. The list below illustrates the level of detail you will need when integrating.&lt;br /&gt;
&lt;br /&gt;
Boost is missing from the list because it appears to lack recommendations, additional debug diagnostics, and a hardening guide. See ''[http://stackoverflow.com/questions/14927033/boost-hardening-guide-preprocessor-macros BOOST Hardening Guide (Preprocessor Macros)]'' for details. In addition, Tim Day points to ''[http://boost.2283326.n4.nabble.com/boost-build-should-we-not-define-SECURE-SCL-0-by-default-for-all-msvc-toolsets-td2654710.html &amp;lt;nowiki&amp;gt;[boost.build] should we not define _SECURE_SCL=0 by default for all msvc toolsets&amp;lt;/nowiki&amp;gt;]'' for a recent discussion related to hardening (or lack thereof).&lt;br /&gt;
&lt;br /&gt;
In addition to what you should define, defining some macros and undefining others should trigger a security related defect. For example, &amp;lt;tt&amp;gt;-U_FORTIFY_SOURCES&amp;lt;/tt&amp;gt; on Linux and &amp;lt;tt&amp;gt;_CRT_SECURE_NO_WARNINGS=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_SCL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_ATL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;STRSAFE_NO_DEPRECATE&amp;lt;/tt&amp;gt; on Windows.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Platform/Library!!Debug!!Release&lt;br /&gt;
|+ Table 1: Additional Platform/Library Macros&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;175pt&amp;quot;|All&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|DEBUG=1&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|NDEBUG=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Linux&lt;br /&gt;
|_GLIBCXX_DEBUG=1&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
_GLIBCXX_CONCEPT_CHECKS=1&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
|_FORTIFY_SOURCE=2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Android&lt;br /&gt;
|NDK_DEBUG=1&lt;br /&gt;
|_FORTIFY_SOURCE=1 (4.2 and above)&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define LOGI(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt logging)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Cocoa/CocoaTouch&lt;br /&gt;
|&lt;br /&gt;
|NS_BLOCK_ASSERTIONS=1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define NSLog(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt ASL)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SafeInt&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft&lt;br /&gt;
|_DEBUG=1, STRICT,&amp;lt;br&amp;gt;&lt;br /&gt;
_SECURE_SCL=1, _HAS_ITERATOR_DEBUGGING=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
|STRICT&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft ATL &amp;amp; MFC&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|STLPort&lt;br /&gt;
|_STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1&amp;lt;br&amp;gt;&lt;br /&gt;
_STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite&lt;br /&gt;
|SQLITE_DEBUG, SQLITE_MEMDEBUG&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLCipher&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;e&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;e&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Be careful with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; when using pre-compiled libraries such as Boost from a distribution. There are ABI incompatibilities, and the result will likely be a crash. You will have to compile Boost with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; or omit &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; See [http://gcc.gnu.org/onlinedocs/libstdc++/manual/concept_checking.html Chapter 5, Diagnostics] of the libstdc++ manual for details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt; SQLite secure deletion zeroizes memory on destruction. Define as required, and always define in US Federal since zeroization is required for FIPS 140-2, Level 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt; ''N'' is 0644 by default, which means everyone has some access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;e&amp;lt;/sup&amp;gt; Force temporary tables into memory (no unencrypted data to disk).&lt;br /&gt;
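&lt;br /&gt;
The earlier point about macros that should trigger a security related defect can be enforced at compile time. A sketch for a common configuration header, using the macro names from the text (adapt the list to your project):&lt;br /&gt;
&lt;br /&gt;
```cpp
// Fragment for a shared configuration header. The build stops if someone
// silences the secure CRT warnings or weakens FORTIFY_SOURCE.
#if defined(_CRT_SECURE_NO_WARNINGS) || defined(_SCL_SECURE_NO_WARNINGS) || \
    defined(_ATL_SECURE_NO_WARNINGS) || defined(STRSAFE_NO_DEPRECATE)
# error "Secure CRT/ATL warnings must not be disabled."
#endif

#if defined(__linux__) && defined(_FORTIFY_SOURCE) && (_FORTIFY_SOURCE < 1)
# error "_FORTIFY_SOURCE must be 1 or 2."
#endif
```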
&amp;lt;!--&lt;br /&gt;
##########################################&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Compiler and Linker ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities to help find mistakes early in the development process. The built-in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and catch a number of mistakes such as using an uninitialized variable or comparing a negative signed int and a positive unsigned int.&lt;br /&gt;
&lt;br /&gt;
As a concrete example (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared because a side effect is &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! GCC and Visual Studio will not currently catch, for example, SQL injections and other tainted data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.&lt;br /&gt;
&lt;br /&gt;
Some in the development community resist static analysis or refute its results. For example, when static analysis warned the Linux kernel's &amp;lt;tt&amp;gt;sys_prctl&amp;lt;/tt&amp;gt; was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see ''[http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-11/msg08325.html &amp;lt;nowiki&amp;gt;[PATCH] Don't compare unsigned variable for &amp;lt;0 in sys_prctl()&amp;lt;/nowiki&amp;gt;]'' from the Linux Kernel mailing list.&lt;br /&gt;
&lt;br /&gt;
The following sections will detail steps for three platforms. First is a typical GNU Linux based distribution offering GCC and Binutils, second is Clang and Xcode, and third is modern Windows platforms.&lt;br /&gt;
&lt;br /&gt;
=== Distribution Hardening ===&lt;br /&gt;
&lt;br /&gt;
Before discussing GCC and Binutils, it is worth pointing out that some of the defenses discussed below are already present in a distribution. Unfortunately, it's design by committee, so what is present is usually only a mild variation of what is available (this way, everyone is mildly offended). For those who are purely worried about performance, you might be surprised to learn you have already been taking the small performance hit without even knowing.&lt;br /&gt;
&lt;br /&gt;
Linux and BSD distributions often apply some hardening without intervention via ''[http://gcc.gnu.org/onlinedocs/gcc/Spec-Files.html GCC Spec Files]''. If you are using Debian, Ubuntu, Linux Mint and family, see ''[http://wiki.debian.org/Hardening Debian Hardening]''. For Red Hat and Fedora systems, see ''[http://lists.fedoraproject.org/pipermail/devel-announce/2011-August/000821.html New hardened build support (coming) in F16]''. Gentoo users should visit ''[http://www.gentoo.org/proj/en/hardened/ Hardened Gentoo]''.&lt;br /&gt;
&lt;br /&gt;
You can see the settings being used by a distribution via &amp;lt;tt&amp;gt;gcc -dumpspecs&amp;lt;/tt&amp;gt;. In the output from Linux Mint 12 below, -fstack-protector (but not -fstack-protector-all) is used by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -dumpspecs&lt;br /&gt;
…&lt;br /&gt;
*link_ssp: %{fstack-protector:}&lt;br /&gt;
&lt;br /&gt;
*ssp_default: %{!fno-stack-protector:%{!fstack-protector-all: %{!ffreestanding:%{!nostdlib:-fstack-protector}}}}&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The “SSP” above stands for Stack Smashing Protector. SSP is a reimplementation of Hiroaki Etoh's work on IBM's ProPolice stack protector. See Hiroaki Etoh's patch ''[http://gcc.gnu.org/ml/gcc-patches/2001-06/msg01753.html gcc stack-smashing protector]'' and IBM's ''[http://www.research.ibm.com/trl/projects/security/ssp/ GCC extension for protecting applications from stack-smashing attacks]'' for details.&lt;br /&gt;
&lt;br /&gt;
=== GCC/Binutils ===&lt;br /&gt;
&lt;br /&gt;
GCC (the compiler collection) and Binutils (the assemblers, linkers, and other tools) are separate projects that work together to produce a final executable. Both the compiler and linker offer options to help you write safer and more secure code. The linker will produce code which takes advantage of platform security features offered by the kernel and PaX, such as no-exec stacks and heaps (NX) and Position Independent Executable (PIE).&lt;br /&gt;
&lt;br /&gt;
The table below offers a set of compiler options to build your program. Static analysis warnings help catch mistakes early, while the linker options harden the executable at runtime. In the table below, “GCC” should be loosely taken as “non-ancient distributions.” While the GCC team considers 4.2 ancient, you will still encounter it on Apple and BSD platforms due to changes in GPL licensing around 2007. Refer to ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Option Summary]'', ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html Options to Request or Suppress Warnings]'' and ''[http://sourceware.org/binutils/docs-2.21/ld/Options.html Binutils (LD) Command Line Options]'' for usage details.&lt;br /&gt;
&lt;br /&gt;
Worthy of special mention are &amp;lt;tt&amp;gt;-fno-strict-overflow&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fwrapv&amp;lt;/tt&amp;gt;&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;. The flags ensure the compiler does not remove statements that result in overflow or wrap. If your program only runs correctly with the flags, it is likely relying on undefined C/C++ overflow behavior. If the program depends on overflow or wrapping behavior, you should consider using [http://code.google.com/p/safe-iop/ safe-iop] for C or David LeBlanc's [http://safeint.codeplex.com SafeInt] for C++.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, some of those settings can be verified with the [http://www.trapkit.de/tools/checksec.html Checksec] tool written by Tobias Klein. The &amp;lt;tt&amp;gt;checksec.sh&amp;lt;/tt&amp;gt; script is designed to test standard Linux OS and PaX security features being used by an application. See the [http://www.trapkit.de/tools/checksec.html Trapkit] web page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|+ Table 2: GCC C Warning Options&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wall -Wextra&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;75pt&amp;quot;|GCC&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Enables many warnings (despite their names, all and extra do not turn on all warnings).&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wconversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may alter a value (includes -Wsign-conversion).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-conversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may change the sign of an integer value, such as assigning a signed integer to an unsigned integer (&amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion!).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wcast-align&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for a pointer cast to a type which has a different size, causing an invalid alignment and subsequent bus error on ARM processors.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wformat=2 -Wformat-security&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Increases warnings related to possible security defects, including incorrect format specifiers.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-common&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Prevent global variables being simultaneously defined in different object files.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fstack-protector or -fstack-protector-all&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Stack Smashing Protector (SSP). Improves stack layout and adds a guard to detect stack based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-omit-frame-pointer&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Improves backtraces for post-mortem analysis&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wmissing-prototypes and -Wmissing-declarations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a global function is defined without a prototype or declaration.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-prototypes&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a function is declared or defined without specifying the argument types.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-overflow&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.2&lt;br /&gt;
|Warn about optimizations taken due to &amp;lt;nowiki&amp;gt;[undefined]&amp;lt;/nowiki&amp;gt; signed integer overflow assumptions.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wtrampolines&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.3&lt;br /&gt;
|Warn about trampolines generated for pointers to nested functions. Trampolines require executable stacks.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=address&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/address-sanitizer/ AddressSanitizer], a fast memory error detector. Memory access instructions will be instrumented to help detect heap, stack, and global buffer overflows; as well as use-after-free bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=thread&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/data-race-test/wiki/ThreadSanitizer ThreadSanitizer], a fast data race detector. Memory access instructions will be instrumented to detect data race bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,nodlopen and -Wl,-z,nodump&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.10&lt;br /&gt;
|Reduces the ability of an attacker to load, manipulate, and dump shared objects.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,noexecstack and -Wl,-z,noexecheap&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.14&lt;br /&gt;
|Data Execution Prevention (DEP). ELF headers are marked with PT_GNU_STACK and PT_GNU_HEAP.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,relro&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Global Offset Table (GOT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,now&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Procedure Linkage Table (PLT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIC&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils&lt;br /&gt;
|Position Independent Code. Used for libraries and shared objects. Both -fPIC (compiler) and -shared (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIE&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.16&lt;br /&gt;
|Position Independent Executable (ASLR). Used for programs. Both -fPIE (compiler) and -pie (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Unlike Clang and -Weverything, GCC does not provide a switch to truly enable all warnings.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; -fstack-protector guards functions with high risk objects such as C strings, while -fstack-protector-all guards all objects.&lt;br /&gt;
&lt;br /&gt;
Additional C++ warnings which can be used include the following in Table 3. See ''[http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html GCC's Options Controlling C++ Dialect]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|+ Table 3: GCC C++ Warning Options&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Woverloaded-virtual&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn when a function declaration hides virtual functions from a base class. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wreorder&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when the order of member initializers given in the code does not match the order in which they must be executed.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-promo&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wnon-virtual-dtor&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when a class has virtual functions and an accessible non-virtual destructor.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Weffc++&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn about violations of the style guidelines in Scott Meyers' ''[http://www.aristeia.com/books.html Effective C++, Second Edition]'' book.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
And additional Objective C warnings which are often useful include the following. See ''[http://gcc.gnu.org/onlinedocs/gcc/Objective_002dC-and-Objective_002dC_002b_002b-Dialect-Options.html Options Controlling Objective-C and Objective-C++ Dialects]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|+ Table 4: GCC Objective C Warning Options&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wstrict-selector-match&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type id or Class.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wundeclared-selector&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn if a &amp;lt;tt&amp;gt;@selector(…)&amp;lt;/tt&amp;gt; expression referring to an undeclared selector is found. &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The use of aggressive warnings will produce spurious noise. The noise is a tradeoff - you can learn of potential problems at the cost of wading through some chaff. The following will help reduce spurious noise from the warning system:&lt;br /&gt;
&lt;br /&gt;
* -Wno-unused-parameter (GCC)&lt;br /&gt;
* -Wno-type-limits (GCC 4.3)&lt;br /&gt;
* -Wno-tautological-compare (Clang)&lt;br /&gt;
&lt;br /&gt;
Finally, a simple version-based Makefile example is shown below. This is different from the feature-based makefiles produced by Autotools (which test for a particular feature and then define a symbol or configure a template file). Not all platforms use all options and flags. To address the issue you can pursue one of two strategies. First, you can ship with a weakened posture by servicing the lowest common denominator; or second, you can ship with everything in force. In the latter case, those who don't have a feature available will edit the makefile to accommodate their installation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;CXX=g++&lt;br /&gt;
EGREP = egrep&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GCC_COMPILER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version')&lt;br /&gt;
GCC41_OR_LATER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version (4\.[1-9]|[5-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GNU_LD210_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[0-9]|2\.[2-9])')&lt;br /&gt;
GNU_LD214_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[4-9]|2\.[2-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC_COMPILER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wall -Wextra -Wconversion&lt;br /&gt;
  MY_CC_FLAGS += -Wformat=2 -Wformat-security&lt;br /&gt;
  MY_CC_FLAGS += -Wno-unused-parameter&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC41_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fstack-protector-all&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC42_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wstrict-overflow&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC43_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wtrampolines&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD210_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -Wl,-z,nodlopen -Wl,-z,nodump&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD214_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -Wl,-z,noexecstack -Wl,-z,noexecheap&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD215_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -Wl,-z,relro -Wl,-z,now&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD216_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fPIE&lt;br /&gt;
  MY_LD_FLAGS += -pie&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
# Use 'override' to honor the user's command line&lt;br /&gt;
override CFLAGS := $(MY_CC_FLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(MY_CC_FLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(MY_LD_FLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang/Xcode ===&lt;br /&gt;
&lt;br /&gt;
[http://clang.llvm.org Clang] and [http://llvm.org LLVM] have been aggressively developed since Apple lost its GPL compiler back in 2007 (due to Tivoization, which resulted in GPLv3). Since that time, a number of developers and Google have joined the effort. While Clang will consume most (all?) GCC/Binutils flags and switches, the project supports a number of its own options, including a static analyzer. In addition, Clang is relatively easy to build with additional diagnostics, such as John Regehr and Peng Li's [http://embed.cs.utah.edu/ioc/ Integer Overflow Checker (IOC)].&lt;br /&gt;
&lt;br /&gt;
IOC is incredibly useful, and has found bugs in a number of projects, including the Linux kernel (&amp;lt;tt&amp;gt;include/linux/bitops.h&amp;lt;/tt&amp;gt;, still unfixed), SQLite, PHP, Firefox (many still unfixed), LLVM, and Python. Future versions of Clang (Clang 3.3 and above) will allow you to enable the checks out of the box with &amp;lt;tt&amp;gt;-fsanitize=integer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=shift&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Clang options can be found at [http://clang.llvm.org/docs/UsersManual.html Clang Compiler User’s Manual]. Clang does include an option to turn on all warnings - &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt;. Use it with care since you will get back a lot of noise, but use it regularly since it will also surface issues you missed. For example, add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; for production builds and make non-spurious issues a quality gate. Under Xcode, simply add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to compiler warnings, both static analysis and additional security checks can be performed. Reading on Clang's static analysis capabilities can be found at [http://clang-analyzer.llvm.org Clang Static Analyzer]. Figure 1 below shows some of the security checks utilized by Xcode.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-11.png|thumb|450px|Figure 1: Clang/LLVM and Xcode options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a convenient Integrated Development Environment (IDE) for managing solutions and their settings. The section called “Visual Studio Options” discusses options which should be used with Visual Studio, and the section called “Project Properties” demonstrates incorporating those options into a solution's project.&lt;br /&gt;
&lt;br /&gt;
The table below lists the compiler and linker switches which should be used under Visual Studio. Refer to Howard and LeBlanc's Writing Secure Code (Microsoft Press) for a detailed discussion; or ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]'' in Security Briefs by Michael Howard. In the table below, “Visual Studio” refers to nearly all versions of the development environment, including Visual Studio 5.0 and 6.0.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, those settings can be verified with BinScope. BinScope is a verification tool from Microsoft that analyzes binaries to ensure that they have been built in compliance with Microsoft's Security Development Lifecycle (SDL) requirements and recommendations. See the ''[https://www.microsoft.com/download/en/details.aspx?id=11910 BinScope Binary Analyzer]'' download page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|+ Table 5: Visual Studio Warning Options&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;150pt&amp;quot;|&amp;lt;nowiki&amp;gt;/W4&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;100pt&amp;quot;|Visual Studio&lt;br /&gt;
|width=&amp;quot;350pt&amp;quot;|Warning level 4, which includes most warnings.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/Wall&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Enable all warnings, including those off by default.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/GS&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Adds a security cookie (guard or canary) on the stack before the return address for stack-based buffer overflow checks.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/SafeSEH&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Safe structured exception handling to remediate SEH overwrites.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/analyze&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Enterprise code analysis (freely available with Windows SDK for Windows Server 2008 and .NET Framework 3.5).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/NXCOMPAT&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Data Execution Prevention (DEP).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/dynamicbase&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Address Space Layout Randomization (ASLR).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;strict_gs_check&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Aggressively applies stack protections to a source file to help detect some categories of stack based buffer overruns.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;See Jon Sturgeon's discussion of the switch at ''[https://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx Off By Default Compiler Warnings in Visual C++]''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;When using /GS, there are a number of circumstances which affect the inclusion of a security cookie. For example, the guard is not used if there is no buffer in the stack frame, optimizations are disabled, or the function is declared naked or contains inline assembly.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;tt&amp;gt;#pragma strict_gs_check(on)&amp;lt;/tt&amp;gt; should be used sparingly, but is recommend in high risk situations, such as when a source file parses input from the internet.&lt;br /&gt;
&lt;br /&gt;
=== Warn Suppression ===&lt;br /&gt;
&lt;br /&gt;
From the tables above, a lot of warnings have been enabled to help detect possible programming mistakes. The potential mistakes are detected by the compiler, which carries around a lot of contextual information during its code analysis phase. At times, you will receive spurious warnings because the compiler is not ''that'' smart. It's understandable and even a good thing (how would you like to be out of a job because a program writes its own programs?). At times you will have to learn how to work with the compiler's warning system to suppress warnings. Notice what was not said: turn off the warnings.&lt;br /&gt;
&lt;br /&gt;
Suppressing warnings placates the compiler for spurious noise so you can get to the issues that matter (you are separating the wheat from the chaff). This section will offer some hints and point out some potential minefields. First is an unused parameter (for example, &amp;lt;tt&amp;gt;argc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;argv&amp;lt;/tt&amp;gt;). Suppressing unused parameter warnings is especially helpful for C++ and interface programming, where parameters are often unused. For this warning, simply define an &amp;quot;UNUSED&amp;quot; macro and wrap the parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#define UNUSED_PARAMETER(x) ((void)x)&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    UNUSED_PARAMETER(argc);&lt;br /&gt;
    UNUSED_PARAMETER(argv);&lt;br /&gt;
    …&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A potential minefield lies near &amp;quot;comparing unsigned and signed&amp;quot; values, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will catch it for you. This is because C/C++ promotion rules state the signed value will be promoted to an unsigned value and then compared. That means &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! To fix this, you cannot blindly cast - you must first range test the value:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int x = GetX();&lt;br /&gt;
unsigned int y = GetY();&lt;br /&gt;
&lt;br /&gt;
ASSERT(x &amp;gt;= 0);&lt;br /&gt;
if(!(x &amp;gt;= 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? X is negative.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
if(static_cast&amp;lt;unsigned int&amp;gt;(x) &amp;gt; y)&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is greater than y&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
else&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is not greater than y&amp;quot; &amp;lt;&amp;lt; endl;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;x&amp;lt;/tt&amp;gt;. Just run the program and wait for it to tell you there is a problem. If there is a problem, the program will snap the debugger (and more importantly, not call a useless &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; as specified by POSIX). It beats the snot out of &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; statements that are removed when no longer needed or that pollute output.&lt;br /&gt;
&lt;br /&gt;
Another conversion problem you will encounter is conversion between types, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will also catch it for you. The following will always have an opportunity to fail, and should light up like a Christmas tree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct sockaddr_in addr;&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
addr.sin_port = htons(atoi(argv[2]));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following would probably serve you much better. Notice &amp;lt;tt&amp;gt;atoi&amp;lt;/tt&amp;gt; and friends are not used because they can silently fail. In addition, the code is instrumented so you don't need to waste a lot of time debugging potential problems:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;const char* cstr = GetPortString();&lt;br /&gt;
&lt;br /&gt;
ASSERT(cstr != NULL);&lt;br /&gt;
if(!(cstr != NULL))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port string is not valid.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
istringstream iss(cstr);&lt;br /&gt;
long long t = 0;&lt;br /&gt;
iss &amp;gt;&amp;gt; t;&lt;br /&gt;
&lt;br /&gt;
ASSERT(!(iss.fail()));&lt;br /&gt;
if(iss.fail())&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Failed to read port.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// Should this be a port above the reserved range ([0-1024] on Unix)?&lt;br /&gt;
ASSERT(t &amp;gt; 0);&lt;br /&gt;
if(!(t &amp;gt; 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too small&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
ASSERT(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max()));&lt;br /&gt;
if(!(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max())))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too large&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use port&lt;br /&gt;
unsigned short port = static_cast&amp;lt;unsigned short&amp;gt;(t);&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;. This code will continue checking conditions, years after being instrumented (assuming you wrote the code to read a config file early in the project). There's no need to remove the &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt;s as with &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; since they are silent guardians.&lt;br /&gt;
&lt;br /&gt;
Another useful suppression trick is to avoid ignoring return values. Not only is it useful to suppress the warning, it's required for correct code. For example, &amp;lt;tt&amp;gt;snprintf&amp;lt;/tt&amp;gt; will alert you to truncations through its return value. You should not make them silent truncations by ignoring the warning or casting to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;char path[PATH_MAX];&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int ret = snprintf(path, sizeof(path), &amp;quot;%s/%s&amp;quot;, GetDirectory(), GetObjectName());&lt;br /&gt;
ASSERT(ret != -1);&lt;br /&gt;
ASSERT(!(ret &amp;gt;= sizeof(path)));&lt;br /&gt;
&lt;br /&gt;
if(ret == -1 || ret &amp;gt;= sizeof(path))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Unable to build full object name&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use path&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem is pandemic, and not confined to boring user-land programs. Projects which offer high integrity code, such as SELinux, suffer silent truncations. The following is from an approved SELinux patch, even though a comment was made that it [http://permalink.gmane.org/gmane.comp.security.selinux/16845 suffered silent truncations in its &amp;lt;tt&amp;gt;security_compute_create_name&amp;lt;/tt&amp;gt; function] from &amp;lt;tt&amp;gt;compute_create.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int security_compute_create_raw(security_context_t scon,&lt;br /&gt;
                                security_context_t tcon,&lt;br /&gt;
                                security_class_t   tclass,&lt;br /&gt;
                                security_context_t * newcon)&lt;br /&gt;
{&lt;br /&gt;
  char path[PATH_MAX];&lt;br /&gt;
  char *buf;&lt;br /&gt;
  size_t size;&lt;br /&gt;
  int fd, ret;&lt;br /&gt;
&lt;br /&gt;
  if (!selinux_mnt) {&lt;br /&gt;
    errno = ENOENT;&lt;br /&gt;
    return -1;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  snprintf(path, sizeof path, &amp;quot;%s/create&amp;quot;, selinux_mnt);&lt;br /&gt;
  fd = open(path, O_RDWR);&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike the other examples, the above code will not debug itself, and you will have to set breakpoints and trace calls to determine the point of first failure. (And the code above gambles that the truncated file does not exist or is not under an adversary's control by blindly performing the &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt;.)&lt;br /&gt;
&lt;br /&gt;
== Runtime ==&lt;br /&gt;
&lt;br /&gt;
The previous sections concentrated on setting up your project for success. This section will examine additional hints for running with increased diagnostics and defenses. Not all platforms are created equal: on GNU Linux it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a program after compiling and static linking], while Windows allows post-build hardening through a download. Remember, the goal is to find the point of first failure quickly so you can improve the reliability and security of the code.&lt;br /&gt;
&lt;br /&gt;
=== Xcode ===&lt;br /&gt;
&lt;br /&gt;
Xcode offers additional [http://developer.apple.com/library/mac/#recipes/xcode_help-scheme_editor/Articles/SchemeDiagnostics.html Application Diagnostics] that can help find memory errors and object use problems. Schemes can be managed through the ''Product'' menu, the ''Scheme'' submenu, and then ''Edit Scheme''. From the editor, navigate to the ''Diagnostics'' tab. In the figure below, four additional instruments are enabled for the debugging cycle: Scribble guards, Edge guards, Malloc guards, and Zombies.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-1.png|thumb|450px|Figure 2: Xcode Memory Diagnostics]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is one caveat with using some of the guards: Apple only provides them for the simulator, and not a device. In the past, the guards were available for both devices and simulators.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a number of debugging aides for use during development. The aides are called [http://msdn.microsoft.com/en-us/library/d21c150d.aspx Managed Debugging Assistants (MDAs)]. You can find the MDAs on the ''Debug'' menu, ''Exceptions'' submenu. MDAs allow you to tune your debugging experience by, for example, filtering the exceptions on which the debugger should snap. For more details, see Stephen Toub's ''[http://msdn.microsoft.com/en-us/magazine/cc163606.aspx Let The CLR Find Bugs For You With Managed Debugging Assistants]''.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-2.png|thumb|450px|Figure 3: Managed Debugging Assistants]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Finally, for runtime hardening, Microsoft offers a helpful tool called EMET, the [http://support.microsoft.com/kb/2458544 Enhanced Mitigation Experience Toolkit]. EMET allows you to apply runtime hardening to an executable which was built without it. It's very useful for utilities and other programs that were built without an SDLC.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-3.png|thumb|450px|Figure 4: Windows and EMET]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=148000</id>
		<title>C-Based Toolchain Hardening</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=148000"/>
				<updated>2013-03-18T04:23:43Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening]] is a treatment of project settings that will help you deliver reliable and secure code when using the C, C++, and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++, and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Auto-configured, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based projects. It's important to address the gaps at configuration and build time because it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
This is a prescriptive article, and it will not debate semantics or speculate on behavior. Some information, such as the C/C++ committee's motivation and pedigree for [https://groups.google.com/a/isocpp.org/forum/?fromgroups=#!topic/std-discussion/ak8e1mzBhGs &amp;quot;program diagnostics&amp;quot;, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;], appears to be lost like a tale in the Lord of the Rings. As such, the article will specify semantics (for example, the philosophy of 'debug' and 'release' build configurations), assign behaviors (for example, what an assert should do in 'debug' and 'release' build configurations), and present a position. If you find the posture too aggressive, then you should back off as required to suit your taste.&lt;br /&gt;
&lt;br /&gt;
A secure toolchain is not a silver bullet. It is one piece of an overall strategy in the engineering process to help ensure success. It will complement existing processes such as static analysis, dynamic analysis, secure coding, negative test suites, and the like. Tools such as Valgrind and Helgrind will still be needed, and a project will still require solid designs and architectures.&lt;br /&gt;
&lt;br /&gt;
The OWASP [http://code.google.com/p/owasp-esapi-cplusplus/source ESAPI C++] project eats its own dog food. Many of the examples you will see in this article come directly from the ESAPI C++ project.&lt;br /&gt;
&lt;br /&gt;
Finally, a [[Category:Cheat Sheet|cheat sheet]] is available for those who desire a terse treatment of the material. Please visit [[C-Based_Toolchain_Hardening_Cheat_Sheet|C-Based Toolchain Hardening Cheat Sheet]] for the abbreviated version.&lt;br /&gt;
&lt;br /&gt;
== Wisdom ==&lt;br /&gt;
&lt;br /&gt;
Code '''must''' be correct. It '''should''' be secure. It '''can''' be efficient.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Jon_Bentley Dr. Jon Bentley]: ''&amp;quot;If it doesn't have to be correct, I can make it as fast as you'd like it to be&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Gary_McGraw Dr. Gary McGraw]: ''&amp;quot;Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Configuration is the first opportunity to set your project up for success. Not only do you have to configure your project to meet reliability and security goals, you must also configure integrated libraries properly. You typically have three choices. First, you can use auto-configuration utilities on Linux or Unix. Second, you can write a makefile by hand; this is predominant on Linux, Mac OS X, and Unix, but it applies to Windows as well. Finally, you can use an integrated development environment (IDE).&lt;br /&gt;
&lt;br /&gt;
=== Build Configurations ===&lt;br /&gt;
&lt;br /&gt;
At this stage in the process, you should concentrate on configuring two builds: Debug and Release. Debug will be used for development and includes full instrumentation. Release will be configured for production. The difference between the two settings is usually ''optimization level'' and ''debug level''. A third build configuration is Test, and it's usually a special case of Release.&lt;br /&gt;
&lt;br /&gt;
For debug and release builds, the settings are typically diametrically opposed. Debug configurations have no optimizations and full debug information; while Release builds have optimizations and minimal to moderate debug information. In addition, debug code has full assertions and additional library integration, such as mudflaps and malloc guards such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Test configuration is often a Release configuration that makes everything public for testing and builds a test harness. For example, all member functions (C++ classes) and all interfaces (libraries or shared objects) should be made public and available for testing. Many object oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
[http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8] introduced the optimization level &amp;lt;tt&amp;gt;-Og&amp;lt;/tt&amp;gt;. Note that it is only an optimization level, and a customary debug level via &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt; is still required.&lt;br /&gt;
&lt;br /&gt;
==== Debug Builds ====&lt;br /&gt;
&lt;br /&gt;
Debug builds are where developers spend most of their time when vetting problems, so this build should concentrate forces and tools, or be a 'force multiplier'. Though many do not realize it, debug code is more highly valued than release code because it is adorned with additional instrumentation. The debug instrumentation will cause a program to become nearly &amp;quot;self-debugging&amp;quot;, and help you catch mistakes such as bad parameters, failed API calls, and memory problems.&lt;br /&gt;
&lt;br /&gt;
Self-debugging code reduces your time spent troubleshooting and debugging. Reducing time under the debugger means you have more time for development and feature requests. If code is checked in without debug instrumentation, it should be fixed by adding instrumentation or rejected.&lt;br /&gt;
&lt;br /&gt;
For GCC, optimizations and debug symbolication are controlled through two switches: &amp;lt;tt&amp;gt;-O&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;. You should use the following as part of your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a minimal debug session:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-O0 -g3 -ggdb&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; turns off optimizations, and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available for the debug session, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;. You may need to use &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; so some analysis is performed; otherwise, your debug build will be missing a number of warnings that are present in release builds. &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; includes extensions to help with a debug session under GDB. (For completeness, Jan Krachtovil stated in a private email that &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; currently has no effect.)&lt;br /&gt;
&lt;br /&gt;
Debug builds should also define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is not defined. &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; removes &amp;quot;program diagnostics&amp;quot;, and has undesirable behaviors and side effects, which are discussed below in more detail. The defines should be present for all code, and not just the program. You use them for all code (your program and included libraries) because you need to know how the libraries fail too (remember, you take the bug report - not the third party library).&lt;br /&gt;
&lt;br /&gt;
In addition, you should use other relevant flags, such as &amp;lt;tt&amp;gt;-fno-omit-frame-pointer&amp;lt;/tt&amp;gt;. Ensuring a frame pointer exists makes it easier to decode stack traces. Since debug builds are not shipped, it's OK to leave symbols in the executable. Programs with debug information do not suffer performance hits. See, for example, ''[http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]''&lt;br /&gt;
&lt;br /&gt;
Finally, you should ensure your project includes additional diagnostic libraries, such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt; and [http://code.google.com/p/address-sanitizer/ Address Sanitizer]. A comparison of some memory checking tools can be found at [http://code.google.com/p/address-sanitizer/wiki/ComparisonOfMemoryTools Comparison Of Memory Tools]. If you don't include additional diagnostics in debug builds, then you should start using them, since it's OK to find errors you are not looking for.&lt;br /&gt;
&lt;br /&gt;
==== Release Builds ====&lt;br /&gt;
&lt;br /&gt;
Release builds are what your customer receives. They are meant to be run on production hardware and servers, and they should be reliable, secure, and efficient. A stable release build is the product of the hard work and effort during development.&lt;br /&gt;
&lt;br /&gt;
For release builds, you should use the following as part of &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-On -g2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O''n''&amp;lt;/tt&amp;gt; sets optimizations for speed or size (for example, &amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;), and &amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt; ensures debugging information is created.&lt;br /&gt;
&lt;br /&gt;
Debug information should be stripped from the shipped binary and retained for symbolicating crash reports from the field. While not desired, debug information can be left in place without a performance penalty. See ''[http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]'' for details.&lt;br /&gt;
&lt;br /&gt;
Release builds should also define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is not defined. The time for debugging and diagnostics is over, so users get production code with full optimizations, no &amp;quot;program diagnostics&amp;quot;, and other efficiencies. If you can't optimize, or you are performing excessive logging, it usually means the program is not ready for production.&lt;br /&gt;
&lt;br /&gt;
If you have been relying on an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and a subsequent &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;, you have been abusing &amp;quot;program diagnostics&amp;quot;, since they have no place in production code. If you want a memory dump, create one yourself, so users don't have to worry about secrets and other sensitive information being written to the filesystem and emailed in plain text.&lt;br /&gt;
&lt;br /&gt;
For Windows, you would use &amp;lt;tt&amp;gt;/Od&amp;lt;/tt&amp;gt; for debug builds; and &amp;lt;tt&amp;gt;/Ox&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/O2&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/Os&amp;lt;/tt&amp;gt; for release builds. See Microsoft's [http://msdn.microsoft.com/en-us/library/k1ack8f1.aspx /O Options (Optimize Code)] for details.&lt;br /&gt;
&lt;br /&gt;
==== Test Builds ====&lt;br /&gt;
&lt;br /&gt;
Test builds are used to provide heuristic validation by way of positive and negative test suites. Under a test configuration, all interfaces are tested to ensure they perform to specification and satisfaction. &amp;quot;Satisfaction&amp;quot; is subjective, but it should include no crashing and no trashing of your memory arena, even when faced with negative tests.&lt;br /&gt;
&lt;br /&gt;
Because all interfaces are tested (and not just the public ones), your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; should include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should also change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Nearly everyone gets a positive test right, so no more needs to be said. The negative self tests are much more interesting, and you should concentrate on trying to make your program fail so you can verify it fails gracefully. Remember, a bad guy is not going to be courteous when he attempts to cause your program to fail. And it's your project that takes egg on the face by way of a bug report or guest appearance on [http://www.grok.org.uk/full-disclosure/ Full Disclosure] or [http://www.securityfocus.com/archive Bugtraq] - not ''&amp;lt;nowiki&amp;gt;&amp;lt;some library&amp;gt;&amp;lt;/nowiki&amp;gt;'' you included.&lt;br /&gt;
&lt;br /&gt;
=== Auto Tools ===&lt;br /&gt;
&lt;br /&gt;
Auto configuration tools are popular on many Linux and Unix based systems, and the tools include ''Autoconf'', ''Automake'', ''config'', and ''Configure''. The tools work together to produce project files from scripts and template files. After the process completes, your project should be set up and ready to be built with &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When using auto configuration tools, there are a few files of interest worth mentioning. The files are part of the auto tools chain and include &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; and the various &amp;lt;tt&amp;gt;*.in&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;*.ac&amp;lt;/tt&amp;gt; (autoconf), and &amp;lt;tt&amp;gt;*.am&amp;lt;/tt&amp;gt; (automake) files. At times, you will have to open them, or the resulting makefiles, to tune the &amp;quot;stock&amp;quot; configuration.&lt;br /&gt;
&lt;br /&gt;
There are three downsides to the command line configuration tools in the toolchain: (1) they often ignore user requests, (2) they cannot create configurations, and (3) security is often not a goal.&lt;br /&gt;
&lt;br /&gt;
To demonstrate the first issue, configure your project with the following: &amp;lt;tt&amp;gt;configure CFLAGS=&amp;quot;-Wall -fPIE&amp;quot; CXXFLAGS=&amp;quot;-Wall -fPIE&amp;quot; LDFLAGS=&amp;quot;-pie&amp;quot;&amp;lt;/tt&amp;gt;. You will probably find the auto tools ignored your request, which means commands like the one below will not produce the expected results. As a workaround, you will have to open the &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; scripts, &amp;lt;tt&amp;gt;Makefile.in&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;Makefile.am&amp;lt;/tt&amp;gt; and fix the configuration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ configure CFLAGS=&amp;quot;-Wall -Wextra -Wconversion -fPIE -Wno-unused-parameter&lt;br /&gt;
    -Wformat=2 -Wformat-security -fstack-protector-all -Wstrict-overflow&amp;quot;&lt;br /&gt;
    LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap -z,relro -z,now&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the second point, you will probably be disappointed to learn [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html Automake does not support the concept of configurations]. It's not entirely Autoconf's or Automake's fault - ''Make'' and its inability to detect changes is the underlying problem. Specifically, ''Make'' only [http://pubs.opengroup.org/onlinepubs/009695399/utilities/make.html checks modification times of prerequisites and targets], and does not check things like &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;. The net effect is you will not receive expected results when you issue &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make test&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, you will probably be disappointed to learn tools such as Autoconf and Automake miss many security related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings that serve the lowest common denominator.&lt;br /&gt;
&lt;br /&gt;
A recent discussion on the Automake mailing list illuminates the issue: ''[https://lists.gnu.org/archive/html/autoconf/2012-12/msg00038.html Enabling compiler warning flags]''. Attempts to improve default configurations were met with resistance, and no action was taken. The resistance is often of the form, &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some useful warning&amp;gt;&amp;lt;/nowiki&amp;gt; also produces false positives&amp;quot; or &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some obscure platform&amp;gt;&amp;lt;/nowiki&amp;gt; does not support &amp;lt;nowiki&amp;gt;&amp;lt;established security feature&amp;gt;&amp;lt;/nowiki&amp;gt;&amp;quot;. It's noteworthy that David Wheeler, the author of ''[http://www.dwheeler.com/secure-programs/ Secure Programming for Linux and Unix HOWTO]'', was one of the folks trying to improve the posture.&lt;br /&gt;
&lt;br /&gt;
=== Makefiles ===&lt;br /&gt;
&lt;br /&gt;
Make is one of the earliest build systems, dating back to the 1970s. It's available on Linux, Mac OS X, and Unix, so you will frequently encounter projects using it. Unfortunately, Make has a number of shortcomings (''[http://aegis.sourceforge.net/auug97.pdf Recursive Make Considered Harmful]'' and ''[http://www.conifersystems.com/whitepapers/gnu-make/ What’s Wrong With GNU make?]''), and can cause some discomfort. Despite the issues, ESAPI C++ uses Make, primarily for three reasons: first, it's omnipresent; second, it's easier to manage than the Auto Tools family; and third, &amp;lt;tt&amp;gt;libtool&amp;lt;/tt&amp;gt; was out of the question.&lt;br /&gt;
&lt;br /&gt;
Consider what happens when you type &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;. Each build requires different &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; due to optimizations and the level of debug support. In your makefile, you would extract the relevant target and set &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; similar to below (taken from the [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/Makefile ESAPI C++ Makefile]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Makefile&lt;br /&gt;
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)&lt;br /&gt;
ifneq ($(DEBUG_GOALS),)&lt;br /&gt;
  WANT_DEBUG := 1&lt;br /&gt;
  WANT_TEST := 0&lt;br /&gt;
  WANT_RELEASE := 0&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_DEBUG),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
  ESAPI_CXXFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_RELEASE),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
  ESAPI_CXXFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_TEST),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
  ESAPI_CXXFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure &lt;br /&gt;
# user options follow our options, which should give user option's a preference.&lt;br /&gt;
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(ESAPI_LDFLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make will first build the program in a debug configuration for a session under the debugger using a rule similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;%.o : %.cpp&lt;br /&gt;
        $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $&amp;lt; -o $@&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you want the release build, Make will do nothing because it considers everything up to date despite the fact &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; have changed. Hence, your program will actually be in a debug configuration and risk a &amp;lt;tt&amp;gt;SIGABRT&amp;lt;/tt&amp;gt; at runtime because debug instrumentation is present (recall &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; calls &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; when &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined). In essence, you have DoS'd yourself due to &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, many projects do not honor the user's command line. ESAPI C++ does its best to ensure a user's flags are honored via &amp;lt;tt&amp;gt;override&amp;lt;/tt&amp;gt; as shown above, but other projects do not. For example, consider a project that should be built with Position Independent Executable (PIE or ASLR) enabled and data execution prevention (DEP) enabled. Dismissing user settings combined with insecure out of the box settings (and not picking them up during auto-setup or auto-configure) means a program built with the following will likely have neither defense:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ make CFLAGS=&amp;quot;-fPIE&amp;quot; CXXFLAGS=&amp;quot;-fPIE&amp;quot; LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Defenses such as ASLR and DEP are especially important on Linux because [http://linux.die.net/man/5/elf Data Execution - not Prevention - is the norm].&lt;br /&gt;
&lt;br /&gt;
=== Integration ===&lt;br /&gt;
&lt;br /&gt;
Project level integration presents opportunities to harden your program or library with domain specific knowledge. For example, if the platform supports Position Independent Executables (PIE or ASLR) and data execution prevention (DEP), then you should integrate with it. The consequences of not doing so could result in exploitation. As a case in point, see KingCope's 0-days for MySQL in December, 2012 (CVE-2012-5579 and CVE-2012-5612, among others). Integration with platform security would have neutered a number of the 0-days.&lt;br /&gt;
&lt;br /&gt;
You also have the opportunity to include helpful libraries that are not needed for business logic support. For example, if you are working on a platform with [http://dmalloc.com DMalloc] or [http://code.google.com/p/address-sanitizer/ Address Sanitizer], you should probably use them in your debug builds. For Ubuntu, DMalloc is available from the package manager and can be installed with &amp;lt;tt&amp;gt;sudo apt-get install libdmalloc5&amp;lt;/tt&amp;gt;. For Apple platforms, it's available as a scheme option (see [[#Clang/Xcode|Clang/Xcode]] below). Address Sanitizer is available in [http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8 and above] for many platforms.&lt;br /&gt;
&lt;br /&gt;
In addition, project level integration is an opportunity to harden the third party libraries you chose to include. Because you chose to include them, you and your users are responsible for them. If you or your users endure an SP800-53 audit, third party libraries will be in scope because the supply chain is included (specifically, item SA-12, Supply Chain Protection). The audits are not limited to those in the US Federal arena - financial institutions perform reviews too. A perfect example of violating this guidance is [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525], which was due to [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library].&lt;br /&gt;
&lt;br /&gt;
Another example is including OpenSSL. You know (1) [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 is insecure], (2) [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 is insecure], and (3) [http://arstechnica.com/security/2012/09/crime-hijacks-https-sessions/ compression is insecure] (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ Configure darwin64-x86_64-cc -no-hw -no-engines -no-comp -no-shared -no-dso -no-sslv2 -no-sslv3 --openssldir=…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note Well'': you might want engines, especially on Ivy Bridge microarchitectures (3rd generation Intel Core i5 and i7 processors). To have OpenSSL use the processor's random number generator (via the &amp;lt;tt&amp;gt;rdrand&amp;lt;/tt&amp;gt; instruction), you will need to call OpenSSL's &amp;lt;tt&amp;gt;ENGINE_load_rdrand()&amp;lt;/tt&amp;gt; function and then &amp;lt;tt&amp;gt;ENGINE_set_default&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;ENGINE_METHOD_RAND&amp;lt;/tt&amp;gt;. See [http://wiki.opensslfoundation.com/index.php/Random_Numbers OpenSSL's Random Numbers] for details.&lt;br /&gt;
&lt;br /&gt;
If you configure without the switches, then you will likely have vulnerable code/libraries and risk failing an audit. If the program is a remote server, then the following command will reveal if compression is active on the channel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo &amp;quot;GET / HTTP1.0&amp;quot; | openssl s_client -connect &amp;lt;nowiki&amp;gt;example.com:443&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the library was built without &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;openssl s_client&amp;lt;/tt&amp;gt; will show that compression is enabled in the client. In fact, any symbol within the &amp;lt;tt&amp;gt;OPENSSL_NO_COMP&amp;lt;/tt&amp;gt; preprocessor macro will bear witness, since &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; is translated into a &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; define.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ nm /usr/local/ssl/iphoneos/lib/libcrypto.a 2&amp;gt;/dev/null | egrep -i &amp;quot;(COMP_CTX_new|COMP_CTX_free)&amp;quot;&lt;br /&gt;
0000000000000110 T COMP_CTX_free&lt;br /&gt;
0000000000000000 T COMP_CTX_new&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even more egregious is the answer given to auditors who specifically ask about configurations and protocols: &amp;quot;we don't use weak/wounded/broken ciphers&amp;quot; or &amp;quot;we follow best practices.&amp;quot; The use of compression tells the auditor that you are using a wounded protocol in an insecure configuration, and that you don't follow best practices. That will likely set off alarm bells, and ensure the auditor dives deeper on more items.&lt;br /&gt;
&lt;br /&gt;
== Preprocessor ==&lt;br /&gt;
&lt;br /&gt;
The preprocessor is crucial to setting up a project for success. The C committee provided one macro - &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; - and the macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing built-in facilities. This section will help you set up your projects to integrate well with other projects and ensure reliability and security.&lt;br /&gt;
&lt;br /&gt;
There are three topics to discuss when hardening the preprocessor. The first is well defined configurations which produce well defined behaviors, the second is useful behavior from assert, and the third is proper use of macros when integrating vendor code and third party libraries.&lt;br /&gt;
&lt;br /&gt;
=== Configurations ===&lt;br /&gt;
&lt;br /&gt;
To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code on live servers, and its behavior is requested via the C/C++ &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; macro. It's also the only macro observed by the C and C++ Committees and Posix. Diametrically opposed to Release is Debug. While there is a compelling argument for &amp;lt;tt&amp;gt;!defined(NDEBUG)&amp;lt;/tt&amp;gt;, you should have an explicit macro for the configuration, and that macro should be &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;. This is because vendors and outside libraries use a &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (or similar) macro for their configuration. For example, Carnegie Mellon's Mach kernel uses &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, Microsoft's CRT uses [http://msdn.microsoft.com/en-us/library/ww5t02fa%28v=vs.71%29.aspx &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt;], and Wind River Workbench uses &amp;lt;tt&amp;gt;DEBUG_MODE&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (Release) and &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (Debug), you have two additional cross products: both are defined or neither are defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/esapi/EsapiCommon.h ESAPI C++ EsapiCommon.h], which is the configuration file used by all source files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Only one or the other, but not both&lt;br /&gt;
#if (defined(DEBUG) || defined(_DEBUG)) &amp;amp;&amp;amp; (defined(NDEBUG) || defined(_NDEBUG))&lt;br /&gt;
# error Both DEBUG and NDEBUG are defined.&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
// The only time we switch to debug is when asked. NDEBUG or {nothing} results&lt;br /&gt;
// in release build (fewer surprises at runtime).&lt;br /&gt;
#if defined(DEBUG) || defined(_DEBUG)&lt;br /&gt;
# define ESAPI_BUILD_DEBUG 1&lt;br /&gt;
#else&lt;br /&gt;
# define ESAPI_BUILD_RELEASE 1&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is in effect, your code should receive full debug instrumentation, including the full force of assertions.&lt;br /&gt;
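&lt;br /&gt;
As a brief sketch (the &amp;lt;tt&amp;gt;VERBOSE_TRACE&amp;lt;/tt&amp;gt; macro is hypothetical), debug-only instrumentation can then key off the derived macro:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#if defined(ESAPI_BUILD_DEBUG)&lt;br /&gt;
#  define VERBOSE_TRACE(msg) (std::cerr &amp;lt;&amp;lt; (msg) &amp;lt;&amp;lt; std::endl)&lt;br /&gt;
#else&lt;br /&gt;
#  define VERBOSE_TRACE(msg) ((void)0)  // compiles away in Release&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;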
&lt;br /&gt;
=== ASSERT ===&lt;br /&gt;
&lt;br /&gt;
Asserts will help you create self-debugging code by helping you find the point of first failure quickly and easily. Asserts should be used throughout your program, including parameter validation, return value checking and program state. The &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; will silently guard your code through its lifetime. It will always be there, even when not debugging a specific component of a module. If you have thorough code coverage, you will spend less time debugging and more time developing because programs will debug themselves.&lt;br /&gt;
&lt;br /&gt;
To use asserts effectively, you should assert everything. That includes parameters upon entering a function, return values from function calls, and any program state. Everywhere you place an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation or checking, you should have an assert. Everywhere you have an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; for validation or checking, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand.&lt;br /&gt;
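&lt;br /&gt;
As a sketch (the function and its parameters are hypothetical), paired asserts and guards might look as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;void ProcessBuffer(const char* buffer, size_t size)&lt;br /&gt;
{&lt;br /&gt;
  // Point of first failure under the debugger&lt;br /&gt;
  assert(buffer != NULL);&lt;br /&gt;
  assert(size != 0);&lt;br /&gt;
&lt;br /&gt;
  // The accompanying guard for production code&lt;br /&gt;
  if(buffer == NULL || size == 0)&lt;br /&gt;
    return;&lt;br /&gt;
&lt;br /&gt;
  // ... process the buffer ...&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;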
&lt;br /&gt;
If you are still using &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;'s, then you have an opportunity for improvement. In the time it takes for you to write a &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; statement, you could have written an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;. Unlike the &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; which are often removed when no longer needed, the &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; stays active forever. Remember, this is all about finding the point of first failure quickly so you can spend your time doing other things.&lt;br /&gt;
&lt;br /&gt;
There is one problem with using asserts - [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html Posix states &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; should call &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;] if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined. When debugging, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will never be defined since you want the &amp;quot;program diagnostics&amp;quot; (quote from the Posix description). The behavior makes &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and its accompanying &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; completely useless for development. The result of &amp;quot;program diagnostics&amp;quot; calling &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; due to standard C/C++ behavior is disuse - developers simply don't use them. It's incredibly bad for the development community because self-debugging programs can help eradicate so many stability problems.&lt;br /&gt;
&lt;br /&gt;
Since self-debugging programs are so powerful, you will have to supply your own assert and signal handler with improved behavior. Your assert will exchange auto-aborting behavior for auto-debugging behavior. The auto-debugging facility will ensure the debugger snaps when a problem is detected, so you will find the point of first failure quickly and easily.&lt;br /&gt;
&lt;br /&gt;
ESAPI C++ supplies its own assert with the behavior described above. In the code below, &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt; raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; when in effect; otherwise, it evaluates to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// A debug assert which should be sprinkled liberally. This assert fires and then continues rather&lt;br /&gt;
// than calling abort(). Useful when examining negative test cases from the command line.&lt;br /&gt;
#if (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_STARNIX))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp) {                                    \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; std::endl;                                           \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) {                               \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; &amp;quot;: \&amp;quot;&amp;quot; &amp;lt;&amp;lt; (msg) &amp;lt;&amp;lt; &amp;quot;\&amp;quot;&amp;quot; &amp;lt;&amp;lt; std::endl;                \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#elif (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_WINDOWS))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      assert(exp)&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) assert(exp)&lt;br /&gt;
#else&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      ((void)(exp))&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) ((void)(exp))&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#if !defined(ASSERT)&lt;br /&gt;
#  define ASSERT(exp)     ESAPI_ASSERT1(exp)&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At program startup, a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; handler will be installed if one is not provided by another component:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct DebugTrapHandler&lt;br /&gt;
{&lt;br /&gt;
  DebugTrapHandler()&lt;br /&gt;
  {&lt;br /&gt;
    struct sigaction new_handler, old_handler;&lt;br /&gt;
&lt;br /&gt;
    do&lt;br /&gt;
      {&lt;br /&gt;
        int ret = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, NULL, &amp;amp;old_handler);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        // Don't step on another's handler&lt;br /&gt;
        if (old_handler.sa_handler != NULL) break;&lt;br /&gt;
&lt;br /&gt;
        new_handler.sa_handler = &amp;amp;DebugTrapHandler::NullHandler;&lt;br /&gt;
        new_handler.sa_flags = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigemptyset (&amp;amp;new_handler.sa_mask);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, &amp;amp;new_handler, NULL);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
      } while(0);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  static void NullHandler(int /*unused*/) { }&lt;br /&gt;
&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
// We specify a relatively low priority, to make sure we run before other CTORs&lt;br /&gt;
// http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes&lt;br /&gt;
static const DebugTrapHandler g_dummyHandler __attribute__ ((init_priority (110)));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a Windows platform, you would call &amp;lt;tt&amp;gt;_set_invalid_parameter_handler&amp;lt;/tt&amp;gt; (and possibly &amp;lt;tt&amp;gt;set_unexpected&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;set_terminate&amp;lt;/tt&amp;gt;) to install a new handler.&lt;br /&gt;
&lt;br /&gt;
Live hosts running production code should always define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (i.e., the release configuration), which means they do not assert or auto-abort. Auto-aborting is not acceptable behavior, and anyone who asks for it is abusing the purpose of &amp;quot;program diagnostics&amp;quot;. If a program wants a core dump, then it should create the dump rather than crashing.&lt;br /&gt;
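&lt;br /&gt;
As a sketch (Unix-specific, and assuming core file limits permit), a program can capture a core image without terminating by dumping from a forked child:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;void CreateCoreDump()&lt;br /&gt;
{&lt;br /&gt;
  // The child aborts and writes the core image; the parent continues&lt;br /&gt;
  if(fork() == 0)&lt;br /&gt;
    abort();&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;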
&lt;br /&gt;
For more reading on asserting effectively, please see one of John Robbins' books, such as ''[http://www.amazon.com/dp/0735608865 Debugging Applications]''. John is a legendary bug slayer in Windows circles, and he will show you how to do nearly everything, from debugging a simple program to bug slaying in multithreaded programs.&lt;br /&gt;
&lt;br /&gt;
=== Additional Macros ===&lt;br /&gt;
&lt;br /&gt;
Additional macros include any macros needed to integrate properly and securely. This includes integrating the program with the platform (for example, MFC or Cocoa/CocoaTouch) and with libraries (for example, Crypto++ or OpenSSL). Integration can be a challenge because it requires proficiency with your platform and with every included library and framework. The list below illustrates the level of detail you will need when integrating.&lt;br /&gt;
&lt;br /&gt;
Boost is missing from the list because it appears to lack recommendations, additional debug diagnostics, and a hardening guide. See ''[http://stackoverflow.com/questions/14927033/boost-hardening-guide-preprocessor-macros BOOST Hardening Guide (Preprocessor Macros)]'' for details. In addition, Tim Day points to ''[http://boost.2283326.n4.nabble.com/boost-build-should-we-not-define-SECURE-SCL-0-by-default-for-all-msvc-toolsets-td2654710.html &amp;lt;nowiki&amp;gt;[boost.build] should we not define _SECURE_SCL=0 by default for all msvc toolsets&amp;lt;/nowiki&amp;gt;]'' for a recent discussion related to hardening (or the lack thereof).&lt;br /&gt;
&lt;br /&gt;
In addition to the macros you should define, defining some macros and undefining others should trigger a security-related defect. For example, &amp;lt;tt&amp;gt;-U_FORTIFY_SOURCE&amp;lt;/tt&amp;gt; on Linux and &amp;lt;tt&amp;gt;_CRT_SECURE_NO_WARNINGS=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_SCL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_ATL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;STRSAFE_NO_DEPRECATE&amp;lt;/tt&amp;gt; on Windows.&lt;br /&gt;
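&lt;br /&gt;
As a sketch (the file name &amp;lt;tt&amp;gt;build.log&amp;lt;/tt&amp;gt; is an example), a captured build log can be audited for the offending macros:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ egrep -n &amp;quot;(_CRT_SECURE_NO_WARNINGS|_SCL_SECURE_NO_WARNINGS|STRSAFE_NO_DEPRECATE)&amp;quot; build.log&amp;lt;/pre&amp;gt;&lt;br /&gt;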
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Platform/Library!!Debug!!Release&lt;br /&gt;
|+ Table 1: Additional Platform/Library Macros&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;175pt&amp;quot;|All&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|DEBUG=1&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|NDEBUG=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Linux&lt;br /&gt;
|_GLIBCXX_DEBUG=1&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
|_FORTIFY_SOURCE=2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Android&lt;br /&gt;
|NDK_DEBUG=1&lt;br /&gt;
|_FORTIFY_SOURCE=1 (4.2 and above)&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define LOGI(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt logging)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Cocoa/CocoaTouch&lt;br /&gt;
|&lt;br /&gt;
|NS_BLOCK_ASSERTIONS=1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define NSLog(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt ASL)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SafeInt&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft&lt;br /&gt;
|_DEBUG=1, STRICT,&amp;lt;br&amp;gt;&lt;br /&gt;
_SECURE_SCL=1, _HAS_ITERATOR_DEBUGGING=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
|STRICT&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft ATL &amp;amp; MFC&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|STLPort&lt;br /&gt;
|_STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1&amp;lt;br&amp;gt;&lt;br /&gt;
_STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite&lt;br /&gt;
|SQLITE_DEBUG, SQLITE_MEMDEBUG&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLCipher&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Be careful with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; when using pre-compiled libraries such as Boost from a distribution. There are ABI incompatibilities, and the result will likely be a crash. You will have to compile Boost with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; or omit &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; SQLite secure deletion zeroizes memory on destruction. Define it as required, and always define it for US Federal use, since zeroization is required for FIPS 140-2, Level 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt; ''N'' is 0644 by default, which means everyone has some access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt; Force temporary tables into memory (no unencrypted data to disk).&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
##########################################&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Compiler and Linker ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities to help find mistakes early in the development process. The built in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and catch a number of mistakes such as using an uninitialized variable or comparing a negative signed int and a positive unsigned int.&lt;br /&gt;
&lt;br /&gt;
As a concrete example, (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared because a side effect is &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! GCC and Visual Studio will not currently catch, for example, SQL injections and other tainted data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.&lt;br /&gt;
&lt;br /&gt;
Some in the development community resist static analysis or refute its results. For example, when static analysis warned the Linux kernel's &amp;lt;tt&amp;gt;sys_prctl&amp;lt;/tt&amp;gt; was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see ''[http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-11/msg08325.html &amp;lt;nowiki&amp;gt;[PATCH] Don't compare unsigned variable for &amp;lt;0 in sys_prctl()&amp;lt;/nowiki&amp;gt;]'' from the Linux Kernel mailing list.&lt;br /&gt;
&lt;br /&gt;
The following sections will detail steps for three platforms. First is a typical GNU Linux based distribution offering GCC and Binutils, second is Clang and Xcode, and third is modern Windows platforms.&lt;br /&gt;
&lt;br /&gt;
=== Distribution Hardening ===&lt;br /&gt;
&lt;br /&gt;
Before discussing GCC and Binutils, it is a good time to point out that some of the defenses discussed below are already present in a distribution. Unfortunately, the defaults are a design by committee, so what is present is usually only a mild variation of what is available (this way, everyone is mildly offended). For those who are worried purely about performance, you might be surprised to learn you have already taken the small performance hit without even knowing.&lt;br /&gt;
&lt;br /&gt;
Linux and BSD distributions often apply some hardening without intervention via ''[http://gcc.gnu.org/onlinedocs/gcc/Spec-Files.html GCC Spec Files]''. If you are using Debian, Ubuntu, Linux Mint and family, see ''[http://wiki.debian.org/Hardening Debian Hardening]''. For Red Hat and Fedora systems, see ''[http://lists.fedoraproject.org/pipermail/devel-announce/2011-August/000821.html New hardened build support (coming) in F16]''. Gentoo users should visit ''[http://www.gentoo.org/proj/en/hardened/ Hardened Gentoo]''.&lt;br /&gt;
&lt;br /&gt;
You can see the settings used by a distribution via &amp;lt;tt&amp;gt;gcc -dumpspecs&amp;lt;/tt&amp;gt;. In the Linux Mint 12 output below, -fstack-protector (but not -fstack-protector-all) is used by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -dumpspecs&lt;br /&gt;
…&lt;br /&gt;
*link_ssp: %{fstack-protector:}&lt;br /&gt;
&lt;br /&gt;
*ssp_default: %{!fno-stack-protector:%{!fstack-protector-all: %{!ffreestanding:%{!nostdlib:-fstack-protector}}}}&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The “SSP” above stands for Stack Smashing Protector. SSP is a reimplementation of Hiroaki Etoh's work on IBM Pro Police Stack Detector. See Hiroaki Etoh's patch ''[http://gcc.gnu.org/ml/gcc-patches/2001-06/msg01753.html gcc stack-smashing protector]'' and IBM's ''[http://www.research.ibm.com/trl/projects/security/ssp/ GCC extension for protecting applications from stack-smashing attacks]'' for details.&lt;br /&gt;
&lt;br /&gt;
=== GCC/Binutils ===&lt;br /&gt;
&lt;br /&gt;
GCC (the compiler collection) and Binutils (the assemblers, linkers, and other tools) are separate projects that work together to produce a final executable. Both the compiler and linker offer options to help you write safer and more secure code. The linker will produce code which takes advantage of platform security features offered by the kernel and PaX, such as no-exec stacks and heaps (NX) and Position Independent Executable (PIE).&lt;br /&gt;
&lt;br /&gt;
The table below offers a set of compiler options to build your program. Static analysis warnings help catch mistakes early, while the linker options harden the executable at runtime. In the table below, “GCC” should be loosely taken as “non-ancient distributions.” While the GCC team considers 4.2 ancient, you will still encounter it on Apple and BSD platforms due to changes in GPL licensing around 2007. Refer to ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Option Summary]'', ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html Options to Request or Suppress Warnings]'' and ''[http://sourceware.org/binutils/docs-2.21/ld/Options.html Binutils (LD) Command Line Options]'' for usage details.&lt;br /&gt;
&lt;br /&gt;
Worthy of special mention are &amp;lt;tt&amp;gt;-fno-strict-overflow&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fwrapv&amp;lt;/tt&amp;gt;&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;. The flags ensure the compiler does not remove statements that result in overflow or wrap. If your program only runs correctly with the flags, it is likely violating the C/C++ rules on signed overflow. If the program depends on overflow or wrapping behavior, you should consider using [http://code.google.com/p/safe-iop/ safe-iop] for C or David LeBlanc's [http://safeint.codeplex.com SafeInt] in C++.&lt;br /&gt;
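&lt;br /&gt;
For example, a compile line using the flags (the file name is an example) might look like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ g++ -fno-strict-overflow -fwrapv -Wall -Wextra -c program.cpp&amp;lt;/pre&amp;gt;&lt;br /&gt;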
&lt;br /&gt;
For a project compiled and linked with hardened settings, some of those settings can be verified with the [http://www.trapkit.de/tools/checksec.html Checksec] tool written by Tobias Klein. The &amp;lt;tt&amp;gt;checksec.sh&amp;lt;/tt&amp;gt; script is designed to test standard Linux OS and PaX security features being used by an application. See the [http://www.trapkit.de/tools/checksec.html Trapkit] web page for details.&lt;br /&gt;
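&lt;br /&gt;
A typical invocation (the binary being examined is an example) looks like:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ ./checksec.sh --file /usr/sbin/sshd&amp;lt;/pre&amp;gt;&lt;br /&gt;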
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 2: GCC C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wall -Wextra&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;75t&amp;quot;|GCC&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Enables many warnings (despite their names, all and extra do not turn on all warnings).&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wconversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may alter a value (includes -Wsign-conversion).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-conversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may change the sign of an integer value, such as assigning a signed integer to an unsigned integer (&amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion!).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wcast-align&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for a pointer cast to a type which has a different size, causing an invalid alignment and subsequent bus error on ARM processors.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wformat=2 -Wformat-security&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Increases warnings related to possible security defects, including incorrect format specifiers.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-common&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Prevent global variables being simultaneously defined in different object files.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fstack-protector or -fstack-protector-all&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Stack Smashing Protector (SSP). Improves stack layout and adds a guard to detect stack based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-omit-frame-pointer&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Improves backtraces for post-mortem analysis&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wmissing-prototypes and -Wmissing-declarations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a global function is defined without a prototype or declaration.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-prototypes&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a function is declared or defined without specifying the argument types.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-overflow&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.2&lt;br /&gt;
|Warn about optimizations taken due to &amp;lt;nowiki&amp;gt;[undefined]&amp;lt;/nowiki&amp;gt; signed integer overflow assumptions.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wtrampolines&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.3&lt;br /&gt;
|Warn about trampolines generated for pointers to nested functions. Trampolines require executable stacks.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=address&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/address-sanitizer/ AddressSanitizer], a fast memory error detector. Memory access instructions will be instrumented to help detect heap, stack, and global buffer overflows; as well as use-after-free bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=thread&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/data-race-test/wiki/ThreadSanitizer ThreadSanitizer], a fast data race detector. Memory access instructions will be instrumented to detect data race bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,nodlopen and -Wl,-z,nodump&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.10&lt;br /&gt;
|Reduces the ability of an attacker to load, manipulate, and dump shared objects.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,noexecstack and -Wl,-z,noexecheap&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.14&lt;br /&gt;
|Data Execution Prevention (DEP). ELF headers are marked with PT_GNU_STACK and PT_GNU_HEAP.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,relro&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Global Offset Table (GOT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,now&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Procedure Linkage Table (PLT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIC&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils&lt;br /&gt;
|Position Independent Code. Used for libraries and shared objects. Both -fPIC (compiler) and -shared (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIE&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.16&lt;br /&gt;
|Position Independent Executable (ASLR). Used for programs. Both -fPIE (compiler) and -pie (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Unlike Clang and -Weverything, GCC does not provide a switch to truly enable all warnings.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; -fstack-protector guards functions with high risk objects such as C strings, while -fstack-protector-all guards all objects.&lt;br /&gt;
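&lt;br /&gt;
As a sketch combining several of the options from Table 2 (the file names are examples), a position independent, stack protected executable can be built with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -Wall -Wextra -fstack-protector-all -fPIE -pie -Wl,-z,relro -Wl,-z,now program.c -o program&amp;lt;/pre&amp;gt;&lt;br /&gt;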
&lt;br /&gt;
Additional C++ warnings which can be used include the following in Table 3. See ''[http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html GCC's Options Controlling C++ Dialect]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 3: GCC C++ Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Woverloaded-virtual&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn when a function declaration hides virtual functions from a base class. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wreorder&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when the order of member initializers given in the code does not match the order in which they must be executed.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-promo&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wnon-virtual-dtor&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when a class has virtual functions and an accessible non-virtual destructor.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Weffc++&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn about violations of style guidelines from Scott Meyers' ''[http://www.aristeia.com/books.html Effective C++, Second Edition]'' book.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
And additional Objective C warnings which are often useful include the following. See ''[http://gcc.gnu.org/onlinedocs/gcc/Objective_002dC-and-Objective_002dC_002b_002b-Dialect-Options.html Options Controlling Objective-C and Objective-C++ Dialects]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 4: GCC Objective C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wstrict-selector-match&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type id or Class.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wundeclared-selector&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn if a &amp;lt;tt&amp;gt;@selector(…)&amp;lt;/tt&amp;gt; expression referring to an undeclared selector is found. &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The use of aggressive warnings will produce spurious noise. The noise is a tradeoff - you can learn of potential problems at the cost of wading through some chaff. The following will help reduce spurious noise from the warning system:&lt;br /&gt;
&lt;br /&gt;
* -Wno-unused-parameter (GCC)&lt;br /&gt;
* -Wno-type-limits (GCC 4.3)&lt;br /&gt;
* -Wno-tautological-compare (Clang)&lt;br /&gt;
&lt;br /&gt;
Finally, a simple version-based Makefile example is shown below. This is different from the feature-based makefiles produced by Auto Tools (which test for a particular feature and then define a symbol or configure a template file). Not all platforms use all options and flags. To address the issue, you can pursue one of two strategies: you can ship with a weakened posture by servicing the lowest common denominator, or you can ship with everything in force. In the latter case, those who don't have a feature available will edit the makefile to accommodate their installation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;CXX=g++&lt;br /&gt;
EGREP = egrep&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GCC_COMPILER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version')&lt;br /&gt;
GCC41_OR_LATER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version (4\.[1-9]|[5-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GNU_LD210_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[0-9]|2\.[2-9])')&lt;br /&gt;
GNU_LD214_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[4-9]|2\.[2-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC_COMPILER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wall -Wextra -Wconversion&lt;br /&gt;
  MY_CC_FLAGS += -Wformat=2 -Wformat-security&lt;br /&gt;
  MY_CC_FLAGS += -Wno-unused-parameter&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC41_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fstack-protector-all&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC42_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wstrict-overflow&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC43_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wtrampolines&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD210_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,nodlopen -z,nodump&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD214_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,noexecstack -z,noexecheap&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD215_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,relro -z,now&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD216_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fPIE&lt;br /&gt;
  MY_LD_FLAGS += -pie&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
# Use 'override' to honor the user's command line&lt;br /&gt;
override CFLAGS := $(MY_CC_FLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(MY_CC_FLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(MY_LD_FLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang/Xcode ===&lt;br /&gt;
&lt;br /&gt;
[http://clang.llvm.org Clang] and [http://llvm.org LLVM] have been aggressively developed since Apple lost its GPL compiler back in 2007 (due to Tivoization, which resulted in GPLv3). Since that time, a number of developers and Google have joined the effort. While Clang will consume most (all?) GCC/Binutils flags and switches, the project supports a number of its own options, including a static analyzer. In addition, Clang is relatively easy to build with additional diagnostics, such as Dr. John Regehr and Peng Li's [http://embed.cs.utah.edu/ioc/ Integer Overflow Checker (IOC)].&lt;br /&gt;
&lt;br /&gt;
IOC is incredibly useful, and has found bugs in a number of projects, including the Linux kernel (&amp;lt;tt&amp;gt;include/linux/bitops.h&amp;lt;/tt&amp;gt;, still unfixed), SQLite, PHP, Firefox (many still unfixed), LLVM, and Python. Future versions of Clang (Clang 3.3 and above) will allow you to enable the checks out of the box with &amp;lt;tt&amp;gt;-fsanitize=integer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=shift&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Clang options can be found in the [http://clang.llvm.org/docs/UsersManual.html Clang Compiler User’s Manual]. Clang does include an option to turn on all warnings - &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt;. Use it with care, since you will get back a lot of noise, but use it regularly, since it surfaces issues you missed. For example, add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; for production builds and make non-spurious issues a quality gate. Under Xcode, simply add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to compiler warnings, both static analysis and additional security checks can be performed. Clang's static analysis capabilities are documented at [http://clang-analyzer.llvm.org Clang Static Analyzer]. Figure 1 below shows some of the security checks utilized by Xcode.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-11.png|thumb|450px|Figure 1: Clang/LLVM and Xcode options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a convenient Integrated Development Environment (IDE) for managing solutions and their settings. The section called “Visual Studio Options” discusses options which should be used with Visual Studio, and the section called “Project Properties” demonstrates incorporating those options into a solution's project.&lt;br /&gt;
&lt;br /&gt;
The table below lists the compiler and linker switches which should be used under Visual Studio. Refer to Howard and LeBlanc's ''Writing Secure Code'' (Microsoft Press) for a detailed discussion, or ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]'' in Security Briefs by Michael Howard. In the table below, “Visual Studio” refers to nearly all versions of the development environment, including Visual Studio 5.0 and 6.0.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, those settings can be verified with BinScope. BinScope is a verification tool from Microsoft that analyzes binaries to ensure that they have been built in compliance with Microsoft's Security Development Lifecycle (SDLC) requirements and recommendations. See the ''[https://www.microsoft.com/download/en/details.aspx?id=11910 BinScope Binary Analyzer]'' download page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 5: Visual Studio Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;150pt&amp;quot;|&amp;lt;nowiki&amp;gt;/W4&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;100pt&amp;quot;|Visual Studio&lt;br /&gt;
|width=&amp;quot;350pt&amp;quot;|Warning level 4, which includes most warnings.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/Wall&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Enable all warnings, including those off by default.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/GS&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Adds a security cookie (guard or canary) on the stack before the return address to detect stack-based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/SafeSEH&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Safe structured exception handling to remediate SEH overwrites.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/analyze&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Enterprise code analysis (freely available with Windows SDK for Windows Server 2008 and .NET Framework 3.5).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/NXCOMPAT&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Data Execution Prevention (DEP).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/dynamicbase&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Address Space Layout Randomization (ASLR).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;strict_gs_check&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Aggressively applies stack protections to a source file to help detect some categories of stack based buffer overruns.&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;See Jon Sturgeon's discussion of the switch at ''[https://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx Off By Default Compiler Warnings in Visual C++]''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;When using /GS, there are a number of circumstances which affect the inclusion of a security cookie. For example, the guard is not used if there is no buffer in the stack frame, optimizations are disabled, or the function is declared naked or contains inline assembly.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&amp;lt;tt&amp;gt;#pragma strict_gs_check(on)&amp;lt;/tt&amp;gt; should be used sparingly, but is recommended in high risk situations, such as when a source file parses input from the internet.&lt;br /&gt;
&lt;br /&gt;
=== Warn Suppression ===&lt;br /&gt;
&lt;br /&gt;
From the tables above, many warnings have been enabled to help detect possible programming mistakes. The potential mistakes are detected by the compiler, which carries a lot of contextual information during its code analysis phase. At times, you will receive spurious warnings because the compiler is not ''that'' smart. It's understandable and even a good thing (how would you like to be out of a job because a program writes its own programs?). At times you will have to learn how to work with the compiler's warning system to suppress warnings. Notice what was not said: turn off the warnings.&lt;br /&gt;
&lt;br /&gt;
Suppressing warnings placates the compiler for spurious noise so you can get to the issues that matter (you are separating the wheat from the chaff). This section will offer some hints and point out some potential minefields. First is an unused parameter (for example, &amp;lt;tt&amp;gt;argc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;argv&amp;lt;/tt&amp;gt;). Suppressing unused parameter warnings is especially helpful for C++ and interface programming, where parameters are often unused. For this warning, simply define an &amp;quot;UNUSED&amp;quot; macro and wrap the parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#define UNUSED_PARAMETER(x) ((void)x)&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    UNUSED_PARAMETER(argc);&lt;br /&gt;
    UNUSED_PARAMETER(argv);&lt;br /&gt;
    …&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A potential minefield lies in comparing signed and unsigned values, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will catch it for you. This is because the C/C++ conversion rules state the signed value will be converted to an unsigned value and then compared. That means &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after conversion! To fix this, you cannot blindly cast - you must first range test the value:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int x = GetX();&lt;br /&gt;
unsigned int y = GetY();&lt;br /&gt;
&lt;br /&gt;
ASSERT(x &amp;gt;= 0);&lt;br /&gt;
if(!(x &amp;gt;= 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? X is negative.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
if(static_cast&amp;lt;unsigned int&amp;gt;(x) &amp;gt; y)&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is greater than y&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
else&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is not greater than y&amp;quot; &amp;lt;&amp;lt; endl;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;x&amp;lt;/tt&amp;gt;. Just run the program and wait for it to tell you there is a problem. If there is a problem, the program will snap the debugger (and, more importantly, not call a useless &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; as specified by POSIX). It beats the snot out of &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; statements that are removed when no longer needed or that pollute outputs.&lt;br /&gt;
&lt;br /&gt;
Another conversion problem you will encounter is conversion between types, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will also catch it for you. The following will always have an opportunity to fail, and should light up like a Christmas tree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct sockaddr_in addr;&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
addr.sin_port = htons(atoi(argv[2]));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following would probably serve you much better. Notice &amp;lt;tt&amp;gt;atoi&amp;lt;/tt&amp;gt; and friends are not used because they can silently fail. In addition, the code is instrumented so you don't need to waste a lot of time debugging potential problems:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;const char* cstr = GetPortString();&lt;br /&gt;
&lt;br /&gt;
ASSERT(cstr != NULL);&lt;br /&gt;
if(!(cstr != NULL))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port string is not valid.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
istringstream iss(cstr);&lt;br /&gt;
long long t = 0;&lt;br /&gt;
iss &amp;gt;&amp;gt; t;&lt;br /&gt;
&lt;br /&gt;
ASSERT(!(iss.fail()));&lt;br /&gt;
if(iss.fail())&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Failed to read port.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// Should this be a port above the reserved range ([0-1024] on Unix)?&lt;br /&gt;
ASSERT(t &amp;gt; 0);&lt;br /&gt;
if(!(t &amp;gt; 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too small&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
ASSERT(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max()));&lt;br /&gt;
if(!(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max())))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too large&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use port&lt;br /&gt;
unsigned short port = static_cast&amp;lt;unsigned short&amp;gt;(t);&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;. This code will continue checking conditions, years after being instrumented (assuming you wrote the code to read a config file early in the project). There's no need to remove the &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt;s as with &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; since they are silent guardians.&lt;br /&gt;
&lt;br /&gt;
Another useful suppression trick is to avoid ignoring return values. Not only is it useful to suppress the warning, it's required for correct code. For example, &amp;lt;tt&amp;gt;snprintf&amp;lt;/tt&amp;gt; will alert you to truncations through its return value. You should not make them silent truncations by ignoring the warning or casting to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;char path[PATH_MAX];&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int ret = snprintf(path, sizeof(path), &amp;quot;%s/%s&amp;quot;, GetDirectory(), GetObjectName());&lt;br /&gt;
ASSERT(ret != -1);&lt;br /&gt;
ASSERT(!(ret &amp;gt;= sizeof(path)));&lt;br /&gt;
&lt;br /&gt;
if(ret == -1 || ret &amp;gt;= sizeof(path))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Unable to build full object name&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use path&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem is pandemic, and not confined to boring user land programs. Projects which offer high integrity code, such as SELinux, suffer silent truncations. The following is from an approved SELinux patch, even though a comment was made that it [http://permalink.gmane.org/gmane.comp.security.selinux/16845 suffered silent truncations in its &amp;lt;tt&amp;gt;security_compute_create_name&amp;lt;/tt&amp;gt; function] from &amp;lt;tt&amp;gt;compute_create.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int security_compute_create_raw(security_context_t scon,&lt;br /&gt;
                                security_context_t tcon,&lt;br /&gt;
                                security_class_t   tclass,&lt;br /&gt;
                                security_context_t * newcon)&lt;br /&gt;
{&lt;br /&gt;
  char path[PATH_MAX];&lt;br /&gt;
  char *buf;&lt;br /&gt;
  size_t size;&lt;br /&gt;
  int fd, ret;&lt;br /&gt;
&lt;br /&gt;
  if (!selinux_mnt) {&lt;br /&gt;
    errno = ENOENT;&lt;br /&gt;
    return -1;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  snprintf(path, sizeof path, &amp;quot;%s/create&amp;quot;, selinux_mnt);&lt;br /&gt;
  fd = open(path, O_RDWR);&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike other examples, the above code will not debug itself, and you will have to set breakpoints and trace calls to determine the point of first failure. (And the code above gambles that the truncated file does not exist or is not under an adversary's control by blindly performing the &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
== Runtime ==&lt;br /&gt;
&lt;br /&gt;
The previous sections concentrated on setting up your project for success. This section will examine additional hints for running with increased diagnostics and defenses. Not all platforms are created equal - on GNU Linux it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a program after compiling and static linking], while Windows allows post-build hardening through a download. Remember, the goal is to find the point of first failure quickly so you can improve the reliability and security of the code.&lt;br /&gt;
&lt;br /&gt;
=== Xcode ===&lt;br /&gt;
&lt;br /&gt;
Xcode offers additional [http://developer.apple.com/library/mac/#recipes/xcode_help-scheme_editor/Articles/SchemeDiagnostics.html Application Diagnostics] that can help find memory errors and object use problems. Schemes can be managed through the ''Product'' menu, the ''Scheme'' submenu, and then ''Edit Scheme''. From the editor, navigate to the ''Diagnostics'' tab. In the figure below, four additional instruments are enabled for the debugging cycle: Scribble guards, Edge guards, Malloc guards, and Zombies.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-1.png|thumb|450px|Figure 2: Xcode Memory Diagnostics]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is one caveat with using some of the guards: Apple only provides them for the simulator, and not a device. In the past, the guards were available for both devices and simulators.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a number of debugging aids for use during development. The aids are called [http://msdn.microsoft.com/en-us/library/d21c150d.aspx Managed Debugging Assistants (MDAs)]. You can find the MDAs on the ''Debug'' menu, then the ''Exceptions'' submenu. MDAs allow you to tune your debugging experience by, for example, filtering the exceptions for which the debugger should snap. For more details, see Stephen Toub's ''[http://msdn.microsoft.com/en-us/magazine/cc163606.aspx Let The CLR Find Bugs For You With Managed Debugging Assistants]''.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-2.png|thumb|450px|Figure 3: Managed Debugging Assistants]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Finally, for runtime hardening, Microsoft has a helpful tool called EMET, the [http://support.microsoft.com/kb/2458544 Enhanced Mitigation Experience Toolkit], which allows you to apply runtime hardening to an executable that was built without it. It's very useful for utilities and other programs that were built without an SDLC.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-3.png|thumb|450px|Figure 4: Windows and EMET]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=147744</id>
		<title>C-Based Toolchain Hardening</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=147744"/>
				<updated>2013-03-13T02:27:15Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Moved SQLITE_TEMP_STORE into SQLCipher per Stephen Lombardo recommendation (SL is author of SQLCipher)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening]] is a treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++ and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Auto-configured, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based projects. It's important to address the gaps at configuration and build time because it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
This is a prescriptive article, and it will not debate semantics or speculate on behavior. Some information, such as the C/C++ committee's motivation and pedigree for [https://groups.google.com/a/isocpp.org/forum/?fromgroups=#!topic/std-discussion/ak8e1mzBhGs &amp;quot;program diagnostics&amp;quot;, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;], appears to be lost like a tale in the Lord of the Rings. As such, the article will specify semantics (for example, the philosophy of 'debug' and 'release' build configurations), assign behaviors (for example, what an assert should do in 'debug' and 'release' build configurations), and present a position. If you find the posture is too aggressive, then you should back off as required to suit your taste.&lt;br /&gt;
&lt;br /&gt;
A secure toolchain is not a silver bullet. It is one piece of an overall strategy in the engineering process to help ensure success. It will complement existing processes such as static analysis, dynamic analysis, secure coding, negative test suites, and the like. Tools such as Valgrind and Helgrind will still be needed. And a project will still require solid designs and architectures.&lt;br /&gt;
&lt;br /&gt;
The OWASP [http://code.google.com/p/owasp-esapi-cplusplus/source ESAPI C++] project eats its own dog food. Many of the examples you will see in this article come directly from the ESAPI C++ project.&lt;br /&gt;
&lt;br /&gt;
Finally, a [[Category:Cheat Sheet|cheat sheet]] is available for those who desire a terse treatment of the material. Please visit [[C-Based_Toolchain_Hardening_Cheat_Sheet|C-Based Toolchain Hardening Cheat Sheet]] for the abbreviated version.&lt;br /&gt;
&lt;br /&gt;
== Wisdom ==&lt;br /&gt;
&lt;br /&gt;
Code '''must''' be correct. It '''should''' be secure. It '''can''' be efficient.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Jon_Bentley Dr. Jon Bentley]: ''&amp;quot;If it doesn't have to be correct, I can make it as fast as you'd like it to be&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Gary_McGraw Dr. Gary McGraw]: ''&amp;quot;Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Configuration is the first opportunity to set your project up for success. Not only do you have to configure your project to meet reliability and security goals, you must also configure integrated libraries properly. You typically have three choices. First, you can use auto-configuration utilities if on Linux or Unix. Second, you can write a makefile by hand. This is predominant on Linux, Mac OS X, and Unix, but it applies to Windows as well. Finally, you can use an integrated development environment or IDE.&lt;br /&gt;
&lt;br /&gt;
=== Build Configurations ===&lt;br /&gt;
&lt;br /&gt;
At this stage in the process, you should concentrate on configuring for two builds: Debug and Release. Debug will be used for development and include full instrumentation. Release will be configured for production. The difference between the two settings is usually ''optimization level'' and ''debug level''. A third build configuration is Test, and its usually a special case of Release.&lt;br /&gt;
&lt;br /&gt;
For debug and release builds, the settings are typically diametrically opposed. Debug configurations have no optimizations and full debug information; while Release builds have optimizations and minimal to moderate debug information. In addition, debug code has full assertions and additional library integration, such as mudflaps and malloc guards such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Test configuration is often a Release configuration that makes everything public for testing and builds a test harness. For example, all member functions should be made public (C++ classes), and all interfaces (library or shared object) should be made available for testing. Many object-oriented purists oppose testing private interfaces, but this is not about object orientation. This (''q.v.'') is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
[http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8] introduced an optimization level of &amp;lt;tt&amp;gt;-Og&amp;lt;/tt&amp;gt;. Note that it is only an optimization level, and it still requires a customary debug level via &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Debug Builds ====&lt;br /&gt;
&lt;br /&gt;
Debug builds are where developers spend most of their time when vetting problems, so this build should concentrate forces and tools, or be a 'force multiplier'. Though many do not realize it, debug code is more highly valued than release code because it's adorned with additional instrumentation. The debug instrumentation will cause a program to become nearly &amp;quot;self-debugging&amp;quot;, and help you catch mistakes such as bad parameters, failed API calls, and memory problems.&lt;br /&gt;
&lt;br /&gt;
Self-debugging code reduces your time spent troubleshooting and debugging. Reducing time under the debugger means you have more time for development and feature requests. If code is checked in without debug instrumentation, it should be fixed by adding instrumentation or rejected.&lt;br /&gt;
&lt;br /&gt;
For GCC, optimizations and debug symbolication are controlled through two switches: &amp;lt;tt&amp;gt;-O&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;. You should use the following as part of your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a minimal debug session:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-O0 -g3 -ggdb&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; turns off optimizations and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available. You may need to use &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; so some analysis is performed - otherwise, your debug build will be missing a number of warnings not present in release builds. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debugging information is available for the debug session, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; includes extensions to help with a debug session under GDB. (For completeness, Jan Krachtovil stated in a private email that &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; currently has no effect.)&lt;br /&gt;
&lt;br /&gt;
Debug builds should also define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is not defined. &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; removes &amp;quot;program diagnostics&amp;quot;, and has undesirable behaviors and side effects, which are discussed below in more detail. The defines should be present for all code, and not just the program. You use them for all code (your program and included libraries) because you need to know how the libraries fail too (remember, you take the bug report - not the third party library).&lt;br /&gt;
&lt;br /&gt;
In addition, you should also use other relevant flags, such as &amp;lt;tt&amp;gt;-fno-omit-frame-pointer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. Finally, you should also ensure your project includes additional diagnostic libraries, such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;. The additional flags and libraries are discussed below in more detail.&lt;br /&gt;
&lt;br /&gt;
==== Release Builds ====&lt;br /&gt;
&lt;br /&gt;
Release builds are what your customer receives. They are meant to be run on production hardware and servers, and they should be reliable, secure, and efficient. A stable release build is the product of the hard work and effort during development.&lt;br /&gt;
&lt;br /&gt;
For release builds, you should use the following as part of &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for release builds:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-On -g2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O''n''&amp;lt;/tt&amp;gt; sets optimizations for speed or size (for example, &amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;), and &amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt; ensures debugging information is created.&lt;br /&gt;
&lt;br /&gt;
Debugging information should be stripped from the shipped binary and retained, in case a crash report from the field needs to be symbolicated. While not desired, debug information can be left in place without a performance penalty. See ''[http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]'' for details.&lt;br /&gt;
&lt;br /&gt;
Release builds should also define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is not defined. The time for debugging and diagnostics is over, so users get production code with full optimizations, no &amp;quot;program diagnostics&amp;quot;, and other efficiencies. If you can't optimize or you are performing excessive logging, it usually means the program is not ready for production.&lt;br /&gt;
&lt;br /&gt;
If you have been relying on an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and then a subsequent &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;, you have been abusing &amp;quot;program diagnostics&amp;quot; since it has no place in production code. If you want a memory dump, create one so users don't have to worry about secrets and other sensitive information being written to the filesystem and emailed in plain text.&lt;br /&gt;
&lt;br /&gt;
For Windows, you would use &amp;lt;tt&amp;gt;/Od&amp;lt;/tt&amp;gt; for debug builds; and &amp;lt;tt&amp;gt;/Ox&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/O2&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/Os&amp;lt;/tt&amp;gt; for release builds. See Microsoft's [http://msdn.microsoft.com/en-us/library/k1ack8f1.aspx /O Options (Optimize Code)] for details.&lt;br /&gt;
&lt;br /&gt;
==== Test Builds ====&lt;br /&gt;
&lt;br /&gt;
Test builds are used to provide heuristic validation by way of positive and negative test suites. Under a test configuration, all interfaces are tested to ensure they perform to specification and satisfaction. &amp;quot;Satisfaction&amp;quot; is subjective, but it should include no crashing and no trashing of your memory arena, even when faced with negative tests.&lt;br /&gt;
&lt;br /&gt;
Because all interfaces are tested (and not just the public ones), your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; should include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should also change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Nearly everyone gets positive tests right, so no more needs to be said. The negative self tests are much more interesting, and you should concentrate on trying to make your program fail so you can verify that it fails gracefully. Remember, a bad guy is not going to be courteous when he attempts to cause your program to fail. And it's your project that takes egg on the face by way of a bug report or a guest appearance on [http://www.grok.org.uk/full-disclosure/ Full Disclosure] or [http://www.securityfocus.com/archive Bugtraq] - not ''&amp;lt;nowiki&amp;gt;&amp;lt;some library&amp;gt;&amp;lt;/nowiki&amp;gt;'' you included.&lt;br /&gt;
&lt;br /&gt;
=== Auto Tools ===&lt;br /&gt;
&lt;br /&gt;
Auto configuration tools are popular on many Linux and Unix based systems, and the tools include ''Autoconf'', ''Automake'', ''config'', and ''Configure''. The tools work together to produce project files from scripts and template files. After the process completes, your project should be set up and ready to be built with &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When using auto configuration tools, there are a few files of interest worth mentioning. The files are part of the auto tools chain and include &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; and the various &amp;lt;tt&amp;gt;*.in&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;*.ac&amp;lt;/tt&amp;gt; (autoconf), and &amp;lt;tt&amp;gt;*.am&amp;lt;/tt&amp;gt; (automake) files. At times, you will have to open them, or the resulting makefiles, to tune the &amp;quot;stock&amp;quot; configuration.&lt;br /&gt;
&lt;br /&gt;
There are three downsides to the command line configuration tools in the toolchain: (1) they often ignore user requests, (2) they cannot create configurations, and (3) security is often not a goal.&lt;br /&gt;
&lt;br /&gt;
To demonstrate the first issue, configure your project with the following: &amp;lt;tt&amp;gt;configure CFLAGS=&amp;quot;-Wall -fPIE&amp;quot; CXXFLAGS=&amp;quot;-Wall -fPIE&amp;quot; LDFLAGS=&amp;quot;-pie&amp;quot;&amp;lt;/tt&amp;gt;. You will probably find the auto tools ignored your request, which means a command like the one below will not produce the expected results. As a workaround, you will have to open an &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; script, &amp;lt;tt&amp;gt;Makefile.in&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Makefile.am&amp;lt;/tt&amp;gt; and fix the configuration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ configure CFLAGS=&amp;quot;-Wall -Wextra -Wconversion -fPIE -Wno-unused-parameter&lt;br /&gt;
    -Wformat=2 -Wformat-security -fstack-protector-all -Wstrict-overflow&amp;quot;&lt;br /&gt;
    LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap -z,relro -z,now&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the second point, you will probably be disappointed to learn [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html Automake does not support the concept of configurations]. It's not entirely Autoconf's or Automake's fault - ''Make'' and its inability to detect changes is the underlying problem. Specifically, ''Make'' only [http://pubs.opengroup.org/onlinepubs/009695399/utilities/make.html checks modification times of prerequisites and targets], and does not check things like &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;. The net effect is that you will not receive the expected results when you issue &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make test&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, you will probably be disappointed to learn tools such as Autoconf and Automake miss many security related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings that serve the lowest common denominator.&lt;br /&gt;
&lt;br /&gt;
A recent discussion on the Automake mailing list illuminates the issue: ''[https://lists.gnu.org/archive/html/autoconf/2012-12/msg00038.html Enabling compiler warning flags]''. Attempts to improve default configurations were met with resistance and no action was taken. The resistance is often of the form, &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some useful warning&amp;gt;&amp;lt;/nowiki&amp;gt; also produces false positives&amp;quot; or &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some obscure platform&amp;gt;&amp;lt;/nowiki&amp;gt; does not support &amp;lt;nowiki&amp;gt;&amp;lt;established security feature&amp;gt;&amp;lt;/nowiki&amp;gt;&amp;quot;. It's noteworthy that David Wheeler, the author of ''[http://www.dwheeler.com/secure-programs/ Secure Programming for Linux and Unix HOWTO]'', was one of the folks trying to improve the posture.&lt;br /&gt;
&lt;br /&gt;
=== Makefiles ===&lt;br /&gt;
&lt;br /&gt;
Make is one of the earliest build systems, dating back to the 1970s. It's available on Linux, Mac OS X and Unix, so you will frequently encounter projects using it. Unfortunately, Make has a number of shortcomings (''[http://aegis.sourceforge.net/auug97.pdf Recursive Make Considered Harmful]'' and ''[http://www.conifersystems.com/whitepapers/gnu-make/ What’s Wrong With GNU make?]''), and can cause some discomfort. Despite the issues with Make, ESAPI C++ uses Make primarily for three reasons: first, it's omnipresent; second, it's easier to manage than the Auto Tools family; and third, &amp;lt;tt&amp;gt;libtool&amp;lt;/tt&amp;gt; was out of the question.&lt;br /&gt;
&lt;br /&gt;
Consider what happens when you type &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt;, and then later type &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;. Each build requires different &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; due to optimizations and the level of debug support. In your makefile, you would extract the relevant target and set &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; similar to the example below (taken from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/Makefile ESAPI C++ Makefile]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Makefile&lt;br /&gt;
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)&lt;br /&gt;
ifneq ($(DEBUG_GOALS),)&lt;br /&gt;
  WANT_DEBUG := 1&lt;br /&gt;
  WANT_TEST := 0&lt;br /&gt;
  WANT_RELEASE := 0&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_DEBUG),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
  ESAPI_CXXFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_RELEASE),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
  ESAPI_CXXFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_TEST),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
  ESAPI_CXXFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure &lt;br /&gt;
# user options follow our options, which should give user option's a preference.&lt;br /&gt;
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(ESAPI_LDFLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make will first build the program in a debug configuration for a session under the debugger using a rule similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;%.o : %.cpp&lt;br /&gt;
        $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $&amp;lt; -o $@&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you want the release build, Make will do nothing because it considers everything up to date despite the fact &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; have changed. Hence, your program will actually be in a debug configuration and risk a &amp;lt;tt&amp;gt;SIGABRT&amp;lt;/tt&amp;gt; at runtime because debug instrumentation is present (recall &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; calls &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; when &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined). In essence, you have DoS'd yourself due to &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, many projects do not honor the user's command line. ESAPI C++ does its best to ensure a user's flags are honored via &amp;lt;tt&amp;gt;override&amp;lt;/tt&amp;gt; as shown above, but other projects do not. For example, consider a project that should be built with Position Independent Executable (PIE or ASLR) enabled and data execution prevention (DEP) enabled. Dismissing user settings combined with insecure out of the box settings (and not picking them up during auto-setup or auto-configure) means a program built with the following will likely have neither defense:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ make CFLAGS=&amp;quot;-fPIE&amp;quot; CXXFLAGS=&amp;quot;-fPIE&amp;quot; LDFLAGS=&amp;quot;-pie -z,noexecstack, -z,noexecheap&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Defenses such as ASLR and DEP are especially important on Linux because [http://linux.die.net/man/5/elf Data Execution - not Prevention - is the norm].&lt;br /&gt;
&lt;br /&gt;
=== Integration ===&lt;br /&gt;
&lt;br /&gt;
Project level integration presents opportunities to harden your program or library with domain specific knowledge. For example, if the platform supports Position Independent Executables (PIE or ASLR) and data execution prevention (DEP), then you should integrate with it. The consequences of not doing so could result in exploitation. As a case in point, see KingCope's 0-days for MySQL in December, 2012 (CVE-2012-5579 and CVE-2012-5612, among others). Integration with platform security would have neutered a number of the 0-days.&lt;br /&gt;
&lt;br /&gt;
You also have the opportunity to include helpful libraries that are not needed for business logic support. For example, if you are working on a platform with [http://dmalloc.com DMalloc] or [http://code.google.com/p/address-sanitizer/ Address Sanitizer], you should probably use it in your debug builds. For Ubuntu, DMalloc is available from the package manager and can be installed with &amp;lt;tt&amp;gt;sudo apt-get install libdmalloc5&amp;lt;/tt&amp;gt;. For Apple platforms, it's available as a scheme option (see [[#Clang/Xcode|Clang/Xcode]] below). Address Sanitizer is available in [http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8 and above] for many platforms.&lt;br /&gt;
&lt;br /&gt;
In addition, project level integration is an opportunity to harden third party libraries you chose to include. Because you chose to include them, you and your users are responsible for them. If you or your users endure a SP800-53 audit, third party libraries will be in scope because the supply chain is included (specifically, item SA-12, Supply Chain Protection). The audits are not limited to those in the US Federal arena - financial institutions perform reviews too. A perfect example of violating this guidance is [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525], which was due to [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library].&lt;br /&gt;
&lt;br /&gt;
Another example is including OpenSSL. You know (1) [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 is insecure], (2) [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 is insecure], and (3) [http://arstechnica.com/security/2012/09/crime-hijacks-https-sessions/ compression is insecure] (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ Configure darwin64-x86_64-cc -no-hw -no-engines -no-comp -no-shared -no-dso -no-sslv2 -no-sslv3 --openssldir=…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note Well'': you might want engines, especially on Ivy Bridge microarchitectures (3rd generation Intel Core i5 and i7 processors). To have OpenSSL use the processor's random number generator (via the &amp;lt;tt&amp;gt;rdrand&amp;lt;/tt&amp;gt; instruction), you will need to call OpenSSL's &amp;lt;tt&amp;gt;ENGINE_load_rdrand()&amp;lt;/tt&amp;gt; function and then &amp;lt;tt&amp;gt;ENGINE_set_default&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;ENGINE_METHOD_RAND&amp;lt;/tt&amp;gt;. See [http://wiki.opensslfoundation.com/index.php/Random_Numbers OpenSSL's Random Numbers] for details.&lt;br /&gt;
&lt;br /&gt;
If you configure without the switches, then you will likely have vulnerable code/libraries and risk failing an audit. If the program is a remote server, then the following command will reveal if compression is active on the channel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo &amp;quot;GET / HTTP/1.0&amp;quot; | openssl s_client -connect &amp;lt;nowiki&amp;gt;example.com:443&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;openssl s_client&amp;lt;/tt&amp;gt; will show whether compression is enabled in the client. In fact, any symbol guarded by the &amp;lt;tt&amp;gt;OPENSSL_NO_COMP&amp;lt;/tt&amp;gt; preprocessor macro will bear witness, since &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; is translated into a &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; define.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ nm /usr/local/ssl/iphoneos/lib/libcrypto.a 2&amp;gt;/dev/null | egrep -i &amp;quot;(COMP_CTX_new|COMP_CTX_free)&amp;quot;&lt;br /&gt;
0000000000000110 T COMP_CTX_free&lt;br /&gt;
0000000000000000 T COMP_CTX_new&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even more egregious is the answer given to auditors who specifically ask about configurations and protocols: &amp;quot;we don't use weak/wounded/broken ciphers&amp;quot; or &amp;quot;we follow best practices.&amp;quot; The use of compression tells the auditor that you are using a wounded protocol in an insecure configuration and do not follow best practices. That will likely set off alarm bells, and ensure the auditor dives deeper on other items.&lt;br /&gt;
&lt;br /&gt;
== Preprocessor ==&lt;br /&gt;
&lt;br /&gt;
The preprocessor is crucial to setting up a project for success. The C committee provided one macro - &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; - and the macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing built-in facilities. This section will help you set up your projects to integrate well with other projects and ensure reliability and security.&lt;br /&gt;
&lt;br /&gt;
There are three topics to discuss when hardening the preprocessor. The first is well defined configurations which produce well defined behaviors, the second is useful behavior from assert, and the third is proper use of macros when integrating vendor code and third party libraries.&lt;br /&gt;
&lt;br /&gt;
=== Configurations ===&lt;br /&gt;
&lt;br /&gt;
To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code on live servers, and its behavior is requested via the C/C++ &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; macro. It's also the only macro observed by the C and C++ Committees and Posix. Diametrically opposed to Release is Debug. While there is a compelling argument for &amp;lt;tt&amp;gt;!defined(NDEBUG)&amp;lt;/tt&amp;gt;, you should have an explicit macro for the configuration, and that macro should be &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;. This is because vendors and outside libraries use a &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (or similar) macro for their configurations. For example, Carnegie Mellon's Mach kernel uses &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, Microsoft's CRT uses [http://msdn.microsoft.com/en-us/library/ww5t02fa%28v=vs.71%29.aspx &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt;], and Wind River Workbench uses &amp;lt;tt&amp;gt;DEBUG_MODE&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (Release) and &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (Debug), you have two additional cross products: both are defined or neither are defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/esapi/EsapiCommon.h ESAPI C++ EsapiCommon.h], which is the configuration file used by all source files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Only one or the other, but not both&lt;br /&gt;
#if (defined(DEBUG) || defined(_DEBUG)) &amp;amp;&amp;amp; (defined(NDEBUG) || defined(_NDEBUG))&lt;br /&gt;
# error Both DEBUG and NDEBUG are defined.&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
// The only time we switch to debug is when asked. NDEBUG or {nothing} results&lt;br /&gt;
// in release build (fewer surprises at runtime).&lt;br /&gt;
#if defined(DEBUG) || defined(_DEBUG)&lt;br /&gt;
# define ESAPI_BUILD_DEBUG 1&lt;br /&gt;
#else&lt;br /&gt;
# define ESAPI_BUILD_RELEASE 1&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is in effect, your code should receive full debug instrumentation, including the full force of assertions.&lt;br /&gt;
&lt;br /&gt;
=== ASSERT ===&lt;br /&gt;
&lt;br /&gt;
Asserts will help you create self-debugging code by finding the point of first failure quickly and easily. Asserts should be used throughout your program, including parameter validation, return value checking and program state. The &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; will silently guard your code through its lifetime. It will always be there, even when you are not debugging a specific component of a module. If you have thorough code coverage, you will spend less time debugging and more time developing because programs will debug themselves.&lt;br /&gt;
&lt;br /&gt;
To use asserts effectively, you should assert everything. That includes parameters upon entering a function, return values from function calls, and any program state. Everywhere you place an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation or checking, you should have an assert. Everywhere you have an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; for validation or checking, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand.&lt;br /&gt;
&lt;br /&gt;
If you are still using &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;'s, then you have an opportunity for improvement. In the time it takes for you to write a &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; statement, you could have written an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;. Unlike the &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; which are often removed when no longer needed, the &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; stays active forever. Remember, this is all about finding the point of first failure quickly so you can spend your time doing other things.&lt;br /&gt;
&lt;br /&gt;
There is one problem with using asserts - [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html Posix states &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; should call &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;] if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined. When debugging, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will never be defined since you want the &amp;quot;program diagnostics&amp;quot; (quote from the Posix description). The behavior makes &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and its accompanying &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; completely useless for development. The result of &amp;quot;program diagnostics&amp;quot; calling &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; due to standard C/C++ behavior is disuse - developers simply don't use them. It's incredibly bad for the development community because self-debugging programs can help eradicate so many stability problems.&lt;br /&gt;
&lt;br /&gt;
Since self-debugging programs are so powerful, you will have to supply your own assert and signal handler with improved behavior. Your assert will exchange auto-aborting behavior for auto-debugging behavior. The auto-debugging facility will ensure the debugger snaps when a problem is detected, so you will find the point of first failure quickly and easily.&lt;br /&gt;
&lt;br /&gt;
ESAPI C++ supplies its own assert with the behavior described above. In the code below, &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt; raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; when in effect or it evaluates to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; in other cases.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// A debug assert which should be sprinkled liberally. This assert fires and then continues rather&lt;br /&gt;
// than calling abort(). Useful when examining negative test cases from the command line.&lt;br /&gt;
#if (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_STARNIX))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp) {                                    \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; std::endl;                                           \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) {                               \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; &amp;quot;: \&amp;quot;&amp;quot; &amp;lt;&amp;lt; (msg) &amp;lt;&amp;lt; &amp;quot;\&amp;quot;&amp;quot; &amp;lt;&amp;lt; std::endl;                \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#elif (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_WINDOWS))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      assert(exp)&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) assert(exp)&lt;br /&gt;
#else&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      ((void)(exp))&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) ((void)(exp))&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#if !defined(ASSERT)&lt;br /&gt;
#  define ASSERT(exp)     ESAPI_ASSERT1(exp)&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At program startup, a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; handler will be installed if one is not provided by another component:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct DebugTrapHandler&lt;br /&gt;
{&lt;br /&gt;
  DebugTrapHandler()&lt;br /&gt;
  {&lt;br /&gt;
    struct sigaction new_handler, old_handler;&lt;br /&gt;
&lt;br /&gt;
    do&lt;br /&gt;
      {&lt;br /&gt;
        int ret = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, NULL, &amp;amp;old_handler);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        // Don't step on another's handler&lt;br /&gt;
        if (old_handler.sa_handler != NULL) break;&lt;br /&gt;
&lt;br /&gt;
        new_handler.sa_handler = &amp;amp;DebugTrapHandler::NullHandler;&lt;br /&gt;
        new_handler.sa_flags = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigemptyset (&amp;amp;new_handler.sa_mask);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, &amp;amp;new_handler, NULL);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
      } while(0);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  static void NullHandler(int /*unused*/) { }&lt;br /&gt;
&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
// We specify a relatively low priority, to make sure we run before other CTORs&lt;br /&gt;
// http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes&lt;br /&gt;
static const DebugTrapHandler g_dummyHandler __attribute__ ((init_priority (110)));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a Windows platform, you would call &amp;lt;tt&amp;gt;_set_invalid_parameter_handler&amp;lt;/tt&amp;gt; (and possibly &amp;lt;tt&amp;gt;set_unexpected&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;set_terminate&amp;lt;/tt&amp;gt;) to install a new handler.&lt;br /&gt;
&lt;br /&gt;
Live hosts running production code should always define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (i.e., release configuration), which means they do not assert or auto-abort. Auto-abortion is not acceptable behavior, and anyone who asks for the behavior is completely abusing the functionality of &amp;quot;program diagnostics&amp;quot;. If a program wants a core dump, then it should create the dump rather than crashing.&lt;br /&gt;
&lt;br /&gt;
For more reading on asserting effectively, please see one of John Robbins' books, such as ''[http://www.amazon.com/dp/0735608865 Debugging Applications]''. John is a legendary bug slayer in Windows circles, and he will show you how to do nearly everything, from debugging a simple program to bug slaying in multithreaded programs.&lt;br /&gt;
&lt;br /&gt;
=== Additional Macros ===&lt;br /&gt;
&lt;br /&gt;
Additional macros include any macros needed to integrate properly and securely. It includes integrating the program with the platform (for example MFC or Cocoa/CocoaTouch) and libraries (for example, Crypto++ or OpenSSL). It can be a challenge because you have to have proficiency with your platform and all included libraries and frameworks. The list below illustrates the level of detail you will need when integrating.&lt;br /&gt;
&lt;br /&gt;
Boost is missing from the list because it appears to lack recommendations for additional debug diagnostics and a hardening guide. See ''[http://stackoverflow.com/questions/14927033/boost-hardening-guide-preprocessor-macros BOOST Hardening Guide (Preprocessor Macros)]'' for details. In addition, Tim Day points to ''[http://boost.2283326.n4.nabble.com/boost-build-should-we-not-define-SECURE-SCL-0-by-default-for-all-msvc-toolsets-td2654710.html &amp;lt;nowiki&amp;gt;[boost.build] should we not define _SECURE_SCL=0 by default for all msvc toolsets&amp;lt;/nowiki&amp;gt;]'' for a recent discussion related to hardening (or the lack thereof).&lt;br /&gt;
&lt;br /&gt;
In addition to knowing what you should define, you should know that defining some macros, and undefining others, should trigger a security related defect. For example, &amp;lt;tt&amp;gt;-U_FORTIFY_SOURCE&amp;lt;/tt&amp;gt; on Linux; and &amp;lt;tt&amp;gt;_CRT_SECURE_NO_WARNINGS=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_SCL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_ATL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;STRSAFE_NO_DEPRECATE&amp;lt;/tt&amp;gt; on Windows.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Platform/Library!!Debug!!Release&lt;br /&gt;
|+ Table 1: Additional Platform/Library Macros&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;175pt&amp;quot;|All&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|DEBUG=1&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|NDEBUG=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Linux&lt;br /&gt;
|_GLIBCXX_DEBUG=1&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
|_FORTIFY_SOURCE=2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Android&lt;br /&gt;
|NDK_DEBUG=1&lt;br /&gt;
|_FORTIFY_SOURCE=1 (4.2 and above)&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define LOGI(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt logging)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Cocoa/CocoaTouch&lt;br /&gt;
|&lt;br /&gt;
|NS_BLOCK_ASSERTIONS=1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define NSLog(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt ASL)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SafeInt&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft&lt;br /&gt;
|_DEBUG=1, STRICT,&amp;lt;br&amp;gt;&lt;br /&gt;
_SECURE_SCL=1, _HAS_ITERATOR_DEBUGGING=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
|STRICT&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft ATL &amp;amp; MFC&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|STLPort&lt;br /&gt;
|_STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1&amp;lt;br&amp;gt;&lt;br /&gt;
_STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite&lt;br /&gt;
|SQLITE_DEBUG, SQLITE_MEMDEBUG&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLCipher&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Be careful with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; when using pre-compiled libraries such as Boost from a distribution. There are ABI incompatibilities, and the result will likely be a crash. You will have to compile Boost with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; or omit &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; SQLite secure deletion zeroizes memory on destruction. Define it as required, and always define it for US Federal deployments, since zeroization is required for FIPS 140-2, Level 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt; ''N'' is 0644 by default, which grants read access to everyone.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt; Force temporary tables into memory (no unencrypted data to disk).&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
##########################################&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Compiler and Linker ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities to help find mistakes early in the development process. The built-in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and catch a number of mistakes such as using an uninitialized variable or comparing a negative signed int and a positive unsigned int.&lt;br /&gt;
&lt;br /&gt;
As a concrete example, (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared because a side effect is &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! GCC and Visual Studio will not currently catch, for example, SQL injections and other tainted data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.&lt;br /&gt;
&lt;br /&gt;
Some in the development community resist static analysis or dispute its results. For example, when static analysis warned the Linux kernel's &amp;lt;tt&amp;gt;sys_prctl&amp;lt;/tt&amp;gt; was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see ''[http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-11/msg08325.html &amp;lt;nowiki&amp;gt;[PATCH] Don't compare unsigned variable for &amp;lt;0 in sys_prctl()&amp;lt;/nowiki&amp;gt;]'' from the Linux Kernel mailing list.&lt;br /&gt;
&lt;br /&gt;
The following sections will detail steps for three platforms. First is a typical GNU Linux based distribution offering GCC and Binutils, second is Clang and Xcode, and third is modern Windows platforms.&lt;br /&gt;
&lt;br /&gt;
=== Distribution Hardening ===&lt;br /&gt;
&lt;br /&gt;
Before discussing GCC and Binutils, this is a good time to point out that some of the defenses discussed below are already present in a distribution. Unfortunately, it's design by committee, so what is present is usually only a mild variation of what is available (this way, everyone is mildly offended). For those who are purely worried about performance, you might be surprised to learn you have already taken the small performance hit without even knowing it.&lt;br /&gt;
&lt;br /&gt;
Linux and BSD distributions often apply some hardening without intervention via ''[http://gcc.gnu.org/onlinedocs/gcc/Spec-Files.html GCC Spec Files]''. If you are using Debian, Ubuntu, Linux Mint and family, see ''[http://wiki.debian.org/Hardening Debian Hardening]''. For Red Hat and Fedora systems, see ''[http://lists.fedoraproject.org/pipermail/devel-announce/2011-August/000821.html New hardened build support (coming) in F16]''. Gentoo users should visit ''[http://www.gentoo.org/proj/en/hardened/ Hardened Gentoo]''.&lt;br /&gt;
&lt;br /&gt;
You can see the settings used by a distribution via &amp;lt;tt&amp;gt;gcc -dumpspecs&amp;lt;/tt&amp;gt;. In the Linux Mint 12 output below, -fstack-protector (but not -fstack-protector-all) is used by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -dumpspecs&lt;br /&gt;
…&lt;br /&gt;
*link_ssp: %{fstack-protector:}&lt;br /&gt;
&lt;br /&gt;
*ssp_default: %{!fno-stack-protector:%{!fstack-protector-all: %{!ffreestanding:%{!nostdlib:-fstack-protector}}}}&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The “SSP” above stands for Stack Smashing Protector. SSP is a reimplementation of Hiroaki Etoh's work on IBM's ProPolice stack protector. See Hiroaki Etoh's patch ''[http://gcc.gnu.org/ml/gcc-patches/2001-06/msg01753.html gcc stack-smashing protector]'' and IBM's ''[http://www.research.ibm.com/trl/projects/security/ssp/ GCC extension for protecting applications from stack-smashing attacks]'' for details.&lt;br /&gt;
&lt;br /&gt;
=== GCC/Binutils ===&lt;br /&gt;
&lt;br /&gt;
GCC (the compiler collection) and Binutils (the assemblers, linkers, and other tools) are separate projects that work together to produce a final executable. Both the compiler and linker offer options to help you write safer and more secure code. The linker will produce code which takes advantage of platform security features offered by the kernel and PaX, such as no-exec stacks and heaps (NX) and Position Independent Executable (PIE).&lt;br /&gt;
&lt;br /&gt;
The table below offers a set of compiler options to build your program. Static analysis warnings help catch mistakes early, while the linker options harden the executable at runtime. In the table below, “GCC” should be loosely taken as “non-ancient distributions.” While the GCC team considers 4.2 ancient, you will still encounter it on Apple and BSD platforms due to changes in GPL licensing around 2007. Refer to ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Option Summary]'', ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html Options to Request or Suppress Warnings]'' and ''[http://sourceware.org/binutils/docs-2.21/ld/Options.html Binutils (LD) Command Line Options]'' for usage details.&lt;br /&gt;
&lt;br /&gt;
Worthy of special mention are &amp;lt;tt&amp;gt;-fno-strict-overflow&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fwrapv&amp;lt;/tt&amp;gt;&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;. These flags ensure the compiler does not remove statements that result in overflow or wrap. If your program only runs correctly with these flags, it likely relies on signed integer overflow and violates the C/C++ rules. If that is the case, you should consider using [http://code.google.com/p/safe-iop/ safe-iop] for C or David LeBlanc's [http://safeint.codeplex.com SafeInt] in C++.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, some of those settings can be verified with the [http://www.trapkit.de/tools/checksec.html Checksec] tool written by Tobias Klein. The &amp;lt;tt&amp;gt;checksec.sh&amp;lt;/tt&amp;gt; script is designed to test standard Linux OS and PaX security features being used by an application. See the [http://www.trapkit.de/tools/checksec.html Trapkit] web page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 2: GCC C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wall -Wextra&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;75pt&amp;quot;|GCC&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Enables many warnings (despite their names, all and extra do not turn on all warnings).&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wconversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may alter a value (includes -Wsign-conversion).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-conversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may change the sign of an integer value, such as assigning a signed integer to an unsigned integer (&amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion!).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wcast-align&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for a pointer cast to a type which has a different size, causing an invalid alignment and subsequent bus error on ARM processors.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wformat=2 -Wformat-security&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Increases warnings related to possible security defects, including incorrect format specifiers.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-common&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Prevent global variables being simultaneously defined in different object files.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fstack-protector or -fstack-protector-all&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Stack Smashing Protector (SSP). Improves stack layout and adds a guard to detect stack based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-omit-frame-pointer&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Improves backtraces for post-mortem analysis&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wmissing-prototypes and -Wmissing-declarations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a global function is defined without a prototype or declaration.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-prototypes&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a function is declared or defined without specifying the argument types.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-overflow&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.2&lt;br /&gt;
|Warn about optimizations taken due to &amp;lt;nowiki&amp;gt;[undefined]&amp;lt;/nowiki&amp;gt; signed integer overflow assumptions.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wtrampolines&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.3&lt;br /&gt;
|Warn about trampolines generated for pointers to nested functions. Trampolines require executable stacks.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=address&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/address-sanitizer/ AddressSanitizer], a fast memory error detector. Memory access instructions will be instrumented to help detect heap, stack, and global buffer overflows; as well as use-after-free bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=thread&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/data-race-test/wiki/ThreadSanitizer ThreadSanitizer], a fast data race detector. Memory access instructions will be instrumented to detect data race bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,nodlopen and -Wl,-z,nodump&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.10&lt;br /&gt;
|Reduces the ability of an attacker to load, manipulate, and dump shared objects.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,noexecstack and -Wl,-z,noexecheap&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.14&lt;br /&gt;
|Data Execution Prevention (DEP). ELF headers are marked with PT_GNU_STACK and PT_GNU_HEAP.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,relro&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Global Offset Table (GOT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,now&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Procedure Linkage Table (PLT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIC&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils&lt;br /&gt;
|Position Independent Code. Used for libraries and shared objects. Both -fPIC (compiler) and -shared (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIE&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.16&lt;br /&gt;
|Position Independent Executable (ASLR). Used for programs. Both -fPIE (compiler) and -pie (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Unlike Clang and -Weverything, GCC does not provide a switch to truly enable all warnings.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; -fstack-protector guards functions with high risk objects such as C strings, while -fstack-protector-all guards all objects.&lt;br /&gt;
&lt;br /&gt;
Additional C++ warnings which can be used include the following in Table 3. See ''[http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html GCC's Options Controlling C++ Dialect]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 3: GCC C++ Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Woverloaded-virtual&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn when a function declaration hides virtual functions from a base class. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wreorder&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when the order of member initializers given in the code does not match the order in which they must be executed.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-promo&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wnon-virtual-dtor&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when a class has virtual functions and an accessible non-virtual destructor.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Weffc++&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn about violations of the style guidelines in Scott Meyers' ''[http://www.aristeia.com/books.html Effective C++, Second Edition]''.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
And additional Objective C warnings which are often useful include the following. See ''[http://gcc.gnu.org/onlinedocs/gcc/Objective_002dC-and-Objective_002dC_002b_002b-Dialect-Options.html Options Controlling Objective-C and Objective-C++ Dialects]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 4: GCC Objective C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wstrict-selector-match&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type id or Class.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wundeclared-selector&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn if a &amp;lt;tt&amp;gt;@selector(…)&amp;lt;/tt&amp;gt; expression referring to an undeclared selector is found. &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The use of aggressive warnings will produce spurious noise. The noise is a tradeoff - you learn of potential problems at the cost of wading through some chaff. The following options help reduce spurious noise from the warning system:&lt;br /&gt;
&lt;br /&gt;
* -Wno-unused-parameter (GCC)&lt;br /&gt;
* -Wno-type-limits (GCC 4.3)&lt;br /&gt;
* -Wno-tautological-compare (Clang)&lt;br /&gt;
&lt;br /&gt;
Finally, a simple version-based Makefile example is shown below. This differs from the feature-based makefiles produced by Autotools (which test for a particular feature and then define a symbol or configure a template file). Not all platforms support all options and flags. To address the issue you can pursue one of two strategies. First, you can ship with a weakened posture by servicing the lowest common denominator; or you can ship with everything in force. In the latter case, those who don't have a feature available will edit the makefile to accommodate their installation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;CXX=g++&lt;br /&gt;
EGREP = egrep&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GCC_COMPILER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version')&lt;br /&gt;
GCC41_OR_LATER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version (4\.[1-9]|[5-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GNU_LD210_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[0-9]|2\.[2-9])')&lt;br /&gt;
GNU_LD214_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[4-9]|2\.[2-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC_COMPILER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wall -Wextra -Wconversion&lt;br /&gt;
  MY_CC_FLAGS += -Wformat=2 -Wformat-security&lt;br /&gt;
  MY_CC_FLAGS += -Wno-unused-parameter&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC41_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fstack-protector-all&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC42_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wstrict-overflow&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC43_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wtrampolines&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD210_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,nodlopen -z,nodump&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD214_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,noexecstack -z,noexecheap&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD215_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,relro -z,now&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD216_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fPIE&lt;br /&gt;
  MY_LD_FLAGS += -pie&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
# Use 'override' to honor the user's command line&lt;br /&gt;
override CFLAGS := $(MY_CC_FLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(MY_CC_FLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(MY_LD_FLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang/Xcode ===&lt;br /&gt;
&lt;br /&gt;
[http://clang.llvm.org Clang] and [http://llvm.org LLVM] have been aggressively developed since Apple lost its GPL compiler back in 2007 (due to Tivoization, which resulted in GPLv3). Since that time, a number of developers and Google have joined the effort. While Clang will consume most (if not all) GCC/Binutils flags and switches, the project supports a number of its own options, including a static analyzer. In addition, Clang is relatively easy to build with additional diagnostics, such as Dr. John Regehr and Peng Li's [http://embed.cs.utah.edu/ioc/ Integer Overflow Checker (IOC)].&lt;br /&gt;
&lt;br /&gt;
IOC is incredibly useful, and has found bugs in a number of projects, including the Linux kernel (&amp;lt;tt&amp;gt;include/linux/bitops.h&amp;lt;/tt&amp;gt;, still unfixed), SQLite, PHP, Firefox (many still unfixed), LLVM, and Python. Future versions of Clang (Clang 3.3 and above) will allow you to enable the checks out of the box with &amp;lt;tt&amp;gt;-fsanitize=integer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=shift&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Clang options can be found in the [http://clang.llvm.org/docs/UsersManual.html Clang Compiler User’s Manual]. Clang does include an option to turn on all warnings - &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt;. Use it with care, since it produces a lot of noise, but use it regularly because it also surfaces issues you missed. For example, add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; for production builds and make non-spurious issues a quality gate. Under Xcode, simply add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to compiler warnings, both static analysis and additional security checks can be performed. Clang's static analysis capabilities are documented at [http://clang-analyzer.llvm.org Clang Static Analyzer]. Figure 1 below shows some of the security checks utilized by Xcode.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-11.png|thumb|450px|Figure 1: Clang/LLVM and Xcode options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a convenient Integrated Development Environment (IDE) for managing solutions and their settings. The section called “Visual Studio Options” discusses options which should be used with Visual Studio, and the section called “Project Properties” demonstrates incorporating those options into a solution's project.&lt;br /&gt;
&lt;br /&gt;
The table below lists the compiler and linker switches which should be used under Visual Studio. Refer to Howard and LeBlanc's Writing Secure Code (Microsoft Press) for a detailed discussion; or ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]'' in Security Briefs by Michael Howard. In the table below, “Visual Studio” refers to nearly all versions of the development environment, including Visual Studio 5.0 and 6.0.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, those settings can be verified with BinScope. BinScope is a verification tool from Microsoft that analyzes binaries to ensure they have been built in compliance with Microsoft's Security Development Lifecycle (SDL) requirements and recommendations. See the ''[https://www.microsoft.com/download/en/details.aspx?id=11910 BinScope Binary Analyzer]'' download page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 5: Visual Studio Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;150pt&amp;quot;|&amp;lt;nowiki&amp;gt;/W4&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;100pt&amp;quot;|Visual Studio&lt;br /&gt;
|width=&amp;quot;350pt&amp;quot;|Warning level 4, which includes most warnings.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/Wall&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Enable all warnings, including those off by default.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/GS&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Adds a security cookie (guard or canary) on the stack before the return address to detect stack based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/SafeSEH&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Safe structured exception handling to remediate SEH overwrites.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/analyze&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Enterprise code analysis (freely available with Windows SDK for Windows Server 2008 and .NET Framework 3.5).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/NXCOMPAT&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Data Execution Prevention (DEP).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/dynamicbase&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Address Space Layout Randomization (ASLR).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;strict_gs_check&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Aggressively applies stack protections to a source file to help detect some categories of stack based buffer overruns.&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;See Jon Sturgeon's discussion of the switch at ''[https://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx Off By Default Compiler Warnings in Visual C++]''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;When using /GS, there are a number of circumstances which affect the inclusion of a security cookie. For example, the guard is not used if there is no buffer in the stack frame, optimizations are disabled, or the function is declared naked or contains inline assembly.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&amp;lt;tt&amp;gt;#pragma strict_gs_check(on)&amp;lt;/tt&amp;gt; should be used sparingly, but is recommended in high risk situations, such as when a source file parses input from the internet.&lt;br /&gt;
&lt;br /&gt;
=== Warn Suppression ===&lt;br /&gt;
&lt;br /&gt;
From the tables above, a lot of warnings have been enabled to help detect possible programming mistakes. The potential mistakes are detected by the compiler, which carries around a lot of contextual information during its code analysis phase. At times, you will receive spurious warnings because the compiler is not ''that'' smart. It's understandable and even a good thing (how would you like to be out of a job because a program writes its own programs?). At times you will have to learn how to work with the compiler's warning system to suppress warnings. Notice what was not said: turn off the warnings.&lt;br /&gt;
&lt;br /&gt;
Suppressing warnings placates the compiler for spurious noise so you can get to the issues that matter (you are separating the wheat from the chaff). This section will offer some hints and point out some potential minefields. First is an unused parameter (for example, &amp;lt;tt&amp;gt;argc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;argv&amp;lt;/tt&amp;gt;). Suppressing unused parameter warnings is especially helpful for C++ and interface programming, where parameters are often unused. For this warning, simply define an &amp;quot;UNUSED&amp;quot; macro and wrap the parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#define UNUSED_PARAMETER(x) ((void)x)&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    UNUSED_PARAMETER(argc);&lt;br /&gt;
    UNUSED_PARAMETER(argv);&lt;br /&gt;
    …&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A potential minefield lies near &amp;quot;comparing unsigned and signed&amp;quot; values, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will catch it for you. This is because C/C++ promotion rules state the signed value will be promoted to an unsigned value and then compared. That means &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! To fix this, you cannot blindly cast - you must first range test the value:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int x = GetX();&lt;br /&gt;
unsigned int y = GetY();&lt;br /&gt;
&lt;br /&gt;
ASSERT(x &amp;gt;= 0);&lt;br /&gt;
if(!(x &amp;gt;= 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? X is negative.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
if(static_cast&amp;lt;unsigned int&amp;gt;(x) &amp;gt; y)&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is greater than y&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
else&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is not greater than y&amp;quot; &amp;lt;&amp;lt; endl;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;x&amp;lt;/tt&amp;gt;. Just run the program and wait for it to tell you there is a problem. If there is a problem, the program will snap the debugger (and, more importantly, not call a useless &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; as specified by POSIX). It beats the snot out of &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; statements that are removed when no longer needed or that pollute outputs.&lt;br /&gt;
&lt;br /&gt;
Another conversion problem you will encounter is conversion between types, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will also catch it for you. The following will always have an opportunity to fail, and should light up like a Christmas tree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct sockaddr_in addr;&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
addr.sin_port = htons(atoi(argv[2]));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following would probably serve you much better. Notice &amp;lt;tt&amp;gt;atoi&amp;lt;/tt&amp;gt; and friends are not used because they can silently fail. In addition, the code is instrumented so you don't need to waste a lot of time debugging potential problems:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;const char* cstr = GetPortString();&lt;br /&gt;
&lt;br /&gt;
ASSERT(cstr != NULL);&lt;br /&gt;
if(!(cstr != NULL))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port string is not valid.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
istringstream iss(cstr);&lt;br /&gt;
long long t = 0;&lt;br /&gt;
iss &amp;gt;&amp;gt; t;&lt;br /&gt;
&lt;br /&gt;
ASSERT(!(iss.fail()));&lt;br /&gt;
if(iss.fail())&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Failed to read port.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// Should this be a port above the reserved range ([0-1024] on Unix)?&lt;br /&gt;
ASSERT(t &amp;gt; 0);&lt;br /&gt;
if(!(t &amp;gt; 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too small&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
ASSERT(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max()));&lt;br /&gt;
if(!(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max())))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too large&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use port&lt;br /&gt;
unsigned short port = static_cast&amp;lt;unsigned short&amp;gt;(t);&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;. This code will continue checking conditions years after being instrumented (assuming you wrote the code to read a configuration file early in the project). There's no need to remove the &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt;s as with &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; since they are silent guardians.&lt;br /&gt;
&lt;br /&gt;
Another useful trick is to avoid ignoring return values. Not only does it suppress the warning, it's required for correct code. For example, &amp;lt;tt&amp;gt;snprintf&amp;lt;/tt&amp;gt; will alert you to truncations through its return value. You should not turn them into silent truncations by ignoring the warning or casting to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;char path[PATH_MAX];&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int ret = snprintf(path, sizeof(path), &amp;quot;%s/%s&amp;quot;, GetDirectory(), GetObjectName());&lt;br /&gt;
ASSERT(ret != -1);&lt;br /&gt;
ASSERT(!(ret &amp;gt;= sizeof(path)));&lt;br /&gt;
&lt;br /&gt;
if(ret == -1 || ret &amp;gt;= sizeof(path))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Unable to build full object name&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use path&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem is pandemic, and not confined to boring user-land programs. Even projects which offer high-integrity code, such as SELinux, suffer silent truncations. The following is from an approved SELinux patch, even though a comment was made that it [http://permalink.gmane.org/gmane.comp.security.selinux/16845 suffered silent truncations in its &amp;lt;tt&amp;gt;security_compute_create_name&amp;lt;/tt&amp;gt; function] from &amp;lt;tt&amp;gt;compute_create.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;12  int security_compute_create_raw(security_context_t scon,&lt;br /&gt;
13                                  security_context_t tcon,&lt;br /&gt;
14                                  security_class_t   tclass,&lt;br /&gt;
15                                  security_context_t * newcon)&lt;br /&gt;
16  {&lt;br /&gt;
17    char path[PATH_MAX];&lt;br /&gt;
18    char *buf;&lt;br /&gt;
19    size_t size;&lt;br /&gt;
20    int fd, ret;&lt;br /&gt;
21 	&lt;br /&gt;
22    if (!selinux_mnt) {&lt;br /&gt;
23      errno = ENOENT;&lt;br /&gt;
24      return -1;&lt;br /&gt;
25    }&lt;br /&gt;
26 	&lt;br /&gt;
27    snprintf(path, sizeof path, &amp;quot;%s/create&amp;quot;, selinux_mnt);&lt;br /&gt;
28    fd = open(path, O_RDWR);&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike other examples, the above code will not debug itself, and you will have to set breakpoints and trace calls to determine the point of first failure. (And the code above gambles that the truncated file does not exist or is not under an adversary's control by blindly performing the &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt;).&lt;br /&gt;
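A hedged rework of the path-building step above could look like the following. This is illustrative, not the actual SELinux fix; the &amp;lt;tt&amp;gt;selinux_mnt&amp;lt;/tt&amp;gt; value and the helper's name are stand-ins chosen for the sketch.

```cpp
#include <cerrno>
#include <cstdio>

// Illustrative stand-in for the SELinux mount point global.
static const char* selinux_mnt = "/sys/fs/selinux";

// Build "<selinux_mnt>/create" into path, refusing silent truncation.
// Returns 0 on success, -1 (with errno set) on error or truncation.
int build_create_path(char* path, std::size_t len)
{
    int ret = std::snprintf(path, len, "%s/create", selinux_mnt);
    if (ret < 0 || static_cast<std::size_t>(ret) >= len) {
        errno = ENAMETOOLONG;   // never open() a truncated path
        return -1;
    }
    return 0;   // path is complete; safe to open(path, O_RDWR)
}
```

With this check in place, a truncated path becomes a hard error at the point of first failure instead of a gamble on what the &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt; happens to find.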
&lt;br /&gt;
== Runtime ==&lt;br /&gt;
&lt;br /&gt;
The previous sections concentrated on setting up your project for success. This section examines additional hints for running with increased diagnostics and defenses. Not all platforms are created equal - on GNU/Linux it is difficult to impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a program after compiling and static linking], while Windows allows post-build hardening through a download. Remember, the goal is to find the point of first failure quickly so you can improve the reliability and security of the code.&lt;br /&gt;
&lt;br /&gt;
=== Xcode ===&lt;br /&gt;
&lt;br /&gt;
Xcode offers additional [http://developer.apple.com/library/mac/#recipes/xcode_help-scheme_editor/Articles/SchemeDiagnostics.html Application Diagnostics] that can help find memory errors and object use problems. Schemes can be managed through the ''Product'' menu, ''Scheme'' submenu, and then ''Edit''. From the editor, navigate to the ''Diagnostics'' tab. In the figure below, four additional instruments are enabled for the debugging cycle: Scribble guards, Edge guards, Malloc guards, and Zombies.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-1.png|thumb|450px|Figure 2: Xcode Memory Diagnostics]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is one caveat with using some of the guards: Apple only provides them for the simulator, and not a device. In the past, the guards were available for both devices and simulators.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a number of debugging aides for use during development, called [http://msdn.microsoft.com/en-us/library/d21c150d.aspx Managed Debugging Assistants (MDAs)]. You can find the MDAs on the ''Debug'' menu, ''Exceptions'' submenu. MDAs allow you to tune your debugging experience by, for example, filtering the exceptions on which the debugger should snap. For more details, see Stephen Toub's ''[http://msdn.microsoft.com/en-us/magazine/cc163606.aspx Let The CLR Find Bugs For You With Managed Debugging Assistants]''.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-2.png|thumb|450px|Figure 3: Managed Debugging Assistants]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Finally, for runtime hardening, Microsoft offers EMET, the [http://support.microsoft.com/kb/2458544 Enhanced Mitigation Experience Toolkit], which allows you to apply runtime hardening to an executable that was built without it. It's very useful for utilities and other programs that were built without an SDLC.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-3.png|thumb|450px|Figure 4: Windows and EMET]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147384</id>
		<title>Testing for SSL-TLS (OWASP-CM-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147384"/>
				<updated>2013-03-10T05:46:03Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Broke RSA and DSA key sizes out into separate entries&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Due to historic export restrictions on high-grade cryptography, both legacy and new web servers are often able, and configured, to handle weak cryptographic options.&lt;br /&gt;
&lt;br /&gt;
Even if high-grade ciphers are normally used and installed, some server misconfigurations could be used to force the use of a weaker cipher to gain access to the supposedly secure communication channel.&lt;br /&gt;
&lt;br /&gt;
==Testing SSL / TLS Cipher Specifications and Requirements ==&lt;br /&gt;
&lt;br /&gt;
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.&lt;br /&gt;
&lt;br /&gt;
Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of, at most, 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.&lt;br /&gt;
&lt;br /&gt;
Cipher determination is performed as follows: in the initial phase of a SSL connection setup, the client sends the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which needs not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org) which can be used to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption protocol (DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (SHA, MD5) used for integrity checking. Upon receiving a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honor. In this way you may control, for example, whether or not conversations with clients will support 40-bit encryption only.&lt;br /&gt;
&lt;br /&gt;
==SSL Testing Criteria==&lt;br /&gt;
The large number of available cipher suites and the rapid progress in cryptanalysis make judging an SSL server a non-trivial task. The following criteria are widely recognised as a minimum checklist:&lt;br /&gt;
&lt;br /&gt;
* SSLv2, due to known weaknesses in protocol design [http://www.schneier.com/paper-ssl.html]&lt;br /&gt;
* SSLv3, due to known weaknesses in protocol design [http://www.yaksman.org/~lweith/ssl.pdf]&lt;br /&gt;
* Compression, due to known weaknesses in protocol design [http://www.ekoparty.org/2012/juliano-rizzo.php]&lt;br /&gt;
* Cipher suites with symmetric encryption algorithm smaller than 112 bits&lt;br /&gt;
* X.509 certificates with RSA key smaller than 2048 bits&lt;br /&gt;
* X.509 certificates with DSA key smaller than 2048 bits&lt;br /&gt;
* X.509 certificates signed using MD5 hash, due to known collision attacks on this hash&lt;br /&gt;
* TLS Renegotiation vulnerability [http://www.phonefactor.com/sslgap/ssl-tls-authentication-patches]&lt;br /&gt;
&lt;br /&gt;
The following standards can be used as reference while assessing SSL servers:&lt;br /&gt;
&lt;br /&gt;
* [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf NIST SP 800-52] recommends that U.S. federal systems use at least TLS 1.0 with cipher suites based on RSA or DSA key agreement with ephemeral Diffie-Hellman, 3DES or AES for confidentiality, and SHA1 for integrity protection. NIST SP 800-52 specifically disallows non-FIPS compliant algorithms like RC4 and MD5. An exception is U.S. federal systems making connections to outside servers, where these algorithms can be used in SSL client mode.&lt;br /&gt;
* [https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml PCI-DSS v1.2] in point 4.1 requires compliant parties to use &amp;quot;strong cryptography&amp;quot; without precisely defining key lengths and algorithms. The common interpretation, partially based on previous versions of the standard, is that ciphers with at least 128-bit keys should be used, with no export-strength algorithms and no SSLv2[http://www.digicert.com/news/DigiCert_PCI_White_Paper.pdf].&lt;br /&gt;
* The [https://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide] has been proposed to standardize SSL server assessment and is currently in draft.&lt;br /&gt;
&lt;br /&gt;
The SSL Server Database can be used to assess the configuration of publicly available SSL servers[https://www.ssllabs.com/ssldb/analyze.html], based on the SSL Server Rating Guide[https://www.ssllabs.com/projects/rating-guide/index.html].&lt;br /&gt;
&lt;br /&gt;
==Black Box Test and example==&lt;br /&gt;
&lt;br /&gt;
In order to detect possible support for weak ciphers, the ports associated with SSL/TLS-wrapped services must be identified. These typically include port 443, the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS-wrapped services related to the web application. In general, service discovery is required to identify such ports.&lt;br /&gt;
&lt;br /&gt;
The nmap scanner, via the &amp;quot;-sV&amp;quot; scan option, is able to identify SSL services. Vulnerability scanners, in addition to performing service discovery, may include checks for weak ciphers (for example, the Nessus scanner can check SSL services on arbitrary ports and will report weak ciphers).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 1'''. SSL service recognition via nmap.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# nmap -F -sV localhost&lt;br /&gt;
&lt;br /&gt;
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST&lt;br /&gt;
Interesting ports on localhost.localdomain (127.0.0.1):&lt;br /&gt;
(The 1205 ports scanned but not shown below are in state: closed)&lt;br /&gt;
&lt;br /&gt;
PORT      STATE SERVICE         VERSION&lt;br /&gt;
443/tcp   open  ssl             OpenSSL&lt;br /&gt;
901/tcp   open  http            Samba SWAT administration server&lt;br /&gt;
8080/tcp  open  http            Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)&lt;br /&gt;
8081/tcp  open  http            Apache Tomcat/Coyote JSP engine 1.0&lt;br /&gt;
&lt;br /&gt;
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds&lt;br /&gt;
[root@test]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 2'''. Identifying weak ciphers with Nessus.&lt;br /&gt;
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).&lt;br /&gt;
&lt;br /&gt;
  '''https (443/tcp)'''&lt;br /&gt;
  '''Description'''&lt;br /&gt;
  Here is the SSLv2 server certificate:&lt;br /&gt;
  Certificate:&lt;br /&gt;
  Data:&lt;br /&gt;
  Version: 3 (0x2)&lt;br /&gt;
  Serial Number: 1 (0x1)&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******&lt;br /&gt;
  Validity&lt;br /&gt;
  Not Before: Oct 17 07:12:16 2002 GMT&lt;br /&gt;
  Not After : Oct 16 07:12:16 2004 GMT&lt;br /&gt;
  Subject: C=**, ST=******, L=******, O=******, CN=******&lt;br /&gt;
  Subject Public Key Info:&lt;br /&gt;
  Public Key Algorithm: rsaEncryption&lt;br /&gt;
  RSA Public Key: (1024 bit)&lt;br /&gt;
  Modulus (1024 bit):&lt;br /&gt;
  00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:&lt;br /&gt;
  6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:&lt;br /&gt;
  ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:&lt;br /&gt;
  ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:&lt;br /&gt;
  72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:&lt;br /&gt;
  09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:&lt;br /&gt;
  e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:&lt;br /&gt;
  2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:&lt;br /&gt;
  3d:0a:75:26:cf:dc:47:74:29&lt;br /&gt;
  Exponent: 65537 (0x10001)&lt;br /&gt;
  X509v3 extensions:&lt;br /&gt;
  X509v3 Basic Constraints:&lt;br /&gt;
  CA:FALSE&lt;br /&gt;
  Netscape Comment:&lt;br /&gt;
  OpenSSL Generated Certificate&lt;br /&gt;
  Page 10&lt;br /&gt;
  Network Vulnerability Assessment Report 25.05.2005&lt;br /&gt;
  X509v3 Subject Key Identifier:&lt;br /&gt;
  10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8&lt;br /&gt;
  X509v3 Authority Key Identifier:&lt;br /&gt;
  keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE&lt;br /&gt;
  DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******&lt;br /&gt;
  serial:00&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:&lt;br /&gt;
  8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:&lt;br /&gt;
  2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:&lt;br /&gt;
  b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:&lt;br /&gt;
  40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:&lt;br /&gt;
  f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:&lt;br /&gt;
  0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:&lt;br /&gt;
  82:fc&lt;br /&gt;
  Here is the list of available SSLv2 ciphers:&lt;br /&gt;
  RC4-MD5&lt;br /&gt;
  EXP-RC4-MD5&lt;br /&gt;
  RC2-CBC-MD5&lt;br /&gt;
  EXP-RC2-CBC-MD5&lt;br /&gt;
  DES-CBC-MD5&lt;br /&gt;
  DES-CBC3-MD5&lt;br /&gt;
  RC4-64-MD5&lt;br /&gt;
  &amp;lt;u&amp;gt;The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak &amp;quot;export class&amp;quot; ciphers'''.&lt;br /&gt;
  The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack&amp;lt;/u&amp;gt;&lt;br /&gt;
  &amp;lt;u&amp;gt;Solution: disable those ciphers and upgrade your client software if necessary.&amp;lt;/u&amp;gt;&lt;br /&gt;
  See http://support.microsoft.com/default.aspx?scid=kben-us216482&lt;br /&gt;
  or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite&lt;br /&gt;
  This SSLv2 server also accepts SSLv3 connections.&lt;br /&gt;
  This SSLv2 server also accepts TLSv1 connections.&lt;br /&gt;
  &lt;br /&gt;
  Vulnerable hosts&lt;br /&gt;
  ''(list of vulnerable hosts follows)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 3'''. Manually audit weak SSL cipher levels with OpenSSL. The following attempts to connect to www.google.com using SSLv2 only (SSLv3 and TLSv1 are disabled via the &amp;lt;tt&amp;gt;-no_ssl3&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no_tls1&amp;lt;/tt&amp;gt; options).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443&lt;br /&gt;
CONNECTED(00000003)&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=20:unable to get local issuer certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=27:certificate not trusted&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=21:unable to verify the first certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
---&lt;br /&gt;
Server certificate&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB&lt;br /&gt;
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ&lt;br /&gt;
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE&lt;br /&gt;
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh&lt;br /&gt;
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl&lt;br /&gt;
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow&lt;br /&gt;
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v&lt;br /&gt;
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n&lt;br /&gt;
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk&lt;br /&gt;
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO&lt;br /&gt;
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE&lt;br /&gt;
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG&lt;br /&gt;
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo&lt;br /&gt;
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm&lt;br /&gt;
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/&lt;br /&gt;
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3&lt;br /&gt;
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3&lt;br /&gt;
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc&lt;br /&gt;
X/dVk5WRTw==&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com&lt;br /&gt;
---&lt;br /&gt;
No client certificate CA names sent&lt;br /&gt;
---&lt;br /&gt;
Ciphers common between both SSL endpoints:&lt;br /&gt;
RC4-MD5         EXP-RC4-MD5     RC2-CBC-MD5&lt;br /&gt;
EXP-RC2-CBC-MD5 DES-CBC-MD5     DES-CBC3-MD5&lt;br /&gt;
RC4-64-MD5&lt;br /&gt;
---&lt;br /&gt;
SSL handshake has read 1023 bytes and written 333 bytes&lt;br /&gt;
---&lt;br /&gt;
New, SSLv2, Cipher is DES-CBC3-MD5&lt;br /&gt;
Server public key is 1024 bit&lt;br /&gt;
Compression: NONE&lt;br /&gt;
Expansion: NONE&lt;br /&gt;
SSL-Session:&lt;br /&gt;
    Protocol  : SSLv2&lt;br /&gt;
    Cipher    : DES-CBC3-MD5&lt;br /&gt;
    Session-ID: 709F48E4D567C70A2E49886E4C697CDE&lt;br /&gt;
    Session-ID-ctx:&lt;br /&gt;
    Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3&lt;br /&gt;
    Key-Arg   : E8CB6FEB9ECF3033&lt;br /&gt;
    Start Time: 1156977226&lt;br /&gt;
    Timeout   : 300 (sec)&lt;br /&gt;
    Verify return code: 21 (unable to verify the first certificate)&lt;br /&gt;
---&lt;br /&gt;
closed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 4'''. Testing supported protocols and ciphers using SSLScan.&lt;br /&gt;
&lt;br /&gt;
SSLScan is a free command-line tool that scans an HTTPS service to enumerate the protocols (it supports SSLv2, SSLv3 and TLSv1) and the ciphers the HTTPS service supports. It runs on both Linux and Windows (OS X not tested) and is released under an open source license.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[user@test]$ ./SSLScan --no-failed mail.google.com&lt;br /&gt;
                   _&lt;br /&gt;
           ___ ___| |___  ___ __ _ _ __&lt;br /&gt;
          / __/ __| / __|/ __/ _` | '_ \&lt;br /&gt;
          \__ \__ \ \__ \ (_| (_| | | | |&lt;br /&gt;
          |___/___/_|___/\___\__,_|_| |_|&lt;br /&gt;
&lt;br /&gt;
                  Version 1.9.0-win&lt;br /&gt;
             http://www.titania.co.uk&lt;br /&gt;
 Copyright 2010 Ian Ventura-Whiting / Michael Boman&lt;br /&gt;
    Compiled against OpenSSL 0.9.8n 24 Mar 2010&lt;br /&gt;
&lt;br /&gt;
Testing SSL server mail.google.com on port 443&lt;br /&gt;
&lt;br /&gt;
  Supported Server Cipher(s):&lt;br /&gt;
    accepted  SSLv3  256 bits  AES256-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  AES128-SHA&lt;br /&gt;
    accepted  SSLv3  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-MD5&lt;br /&gt;
    accepted  TLSv1  256 bits  AES256-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  AES128-SHA&lt;br /&gt;
    accepted  TLSv1  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
  Prefered Server Cipher(s):&lt;br /&gt;
    SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    TLSv1  128 bits  RC4-SHA&lt;br /&gt;
&lt;br /&gt;
  SSL Certificate:&lt;br /&gt;
    Version: 2&lt;br /&gt;
    Serial Number: -4294967295&lt;br /&gt;
    Signature Algorithm: sha1WithRSAEncryption&lt;br /&gt;
    Issuer: /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA&lt;br /&gt;
    Not valid before: Dec 18 00:00:00 2009 GMT&lt;br /&gt;
    Not valid after: Dec 18 23:59:59 2011 GMT&lt;br /&gt;
    Subject: /C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com&lt;br /&gt;
    Public Key Algorithm: rsaEncryption&lt;br /&gt;
    RSA Public Key: (1024 bit)&lt;br /&gt;
      Modulus (1024 bit):&lt;br /&gt;
          00:d9:27:c8:11:f2:7b:e4:45:c9:46:b6:63:75:83:&lt;br /&gt;
          b1:77:7e:17:41:89:80:38:f1:45:27:a0:3c:d9:e8:&lt;br /&gt;
          a8:00:4b:d9:07:d0:ba:de:ed:f4:2c:a6:ac:dc:27:&lt;br /&gt;
          13:ec:0c:c1:a6:99:17:42:e6:8d:27:d2:81:14:b0:&lt;br /&gt;
          4b:82:fa:b2:c5:d0:bb:20:59:62:28:a3:96:b5:61:&lt;br /&gt;
          f6:76:c1:6d:46:d2:fd:ba:c6:0f:3d:d1:c9:77:9a:&lt;br /&gt;
          58:33:f6:06:76:32:ad:51:5f:29:5f:6e:f8:12:8b:&lt;br /&gt;
          ad:e6:c5:08:39:b3:43:43:a9:5b:91:1d:d7:e3:cf:&lt;br /&gt;
          51:df:75:59:8e:8d:80:ab:53&lt;br /&gt;
      Exponent: 65537 (0x10001)&lt;br /&gt;
    X509v3 Extensions:&lt;br /&gt;
      X509v3 Basic Constraints: critical&lt;br /&gt;
        CA:FALSE      X509v3 CRL Distribution Points: &lt;br /&gt;
        URI:http://crl.thawte.com/ThawteSGCCA.crl&lt;br /&gt;
      X509v3 Extended Key Usage: &lt;br /&gt;
        TLS Web Server Authentication, TLS Web Client Authentication, Netscape Server Gated Crypto      Authority Information Access: &lt;br /&gt;
        OCSP - URI:http://ocsp.thawte.com&lt;br /&gt;
        CA Issuers - URI:http://www.thawte.com/repository/Thawte_SGC_CA.crt&lt;br /&gt;
  Verify Certificate:&lt;br /&gt;
    unable to get local issuer certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renegotiation requests supported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 5'''. Testing common SSL flaws with ssl_tests&lt;br /&gt;
&lt;br /&gt;
ssl_tests (http://www.pentesterscripting.com/discovery/ssl_tests) is a bash script that uses sslscan and openssl to check for various flaws: SSL version 2, weak ciphers, md5WithRSAEncryption, and the SSLv3 Force Ciphering Bug/Renegotiation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[user@test]$ ./ssl_test.sh 192.168.1.3 443&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
SSL Tests - v2, weak ciphers, MD5, Renegotiation&lt;br /&gt;
by Aung Khant, http://yehg.net&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
&lt;br /&gt;
[*] testing on 192.168.1.3:443 ..&lt;br /&gt;
&lt;br /&gt;
[*] tesing for sslv2 ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep Accepted  SSLv2&lt;br /&gt;
    Accepted  SSLv2  168 bits  DES-CBC3-MD5&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for weak ciphers ...&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  40 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC4-MD5&lt;br /&gt;
&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  56 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  56 bits   DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   DES-CBC-SHA&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for MD5 certificate ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep MD5WithRSAEncryption&lt;br /&gt;
&lt;br /&gt;
[*] testing for SSLv3 Force Ciphering Bug/Renegotiation ..&lt;br /&gt;
[*] echo R | openssl s_client -connect 192.168.1.3:443 | grep DONE&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
RENEGOTIATING&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
DONE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] done&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==White Box Test and example==&lt;br /&gt;
&lt;br /&gt;
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.&lt;br /&gt;
&lt;br /&gt;
'''Example:''' The following registry path in Microsoft Windows 2003 defines the ciphers available to the server:&lt;br /&gt;
&lt;br /&gt;
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\&lt;br /&gt;
&lt;br /&gt;
==Testing SSL certificate validity – client and server==&lt;br /&gt;
&lt;br /&gt;
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server)  is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate  match.&lt;br /&gt;
Remember to keep your browser up to date: CA certificates expire too, and each browser release ships a renewed set of them. It is also important to update the browser because more and more web sites require ciphers stronger than 40 or 56 bits.&lt;br /&gt;
&lt;br /&gt;
Let’s examine each check more in detail.&lt;br /&gt;
&lt;br /&gt;
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This happens most often because a web application relies on a certificate signed by a self-established CA. Whether this is to be considered a concern depends on several factors. For example, this may be fine for an Intranet environment (think of corporate web email being provided via https; here, obviously all users recognize the internal CA as a trusted CA). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, one which is  recognized by all the user base (and here we stop with our considerations; we won’t delve deeper in the implications of the trust model being used by digital certificates).&lt;br /&gt;
&lt;br /&gt;
b) Certificates have an associated period of validity, therefore they may expire. Again, we are warned by the browser about this. A public service needs a temporally valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but has expired without being renewed.&lt;br /&gt;
&lt;br /&gt;
c) What if the name on the certificate and the name of the server do not match? If this happens, it might sound suspicious. For a number of reasons, this is not so rare to see. A system may host a number of name-based virtual hosts, which share the same IP address and are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake checks the server certificate before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, one of two techniques should be used: the first is Server Name Indication (SNI), a TLS extension defined in [http://www.ietf.org/rfc/rfc3546.txt RFC 3546]; the second is to use IP-based virtual servers. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Black Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs, and certificates whose name does not match the site to which they should refer. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate – including the issuer, period of validity, encryption characteristics, etc.&lt;br /&gt;
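The expiry check that browsers perform can also be scripted with the openssl tool. The sketch below first creates a short-lived throwaway certificate so it can be run as-is (names and paths are illustrative); against a real target, the input would be the certificate retrieved from the server.&lt;br /&gt;

```shell
# Create a throwaway certificate valid for a single day (illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -keyout /tmp/short-key.pem -out /tmp/short-cert.pem \
    -subj "/CN=expiry-demo" 2>/dev/null

# Show the expiry date a tester would examine.
openssl x509 -in /tmp/short-cert.pem -noout -enddate

# -checkend N exits 0 if the certificate is still valid N seconds from now;
# the one-day certificate passes a 0-second check but fails a two-day one.
openssl x509 -in /tmp/short-cert.pem -noout -checkend 0
openssl x509 -in /tmp/short-cert.pem -noout -checkend 172800 || true
```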
&lt;br /&gt;
If the application requires a client certificate, you probably have installed one to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of the installed certificates.&lt;br /&gt;
&lt;br /&gt;
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is the usual https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the -sV command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.&lt;br /&gt;
&lt;br /&gt;
'''Examples'''&lt;br /&gt;
&lt;br /&gt;
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles on https sites whose certificates are inaccurate with respect to naming.&lt;br /&gt;
&lt;br /&gt;
The following screenshots refer to a regional site of a high-profile IT company.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Microsoft Internet Explorer.&amp;lt;/u&amp;gt; We are visiting an ''.it'' site and the certificate was issued to a ''.com'' site! Internet Explorer warns that the name on the certificate does not match the name of the site.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing IE Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Mozilla Firefox.&amp;lt;/u&amp;gt; The message issued by Firefox is different – Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to because it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
===White Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application at both server and client levels. The usage of certificates is primarily at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546) - http://www.ietf.org/rfc/rfc2246.txt&lt;br /&gt;
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1 - http://www.ietf.org/rfc/rfc2817.txt&lt;br /&gt;
* [3] RFC3546. Transport Layer Security (TLS) Extensions - http://www.ietf.org/rfc/rfc3546.txt&lt;br /&gt;
* [4] &amp;lt;u&amp;gt;www.verisign.net&amp;lt;/u&amp;gt; features various material on the topic&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
&lt;br /&gt;
* https://www.ssllabs.com/ssldb/&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They usually report other information as well, such as the CA which issued the certificate. Remember that there is no unified notion of a “trusted CA”; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize the CA.&lt;br /&gt;
&lt;br /&gt;
* The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin “SSL certificate expiry”, plugin id 15901). This plugin will check certificates installed on the server.&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).&lt;br /&gt;
&lt;br /&gt;
* You may also rely on specialized tools such as SSL Digger (http://www.mcafee.com/us/downloads/free-tools/ssldigger.aspx), or – for the command line oriented – experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (it may already be available on *nix boxes; otherwise see www.openssl.org).&lt;br /&gt;
&lt;br /&gt;
* To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the Nessus vulnerability scanner is capable of identifying SSL-based services on arbitrary ports and running vulnerability checks on them, regardless of whether they are configured on standard or non-standard ports.&lt;br /&gt;
&lt;br /&gt;
* In case you need to talk to an SSL service but your favourite tool doesn’t support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicate with the SSL service you need to reach.&lt;br /&gt;
&lt;br /&gt;
* ssl_tests, http://www.pentesterscripting.com/discovery/ssl_tests&lt;br /&gt;
&lt;br /&gt;
* Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way the browser will perform the check might be influenced by configuration settings that may not be evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.&lt;br /&gt;
&lt;br /&gt;
* [http://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet OWASP Transport Layer Protection Cheat Sheet]&lt;br /&gt;
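The stunnel approach mentioned in the tools above can be sketched with a minimal client-mode configuration: a local clear-text port is exposed, and everything received there is forwarded over SSL to the target service. The host name and port numbers below are illustrative, not taken from the text.&lt;br /&gt;

```
; Minimal stunnel client-mode sketch (illustrative values only)
client = yes

[ssl-forward]
accept  = 127.0.0.1:8080
connect = target.example.com:443
```

The non-SSL tool is then pointed at 127.0.0.1:8080 and stunnel handles the SSL layer.&lt;br /&gt;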
&lt;br /&gt;
[[Category:Cryptographic Vulnerability]]&lt;br /&gt;
[[Category:SSL]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147383</id>
		<title>Testing for SSL-TLS (OWASP-CM-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147383"/>
				<updated>2013-03-10T05:37:25Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: SP800-52 states 1024-bit is acceptable until 2010. Time for an update to 2048 (112-bit security level).....&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Due to historic export restrictions on high-grade cryptography, both legacy and new web servers are often able to handle, and configured to accept, weak cryptographic options.&lt;br /&gt;
&lt;br /&gt;
Even if high grade ciphers are normally installed and used, a server misconfiguration can be exploited to force the use of a weaker cipher and gain access to the supposedly secure communication channel.&lt;br /&gt;
&lt;br /&gt;
==Testing SSL / TLS Cipher Specifications and Requirements ==&lt;br /&gt;
&lt;br /&gt;
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.&lt;br /&gt;
&lt;br /&gt;
Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only with key sizes of at most 40 bits, a key length short enough to be broken, allowing the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.&lt;br /&gt;
&lt;br /&gt;
Cipher determination is performed as follows: in the initial phase of a SSL connection setup, the client sends the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (the most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which need not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org), which can be used to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption protocol (DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (SHA, MD5) used for integrity checking. Upon receiving a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honor. In this way you may control, for example, whether or not conversations with clients will support 40-bit encryption only.&lt;br /&gt;
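The components that make up a suite name exchanged in the Client Hello can be inspected locally with the openssl tool, without any connection; AES128-SHA is used below merely as a familiar example.&lt;br /&gt;

```shell
# The -v flag breaks a suite down into Kx (key exchange), Au
# (authentication), Enc (symmetric cipher and key length) and
# Mac (integrity digest).
openssl ciphers -v 'AES128-SHA'
```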
&lt;br /&gt;
==SSL Testing Criteria==&lt;br /&gt;
The large number of available cipher suites and the quick progress in cryptanalysis make judging an SSL server a non-trivial task. The following criteria are widely recognised as a minimum checklist:&lt;br /&gt;
&lt;br /&gt;
* SSLv2, due to known weaknesses in protocol design [http://www.schneier.com/paper-ssl.html]&lt;br /&gt;
* SSLv3, due to known weaknesses in protocol design [http://www.yaksman.org/~lweith/ssl.pdf]&lt;br /&gt;
* Compression, due to known weaknesses in protocol design [http://www.ekoparty.org/2012/juliano-rizzo.php]&lt;br /&gt;
* Cipher suites with symmetric encryption algorithm smaller than 112 bits&lt;br /&gt;
* X.509 certificates with RSA or DSA key smaller than 2048 bits&lt;br /&gt;
* X.509 certificates signed using MD5 hash, due to known collision attacks on this hash&lt;br /&gt;
* TLS Renegotiation vulnerability [http://www.phonefactor.com/sslgap/ssl-tls-authentication-patches]&lt;br /&gt;
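Two of the checklist items (key size and signature hash) can be verified directly on a certificate file. The sketch below generates a checklist-compliant throwaway certificate (2048-bit RSA, SHA-256 signature; names and paths are illustrative) and then extracts the fields a tester would inspect.&lt;br /&gt;

```shell
# Generate a throwaway certificate meeting the checklist: 2048-bit RSA
# key and a SHA-256 (not MD5) signature. Names and paths are made up.
openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 1 \
    -keyout /tmp/cl-key.pem -out /tmp/cl-cert.pem \
    -subj "/CN=checklist-demo" 2>/dev/null

# Inspect key size and signature hash; a key under 2048 bits or an
# md5WithRSAEncryption signature would fail the checklist.
openssl x509 -in /tmp/cl-cert.pem -noout -text \
    | grep -E 'Public.Key:|Signature Algorithm' | sort -u
```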
&lt;br /&gt;
The following standards can be used as reference while assessing SSL servers:&lt;br /&gt;
&lt;br /&gt;
* [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf NIST SP 800-52] recommends that U.S. federal systems use at least TLS 1.0 with cipher suites based on RSA or DSA key agreement with ephemeral Diffie-Hellman, 3DES or AES for confidentiality, and SHA1 for integrity protection. NIST SP 800-52 specifically disallows non-FIPS compliant algorithms like RC4 and MD5. An exception is U.S. federal systems making connections to outside servers, where these algorithms can be used in SSL client mode.&lt;br /&gt;
* [https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml PCI-DSS v1.2] in point 4.1 requires compliant parties to use &amp;quot;strong cryptography&amp;quot; without precisely defining key lengths and algorithms. The common interpretation, partially based on previous versions of the standard, is that ciphers with keys of at least 128 bits should be used, and that export-strength algorithms and SSLv2 should not[http://www.digicert.com/news/DigiCert_PCI_White_Paper.pdf].&lt;br /&gt;
* The [https://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide] has been proposed to standardize SSL server assessment and is currently in draft.&lt;br /&gt;
&lt;br /&gt;
The SSL Server Database can be used to assess the configuration of publicly available SSL servers[https://www.ssllabs.com/ssldb/analyze.html] based on the SSL Server Rating Guide[https://www.ssllabs.com/projects/rating-guide/index.html].&lt;br /&gt;
&lt;br /&gt;
==Black Box Test and example==&lt;br /&gt;
&lt;br /&gt;
In order to detect possible support of weak ciphers, the ports associated with SSL/TLS wrapped services must first be identified. These typically include port 443, the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS wrapped services related to the web application. In general, service discovery is required to identify such ports.&lt;br /&gt;
&lt;br /&gt;
The nmap scanner, via the “-sV” scan option, is able to identify SSL services. Vulnerability scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 1'''. SSL service recognition via nmap.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# nmap -F -sV localhost&lt;br /&gt;
&lt;br /&gt;
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST&lt;br /&gt;
Interesting ports on localhost.localdomain (127.0.0.1):&lt;br /&gt;
(The 1205 ports scanned but not shown below are in state: closed)&lt;br /&gt;
&lt;br /&gt;
PORT      STATE SERVICE         VERSION&lt;br /&gt;
443/tcp   open  ssl             OpenSSL&lt;br /&gt;
901/tcp   open  http            Samba SWAT administration server&lt;br /&gt;
8080/tcp  open  http            Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)&lt;br /&gt;
8081/tcp  open  http            Apache Tomcat/Coyote JSP engine 1.0&lt;br /&gt;
&lt;br /&gt;
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds&lt;br /&gt;
[root@test]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 2'''. Identifying weak ciphers with Nessus.&lt;br /&gt;
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).&lt;br /&gt;
&lt;br /&gt;
  '''https (443/tcp)'''&lt;br /&gt;
  '''Description'''&lt;br /&gt;
  Here is the SSLv2 server certificate:&lt;br /&gt;
  Certificate:&lt;br /&gt;
  Data:&lt;br /&gt;
  Version: 3 (0x2)&lt;br /&gt;
  Serial Number: 1 (0x1)&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******&lt;br /&gt;
  Validity&lt;br /&gt;
  Not Before: Oct 17 07:12:16 2002 GMT&lt;br /&gt;
  Not After : Oct 16 07:12:16 2004 GMT&lt;br /&gt;
  Subject: C=**, ST=******, L=******, O=******, CN=******&lt;br /&gt;
  Subject Public Key Info:&lt;br /&gt;
  Public Key Algorithm: rsaEncryption&lt;br /&gt;
  RSA Public Key: (1024 bit)&lt;br /&gt;
  Modulus (1024 bit):&lt;br /&gt;
  00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:&lt;br /&gt;
  6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:&lt;br /&gt;
  ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:&lt;br /&gt;
  ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:&lt;br /&gt;
  72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:&lt;br /&gt;
  09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:&lt;br /&gt;
  e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:&lt;br /&gt;
  2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:&lt;br /&gt;
  3d:0a:75:26:cf:dc:47:74:29&lt;br /&gt;
  Exponent: 65537 (0x10001)&lt;br /&gt;
  X509v3 extensions:&lt;br /&gt;
  X509v3 Basic Constraints:&lt;br /&gt;
  CA:FALSE&lt;br /&gt;
  Netscape Comment:&lt;br /&gt;
  OpenSSL Generated Certificate&lt;br /&gt;
  Page 10&lt;br /&gt;
  Network Vulnerability Assessment Report 25.05.2005&lt;br /&gt;
  X509v3 Subject Key Identifier:&lt;br /&gt;
  10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8&lt;br /&gt;
  X509v3 Authority Key Identifier:&lt;br /&gt;
  keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE&lt;br /&gt;
  DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******&lt;br /&gt;
  serial:00&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:&lt;br /&gt;
  8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:&lt;br /&gt;
  2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:&lt;br /&gt;
  b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:&lt;br /&gt;
  40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:&lt;br /&gt;
  f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:&lt;br /&gt;
  0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:&lt;br /&gt;
  82:fc&lt;br /&gt;
  Here is the list of available SSLv2 ciphers:&lt;br /&gt;
  RC4-MD5&lt;br /&gt;
  EXP-RC4-MD5&lt;br /&gt;
  RC2-CBC-MD5&lt;br /&gt;
  EXP-RC2-CBC-MD5&lt;br /&gt;
  DES-CBC-MD5&lt;br /&gt;
  DES-CBC3-MD5&lt;br /&gt;
  RC4-64-MD5&lt;br /&gt;
  &amp;lt;u&amp;gt;The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak &amp;quot;export class&amp;quot; ciphers'''.&lt;br /&gt;
  The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack&amp;lt;/u&amp;gt;&lt;br /&gt;
  &amp;lt;u&amp;gt;Solution: disable those ciphers and upgrade your client software if necessary.&amp;lt;/u&amp;gt;&lt;br /&gt;
  See http://support.microsoft.com/default.aspx?scid=kben-us216482&lt;br /&gt;
  or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite&lt;br /&gt;
  This SSLv2 server also accepts SSLv3 connections.&lt;br /&gt;
  This SSLv2 server also accepts TLSv1 connections.&lt;br /&gt;
  &lt;br /&gt;
  Vulnerable hosts&lt;br /&gt;
  ''(list of vulnerable hosts follows)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 3'''. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443&lt;br /&gt;
CONNECTED(00000003)&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=20:unable to get local issuer certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=27:certificate not trusted&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=21:unable to verify the first certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
---&lt;br /&gt;
Server certificate&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB&lt;br /&gt;
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ&lt;br /&gt;
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE&lt;br /&gt;
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh&lt;br /&gt;
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl&lt;br /&gt;
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow&lt;br /&gt;
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v&lt;br /&gt;
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n&lt;br /&gt;
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk&lt;br /&gt;
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO&lt;br /&gt;
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE&lt;br /&gt;
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG&lt;br /&gt;
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo&lt;br /&gt;
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm&lt;br /&gt;
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/&lt;br /&gt;
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3&lt;br /&gt;
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3&lt;br /&gt;
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc&lt;br /&gt;
X/dVk5WRTw==&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com&lt;br /&gt;
---&lt;br /&gt;
No client certificate CA names sent&lt;br /&gt;
---&lt;br /&gt;
Ciphers common between both SSL endpoints:&lt;br /&gt;
RC4-MD5         EXP-RC4-MD5     RC2-CBC-MD5&lt;br /&gt;
EXP-RC2-CBC-MD5 DES-CBC-MD5     DES-CBC3-MD5&lt;br /&gt;
RC4-64-MD5&lt;br /&gt;
---&lt;br /&gt;
SSL handshake has read 1023 bytes and written 333 bytes&lt;br /&gt;
---&lt;br /&gt;
New, SSLv2, Cipher is DES-CBC3-MD5&lt;br /&gt;
Server public key is 1024 bit&lt;br /&gt;
Compression: NONE&lt;br /&gt;
Expansion: NONE&lt;br /&gt;
SSL-Session:&lt;br /&gt;
    Protocol  : SSLv2&lt;br /&gt;
    Cipher    : DES-CBC3-MD5&lt;br /&gt;
    Session-ID: 709F48E4D567C70A2E49886E4C697CDE&lt;br /&gt;
    Session-ID-ctx:&lt;br /&gt;
    Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3&lt;br /&gt;
    Key-Arg   : E8CB6FEB9ECF3033&lt;br /&gt;
    Start Time: 1156977226&lt;br /&gt;
    Timeout   : 300 (sec)&lt;br /&gt;
    Verify return code: 21 (unable to verify the first certificate)&lt;br /&gt;
---&lt;br /&gt;
closed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 4'''. Testing supported protocols and ciphers using SSLScan.&lt;br /&gt;
&lt;br /&gt;
SSLScan is a free command-line tool that scans an HTTPS service to enumerate which protocols (SSLv2, SSLv3 and TLSv1) and which ciphers the service supports. It runs on both Linux and Windows (OS X untested) and is released under an open source license.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[user@test]$ ./SSLScan --no-failed mail.google.com&lt;br /&gt;
                   _&lt;br /&gt;
           ___ ___| |___  ___ __ _ _ __&lt;br /&gt;
          / __/ __| / __|/ __/ _` | '_ \&lt;br /&gt;
          \__ \__ \ \__ \ (_| (_| | | | |&lt;br /&gt;
          |___/___/_|___/\___\__,_|_| |_|&lt;br /&gt;
&lt;br /&gt;
                  Version 1.9.0-win&lt;br /&gt;
             http://www.titania.co.uk&lt;br /&gt;
 Copyright 2010 Ian Ventura-Whiting / Michael Boman&lt;br /&gt;
    Compiled against OpenSSL 0.9.8n 24 Mar 2010&lt;br /&gt;
&lt;br /&gt;
Testing SSL server mail.google.com on port 443&lt;br /&gt;
&lt;br /&gt;
  Supported Server Cipher(s):&lt;br /&gt;
    accepted  SSLv3  256 bits  AES256-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  AES128-SHA&lt;br /&gt;
    accepted  SSLv3  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-MD5&lt;br /&gt;
    accepted  TLSv1  256 bits  AES256-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  AES128-SHA&lt;br /&gt;
    accepted  TLSv1  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
  Prefered Server Cipher(s):&lt;br /&gt;
    SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    TLSv1  128 bits  RC4-SHA&lt;br /&gt;
&lt;br /&gt;
  SSL Certificate:&lt;br /&gt;
    Version: 2&lt;br /&gt;
    Serial Number: -4294967295&lt;br /&gt;
    Signature Algorithm: sha1WithRSAEncryption&lt;br /&gt;
    Issuer: /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA&lt;br /&gt;
    Not valid before: Dec 18 00:00:00 2009 GMT&lt;br /&gt;
    Not valid after: Dec 18 23:59:59 2011 GMT&lt;br /&gt;
    Subject: /C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com&lt;br /&gt;
    Public Key Algorithm: rsaEncryption&lt;br /&gt;
    RSA Public Key: (1024 bit)&lt;br /&gt;
      Modulus (1024 bit):&lt;br /&gt;
          00:d9:27:c8:11:f2:7b:e4:45:c9:46:b6:63:75:83:&lt;br /&gt;
          b1:77:7e:17:41:89:80:38:f1:45:27:a0:3c:d9:e8:&lt;br /&gt;
          a8:00:4b:d9:07:d0:ba:de:ed:f4:2c:a6:ac:dc:27:&lt;br /&gt;
          13:ec:0c:c1:a6:99:17:42:e6:8d:27:d2:81:14:b0:&lt;br /&gt;
          4b:82:fa:b2:c5:d0:bb:20:59:62:28:a3:96:b5:61:&lt;br /&gt;
          f6:76:c1:6d:46:d2:fd:ba:c6:0f:3d:d1:c9:77:9a:&lt;br /&gt;
          58:33:f6:06:76:32:ad:51:5f:29:5f:6e:f8:12:8b:&lt;br /&gt;
          ad:e6:c5:08:39:b3:43:43:a9:5b:91:1d:d7:e3:cf:&lt;br /&gt;
          51:df:75:59:8e:8d:80:ab:53&lt;br /&gt;
      Exponent: 65537 (0x10001)&lt;br /&gt;
    X509v3 Extensions:&lt;br /&gt;
      X509v3 Basic Constraints: critical&lt;br /&gt;
        CA:FALSE      X509v3 CRL Distribution Points: &lt;br /&gt;
        URI:http://crl.thawte.com/ThawteSGCCA.crl&lt;br /&gt;
      X509v3 Extended Key Usage: &lt;br /&gt;
        TLS Web Server Authentication, TLS Web Client Authentication, Netscape Server Gated Crypto      Authority Information Access: &lt;br /&gt;
        OCSP - URI:http://ocsp.thawte.com&lt;br /&gt;
        CA Issuers - URI:http://www.thawte.com/repository/Thawte_SGC_CA.crt&lt;br /&gt;
  Verify Certificate:&lt;br /&gt;
    unable to get local issuer certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renegotiation requests supported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 5'''. Testing common SSL flaws with ssl_tests&lt;br /&gt;
&lt;br /&gt;
ssl_tests (http://www.pentesterscripting.com/discovery/ssl_tests) is a bash script that uses sslscan and openssl to check for various flaws: SSLv2 support, weak ciphers, MD5-signed certificates (md5WithRSAEncryption), and the SSLv3 force-ciphering/renegotiation bug.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[user@test]$ ./ssl_test.sh 192.168.1.3 443&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
SSL Tests - v2, weak ciphers, MD5, Renegotiation&lt;br /&gt;
by Aung Khant, http://yehg.net&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
&lt;br /&gt;
[*] testing on 192.168.1.3:443 ..&lt;br /&gt;
&lt;br /&gt;
[*] tesing for sslv2 ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep Accepted  SSLv2&lt;br /&gt;
    Accepted  SSLv2  168 bits  DES-CBC3-MD5&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for weak ciphers ...&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  40 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC4-MD5&lt;br /&gt;
&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  56 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  56 bits   DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   DES-CBC-SHA&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for MD5 certificate ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep MD5WithRSAEncryption&lt;br /&gt;
&lt;br /&gt;
[*] testing for SSLv3 Force Ciphering Bug/Renegotiation ..&lt;br /&gt;
[*] echo R | openssl s_client -connect 192.168.1.3:443 | grep DONE&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
RENEGOTIATING&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
DONE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] done&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==White Box Test and example==&lt;br /&gt;
&lt;br /&gt;
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.&lt;br /&gt;
&lt;br /&gt;
'''Example:''' The following registry path in Microsoft Windows 2003 defines the ciphers available to the server:&lt;br /&gt;
&lt;br /&gt;
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\&lt;br /&gt;
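As a sketch of how such a change looks, the fragment below (in .reg export format) would disable the 40-bit RC4 cipher by setting its Enabled value to 0, following the scheme documented in Microsoft KB 245030; the exact subkey names vary with the ciphers installed on the system.&lt;br /&gt;

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000
```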
&lt;br /&gt;
==Testing SSL certificate validity – client and server==&lt;br /&gt;
&lt;br /&gt;
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server)  is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate  match.&lt;br /&gt;
Remember to keep your browser up to date: CA certificates expire too, and each browser release ships with a renewed list of CA certificates. Updating the browser is also important because a growing number of web sites require ciphers stronger than 40 or 56 bits.&lt;br /&gt;
&lt;br /&gt;
Let’s examine each check more in detail.&lt;br /&gt;
&lt;br /&gt;
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This happens most often because a web application relies on a certificate signed by a self-established CA. Whether this is a concern depends on several factors. For example, it may be fine in an Intranet environment (think of corporate web email provided via https; here, all users obviously recognize the internal CA as a trusted CA). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, one recognized by the entire user base (we won’t delve deeper into the implications of the trust model used by digital certificates).&lt;br /&gt;
&lt;br /&gt;
b) Certificates have an associated period of validity, therefore they may expire. Again, we are warned by the browser about this. A public service needs a temporally valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but has expired without being renewed.&lt;br /&gt;
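&lt;br /&gt;
Check b) can be sketched with Python's ssl module, which provides a helper for parsing the notBefore/notAfter timestamps found in certificates (the date below is a hypothetical, already expired value):&lt;br /&gt;

```python
# Sketch of check b): compare a certificate's notAfter timestamp against
# the current time. The date string is a hypothetical expired value.
import ssl
import time

not_after = "Oct 16 07:12:16 2004 GMT"          # notAfter field of a certificate
expires = ssl.cert_time_to_seconds(not_after)   # seconds since the Epoch (GMT)
if time.time() > expires:
    print("certificate expired on", not_after)
```

In a real test the timestamp would of course come from the certificate actually presented by the server, not from a literal string.&lt;br /&gt;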
&lt;br /&gt;
c) What if the name on the certificate and the name of the server do not match? This might look suspicious but, for a number of reasons, it is not rare to see. A system may host a number of name-based virtual hosts, which share the same IP address and are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake checks the server certificate before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, one of two techniques should be used: Server Name Indication (SNI), a TLS extension defined in [http://www.ietf.org/rfc/rfc3546.txt RFC 3546], or IP-based virtual servers. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.&lt;br /&gt;
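&lt;br /&gt;
The name comparison in check c) can be illustrated with a simplified sketch (real clients follow RFC 2818; this toy version only handles wildcard labels, and the host names are hypothetical):&lt;br /&gt;

```python
# Simplified illustration of the name check a browser performs (rule c).
# Real clients follow RFC 2818/6125; this sketch only handles wildcard
# labels, which is the common case.
def names_match(hostname, cert_name):
    host_labels = hostname.lower().split(".")
    cert_labels = cert_name.lower().split(".")
    if len(host_labels) != len(cert_labels):
        return False
    for host_label, cert_label in zip(host_labels, cert_labels):
        if cert_label == "*":
            continue  # a wildcard label matches any single label
        if host_label != cert_label:
            return False
    return True

# A certificate issued to a .com site does not cover the .it site:
print(names_match("www.example.it", "www.example.com"))   # False
print(names_match("mail.example.com", "*.example.com"))   # True
```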
&lt;br /&gt;
&lt;br /&gt;
===Black Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs, and certificates which do not match namewise with the site to which they should refer. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate – including the issuer, period of validity, encryption characteristics, etc.&lt;br /&gt;
&lt;br /&gt;
If the application requires a client certificate, you probably have installed one to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of the installed certificates.&lt;br /&gt;
&lt;br /&gt;
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is the usual https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the -sV command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.&lt;br /&gt;
&lt;br /&gt;
'''Examples'''&lt;br /&gt;
&lt;br /&gt;
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles on https sites whose certificates are inaccurate with respect to naming.&lt;br /&gt;
&lt;br /&gt;
The following screenshots refer to a regional site of a high-profile IT company.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Microsoft Internet Explorer.&amp;lt;/u&amp;gt; We are visiting an ''.it'' site and the certificate was issued to a ''.com ''site! Internet Explorer warns that the name on the certificate does not match the name of the site.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing IE Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Mozilla Firefox.&amp;lt;/u&amp;gt; The message issued by Firefox is different – Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to because it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
===White Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application at both server and client levels. The usage of certificates is primarily at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546) - http://www.ietf.org/rfc/rfc2246.txt&lt;br /&gt;
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1 - http://www.ietf.org/rfc/rfc2817.txt&lt;br /&gt;
* [3] RFC3546. Transport Layer Security (TLS) Extensions - http://www.ietf.org/rfc/rfc3546.txt&lt;br /&gt;
* [4] &amp;lt;u&amp;gt;www.verisign.net&amp;lt;/u&amp;gt; features various material on the topic&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
&lt;br /&gt;
* https://www.ssllabs.com/ssldb/&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They usually report other information as well, such as the CA which issued the certificate. Remember that there is no unified notion of a “trusted CA”; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize the CA.&lt;br /&gt;
&lt;br /&gt;
* The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin “SSL certificate expiry”, plugin id 15901). This plugin will check certificates installed on the server.&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).&lt;br /&gt;
&lt;br /&gt;
* You may also rely on specialized tools such as SSL Digger (http://www.mcafee.com/us/downloads/free-tools/ssldigger.aspx), or – for the command line oriented – experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (may be already available on *nix boxes, otherwise see www.openssl.org).&lt;br /&gt;
&lt;br /&gt;
* To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the nessus vulnerability scanner has the capability of identifying SSL-based services on arbitrary ports and to run vulnerability checks on them regardless of whether they are configured on standard or non-standard ports.&lt;br /&gt;
&lt;br /&gt;
* In case you need to talk to a SSL service but your favourite tool doesn’t support SSL, you may benefit from a SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicate with the SSL service you need to reach.&lt;br /&gt;
&lt;br /&gt;
* ssl_tests, http://www.pentesterscripting.com/discovery/ssl_tests&lt;br /&gt;
&lt;br /&gt;
* Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way the browser will perform the check might be influenced by configuration settings that may not be evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.&lt;br /&gt;
&lt;br /&gt;
* [http://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet OWASP Transport Layer Protection Cheat Sheet]&lt;br /&gt;
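&lt;br /&gt;
The stunnel approach mentioned in the tools list can be sketched with a minimal client-mode configuration; the service name and target host below are hypothetical:&lt;br /&gt;

```ini
; Hypothetical stunnel client-mode service: a non-SSL enabled tool connects
; to 127.0.0.1:8080 and stunnel wraps the traffic in SSL towards the target.
[wrap-https]
client = yes
accept = 127.0.0.1:8080
connect = target.example.com:443
```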
&lt;br /&gt;
[[Category:Cryptographic Vulnerability]]&lt;br /&gt;
[[Category:SSL]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147381</id>
		<title>Testing for SSL-TLS (OWASP-CM-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147381"/>
				<updated>2013-03-10T05:31:03Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added references to Testing Criteria&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Due to historic export restrictions on high grade cryptography, both legacy and new web servers are often able, and configured, to handle weak cryptographic options.&lt;br /&gt;
&lt;br /&gt;
Even if high grade ciphers are normally used and installed, server misconfiguration can be exploited to force the use of a weaker cipher in order to gain access to the supposedly secure communication channel.&lt;br /&gt;
&lt;br /&gt;
==Testing SSL / TLS Cipher Specifications and Requirements ==&lt;br /&gt;
&lt;br /&gt;
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.&lt;br /&gt;
&lt;br /&gt;
Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of, at most, 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.&lt;br /&gt;
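&lt;br /&gt;
To put the 40-bit limit in perspective, a back-of-the-envelope calculation (assuming a purely hypothetical rate of one billion trial keys per second) shows how quickly such a keyspace can be exhausted:&lt;br /&gt;

```python
# Rough arithmetic behind "40-bit keys could be broken": at an assumed
# rate of one billion trial decryptions per second, the whole 40-bit
# keyspace falls in under twenty minutes; 128 bits remains out of reach.
RATE = 10**9  # trial keys per second; an assumption for illustration only

for bits in (40, 56, 128):
    seconds = 2**bits / RATE
    print("%3d-bit keyspace: about %.3g seconds to exhaust" % (bits, seconds))
```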
&lt;br /&gt;
Cipher determination is performed as follows: in the initial phase of an SSL connection setup, the client sends the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (the most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which need not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org) which can be used to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption protocol (DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (SHA, MD5) used for integrity checking. Upon receiving a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honor. In this way you may control, for example, whether or not conversations with clients will support 40-bit encryption only.&lt;br /&gt;
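&lt;br /&gt;
The cipher suites a client would offer in its Client Hello can be inspected locally; for example, Python's ssl module (built on OpenSSL) can list the suites a default client context advertises:&lt;br /&gt;

```python
# Sketch: list the cipher suites a local TLS client context would offer.
# The output depends on the local OpenSSL build, so no suite is assumed.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
for suite in ctx.get_ciphers():
    # Each entry reports, among other fields, the protocol version and
    # the symmetric key strength in bits.
    print(suite["name"], suite["protocol"], suite["strength_bits"], "bits")
```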
&lt;br /&gt;
==SSL Testing Criteria==&lt;br /&gt;
The large number of available cipher suites and the rapid progress of cryptanalysis make judging an SSL server a non-trivial task. The following criteria are widely recognised as a minimum checklist of items to flag:&lt;br /&gt;
&lt;br /&gt;
* SSLv2, due to known weaknesses in protocol design [http://www.schneier.com/paper-ssl.html]&lt;br /&gt;
* SSLv3, due to known weaknesses in protocol design [http://www.yaksman.org/~lweith/ssl.pdf]&lt;br /&gt;
* Compression, due to known weaknesses in protocol design [http://www.ekoparty.org/2012/juliano-rizzo.php]&lt;br /&gt;
* Cipher suites with symmetric encryption algorithm smaller than 112 bits&lt;br /&gt;
* X.509 certificates with RSA or DSA key smaller than 1024 bits&lt;br /&gt;
* X.509 certificates signed using MD5 hash, due to known collision attacks on this hash&lt;br /&gt;
* TLS Renegotiation vulnerability [http://www.phonefactor.com/sslgap/ssl-tls-authentication-patches]&lt;br /&gt;
&lt;br /&gt;
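Two of the checklist items above (legacy protocol versions and symmetric keys shorter than 112 bits) can be applied mechanically to scanner output; the sample lines below are hypothetical sslscan-style results:&lt;br /&gt;

```python
# Sketch: flag weak protocols (SSLv2/SSLv3) and symmetric keys shorter
# than 112 bits in sslscan-style "Accepted" lines. Sample data is made up.
SAMPLE = """\
Accepted  SSLv2  40 bits   EXP-RC4-MD5
Accepted  SSLv3  128 bits  RC4-SHA
Accepted  TLSv1  168 bits  DES-CBC3-SHA
Accepted  TLSv1  256 bits  AES256-SHA
"""

def flag_weak(report):
    findings = []
    for line in report.splitlines():
        parts = line.split()
        protocol, bits, suite = parts[1], int(parts[2]), parts[4]
        if protocol in ("SSLv2", "SSLv3"):
            findings.append((suite, "weak protocol " + protocol))
        if not bits >= 112:
            findings.append((suite, "symmetric key only %d bits" % bits))
    return findings

for suite, reason in flag_weak(SAMPLE):
    print(suite, "-", reason)
```
&lt;br /&gt;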
The following standards can be used as reference while assessing SSL servers:&lt;br /&gt;
&lt;br /&gt;
* [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf NIST SP 800-52] recommends that U.S. federal systems use at least TLS 1.0 with cipher suites based on RSA or DSA key agreement with ephemeral Diffie-Hellman, 3DES or AES for confidentiality, and SHA1 for integrity protection. NIST SP 800-52 specifically disallows non-FIPS compliant algorithms like RC4 and MD5. An exception is U.S. federal systems making connections to outside servers, where these algorithms can be used in SSL client mode.&lt;br /&gt;
* [https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml PCI-DSS v1.2] in point 4.1 requires compliant parties to use &amp;quot;strong cryptography&amp;quot; without precisely defining key lengths and algorithms. Common interpretation, partially based on previous versions of the standard, is that at least 128 bit key cipher, no export strength algorithms and no SSLv2 should be used[http://www.digicert.com/news/DigiCert_PCI_White_Paper.pdf].&lt;br /&gt;
* The [https://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide] has been proposed to standardize SSL server assessment and is currently in draft.&lt;br /&gt;
&lt;br /&gt;
The SSL Server Database[https://www.ssllabs.com/ssldb/analyze.html] can be used to assess the configuration of publicly available SSL servers based on the SSL Server Rating Guide[https://www.ssllabs.com/projects/rating-guide/index.html].&lt;br /&gt;
&lt;br /&gt;
==Black Box Test and example==&lt;br /&gt;
&lt;br /&gt;
In order to detect possible support of weak ciphers, the ports associated with SSL/TLS-wrapped services must be identified. These typically include port 443, the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS-wrapped services related to the web application. In general, a service discovery is required to identify such ports.&lt;br /&gt;
&lt;br /&gt;
The nmap scanner, via the “-sV” scan option, is able to identify SSL services. Vulnerability scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 1'''. SSL service recognition via nmap.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# nmap -F -sV localhost&lt;br /&gt;
&lt;br /&gt;
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST&lt;br /&gt;
Interesting ports on localhost.localdomain (127.0.0.1):&lt;br /&gt;
(The 1205 ports scanned but not shown below are in state: closed)&lt;br /&gt;
&lt;br /&gt;
PORT      STATE SERVICE         VERSION&lt;br /&gt;
443/tcp   open  ssl             OpenSSL&lt;br /&gt;
901/tcp   open  http            Samba SWAT administration server&lt;br /&gt;
8080/tcp  open  http            Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)&lt;br /&gt;
8081/tcp  open  http            Apache Tomcat/Coyote JSP engine 1.0&lt;br /&gt;
&lt;br /&gt;
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds&lt;br /&gt;
[root@test]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 2'''. Identifying weak ciphers with Nessus.&lt;br /&gt;
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).&lt;br /&gt;
&lt;br /&gt;
  '''https (443/tcp)'''&lt;br /&gt;
  '''Description'''&lt;br /&gt;
  Here is the SSLv2 server certificate:&lt;br /&gt;
  Certificate:&lt;br /&gt;
  Data:&lt;br /&gt;
  Version: 3 (0x2)&lt;br /&gt;
  Serial Number: 1 (0x1)&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******&lt;br /&gt;
  Validity&lt;br /&gt;
  Not Before: Oct 17 07:12:16 2002 GMT&lt;br /&gt;
  Not After : Oct 16 07:12:16 2004 GMT&lt;br /&gt;
  Subject: C=**, ST=******, L=******, O=******, CN=******&lt;br /&gt;
  Subject Public Key Info:&lt;br /&gt;
  Public Key Algorithm: rsaEncryption&lt;br /&gt;
  RSA Public Key: (1024 bit)&lt;br /&gt;
  Modulus (1024 bit):&lt;br /&gt;
  00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:&lt;br /&gt;
  6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:&lt;br /&gt;
  ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:&lt;br /&gt;
  ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:&lt;br /&gt;
  72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:&lt;br /&gt;
  09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:&lt;br /&gt;
  e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:&lt;br /&gt;
  2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:&lt;br /&gt;
  3d:0a:75:26:cf:dc:47:74:29&lt;br /&gt;
  Exponent: 65537 (0x10001)&lt;br /&gt;
  X509v3 extensions:&lt;br /&gt;
  X509v3 Basic Constraints:&lt;br /&gt;
  CA:FALSE&lt;br /&gt;
  Netscape Comment:&lt;br /&gt;
  OpenSSL Generated Certificate&lt;br /&gt;
  Page 10&lt;br /&gt;
  Network Vulnerability Assessment Report 25.05.2005&lt;br /&gt;
  X509v3 Subject Key Identifier:&lt;br /&gt;
  10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8&lt;br /&gt;
  X509v3 Authority Key Identifier:&lt;br /&gt;
  keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE&lt;br /&gt;
  DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******&lt;br /&gt;
  serial:00&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:&lt;br /&gt;
  8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:&lt;br /&gt;
  2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:&lt;br /&gt;
  b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:&lt;br /&gt;
  40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:&lt;br /&gt;
  f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:&lt;br /&gt;
  0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:&lt;br /&gt;
  82:fc&lt;br /&gt;
  Here is the list of available SSLv2 ciphers:&lt;br /&gt;
  RC4-MD5&lt;br /&gt;
  EXP-RC4-MD5&lt;br /&gt;
  RC2-CBC-MD5&lt;br /&gt;
  EXP-RC2-CBC-MD5&lt;br /&gt;
  DES-CBC-MD5&lt;br /&gt;
  DES-CBC3-MD5&lt;br /&gt;
  RC4-64-MD5&lt;br /&gt;
  &amp;lt;u&amp;gt;The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak &amp;quot;export class&amp;quot; ciphers'''.&lt;br /&gt;
  The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack&amp;lt;/u&amp;gt;&lt;br /&gt;
  &amp;lt;u&amp;gt;Solution: disable those ciphers and upgrade your client software if necessary.&amp;lt;/u&amp;gt;&lt;br /&gt;
  See http://support.microsoft.com/default.aspx?scid=kben-us216482&lt;br /&gt;
  or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite&lt;br /&gt;
  This SSLv2 server also accepts SSLv3 connections.&lt;br /&gt;
  This SSLv2 server also accepts TLSv1 connections.&lt;br /&gt;
  &lt;br /&gt;
  Vulnerable hosts&lt;br /&gt;
  ''(list of vulnerable hosts follows)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 3'''. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443&lt;br /&gt;
CONNECTED(00000003)&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=20:unable to get local issuer certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=27:certificate not trusted&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=21:unable to verify the first certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
---&lt;br /&gt;
Server certificate&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB&lt;br /&gt;
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ&lt;br /&gt;
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE&lt;br /&gt;
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh&lt;br /&gt;
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl&lt;br /&gt;
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow&lt;br /&gt;
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v&lt;br /&gt;
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n&lt;br /&gt;
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk&lt;br /&gt;
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO&lt;br /&gt;
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE&lt;br /&gt;
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG&lt;br /&gt;
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo&lt;br /&gt;
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm&lt;br /&gt;
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/&lt;br /&gt;
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3&lt;br /&gt;
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3&lt;br /&gt;
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc&lt;br /&gt;
X/dVk5WRTw==&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com&lt;br /&gt;
---&lt;br /&gt;
No client certificate CA names sent&lt;br /&gt;
---&lt;br /&gt;
Ciphers common between both SSL endpoints:&lt;br /&gt;
RC4-MD5         EXP-RC4-MD5     RC2-CBC-MD5&lt;br /&gt;
EXP-RC2-CBC-MD5 DES-CBC-MD5     DES-CBC3-MD5&lt;br /&gt;
RC4-64-MD5&lt;br /&gt;
---&lt;br /&gt;
SSL handshake has read 1023 bytes and written 333 bytes&lt;br /&gt;
---&lt;br /&gt;
New, SSLv2, Cipher is DES-CBC3-MD5&lt;br /&gt;
Server public key is 1024 bit&lt;br /&gt;
Compression: NONE&lt;br /&gt;
Expansion: NONE&lt;br /&gt;
SSL-Session:&lt;br /&gt;
    Protocol  : SSLv2&lt;br /&gt;
    Cipher    : DES-CBC3-MD5&lt;br /&gt;
    Session-ID: 709F48E4D567C70A2E49886E4C697CDE&lt;br /&gt;
    Session-ID-ctx:&lt;br /&gt;
    Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3&lt;br /&gt;
    Key-Arg   : E8CB6FEB9ECF3033&lt;br /&gt;
    Start Time: 1156977226&lt;br /&gt;
    Timeout   : 300 (sec)&lt;br /&gt;
    Verify return code: 21 (unable to verify the first certificate)&lt;br /&gt;
---&lt;br /&gt;
closed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 4'''. Testing supported protocols and ciphers using SSLScan.&lt;br /&gt;
&lt;br /&gt;
SSLScan is a free command line tool that scans an HTTPS service to enumerate which protocols (SSLv2, SSLv3 and TLSv1 are supported) and which ciphers the service offers. It runs on both Linux and Windows (OS X not tested) and is released under an open source license.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[user@test]$ ./SSLScan --no-failed mail.google.com&lt;br /&gt;
                   _&lt;br /&gt;
           ___ ___| |___  ___ __ _ _ __&lt;br /&gt;
          / __/ __| / __|/ __/ _` | '_ \&lt;br /&gt;
          \__ \__ \ \__ \ (_| (_| | | | |&lt;br /&gt;
          |___/___/_|___/\___\__,_|_| |_|&lt;br /&gt;
&lt;br /&gt;
                  Version 1.9.0-win&lt;br /&gt;
             http://www.titania.co.uk&lt;br /&gt;
 Copyright 2010 Ian Ventura-Whiting / Michael Boman&lt;br /&gt;
    Compiled against OpenSSL 0.9.8n 24 Mar 2010&lt;br /&gt;
&lt;br /&gt;
Testing SSL server mail.google.com on port 443&lt;br /&gt;
&lt;br /&gt;
  Supported Server Cipher(s):&lt;br /&gt;
    accepted  SSLv3  256 bits  AES256-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  AES128-SHA&lt;br /&gt;
    accepted  SSLv3  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-MD5&lt;br /&gt;
    accepted  TLSv1  256 bits  AES256-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  AES128-SHA&lt;br /&gt;
    accepted  TLSv1  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
  Prefered Server Cipher(s):&lt;br /&gt;
    SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    TLSv1  128 bits  RC4-SHA&lt;br /&gt;
&lt;br /&gt;
  SSL Certificate:&lt;br /&gt;
    Version: 2&lt;br /&gt;
    Serial Number: -4294967295&lt;br /&gt;
    Signature Algorithm: sha1WithRSAEncryption&lt;br /&gt;
    Issuer: /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA&lt;br /&gt;
    Not valid before: Dec 18 00:00:00 2009 GMT&lt;br /&gt;
    Not valid after: Dec 18 23:59:59 2011 GMT&lt;br /&gt;
    Subject: /C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com&lt;br /&gt;
    Public Key Algorithm: rsaEncryption&lt;br /&gt;
    RSA Public Key: (1024 bit)&lt;br /&gt;
      Modulus (1024 bit):&lt;br /&gt;
          00:d9:27:c8:11:f2:7b:e4:45:c9:46:b6:63:75:83:&lt;br /&gt;
          b1:77:7e:17:41:89:80:38:f1:45:27:a0:3c:d9:e8:&lt;br /&gt;
          a8:00:4b:d9:07:d0:ba:de:ed:f4:2c:a6:ac:dc:27:&lt;br /&gt;
          13:ec:0c:c1:a6:99:17:42:e6:8d:27:d2:81:14:b0:&lt;br /&gt;
          4b:82:fa:b2:c5:d0:bb:20:59:62:28:a3:96:b5:61:&lt;br /&gt;
          f6:76:c1:6d:46:d2:fd:ba:c6:0f:3d:d1:c9:77:9a:&lt;br /&gt;
          58:33:f6:06:76:32:ad:51:5f:29:5f:6e:f8:12:8b:&lt;br /&gt;
          ad:e6:c5:08:39:b3:43:43:a9:5b:91:1d:d7:e3:cf:&lt;br /&gt;
          51:df:75:59:8e:8d:80:ab:53&lt;br /&gt;
      Exponent: 65537 (0x10001)&lt;br /&gt;
    X509v3 Extensions:&lt;br /&gt;
      X509v3 Basic Constraints: critical&lt;br /&gt;
        CA:FALSE      X509v3 CRL Distribution Points: &lt;br /&gt;
        URI:http://crl.thawte.com/ThawteSGCCA.crl&lt;br /&gt;
      X509v3 Extended Key Usage: &lt;br /&gt;
        TLS Web Server Authentication, TLS Web Client Authentication, Netscape Server Gated Crypto      Authority Information Access: &lt;br /&gt;
        OCSP - URI:http://ocsp.thawte.com&lt;br /&gt;
        CA Issuers - URI:http://www.thawte.com/repository/Thawte_SGC_CA.crt&lt;br /&gt;
  Verify Certificate:&lt;br /&gt;
    unable to get local issuer certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renegotiation requests supported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 5'''. Testing common SSL flaws with ssl_tests&lt;br /&gt;
&lt;br /&gt;
ssl_tests (http://www.pentesterscripting.com/discovery/ssl_tests) is a bash script that uses sslscan and openssl to check for various flaws: SSL version 2, weak ciphers, certificates signed with md5WithRSAEncryption, and the SSLv3 Force Ciphering Bug/Renegotiation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[user@test]$ ./ssl_test.sh 192.168.1.3 443&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
SSL Tests - v2, weak ciphers, MD5, Renegotiation&lt;br /&gt;
by Aung Khant, http://yehg.net&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
&lt;br /&gt;
[*] testing on 192.168.1.3:443 ..&lt;br /&gt;
&lt;br /&gt;
[*] tesing for sslv2 ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep Accepted  SSLv2&lt;br /&gt;
    Accepted  SSLv2  168 bits  DES-CBC3-MD5&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for weak ciphers ...&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  40 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC4-MD5&lt;br /&gt;
&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  56 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  56 bits   DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   DES-CBC-SHA&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for MD5 certificate ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep MD5WithRSAEncryption&lt;br /&gt;
&lt;br /&gt;
[*] testing for SSLv3 Force Ciphering Bug/Renegotiation ..&lt;br /&gt;
[*] echo R | openssl s_client -connect 192.168.1.3:443 | grep DONE&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
RENEGOTIATING&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
DONE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] done&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==White Box Test and example==&lt;br /&gt;
&lt;br /&gt;
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.&lt;br /&gt;
&lt;br /&gt;
'''Example:''' The following registry path in Microsoft Windows 2003 defines the ciphers available to the server:&lt;br /&gt;
&lt;br /&gt;
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\&lt;br /&gt;
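&lt;br /&gt;
As a sketch, a weak suite can be disabled by setting an Enabled value of 0 under the corresponding cipher subkey; the fragment below is a hypothetical example following the subkey naming convention documented in Microsoft KB245030:&lt;br /&gt;

```
Windows Registry Editor Version 5.00

; Hypothetical example: disable the 40-bit export-grade RC4 cipher.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\RC4 40/128]
"Enabled"=dword:00000000
```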
&lt;br /&gt;
==Testing SSL certificate validity – client and server==&lt;br /&gt;
&lt;br /&gt;
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server)  is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate  match.&lt;br /&gt;
Remember to keep your browser up to date: CA certificates expire too, and each browser release ships a renewed list of them. Updating the browser is also important because more and more web sites require ciphers stronger than 40 or 56 bits.&lt;br /&gt;
&lt;br /&gt;
Let’s examine each check more in detail.&lt;br /&gt;
&lt;br /&gt;
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This happens most often because a web application relies on a certificate signed by a self-established CA. Whether this is to be considered a concern depends on several factors. For example, this may be fine for an Intranet environment (think of corporate web email being provided via https; here, obviously all users recognize the internal CA as a trusted CA). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, one which is recognized by the entire user base (and here we stop with our considerations; we won’t delve deeper into the implications of the trust model being used by digital certificates).&lt;br /&gt;
&lt;br /&gt;
b) Certificates have an associated period of validity, therefore they may expire. Again, we are warned by the browser about this. A public service needs a temporally valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but has expired without being renewed.&lt;br /&gt;
&lt;br /&gt;
c) What if the name on the certificate and the name of the server do not match? This might sound suspicious, but for a number of reasons it is not so rare to see. A system may host a number of name-based virtual hosts, which share the same IP address and are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake checks the server certificate before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, one of two techniques can be used: Server Name Indication (SNI), a TLS extension from [http://www.ietf.org/rfc/rfc3546.txt RFC 3546], or IP-based virtual servers. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.&lt;br /&gt;
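The three checks above can be sketched in code. The following Python fragment is illustrative only: the certificate dictionary mimics the shape returned by the standard ssl module's getpeercert(), and TRUSTED_CAS stands in for the browser's preloaded CA list (both are assumptions, not part of any real validation library):&lt;br /&gt;

```python
import datetime

# Illustrative sketch of checks (a)-(c). TRUSTED_CAS is a stand-in for
# the browser's preloaded CA list; the cert dict shape mimics Python's
# ssl getpeercert(). Both are assumptions for illustration.
TRUSTED_CAS = {"Example Trusted CA"}

def check_certificate(cert, hostname, now=None):
    now = now or datetime.datetime.utcnow()
    issues = []
    issuer = dict(pair[0] for pair in cert["issuer"])
    if issuer.get("commonName") not in TRUSTED_CAS:            # check (a)
        issues.append("issuing CA is not trusted")
    not_after = datetime.datetime.strptime(cert["notAfter"],
                                           "%b %d %H:%M:%S %Y GMT")
    if now > not_after:                                        # check (b)
        issues.append("certificate has expired")
    subject = dict(pair[0] for pair in cert["subject"])
    if subject.get("commonName") != hostname:                  # check (c)
        issues.append("name on certificate does not match site")
    return issues
```

A certificate issued to one name but served from a site with another name would fail check (c), which is exactly the condition browsers warn about.&lt;br /&gt;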
&lt;br /&gt;
&lt;br /&gt;
===Black Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs, and certificates which do not match namewise with the site to which they should refer. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate – including the issuer, period of validity, encryption characteristics, etc.&lt;br /&gt;
&lt;br /&gt;
If the application requires a client certificate, you probably have installed one to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of the installed certificates.&lt;br /&gt;
&lt;br /&gt;
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is the usual https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the –sV command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.&lt;br /&gt;
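As a rough illustration of how such service recognition works, a TLS/SSL 3.x server answering a Client Hello replies with a handshake record whose first byte is the content type 0x16, whereas a plaintext service typically answers with ASCII. The following Python sketch classifies bytes already read from a probed port; it is a heuristic for illustration only, not a substitute for nmap or Nessus:&lt;br /&gt;

```python
# Heuristic sketch: classify the first bytes returned by a probed port.
# A TLS/SSL 3.x reply starts with content type 0x16 (handshake) followed
# by a 0x03 record-version major byte; anything else is treated as plaintext.
def classify_first_bytes(data):
    if not data:
        return "no response"
    if data[0:1] == b"\x16" and data[1:2] == b"\x03":
        return "tls"
    return "plaintext"

print(classify_first_bytes(b"\x16\x03\x01\x00\x2a"))  # tls
print(classify_first_bytes(b"HTTP/1.1 200 OK\r\n"))   # plaintext
```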
&lt;br /&gt;
'''Examples'''&lt;br /&gt;
&lt;br /&gt;
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles on https sites whose certificates are inaccurate with respect to naming.&lt;br /&gt;
&lt;br /&gt;
The following screenshots refer to a regional site of a high-profile IT company.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Microsoft Internet Explorer.&amp;lt;/u&amp;gt; We are visiting an ''.it'' site and the certificate was issued to a ''.com ''site! Internet Explorer warns that the name on the certificate does not match the name of the site.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing IE Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Mozilla Firefox.&amp;lt;/u&amp;gt; The message issued by Firefox is different – Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to because it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
===White Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application at both server and client levels. The usage of certificates is primarily at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546) - http://www.ietf.org/rfc/rfc2246.txt&lt;br /&gt;
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1 - http://www.ietf.org/rfc/rfc2817.txt&lt;br /&gt;
* [3] RFC3546. Transport Layer Security (TLS) Extensions - http://www.ietf.org/rfc/rfc3546.txt&lt;br /&gt;
* [4] &amp;lt;u&amp;gt;www.verisign.net&amp;lt;/u&amp;gt; features various material on the topic&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
&lt;br /&gt;
* https://www.ssllabs.com/ssldb/&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They usually report other information as well, such as the CA which issued the certificate. Remember that there is no unified notion of a “trusted CA”; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize the CA.&lt;br /&gt;
&lt;br /&gt;
* The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin “SSL certificate expiry”, plugin id 15901). This plugin will check certificates installed on the server.&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).&lt;br /&gt;
&lt;br /&gt;
* You may also rely on specialized tools such as SSL Digger (http://www.mcafee.com/us/downloads/free-tools/ssldigger.aspx), or – for the command line oriented – experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (may be already available on *nix boxes, otherwise see www.openssl.org).&lt;br /&gt;
&lt;br /&gt;
* To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the Nessus vulnerability scanner can identify SSL-based services on arbitrary ports and run vulnerability checks on them regardless of whether they are configured on standard or non-standard ports.&lt;br /&gt;
&lt;br /&gt;
* In case you need to talk to an SSL service but your favourite tool doesn’t support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicate with the SSL service you need to reach.&lt;br /&gt;
&lt;br /&gt;
* ssl_tests, http://www.pentesterscripting.com/discovery/ssl_tests&lt;br /&gt;
&lt;br /&gt;
* Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way the browser will perform the check might be influenced by configuration settings that may not be evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.&lt;br /&gt;
&lt;br /&gt;
* [http://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet OWASP Transport Layer Protection Cheat Sheet]&lt;br /&gt;
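As a concrete illustration of the stunnel approach mentioned in the tool list above, a minimal client-mode configuration might look like the following (the service name, hostnames and ports are illustrative assumptions):&lt;br /&gt;

```ini
; Illustrative stunnel client-mode fragment: plaintext tools connect to
; 127.0.0.1:8080 and stunnel wraps the traffic in SSL towards the target.
client = yes

[https-tunnel]
accept  = 127.0.0.1:8080
connect = target.example.com:443
```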
&lt;br /&gt;
[[Category:Cryptographic Vulnerability]]&lt;br /&gt;
[[Category:SSL]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147380</id>
		<title>Testing for SSL-TLS (OWASP-CM-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147380"/>
				<updated>2013-03-10T05:23:50Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Title case for heading&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Due to historic export restrictions on high grade cryptography, legacy and new web servers are often able to handle, and are frequently configured to accept, weak cryptographic options.&lt;br /&gt;
&lt;br /&gt;
Even if high grade ciphers are normally used and installed, some server misconfigurations can be exploited to force the use of a weaker cipher and gain access to the supposedly secure communication channel.&lt;br /&gt;
&lt;br /&gt;
==Testing SSL / TLS Cipher Specifications and Requirements ==&lt;br /&gt;
&lt;br /&gt;
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.&lt;br /&gt;
&lt;br /&gt;
Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of, at most, 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.&lt;br /&gt;
&lt;br /&gt;
Cipher determination is performed as follows: in the initial phase of an SSL connection setup, the client sends the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (the most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which need not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org) which can be used to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption algorithm (DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (SHA, MD5) used for integrity checking. Upon receiving a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honor. In this way you may control, for example, whether or not conversations with clients will support 40-bit encryption only.&lt;br /&gt;
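The client side of this negotiation can also be controlled from code. The following Python sketch (assuming Python 3.6+ with the standard ssl module) restricts the suites a client would offer in its Client Hello, mirroring the server-side configuration directives mentioned above; the cipher-list string uses OpenSSL syntax and is an illustrative choice:&lt;br /&gt;

```python
import ssl

# Sketch: build a client context that offers only strong suites in its
# Client Hello, excluding anonymous and MD5-based ciphers (OpenSSL
# cipher-list syntax; the exact string is an illustrative choice).
ctx = ssl.create_default_context()
ctx.set_ciphers("HIGH:!aNULL:!MD5")
offered = [c["name"] for c in ctx.get_ciphers()]
print(len(offered), "suites would be offered, e.g.", offered[0])
```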
&lt;br /&gt;
==SSL testing criteria==&lt;br /&gt;
The large number of available cipher suites and the rapid progress in cryptanalysis make judging an SSL server a non-trivial task. The following criteria are widely recognised as a minimum checklist:&lt;br /&gt;
&lt;br /&gt;
* SSLv2, due to known weaknesses in protocol design&lt;br /&gt;
* SSLv3, due to known weaknesses in protocol design&lt;br /&gt;
* Compression, due to known weaknesses in protocol design&lt;br /&gt;
* Cipher suites with symmetric encryption algorithm smaller than 112 bits&lt;br /&gt;
* X.509 certificates with RSA or DSA key smaller than 1024 bits&lt;br /&gt;
* X.509 certificates signed using MD5 hash, due to known collision attacks on this hash&lt;br /&gt;
* TLS Renegotiation vulnerability[http://www.phonefactor.com/sslgap/ssl-tls-authentication-patches]&lt;br /&gt;
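These criteria can be expressed as a simple screening function. The following Python sketch takes a hand-built summary of an assessed server and flags the items above; the field names are assumptions for illustration, not the output format of any real scanner:&lt;br /&gt;

```python
# Hypothetical screening sketch: the input dict is a hand-built summary
# of an assessed server; its field names are assumptions for
# illustration, not the output format of any real scanner.
def flag_findings(server):
    findings = []
    if "SSLv2" in server["protocols"]:
        findings.append("SSLv2 enabled")
    if "SSLv3" in server["protocols"]:
        findings.append("SSLv3 enabled")
    if server.get("compression"):
        findings.append("TLS compression enabled")
    if server["min_symmetric_bits"] < 112:
        findings.append("symmetric cipher below 112 bits")
    if server["public_key_bits"] < 1024:
        findings.append("RSA/DSA key below 1024 bits")
    if server["signature_algorithm"].lower().startswith("md5"):
        findings.append("certificate signed with MD5")
    if server.get("insecure_renegotiation"):
        findings.append("vulnerable to TLS renegotiation")
    return findings
```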
&lt;br /&gt;
The following standards can be used as reference while assessing SSL servers:&lt;br /&gt;
&lt;br /&gt;
* [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf NIST SP 800-52] recommends that U.S. federal systems use at least TLS 1.0 with ciphersuites based on RSA or DSA key agreement with ephemeral Diffie-Hellman, 3DES or AES for confidentiality, and SHA1 for integrity protection. NIST SP 800-52 specifically disallows non-FIPS compliant algorithms like RC4 and MD5. An exception is U.S. federal systems making connections to outside servers, where these algorithms can be used in SSL client mode.&lt;br /&gt;
* [https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml PCI-DSS v1.2] in point 4.1 requires compliant parties to use &amp;quot;strong cryptography&amp;quot; without precisely defining key lengths and algorithms. The common interpretation, partially based on previous versions of the standard, is that a cipher with at least a 128-bit key should be used, with no export-strength algorithms and no SSLv2[http://www.digicert.com/news/DigiCert_PCI_White_Paper.pdf].&lt;br /&gt;
* [https://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide] has been proposed to standardize SSL server assessment and is currently in draft version.&lt;br /&gt;
&lt;br /&gt;
The SSL Server Database can be used to assess the configuration of publicly available SSL servers[https://www.ssllabs.com/ssldb/analyze.html] based on the SSL Server Rating Guide[https://www.ssllabs.com/projects/rating-guide/index.html].&lt;br /&gt;
&lt;br /&gt;
==Black Box Test and example==&lt;br /&gt;
&lt;br /&gt;
In order to detect possible support of weak ciphers, the ports associated with SSL/TLS wrapped services must be identified. These typically include port 443, which is the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS wrapped services related to the web application. In general, a service discovery is required to identify such ports.&lt;br /&gt;
&lt;br /&gt;
The nmap scanner, via the “–sV” scan option, is able to identify SSL services. Vulnerability Scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 1'''. SSL service recognition via nmap.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# nmap -F -sV localhost&lt;br /&gt;
&lt;br /&gt;
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST&lt;br /&gt;
Interesting ports on localhost.localdomain (127.0.0.1):&lt;br /&gt;
(The 1205 ports scanned but not shown below are in state: closed)&lt;br /&gt;
&lt;br /&gt;
PORT      STATE SERVICE         VERSION&lt;br /&gt;
443/tcp   open  ssl             OpenSSL&lt;br /&gt;
901/tcp   open  http            Samba SWAT administration server&lt;br /&gt;
8080/tcp  open  http            Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)&lt;br /&gt;
8081/tcp  open  http            Apache Tomcat/Coyote JSP engine 1.0&lt;br /&gt;
&lt;br /&gt;
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds&lt;br /&gt;
[root@test]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 2'''. Identifying weak ciphers with Nessus.&lt;br /&gt;
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).&lt;br /&gt;
&lt;br /&gt;
  '''https (443/tcp)'''&lt;br /&gt;
  '''Description'''&lt;br /&gt;
  Here is the SSLv2 server certificate:&lt;br /&gt;
  Certificate:&lt;br /&gt;
  Data:&lt;br /&gt;
  Version: 3 (0x2)&lt;br /&gt;
  Serial Number: 1 (0x1)&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******&lt;br /&gt;
  Validity&lt;br /&gt;
  Not Before: Oct 17 07:12:16 2002 GMT&lt;br /&gt;
  Not After : Oct 16 07:12:16 2004 GMT&lt;br /&gt;
  Subject: C=**, ST=******, L=******, O=******, CN=******&lt;br /&gt;
  Subject Public Key Info:&lt;br /&gt;
  Public Key Algorithm: rsaEncryption&lt;br /&gt;
  RSA Public Key: (1024 bit)&lt;br /&gt;
  Modulus (1024 bit):&lt;br /&gt;
  00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:&lt;br /&gt;
  6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:&lt;br /&gt;
  ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:&lt;br /&gt;
  ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:&lt;br /&gt;
  72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:&lt;br /&gt;
  09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:&lt;br /&gt;
  e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:&lt;br /&gt;
  2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:&lt;br /&gt;
  3d:0a:75:26:cf:dc:47:74:29&lt;br /&gt;
  Exponent: 65537 (0x10001)&lt;br /&gt;
  X509v3 extensions:&lt;br /&gt;
  X509v3 Basic Constraints:&lt;br /&gt;
  CA:FALSE&lt;br /&gt;
  Netscape Comment:&lt;br /&gt;
  OpenSSL Generated Certificate&lt;br /&gt;
  Page 10&lt;br /&gt;
  Network Vulnerability Assessment Report 25.05.2005&lt;br /&gt;
  X509v3 Subject Key Identifier:&lt;br /&gt;
  10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8&lt;br /&gt;
  X509v3 Authority Key Identifier:&lt;br /&gt;
  keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE&lt;br /&gt;
  DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******&lt;br /&gt;
  serial:00&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:&lt;br /&gt;
  8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:&lt;br /&gt;
  2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:&lt;br /&gt;
  b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:&lt;br /&gt;
  40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:&lt;br /&gt;
  f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:&lt;br /&gt;
  0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:&lt;br /&gt;
  82:fc&lt;br /&gt;
  Here is the list of available SSLv2 ciphers:&lt;br /&gt;
  RC4-MD5&lt;br /&gt;
  EXP-RC4-MD5&lt;br /&gt;
  RC2-CBC-MD5&lt;br /&gt;
  EXP-RC2-CBC-MD5&lt;br /&gt;
  DES-CBC-MD5&lt;br /&gt;
  DES-CBC3-MD5&lt;br /&gt;
  RC4-64-MD5&lt;br /&gt;
  &amp;lt;u&amp;gt;The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak &amp;quot;export class&amp;quot; ciphers'''.&lt;br /&gt;
  The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack&amp;lt;/u&amp;gt;&lt;br /&gt;
  &amp;lt;u&amp;gt;Solution: disable those ciphers and upgrade your client software if necessary.&amp;lt;/u&amp;gt;&lt;br /&gt;
  See http://support.microsoft.com/default.aspx?scid=kben-us216482&lt;br /&gt;
  or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite&lt;br /&gt;
  This SSLv2 server also accepts SSLv3 connections.&lt;br /&gt;
  This SSLv2 server also accepts TLSv1 connections.&lt;br /&gt;
  &lt;br /&gt;
  Vulnerable hosts&lt;br /&gt;
  ''(list of vulnerable hosts follows)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 3'''. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443&lt;br /&gt;
CONNECTED(00000003)&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=20:unable to get local issuer certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=27:certificate not trusted&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=21:unable to verify the first certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
---&lt;br /&gt;
Server certificate&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB&lt;br /&gt;
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ&lt;br /&gt;
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE&lt;br /&gt;
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh&lt;br /&gt;
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl&lt;br /&gt;
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow&lt;br /&gt;
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v&lt;br /&gt;
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n&lt;br /&gt;
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk&lt;br /&gt;
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO&lt;br /&gt;
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE&lt;br /&gt;
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG&lt;br /&gt;
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo&lt;br /&gt;
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm&lt;br /&gt;
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/&lt;br /&gt;
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3&lt;br /&gt;
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3&lt;br /&gt;
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc&lt;br /&gt;
X/dVk5WRTw==&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com&lt;br /&gt;
---&lt;br /&gt;
No client certificate CA names sent&lt;br /&gt;
---&lt;br /&gt;
Ciphers common between both SSL endpoints:&lt;br /&gt;
RC4-MD5         EXP-RC4-MD5     RC2-CBC-MD5&lt;br /&gt;
EXP-RC2-CBC-MD5 DES-CBC-MD5     DES-CBC3-MD5&lt;br /&gt;
RC4-64-MD5&lt;br /&gt;
---&lt;br /&gt;
SSL handshake has read 1023 bytes and written 333 bytes&lt;br /&gt;
---&lt;br /&gt;
New, SSLv2, Cipher is DES-CBC3-MD5&lt;br /&gt;
Server public key is 1024 bit&lt;br /&gt;
Compression: NONE&lt;br /&gt;
Expansion: NONE&lt;br /&gt;
SSL-Session:&lt;br /&gt;
    Protocol  : SSLv2&lt;br /&gt;
    Cipher    : DES-CBC3-MD5&lt;br /&gt;
    Session-ID: 709F48E4D567C70A2E49886E4C697CDE&lt;br /&gt;
    Session-ID-ctx:&lt;br /&gt;
    Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3&lt;br /&gt;
    Key-Arg   : E8CB6FEB9ECF3033&lt;br /&gt;
    Start Time: 1156977226&lt;br /&gt;
    Timeout   : 300 (sec)&lt;br /&gt;
    Verify return code: 21 (unable to verify the first certificate)&lt;br /&gt;
---&lt;br /&gt;
closed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 4'''. Testing supported protocols and ciphers using SSLScan.&lt;br /&gt;
&lt;br /&gt;
SSLScan is a free command line tool that scans an HTTPS service to enumerate which protocols (SSLv2, SSLv3 and TLSv1) and which ciphers the service supports. It runs on both Linux and Windows (OS X not tested) and is released under an open source license.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[user@test]$ ./SSLScan --no-failed mail.google.com&lt;br /&gt;
                   _&lt;br /&gt;
           ___ ___| |___  ___ __ _ _ __&lt;br /&gt;
          / __/ __| / __|/ __/ _` | '_ \&lt;br /&gt;
          \__ \__ \ \__ \ (_| (_| | | | |&lt;br /&gt;
          |___/___/_|___/\___\__,_|_| |_|&lt;br /&gt;
&lt;br /&gt;
                  Version 1.9.0-win&lt;br /&gt;
             http://www.titania.co.uk&lt;br /&gt;
 Copyright 2010 Ian Ventura-Whiting / Michael Boman&lt;br /&gt;
    Compiled against OpenSSL 0.9.8n 24 Mar 2010&lt;br /&gt;
&lt;br /&gt;
Testing SSL server mail.google.com on port 443&lt;br /&gt;
&lt;br /&gt;
  Supported Server Cipher(s):&lt;br /&gt;
    accepted  SSLv3  256 bits  AES256-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  AES128-SHA&lt;br /&gt;
    accepted  SSLv3  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-MD5&lt;br /&gt;
    accepted  TLSv1  256 bits  AES256-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  AES128-SHA&lt;br /&gt;
    accepted  TLSv1  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
  Prefered Server Cipher(s):&lt;br /&gt;
    SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    TLSv1  128 bits  RC4-SHA&lt;br /&gt;
&lt;br /&gt;
  SSL Certificate:&lt;br /&gt;
    Version: 2&lt;br /&gt;
    Serial Number: -4294967295&lt;br /&gt;
    Signature Algorithm: sha1WithRSAEncryption&lt;br /&gt;
    Issuer: /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA&lt;br /&gt;
    Not valid before: Dec 18 00:00:00 2009 GMT&lt;br /&gt;
    Not valid after: Dec 18 23:59:59 2011 GMT&lt;br /&gt;
    Subject: /C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com&lt;br /&gt;
    Public Key Algorithm: rsaEncryption&lt;br /&gt;
    RSA Public Key: (1024 bit)&lt;br /&gt;
      Modulus (1024 bit):&lt;br /&gt;
          00:d9:27:c8:11:f2:7b:e4:45:c9:46:b6:63:75:83:&lt;br /&gt;
          b1:77:7e:17:41:89:80:38:f1:45:27:a0:3c:d9:e8:&lt;br /&gt;
          a8:00:4b:d9:07:d0:ba:de:ed:f4:2c:a6:ac:dc:27:&lt;br /&gt;
          13:ec:0c:c1:a6:99:17:42:e6:8d:27:d2:81:14:b0:&lt;br /&gt;
          4b:82:fa:b2:c5:d0:bb:20:59:62:28:a3:96:b5:61:&lt;br /&gt;
          f6:76:c1:6d:46:d2:fd:ba:c6:0f:3d:d1:c9:77:9a:&lt;br /&gt;
          58:33:f6:06:76:32:ad:51:5f:29:5f:6e:f8:12:8b:&lt;br /&gt;
          ad:e6:c5:08:39:b3:43:43:a9:5b:91:1d:d7:e3:cf:&lt;br /&gt;
          51:df:75:59:8e:8d:80:ab:53&lt;br /&gt;
      Exponent: 65537 (0x10001)&lt;br /&gt;
    X509v3 Extensions:&lt;br /&gt;
      X509v3 Basic Constraints: critical&lt;br /&gt;
        CA:FALSE      X509v3 CRL Distribution Points: &lt;br /&gt;
        URI:http://crl.thawte.com/ThawteSGCCA.crl&lt;br /&gt;
      X509v3 Extended Key Usage: &lt;br /&gt;
        TLS Web Server Authentication, TLS Web Client Authentication, Netscape Server Gated Crypto      Authority Information Access: &lt;br /&gt;
        OCSP - URI:http://ocsp.thawte.com&lt;br /&gt;
        CA Issuers - URI:http://www.thawte.com/repository/Thawte_SGC_CA.crt&lt;br /&gt;
  Verify Certificate:&lt;br /&gt;
    unable to get local issuer certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renegotiation requests supported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 5'''. Testing common SSL flaws with ssl_tests&lt;br /&gt;
&lt;br /&gt;
ssl_tests (http://www.pentesterscripting.com/discovery/ssl_tests) is a bash script that uses sslscan and openssl to check for various flaws: SSL version 2 support, weak ciphers, md5WithRSAEncryption certificate signatures, and the SSLv3 Force Ciphering Bug/Renegotiation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[user@test]$ ./ssl_test.sh 192.168.1.3 443&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
SSL Tests - v2, weak ciphers, MD5, Renegotiation&lt;br /&gt;
by Aung Khant, http://yehg.net&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
&lt;br /&gt;
[*] testing on 192.168.1.3:443 ..&lt;br /&gt;
&lt;br /&gt;
[*] tesing for sslv2 ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep Accepted  SSLv2&lt;br /&gt;
    Accepted  SSLv2  168 bits  DES-CBC3-MD5&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for weak ciphers ...&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  40 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC4-MD5&lt;br /&gt;
&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  56 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  56 bits   DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   DES-CBC-SHA&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for MD5 certificate ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep MD5WithRSAEncryption&lt;br /&gt;
&lt;br /&gt;
[*] testing for SSLv3 Force Ciphering Bug/Renegotiation ..&lt;br /&gt;
[*] echo R | openssl s_client -connect 192.168.1.3:443 | grep DONE&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
RENEGOTIATING&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
DONE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] done&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==White Box Test and example==&lt;br /&gt;
&lt;br /&gt;
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.&lt;br /&gt;
&lt;br /&gt;
'''Example:''' The following registry path in Microsoft Windows 2003 defines the ciphers available to the server:&lt;br /&gt;
&lt;br /&gt;
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\&lt;br /&gt;
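Individual ciphers are toggled under this path through an Enabled value in the corresponding subkey. A minimal sketch as a .reg fragment, assuming the subkey names documented for Windows Server 2003 (Microsoft KB 245030); the cipher disabled here is illustrative, and the key name should be verified on the target system before applying:&lt;br /&gt;

```
Windows Registry Editor Version 5.00

; Disable the 56-bit DES cipher (subkey name per the Windows Server 2003
; Schannel documentation; verify against your system before applying)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56]
"Enabled"=dword:00000000
```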
&lt;br /&gt;
==Testing SSL certificate validity – client and server==&lt;br /&gt;
&lt;br /&gt;
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server)  is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate  match.&lt;br /&gt;
Remember to keep your browser up to date: CA certificates expire too, and each browser release ships a renewed set of trusted CA certificates. Updating the browser also matters because a growing number of web sites require ciphers stronger than 40 or 56 bits.&lt;br /&gt;
&lt;br /&gt;
Let’s examine each check in more detail.&lt;br /&gt;
&lt;br /&gt;
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This happens most often because a web application relies on a certificate signed by a self-established CA. Whether this is to be considered a concern depends on several factors. For example, this may be fine for an Intranet environment (think of corporate web email being provided via https; here, obviously all users recognize the internal CA as a trusted CA). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, one which is recognized by all the user base (and here we stop with our considerations; we won’t delve deeper into the implications of the trust model being used by digital certificates).&lt;br /&gt;
&lt;br /&gt;
b) Certificates have an associated period of validity, therefore they may expire. Again, we are warned by the browser about this. A public service needs a temporally valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but has expired without being renewed.&lt;br /&gt;
&lt;br /&gt;
c) What if the name on the certificate and the name of the server do not match? If this happens, it might sound suspicious. For a number of reasons, this is not so rare to see. A system may host a number of name-based virtual hosts, which share the same IP address and are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake checks the server certificate before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, one of two techniques should be used: Server Name Indication (SNI), a TLS extension defined in [http://www.ietf.org/rfc/rfc3546.txt RFC 3546], or IP-based virtual servers. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.&lt;br /&gt;
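The three checks above map directly onto the settings of a typical TLS client library. A minimal sketch using Python's standard ssl module (the connection shown in the trailing comment is illustrative):&lt;br /&gt;

```python
import ssl

# Build a client context enforcing the three certificate checks:
# (a) the chain must lead to a CA in the trusted list,
# (b) every certificate in the chain must be within its validity period
#     (verified as part of chain validation),
# (c) the certificate name must match the host we intended to reach.
context = ssl.create_default_context()   # loads the platform's trusted CA list (a)
context.verify_mode = ssl.CERT_REQUIRED  # reject untrusted or expired chains (a, b)
context.check_hostname = True            # enforce the name match (c)

# A connection would then look like:
#   with socket.create_connection((host, 443)) as sock:
#       with context.wrap_socket(sock, server_hostname=host) as tls:
#           cert = tls.getpeercert()  # raises ssl.SSLError if any check fails
```

Note that create_default_context() already enables all three checks; they are restated explicitly here to show which setting corresponds to which criterion.&lt;br /&gt;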
&lt;br /&gt;
&lt;br /&gt;
===Black Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs, and certificates which do not match namewise with the site to which they should refer. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate – including the issuer, period of validity, encryption characteristics, etc.&lt;br /&gt;
&lt;br /&gt;
If the application requires a client certificate, you probably have installed one to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of the installed certificates.&lt;br /&gt;
&lt;br /&gt;
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is usually the https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the -sV command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.&lt;br /&gt;
&lt;br /&gt;
'''Examples'''&lt;br /&gt;
&lt;br /&gt;
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles on https sites whose certificates are inaccurate with respect to naming.&lt;br /&gt;
&lt;br /&gt;
The following screenshots refer to a regional site of a high-profile IT company.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Microsoft Internet Explorer.&amp;lt;/u&amp;gt; We are visiting an ''.it'' site and the certificate was issued to a ''.com ''site! Internet Explorer warns that the name on the certificate does not match the name of the site.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing IE Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Mozilla Firefox.&amp;lt;/u&amp;gt; The message issued by Firefox is different – Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to because it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
===White Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application at both server and client levels. The usage of certificates is primarily at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546) - http://www.ietf.org/rfc/rfc2246.txt&lt;br /&gt;
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1 - http://www.ietf.org/rfc/rfc2817.txt&lt;br /&gt;
* [3] RFC3546. Transport Layer Security (TLS) Extensions - http://www.ietf.org/rfc/rfc3546.txt&lt;br /&gt;
* [4] &amp;lt;u&amp;gt;www.verisign.net&amp;lt;/u&amp;gt; features various material on the topic&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
&lt;br /&gt;
* https://www.ssllabs.com/ssldb/&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They usually report other information as well, such as the CA which issued the certificate. Remember that there is no unified notion of a “trusted CA”; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize the CA.&lt;br /&gt;
&lt;br /&gt;
* The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin “SSL certificate expiry”, plugin id 15901). This plugin will check certificates installed on the server.&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).&lt;br /&gt;
&lt;br /&gt;
* You may also rely on specialized tools such as SSL Digger (http://www.mcafee.com/us/downloads/free-tools/ssldigger.aspx), or – for the command line oriented – experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (may be already available on *nix boxes, otherwise see www.openssl.org).&lt;br /&gt;
&lt;br /&gt;
* To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the Nessus vulnerability scanner has the capability of identifying SSL-based services on arbitrary ports and of running vulnerability checks on them regardless of whether they are configured on standard or non-standard ports.&lt;br /&gt;
&lt;br /&gt;
* In case you need to talk to an SSL service but your favourite tool doesn’t support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicate with the SSL service you need to reach.&lt;br /&gt;
&lt;br /&gt;
* ssl_tests, http://www.pentesterscripting.com/discovery/ssl_tests&lt;br /&gt;
&lt;br /&gt;
* Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way the browser will perform the check might be influenced by configuration settings that may not be evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.&lt;br /&gt;
&lt;br /&gt;
* [http://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet OWASP Transport Layer Protection Cheat Sheet]&lt;br /&gt;
&lt;br /&gt;
[[Category:Cryptographic Vulnerability]]&lt;br /&gt;
[[Category:SSL]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147379</id>
		<title>Testing for SSL-TLS (OWASP-CM-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147379"/>
				<updated>2013-03-10T05:23:09Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Testing criteria: removed paragraph on why its OK to use MD5 (no longer relevant/true)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Due to historic export restrictions on high grade cryptography, web servers both legacy and new are often able, and sometimes configured, to handle weak cryptographic options.&lt;br /&gt;
&lt;br /&gt;
Even when high grade ciphers are installed and normally used, a server misconfiguration can be exploited to force the use of a weaker cipher and gain access to the supposedly secure communication channel.&lt;br /&gt;
&lt;br /&gt;
==Testing SSL / TLS cipher specifications and requirements for site==&lt;br /&gt;
&lt;br /&gt;
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.&lt;br /&gt;
&lt;br /&gt;
Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of, at most, 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.&lt;br /&gt;
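One quick way to see that export-grade suites have disappeared from modern client stacks is to enumerate everything the local TLS library still offers. A sketch in Python's ssl module, assuming a build linked against OpenSSL 1.1.0 or later:&lt;br /&gt;

```python
import ssl

# Enumerate every cipher suite the local OpenSSL build still offers.
# On a modern build, the export-grade "EXP-" suites discussed above
# have been removed entirely and will not appear in the list.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ALL")  # the widest selection the build allows
names = [c["name"] for c in ctx.get_ciphers()]
has_export = any(n.startswith("EXP-") for n in names)
```

A server, of course, may still run an old stack; that is precisely what the black box tests below probe from the outside.&lt;br /&gt;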
&lt;br /&gt;
Cipher determination is performed as follows: in the initial phase of an SSL connection setup, the client sends the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (the most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which need not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org), which can be used to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption protocol (DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (SHA, MD5) used for integrity checking. Upon receiving a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honor. In this way you may control, for example, whether or not conversations with clients will support 40-bit encryption only.&lt;br /&gt;
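In code, the server-side restriction is typically expressed as an OpenSSL-style cipher string (the same syntax used by Apache's SSLCipherSuite directive). A sketch using Python's ssl module; the policy string here is illustrative, not a recommendation:&lt;br /&gt;

```python
import ssl

# A server-side context restricting which suites the server will honour
# in a Client Hello: HIGH keeps only strong suites, and the "!" entries
# explicitly exclude anonymous key exchange and RC4/MD5-based suites.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.set_ciphers("HIGH:!aNULL:!RC4:!MD5")

# The suites the server is now willing to negotiate:
honoured = [c["name"] for c in server_ctx.get_ciphers()]
```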
&lt;br /&gt;
==SSL testing criteria==&lt;br /&gt;
The large number of available cipher suites and the rapid progress in cryptanalysis make judging an SSL server a non-trivial task. The following criteria are widely recognised as a minimum checklist of conditions to flag:&lt;br /&gt;
&lt;br /&gt;
* SSLv2, due to known weaknesses in protocol design&lt;br /&gt;
* SSLv3, due to known weaknesses in protocol design&lt;br /&gt;
* Compression, due to known weaknesses in protocol design&lt;br /&gt;
* Cipher suites with symmetric encryption algorithm smaller than 112 bits&lt;br /&gt;
* X.509 certificates with RSA or DSA key smaller than 1024 bits&lt;br /&gt;
* X.509 certificates signed using MD5 hash, due to known collision attacks on this hash&lt;br /&gt;
* TLS Renegotiation vulnerability[http://www.phonefactor.com/sslgap/ssl-tls-authentication-patches]&lt;br /&gt;
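When reviewing raw sslscan output against this checklist, the symmetric-strength criterion is easy to automate. A small sketch that classifies "Accepted" lines by the 112-bit rule; the sample lines are taken from the sslscan output shown later in this section:&lt;br /&gt;

```python
def weak_accepted(sslscan_lines):
    """Return cipher names from 'Accepted' lines whose key is under 112 bits."""
    weak = []
    for line in sslscan_lines:
        fields = line.split()
        if len(fields) >= 4 and fields[0] == "Accepted":
            bits = int(fields[2])        # e.g. 40 from "40 bits"
            if not bits >= 112:          # the checklist's symmetric-strength rule
                weak.append(fields[-1])  # the cipher suite name
    return weak

# Sample lines from the tool output reproduced below:
sample = [
    "Accepted  SSLv2  40 bits   EXP-RC4-MD5",
    "Accepted  SSLv3  56 bits   DES-CBC-SHA",
    "Accepted  TLSv1  128 bits  AES128-SHA",
]
```

On this sample, weak_accepted(sample) returns ['EXP-RC4-MD5', 'DES-CBC-SHA'], flagging exactly the export-grade and single-DES suites.&lt;br /&gt;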
&lt;br /&gt;
The following standards can be used as reference while assessing SSL servers:&lt;br /&gt;
&lt;br /&gt;
* [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf NIST SP 800-52] recommends that U.S. federal systems use at least TLS 1.0 with cipher suites based on RSA or DSA key agreement with ephemeral Diffie-Hellman, 3DES or AES for confidentiality, and SHA1 for integrity protection. NIST SP 800-52 specifically disallows non-FIPS compliant algorithms like RC4 and MD5. An exception is U.S. federal systems making connections to outside servers, where these algorithms can be used in SSL client mode.&lt;br /&gt;
* [https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml PCI-DSS v1.2] in point 4.1 requires compliant parties to use &amp;quot;strong cryptography&amp;quot; without precisely defining key lengths and algorithms. The common interpretation, partially based on previous versions of the standard, is that ciphers with keys of at least 128 bits should be used, with no export strength algorithms and no SSLv2[http://www.digicert.com/news/DigiCert_PCI_White_Paper.pdf].&lt;br /&gt;
* The [https://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide] has been proposed to standardize SSL server assessment and is currently in draft form.&lt;br /&gt;
&lt;br /&gt;
The SSL Server Database can be used to assess the configuration of publicly available SSL servers[https://www.ssllabs.com/ssldb/analyze.html] based on the SSL Server Rating Guide[https://www.ssllabs.com/projects/rating-guide/index.html].&lt;br /&gt;
&lt;br /&gt;
==Black Box Test and example==&lt;br /&gt;
&lt;br /&gt;
In order to detect possible support of weak ciphers, the ports associated with SSL/TLS wrapped services must be identified. These typically include port 443, the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS wrapped services related to the web application. In general, service discovery is required to identify such ports.&lt;br /&gt;
&lt;br /&gt;
The nmap scanner, via the “-sV” scan option, is able to identify SSL services. Vulnerability scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 1'''. SSL service recognition via nmap.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# nmap -F -sV localhost&lt;br /&gt;
&lt;br /&gt;
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST&lt;br /&gt;
Interesting ports on localhost.localdomain (127.0.0.1):&lt;br /&gt;
(The 1205 ports scanned but not shown below are in state: closed)&lt;br /&gt;
&lt;br /&gt;
PORT      STATE SERVICE         VERSION&lt;br /&gt;
443/tcp   open  ssl             OpenSSL&lt;br /&gt;
901/tcp   open  http            Samba SWAT administration server&lt;br /&gt;
8080/tcp  open  http            Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)&lt;br /&gt;
8081/tcp  open  http            Apache Tomcat/Coyote JSP engine 1.0&lt;br /&gt;
&lt;br /&gt;
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds&lt;br /&gt;
[root@test]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 2'''. Identifying weak ciphers with Nessus.&lt;br /&gt;
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).&lt;br /&gt;
&lt;br /&gt;
  '''https (443/tcp)'''&lt;br /&gt;
  '''Description'''&lt;br /&gt;
  Here is the SSLv2 server certificate:&lt;br /&gt;
  Certificate:&lt;br /&gt;
  Data:&lt;br /&gt;
  Version: 3 (0x2)&lt;br /&gt;
  Serial Number: 1 (0x1)&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******&lt;br /&gt;
  Validity&lt;br /&gt;
  Not Before: Oct 17 07:12:16 2002 GMT&lt;br /&gt;
  Not After : Oct 16 07:12:16 2004 GMT&lt;br /&gt;
  Subject: C=**, ST=******, L=******, O=******, CN=******&lt;br /&gt;
  Subject Public Key Info:&lt;br /&gt;
  Public Key Algorithm: rsaEncryption&lt;br /&gt;
  RSA Public Key: (1024 bit)&lt;br /&gt;
  Modulus (1024 bit):&lt;br /&gt;
  00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:&lt;br /&gt;
  6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:&lt;br /&gt;
  ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:&lt;br /&gt;
  ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:&lt;br /&gt;
  72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:&lt;br /&gt;
  09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:&lt;br /&gt;
  e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:&lt;br /&gt;
  2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:&lt;br /&gt;
  3d:0a:75:26:cf:dc:47:74:29&lt;br /&gt;
  Exponent: 65537 (0x10001)&lt;br /&gt;
  X509v3 extensions:&lt;br /&gt;
  X509v3 Basic Constraints:&lt;br /&gt;
  CA:FALSE&lt;br /&gt;
  Netscape Comment:&lt;br /&gt;
  OpenSSL Generated Certificate&lt;br /&gt;
  X509v3 Subject Key Identifier:&lt;br /&gt;
  10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8&lt;br /&gt;
  X509v3 Authority Key Identifier:&lt;br /&gt;
  keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE&lt;br /&gt;
  DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******&lt;br /&gt;
  serial:00&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:&lt;br /&gt;
  8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:&lt;br /&gt;
  2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:&lt;br /&gt;
  b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:&lt;br /&gt;
  40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:&lt;br /&gt;
  f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:&lt;br /&gt;
  0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:&lt;br /&gt;
  82:fc&lt;br /&gt;
  Here is the list of available SSLv2 ciphers:&lt;br /&gt;
  RC4-MD5&lt;br /&gt;
  EXP-RC4-MD5&lt;br /&gt;
  RC2-CBC-MD5&lt;br /&gt;
  EXP-RC2-CBC-MD5&lt;br /&gt;
  DES-CBC-MD5&lt;br /&gt;
  DES-CBC3-MD5&lt;br /&gt;
  RC4-64-MD5&lt;br /&gt;
  &amp;lt;u&amp;gt;The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak &amp;quot;export class&amp;quot; ciphers'''.&lt;br /&gt;
  The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack&amp;lt;/u&amp;gt;&lt;br /&gt;
  &amp;lt;u&amp;gt;Solution: disable those ciphers and upgrade your client software if necessary.&amp;lt;/u&amp;gt;&lt;br /&gt;
  See http://support.microsoft.com/default.aspx?scid=kben-us216482&lt;br /&gt;
  or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite&lt;br /&gt;
  This SSLv2 server also accepts SSLv3 connections.&lt;br /&gt;
  This SSLv2 server also accepts TLSv1 connections.&lt;br /&gt;
  &lt;br /&gt;
  Vulnerable hosts&lt;br /&gt;
  ''(list of vulnerable hosts follows)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 3'''. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443&lt;br /&gt;
CONNECTED(00000003)&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=20:unable to get local issuer certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=27:certificate not trusted&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=21:unable to verify the first certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
---&lt;br /&gt;
Server certificate&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB&lt;br /&gt;
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ&lt;br /&gt;
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE&lt;br /&gt;
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh&lt;br /&gt;
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl&lt;br /&gt;
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow&lt;br /&gt;
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v&lt;br /&gt;
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n&lt;br /&gt;
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk&lt;br /&gt;
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO&lt;br /&gt;
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE&lt;br /&gt;
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG&lt;br /&gt;
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo&lt;br /&gt;
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm&lt;br /&gt;
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/&lt;br /&gt;
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3&lt;br /&gt;
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3&lt;br /&gt;
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc&lt;br /&gt;
X/dVk5WRTw==&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com&lt;br /&gt;
---&lt;br /&gt;
No client certificate CA names sent&lt;br /&gt;
---&lt;br /&gt;
Ciphers common between both SSL endpoints:&lt;br /&gt;
RC4-MD5         EXP-RC4-MD5     RC2-CBC-MD5&lt;br /&gt;
EXP-RC2-CBC-MD5 DES-CBC-MD5     DES-CBC3-MD5&lt;br /&gt;
RC4-64-MD5&lt;br /&gt;
---&lt;br /&gt;
SSL handshake has read 1023 bytes and written 333 bytes&lt;br /&gt;
---&lt;br /&gt;
New, SSLv2, Cipher is DES-CBC3-MD5&lt;br /&gt;
Server public key is 1024 bit&lt;br /&gt;
Compression: NONE&lt;br /&gt;
Expansion: NONE&lt;br /&gt;
SSL-Session:&lt;br /&gt;
    Protocol  : SSLv2&lt;br /&gt;
    Cipher    : DES-CBC3-MD5&lt;br /&gt;
    Session-ID: 709F48E4D567C70A2E49886E4C697CDE&lt;br /&gt;
    Session-ID-ctx:&lt;br /&gt;
    Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3&lt;br /&gt;
    Key-Arg   : E8CB6FEB9ECF3033&lt;br /&gt;
    Start Time: 1156977226&lt;br /&gt;
    Timeout   : 300 (sec)&lt;br /&gt;
    Verify return code: 21 (unable to verify the first certificate)&lt;br /&gt;
---&lt;br /&gt;
closed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 4'''. Testing supported protocols and ciphers using SSLScan.&lt;br /&gt;
&lt;br /&gt;
SSLScan is a free command line tool that scans an HTTPS service to enumerate which protocols (SSLv2, SSLv3 and TLSv1) and which ciphers the service supports. It runs on both Linux and Windows (OS X not tested) and is released under an open source license.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[user@test]$ ./SSLScan --no-failed mail.google.com&lt;br /&gt;
                   _&lt;br /&gt;
           ___ ___| |___  ___ __ _ _ __&lt;br /&gt;
          / __/ __| / __|/ __/ _` | '_ \&lt;br /&gt;
          \__ \__ \ \__ \ (_| (_| | | | |&lt;br /&gt;
          |___/___/_|___/\___\__,_|_| |_|&lt;br /&gt;
&lt;br /&gt;
                  Version 1.9.0-win&lt;br /&gt;
             http://www.titania.co.uk&lt;br /&gt;
 Copyright 2010 Ian Ventura-Whiting / Michael Boman&lt;br /&gt;
    Compiled against OpenSSL 0.9.8n 24 Mar 2010&lt;br /&gt;
&lt;br /&gt;
Testing SSL server mail.google.com on port 443&lt;br /&gt;
&lt;br /&gt;
  Supported Server Cipher(s):&lt;br /&gt;
    accepted  SSLv3  256 bits  AES256-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  AES128-SHA&lt;br /&gt;
    accepted  SSLv3  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-MD5&lt;br /&gt;
    accepted  TLSv1  256 bits  AES256-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  AES128-SHA&lt;br /&gt;
    accepted  TLSv1  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
  Prefered Server Cipher(s):&lt;br /&gt;
    SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    TLSv1  128 bits  RC4-SHA&lt;br /&gt;
&lt;br /&gt;
  SSL Certificate:&lt;br /&gt;
    Version: 2&lt;br /&gt;
    Serial Number: -4294967295&lt;br /&gt;
    Signature Algorithm: sha1WithRSAEncryption&lt;br /&gt;
    Issuer: /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA&lt;br /&gt;
    Not valid before: Dec 18 00:00:00 2009 GMT&lt;br /&gt;
    Not valid after: Dec 18 23:59:59 2011 GMT&lt;br /&gt;
    Subject: /C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com&lt;br /&gt;
    Public Key Algorithm: rsaEncryption&lt;br /&gt;
    RSA Public Key: (1024 bit)&lt;br /&gt;
      Modulus (1024 bit):&lt;br /&gt;
          00:d9:27:c8:11:f2:7b:e4:45:c9:46:b6:63:75:83:&lt;br /&gt;
          b1:77:7e:17:41:89:80:38:f1:45:27:a0:3c:d9:e8:&lt;br /&gt;
          a8:00:4b:d9:07:d0:ba:de:ed:f4:2c:a6:ac:dc:27:&lt;br /&gt;
          13:ec:0c:c1:a6:99:17:42:e6:8d:27:d2:81:14:b0:&lt;br /&gt;
          4b:82:fa:b2:c5:d0:bb:20:59:62:28:a3:96:b5:61:&lt;br /&gt;
          f6:76:c1:6d:46:d2:fd:ba:c6:0f:3d:d1:c9:77:9a:&lt;br /&gt;
          58:33:f6:06:76:32:ad:51:5f:29:5f:6e:f8:12:8b:&lt;br /&gt;
          ad:e6:c5:08:39:b3:43:43:a9:5b:91:1d:d7:e3:cf:&lt;br /&gt;
          51:df:75:59:8e:8d:80:ab:53&lt;br /&gt;
      Exponent: 65537 (0x10001)&lt;br /&gt;
    X509v3 Extensions:&lt;br /&gt;
      X509v3 Basic Constraints: critical&lt;br /&gt;
        CA:FALSE&lt;br /&gt;
      X509v3 CRL Distribution Points: &lt;br /&gt;
        URI:http://crl.thawte.com/ThawteSGCCA.crl&lt;br /&gt;
      X509v3 Extended Key Usage: &lt;br /&gt;
        TLS Web Server Authentication, TLS Web Client Authentication, Netscape Server Gated Crypto&lt;br /&gt;
      Authority Information Access: &lt;br /&gt;
        OCSP - URI:http://ocsp.thawte.com&lt;br /&gt;
        CA Issuers - URI:http://www.thawte.com/repository/Thawte_SGC_CA.crt&lt;br /&gt;
  Verify Certificate:&lt;br /&gt;
    unable to get local issuer certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renegotiation requests supported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 5'''. Testing common SSL flaws with ssl_tests&lt;br /&gt;
&lt;br /&gt;
ssl_tests (http://www.pentesterscripting.com/discovery/ssl_tests) is a bash script that uses sslscan and openssl to check for various flaws: SSL version 2 support, weak ciphers, MD5-signed certificates (md5WithRSAEncryption), and the SSLv3 Force Ciphering Bug/Renegotiation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[user@test]$ ./ssl_test.sh 192.168.1.3 443&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
SSL Tests - v2, weak ciphers, MD5, Renegotiation&lt;br /&gt;
by Aung Khant, http://yehg.net&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
&lt;br /&gt;
[*] testing on 192.168.1.3:443 ..&lt;br /&gt;
&lt;br /&gt;
[*] tesing for sslv2 ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep Accepted  SSLv2&lt;br /&gt;
    Accepted  SSLv2  168 bits  DES-CBC3-MD5&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for weak ciphers ...&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  40 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC4-MD5&lt;br /&gt;
&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  56 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  56 bits   DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   DES-CBC-SHA&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for MD5 certificate ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep MD5WithRSAEncryption&lt;br /&gt;
&lt;br /&gt;
[*] testing for SSLv3 Force Ciphering Bug/Renegotiation ..&lt;br /&gt;
[*] echo R | openssl s_client -connect 192.168.1.3:443 | grep DONE&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
RENEGOTIATING&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
DONE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] done&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==White Box Test and example==&lt;br /&gt;
&lt;br /&gt;
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.&lt;br /&gt;
&lt;br /&gt;
'''Example:''' The following registry path in Microsoft Windows 2003 defines the ciphers available to the server:&lt;br /&gt;
&lt;br /&gt;
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\&lt;br /&gt;
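For instance, a weak cipher can be disabled by setting its Enabled value to zero under that path. The following .reg fragment is a sketch based on Microsoft's SCHANNEL documentation (subkey names such as &amp;quot;DES 56/56&amp;quot; should be verified against your Windows version) and would disable 56-bit DES:&lt;br /&gt;

```
Windows Registry Editor Version 5.00

; Disable the 56-bit DES cipher in SCHANNEL (server-wide setting).
; A value of 0x00000000 disables the cipher; 0xFFFFFFFF enables it.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56]
"Enabled"=dword:00000000
```

A reboot (or at least an IIS restart) is typically required before SCHANNEL picks up the change.&lt;br /&gt;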
&lt;br /&gt;
==Testing SSL certificate validity – client and server==&lt;br /&gt;
&lt;br /&gt;
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server)  is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate  match.&lt;br /&gt;
Remember to keep your browser up to date: CA certificates expire as well, and each browser release ships with a renewed list of CA certificates. Updating the browser is also important because many web sites now require ciphers stronger than 40 or 56 bits.&lt;br /&gt;
&lt;br /&gt;
Let’s examine each check more in detail.&lt;br /&gt;
&lt;br /&gt;
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This happens most often because a web application relies on a certificate signed by a self-established CA. Whether this is to be considered a concern depends on several factors. For example, this may be fine for an Intranet environment (think of corporate web email being provided via https; here, obviously all users recognize the internal CA as a trusted CA). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, one which is recognized by all the user base (and here we stop with our considerations; we won’t delve deeper into the implications of the trust model being used by digital certificates).&lt;br /&gt;
&lt;br /&gt;
b) Certificates have an associated period of validity, therefore they may expire. Again, we are warned by the browser about this. A public service needs a temporally valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but has expired without being renewed.&lt;br /&gt;
&lt;br /&gt;
c) What if the name on the certificate and the name of the server do not match? If this happens, it might sound suspicious. For a number of reasons, this is not so rare to see. A system may host a number of name-based virtual hosts, which share the same IP address and are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake checks the server certificate before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, one of two techniques should be used: either Server Name Indication (SNI), a TLS extension specified in [http://www.ietf.org/rfc/rfc3546.txt RFC 3546], or IP-based virtual servers. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.&lt;br /&gt;
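Checks b) and c) can be sketched in code. The following Python fragment is a simplified illustration, not a full RFC 2818 implementation: the sample certificate dict is hypothetical (its fields mirror the shape returned by Python's ssl module after a handshake), and real name matching must also consider subjectAltName entries and wildcard rules.&lt;br /&gt;

```python
import ssl
import time

# A hypothetical certificate, in the dict form that
# ssl.SSLSocket.getpeercert() returns after a successful handshake.
cert = {
    "subject": ((("commonName", "www.example.com"),),),
    "notBefore": "Oct 17 07:12:16 2002 GMT",
    "notAfter": "Oct 16 07:12:16 2004 GMT",
}

def common_name(cert):
    """Extract the subject CN (real code should prefer subjectAltName)."""
    for rdn in cert["subject"]:
        for key, value in rdn:
            if key == "commonName":
                return value
    return None

def temporally_valid(cert, now):
    """Check b): 'now' (seconds since the epoch) falls inside the validity period."""
    start = ssl.cert_time_to_seconds(cert["notBefore"])
    end = ssl.cert_time_to_seconds(cert["notAfter"])
    return start <= now <= end

def name_matches(cert, hostname):
    """Check c): naive exact comparison; wildcards need RFC 2818 rules."""
    return common_name(cert) == hostname

print(temporally_valid(cert, time.time()))    # False: expired in 2004
print(name_matches(cert, "www.example.com"))  # True
print(name_matches(cert, "www.example.it"))   # False: name mismatch
```

This mirrors what the browser does silently: an expired validity period triggers warning b), a hostname mismatch triggers warning c).&lt;br /&gt;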
&lt;br /&gt;
&lt;br /&gt;
===Black Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs, and certificates which do not match namewise with the site to which they should refer. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate – including the issuer, period of validity, encryption characteristics, etc.&lt;br /&gt;
&lt;br /&gt;
If the application requires a client certificate, you probably have installed one to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of the installed certificates.&lt;br /&gt;
&lt;br /&gt;
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is the usual https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the -sV command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.&lt;br /&gt;
&lt;br /&gt;
'''Examples'''&lt;br /&gt;
&lt;br /&gt;
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles on https sites whose certificates are inaccurate with respect to naming.&lt;br /&gt;
&lt;br /&gt;
The following screenshots refer to a regional site of a high-profile IT company.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Microsoft Internet Explorer.&amp;lt;/u&amp;gt; We are visiting an ''.it'' site and the certificate was issued to a ''.com ''site! Internet Explorer warns that the name on the certificate does not match the name of the site.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing IE Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Mozilla Firefox.&amp;lt;/u&amp;gt; The message issued by Firefox is different – Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to because it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
===White Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application at both server and client levels. The usage of certificates is primarily at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546) - http://www.ietf.org/rfc/rfc2246.txt&lt;br /&gt;
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1 - http://www.ietf.org/rfc/rfc2817.txt&lt;br /&gt;
* [3] RFC3546. Transport Layer Security (TLS) Extensions - http://www.ietf.org/rfc/rfc3546.txt&lt;br /&gt;
* [4] &amp;lt;u&amp;gt;www.verisign.net&amp;lt;/u&amp;gt; features various material on the topic&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
&lt;br /&gt;
* https://www.ssllabs.com/ssldb/&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They usually report other information as well, such as the CA which issued the certificate. Remember that there is no unified notion of a “trusted CA”; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize the CA.&lt;br /&gt;
&lt;br /&gt;
* The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin “SSL certificate expiry”, plugin id 15901). This plugin will check certificates ''installed'' on the server.&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).&lt;br /&gt;
&lt;br /&gt;
* You may also rely on specialized tools such as SSL Digger (http://www.mcafee.com/us/downloads/free-tools/ssldigger.aspx), or – for the command line oriented – experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (may be already available on *nix boxes, otherwise see www.openssl.org).&lt;br /&gt;
&lt;br /&gt;
* To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the Nessus vulnerability scanner is able to identify SSL-based services on arbitrary ports and run vulnerability checks on them regardless of whether they are configured on standard or non-standard ports.&lt;br /&gt;
&lt;br /&gt;
* In case you need to talk to an SSL service but your favourite tool doesn’t support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicate with the SSL service you need to reach.&lt;br /&gt;
&lt;br /&gt;
* ssl_tests, http://www.pentesterscripting.com/discovery/ssl_tests&lt;br /&gt;
&lt;br /&gt;
* Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way the browser will perform the check might be influenced by configuration settings that may not be evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.&lt;br /&gt;
&lt;br /&gt;
* [http://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet OWASP Transport Layer Protection Cheat Sheet]&lt;br /&gt;
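As an illustration of the stunnel approach mentioned above, a minimal client-mode configuration might look like the following (hostnames and ports are placeholders):&lt;br /&gt;

```
; stunnel client mode: plain-text tools connect to 127.0.0.1:8080,
; and stunnel wraps the traffic in SSL/TLS towards the remote service.
client = yes

[https]
accept  = 127.0.0.1:8080
connect = remote.example.com:443
```

A non-SSL-aware tool pointed at 127.0.0.1:8080 then effectively talks to the remote https service.&lt;br /&gt;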
&lt;br /&gt;
[[Category:Cryptographic Vulnerability]]&lt;br /&gt;
[[Category:SSL]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147378</id>
		<title>Testing for SSL-TLS (OWASP-CM-001)</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Testing_for_SSL-TLS_(OWASP-CM-001)&amp;diff=147378"/>
				<updated>2013-03-10T05:19:28Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Testing criteria: added SSLv3 (should also add TLS 1.0); added compression; removed Export (EXP) level cipher suites; lowered security level to 112-bits (e.g., 3-key TDES) (matches 1024 moduli)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Template:OWASP Testing Guide v3}}&lt;br /&gt;
&lt;br /&gt;
== Brief Summary ==&lt;br /&gt;
&lt;br /&gt;
Due to historic export restrictions on high grade cryptography, both legacy and new web servers are often able, and sometimes configured, to handle weak cryptographic options.&lt;br /&gt;
&lt;br /&gt;
Even if high grade ciphers are normally used and installed, some server misconfigurations can be exploited to force the use of a weaker cipher to gain access to the supposedly secure communication channel.&lt;br /&gt;
&lt;br /&gt;
==Testing SSL / TLS cipher specifications and requirements for site==&lt;br /&gt;
&lt;br /&gt;
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.&lt;br /&gt;
&lt;br /&gt;
Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of, at most, 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.&lt;br /&gt;
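To put the 40-bit figure in perspective, a short back-of-the-envelope calculation shows why such keys are brute-forceable (the search rate below is an arbitrary illustrative assumption, not a measured figure):&lt;br /&gt;

```python
# Keyspace sizes for 40-bit export-grade vs. 128-bit keys.
weak_keys = 2 ** 40      # roughly 1.1 * 10**12 candidate keys
strong_keys = 2 ** 128

# Assume an attacker testing 10 million keys per second (illustrative).
rate = 10_000_000

print(weak_keys / rate / 3600, "hours to exhaust a 40-bit keyspace")
print(strong_keys // weak_keys, "times more work for 128-bit keys")
```

Even at this modest rate the 40-bit keyspace is exhausted in days, while each additional key bit doubles the attacker's work.&lt;br /&gt;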
&lt;br /&gt;
Technically, cipher determination is performed as follows. In the initial phase of an SSL connection setup, the client sends the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (the most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which need not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org) which can be used to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption algorithm (DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (SHA, MD5) used for integrity checking. Upon receiving a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honor. In this way you may control, for example, whether or not conversations with clients will support only 40-bit encryption.&lt;br /&gt;
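Restricting the honored cipher suites is exactly what OpenSSL's cipher-list mechanism exposes through such configuration directives. As a sketch, Python's ssl module (which wraps OpenSSL, so the available suites depend on the underlying OpenSSL build) lets you constrain and inspect the suites a peer will offer:&lt;br /&gt;

```python
import ssl

# A context for a TLS client; servers restrict suites the same way.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)

# Limit the TLS 1.2-and-below suites to ephemeral ECDHE key exchange
# with AES-GCM; anything outside this set is refused in the handshake.
ctx.set_ciphers("ECDHE+AESGCM")

# List the suite names this context will actually negotiate.
enabled = [c["name"] for c in ctx.get_ciphers()]
print(enabled)
```

None of the weak or export-grade suites discussed in this article (EXP-*, RC4, single DES) survive such a restriction.&lt;br /&gt;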
&lt;br /&gt;
==SSL testing criteria==&lt;br /&gt;
The large number of available cipher suites and the rapid progress in cryptanalysis make judging an SSL server a non-trivial task. The following criteria are widely recognised as a minimum checklist:&lt;br /&gt;
&lt;br /&gt;
* SSLv2, due to known weaknesses in protocol design&lt;br /&gt;
* SSLv3, due to known weaknesses in protocol design&lt;br /&gt;
* Compression, due to known weaknesses in protocol design&lt;br /&gt;
* Cipher suites with symmetric encryption algorithm smaller than 112 bits&lt;br /&gt;
* X.509 certificates with RSA or DSA key smaller than 1024 bits&lt;br /&gt;
* X.509 certificates signed using MD5 hash, due to known collision attacks on this hash&lt;br /&gt;
* TLS Renegotiation vulnerability[http://www.phonefactor.com/sslgap/ssl-tls-authentication-patches]&lt;br /&gt;
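The protocol and key-length items of this checklist can be applied mechanically to scanner output. The following Python sketch parses lines in the format produced by sslscan (as captured in the examples in this article; the sample data here is illustrative) and flags the suites that violate those criteria:&lt;br /&gt;

```python
# Sample lines in sslscan's "Accepted <proto> <bits> bits <suite>" format.
scan_output = """\
Accepted  SSLv2  168 bits  DES-CBC3-MD5
Accepted  SSLv3  40 bits   EXP-RC4-MD5
Accepted  TLSv1  56 bits   DES-CBC-SHA
Accepted  TLSv1  128 bits  RC4-SHA
Accepted  TLSv1  256 bits  AES256-SHA
"""

def weak_suites(output, min_bits=112):
    """Flag suites per the checklist: SSLv2/SSLv3, or symmetric key < 112 bits."""
    findings = []
    for line in output.splitlines():
        status, proto, bits, _unit, suite = line.split()
        if status != "Accepted":
            continue
        if proto in ("SSLv2", "SSLv3") or int(bits) < min_bits:
            findings.append((proto, int(bits), suite))
    return findings

for proto, bits, suite in weak_suites(scan_output):
    print(f"WEAK: {proto} {bits} bits {suite}")
```

Note that TLSv1 RC4-SHA at 128 bits is not flagged by these two criteria, consistent with the observation below that 128-bit RC4 suites were then still considered sufficient.&lt;br /&gt;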
&lt;br /&gt;
While there are known collision attacks on MD5 and known cryptanalytic attacks on RC4, their specific usage in SSL and TLS doesn't allow these attacks to be practical, and SSLv3 or TLSv1 cipher suites using RC4 and MD5 with a key length of 128 bits are still considered sufficient[http://www.rsa.com/rsalabs/node.asp?id=2009].&lt;br /&gt;
&lt;br /&gt;
The following standards can be used as reference while assessing SSL servers:&lt;br /&gt;
&lt;br /&gt;
* [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf NIST SP 800-52] recommends that U.S. federal systems use at least TLS 1.0 with cipher suites based on RSA or DSA key agreement with ephemeral Diffie-Hellman, 3DES or AES for confidentiality and SHA1 for integrity protection. NIST SP 800-52 specifically disallows non-FIPS compliant algorithms like RC4 and MD5. An exception is U.S. federal systems making connections to outside servers, where these algorithms can be used in SSL client mode.&lt;br /&gt;
* [https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml PCI-DSS v1.2] in point 4.1 requires compliant parties to use &amp;quot;strong cryptography&amp;quot; without precisely defining key lengths and algorithms. The common interpretation, partially based on previous versions of the standard, is that ciphers with keys of at least 128 bits should be used, and that export strength algorithms and SSLv2 should not[http://www.digicert.com/news/DigiCert_PCI_White_Paper.pdf].&lt;br /&gt;
* [https://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide] has been proposed to standardize SSL server assessment and is currently in draft.&lt;br /&gt;
&lt;br /&gt;
The SSL Server Database can be used to assess the configuration of publicly available SSL servers[https://www.ssllabs.com/ssldb/analyze.html] based on the SSL Server Rating Guide[https://www.ssllabs.com/projects/rating-guide/index.html].&lt;br /&gt;
&lt;br /&gt;
==Black Box Test and example==&lt;br /&gt;
&lt;br /&gt;
In order to detect possible support of weak ciphers, the ports associated to SSL/TLS wrapped services must be identified. These typically include port 443, which is the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS wrapped services related to the web application. In general, a service discovery is required to identify such ports.&lt;br /&gt;
&lt;br /&gt;
The nmap scanner, via the “-sV” scan option, is able to identify SSL services. Vulnerability Scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 1'''. SSL service recognition via nmap.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# nmap -F -sV localhost&lt;br /&gt;
&lt;br /&gt;
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST&lt;br /&gt;
Interesting ports on localhost.localdomain (127.0.0.1):&lt;br /&gt;
(The 1205 ports scanned but not shown below are in state: closed)&lt;br /&gt;
&lt;br /&gt;
PORT      STATE SERVICE         VERSION&lt;br /&gt;
443/tcp   open  ssl             OpenSSL&lt;br /&gt;
901/tcp   open  http            Samba SWAT administration server&lt;br /&gt;
8080/tcp  open  http            Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)&lt;br /&gt;
8081/tcp  open  http            Apache Tomcat/Coyote JSP engine 1.0&lt;br /&gt;
&lt;br /&gt;
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds&lt;br /&gt;
[root@test]# &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 2'''. Identifying weak ciphers with Nessus.&lt;br /&gt;
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).&lt;br /&gt;
&lt;br /&gt;
  '''https (443/tcp)'''&lt;br /&gt;
  '''Description'''&lt;br /&gt;
  Here is the SSLv2 server certificate:&lt;br /&gt;
  Certificate:&lt;br /&gt;
  Data:&lt;br /&gt;
  Version: 3 (0x2)&lt;br /&gt;
  Serial Number: 1 (0x1)&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******&lt;br /&gt;
  Validity&lt;br /&gt;
  Not Before: Oct 17 07:12:16 2002 GMT&lt;br /&gt;
  Not After : Oct 16 07:12:16 2004 GMT&lt;br /&gt;
  Subject: C=**, ST=******, L=******, O=******, CN=******&lt;br /&gt;
  Subject Public Key Info:&lt;br /&gt;
  Public Key Algorithm: rsaEncryption&lt;br /&gt;
  RSA Public Key: (1024 bit)&lt;br /&gt;
  Modulus (1024 bit):&lt;br /&gt;
  00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:&lt;br /&gt;
  6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:&lt;br /&gt;
  ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:&lt;br /&gt;
  ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:&lt;br /&gt;
  72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:&lt;br /&gt;
  09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:&lt;br /&gt;
  e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:&lt;br /&gt;
  2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:&lt;br /&gt;
  3d:0a:75:26:cf:dc:47:74:29&lt;br /&gt;
  Exponent: 65537 (0x10001)&lt;br /&gt;
  X509v3 extensions:&lt;br /&gt;
  X509v3 Basic Constraints:&lt;br /&gt;
  CA:FALSE&lt;br /&gt;
  Netscape Comment:&lt;br /&gt;
  OpenSSL Generated Certificate&lt;br /&gt;
  Page 10&lt;br /&gt;
  Network Vulnerability Assessment Report 25.05.2005&lt;br /&gt;
  X509v3 Subject Key Identifier:&lt;br /&gt;
  10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8&lt;br /&gt;
  X509v3 Authority Key Identifier:&lt;br /&gt;
  keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE&lt;br /&gt;
  DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******&lt;br /&gt;
  serial:00&lt;br /&gt;
  Signature Algorithm: md5WithRSAEncryption&lt;br /&gt;
  7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:&lt;br /&gt;
  8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:&lt;br /&gt;
  2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:&lt;br /&gt;
  b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:&lt;br /&gt;
  40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:&lt;br /&gt;
  f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:&lt;br /&gt;
  0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:&lt;br /&gt;
  82:fc&lt;br /&gt;
  Here is the list of available SSLv2 ciphers:&lt;br /&gt;
  RC4-MD5&lt;br /&gt;
  EXP-RC4-MD5&lt;br /&gt;
  RC2-CBC-MD5&lt;br /&gt;
  EXP-RC2-CBC-MD5&lt;br /&gt;
  DES-CBC-MD5&lt;br /&gt;
  DES-CBC3-MD5&lt;br /&gt;
  RC4-64-MD5&lt;br /&gt;
  &amp;lt;u&amp;gt;The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak &amp;quot;export class&amp;quot; ciphers'''.&lt;br /&gt;
  The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack&amp;lt;/u&amp;gt;&lt;br /&gt;
  &amp;lt;u&amp;gt;Solution: disable those ciphers and upgrade your client software if necessary.&amp;lt;/u&amp;gt;&lt;br /&gt;
  See http://support.microsoft.com/default.aspx?scid=kben-us216482&lt;br /&gt;
  or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite&lt;br /&gt;
  This SSLv2 server also accepts SSLv3 connections.&lt;br /&gt;
  This SSLv2 server also accepts TLSv1 connections.&lt;br /&gt;
  &lt;br /&gt;
  Vulnerable hosts&lt;br /&gt;
  ''(list of vulnerable hosts follows)''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 3'''. Manually audit weak SSL cipher levels with OpenSSL. The following will attempt to connect to Google.com with SSLv2.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443&lt;br /&gt;
CONNECTED(00000003)&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=20:unable to get local issuer certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=27:certificate not trusted&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
verify error:num=21:unable to verify the first certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
---&lt;br /&gt;
Server certificate&lt;br /&gt;
-----BEGIN CERTIFICATE-----&lt;br /&gt;
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB&lt;br /&gt;
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ&lt;br /&gt;
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE&lt;br /&gt;
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh&lt;br /&gt;
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl&lt;br /&gt;
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow&lt;br /&gt;
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v&lt;br /&gt;
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n&lt;br /&gt;
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk&lt;br /&gt;
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO&lt;br /&gt;
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE&lt;br /&gt;
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG&lt;br /&gt;
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo&lt;br /&gt;
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm&lt;br /&gt;
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/&lt;br /&gt;
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3&lt;br /&gt;
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3&lt;br /&gt;
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc&lt;br /&gt;
X/dVk5WRTw==&lt;br /&gt;
-----END CERTIFICATE-----&lt;br /&gt;
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com&lt;br /&gt;
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com&lt;br /&gt;
---&lt;br /&gt;
No client certificate CA names sent&lt;br /&gt;
---&lt;br /&gt;
Ciphers common between both SSL endpoints:&lt;br /&gt;
RC4-MD5         EXP-RC4-MD5     RC2-CBC-MD5&lt;br /&gt;
EXP-RC2-CBC-MD5 DES-CBC-MD5     DES-CBC3-MD5&lt;br /&gt;
RC4-64-MD5&lt;br /&gt;
---&lt;br /&gt;
SSL handshake has read 1023 bytes and written 333 bytes&lt;br /&gt;
---&lt;br /&gt;
New, SSLv2, Cipher is DES-CBC3-MD5&lt;br /&gt;
Server public key is 1024 bit&lt;br /&gt;
Compression: NONE&lt;br /&gt;
Expansion: NONE&lt;br /&gt;
SSL-Session:&lt;br /&gt;
    Protocol  : SSLv2&lt;br /&gt;
    Cipher    : DES-CBC3-MD5&lt;br /&gt;
    Session-ID: 709F48E4D567C70A2E49886E4C697CDE&lt;br /&gt;
    Session-ID-ctx:&lt;br /&gt;
    Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3&lt;br /&gt;
    Key-Arg   : E8CB6FEB9ECF3033&lt;br /&gt;
    Start Time: 1156977226&lt;br /&gt;
    Timeout   : 300 (sec)&lt;br /&gt;
    Verify return code: 21 (unable to verify the first certificate)&lt;br /&gt;
---&lt;br /&gt;
closed&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 4'''. Testing supported protocols and ciphers using SSLScan.&lt;br /&gt;
&lt;br /&gt;
SSLScan is a free command line tool that scans an HTTPS service to enumerate which protocols (SSLv2, SSLv3 and TLSv1) and which ciphers the service supports. It runs on both Linux and Windows (OS X not tested) and is released under an open source license.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[user@test]$ ./SSLScan --no-failed mail.google.com&lt;br /&gt;
                   _&lt;br /&gt;
           ___ ___| |___  ___ __ _ _ __&lt;br /&gt;
          / __/ __| / __|/ __/ _` | '_ \&lt;br /&gt;
          \__ \__ \ \__ \ (_| (_| | | | |&lt;br /&gt;
          |___/___/_|___/\___\__,_|_| |_|&lt;br /&gt;
&lt;br /&gt;
                  Version 1.9.0-win&lt;br /&gt;
             http://www.titania.co.uk&lt;br /&gt;
 Copyright 2010 Ian Ventura-Whiting / Michael Boman&lt;br /&gt;
    Compiled against OpenSSL 0.9.8n 24 Mar 2010&lt;br /&gt;
&lt;br /&gt;
Testing SSL server mail.google.com on port 443&lt;br /&gt;
&lt;br /&gt;
  Supported Server Cipher(s):&lt;br /&gt;
    accepted  SSLv3  256 bits  AES256-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  AES128-SHA&lt;br /&gt;
    accepted  SSLv3  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    accepted  SSLv3  128 bits  RC4-MD5&lt;br /&gt;
    accepted  TLSv1  256 bits  AES256-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  AES128-SHA&lt;br /&gt;
    accepted  TLSv1  168 bits  DES-CBC3-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-SHA&lt;br /&gt;
    accepted  TLSv1  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
  Prefered Server Cipher(s):&lt;br /&gt;
    SSLv3  128 bits  RC4-SHA&lt;br /&gt;
    TLSv1  128 bits  RC4-SHA&lt;br /&gt;
&lt;br /&gt;
  SSL Certificate:&lt;br /&gt;
    Version: 2&lt;br /&gt;
    Serial Number: -4294967295&lt;br /&gt;
    Signature Algorithm: sha1WithRSAEncryption&lt;br /&gt;
    Issuer: /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA&lt;br /&gt;
    Not valid before: Dec 18 00:00:00 2009 GMT&lt;br /&gt;
    Not valid after: Dec 18 23:59:59 2011 GMT&lt;br /&gt;
    Subject: /C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com&lt;br /&gt;
    Public Key Algorithm: rsaEncryption&lt;br /&gt;
    RSA Public Key: (1024 bit)&lt;br /&gt;
      Modulus (1024 bit):&lt;br /&gt;
          00:d9:27:c8:11:f2:7b:e4:45:c9:46:b6:63:75:83:&lt;br /&gt;
          b1:77:7e:17:41:89:80:38:f1:45:27:a0:3c:d9:e8:&lt;br /&gt;
          a8:00:4b:d9:07:d0:ba:de:ed:f4:2c:a6:ac:dc:27:&lt;br /&gt;
          13:ec:0c:c1:a6:99:17:42:e6:8d:27:d2:81:14:b0:&lt;br /&gt;
          4b:82:fa:b2:c5:d0:bb:20:59:62:28:a3:96:b5:61:&lt;br /&gt;
          f6:76:c1:6d:46:d2:fd:ba:c6:0f:3d:d1:c9:77:9a:&lt;br /&gt;
          58:33:f6:06:76:32:ad:51:5f:29:5f:6e:f8:12:8b:&lt;br /&gt;
          ad:e6:c5:08:39:b3:43:43:a9:5b:91:1d:d7:e3:cf:&lt;br /&gt;
          51:df:75:59:8e:8d:80:ab:53&lt;br /&gt;
      Exponent: 65537 (0x10001)&lt;br /&gt;
    X509v3 Extensions:&lt;br /&gt;
      X509v3 Basic Constraints: critical&lt;br /&gt;
        CA:FALSE      X509v3 CRL Distribution Points: &lt;br /&gt;
        URI:http://crl.thawte.com/ThawteSGCCA.crl&lt;br /&gt;
      X509v3 Extended Key Usage: &lt;br /&gt;
        TLS Web Server Authentication, TLS Web Client Authentication, Netscape Server Gated Crypto      Authority Information Access: &lt;br /&gt;
        OCSP - URI:http://ocsp.thawte.com&lt;br /&gt;
        CA Issuers - URI:http://www.thawte.com/repository/Thawte_SGC_CA.crt&lt;br /&gt;
  Verify Certificate:&lt;br /&gt;
    unable to get local issuer certificate&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Renegotiation requests supported&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Example 5'''. Testing common SSL flaws with ssl_tests&lt;br /&gt;
&lt;br /&gt;
ssl_tests (http://www.pentesterscripting.com/discovery/ssl_tests) is a bash script that uses sslscan and openssl to check for various flaws: SSL version 2 support, weak ciphers, MD5-signed certificates (md5WithRSAEncryption), and the SSLv3 Force Ciphering Bug/Renegotiation.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[user@test]$ ./ssl_test.sh 192.168.1.3 443&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
SSL Tests - v2, weak ciphers, MD5, Renegotiation&lt;br /&gt;
by Aung Khant, http://yehg.net&lt;br /&gt;
+++++++++++++++++++++++++++++++++++++++++++++++++&lt;br /&gt;
&lt;br /&gt;
[*] testing on 192.168.1.3:443 ..&lt;br /&gt;
&lt;br /&gt;
[*] tesing for sslv2 ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep Accepted  SSLv2&lt;br /&gt;
    Accepted  SSLv2  168 bits  DES-CBC3-MD5&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv2  128 bits  RC4-MD5&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for weak ciphers ...&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  40 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv2  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  40 bits   EXP-RC4-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC2-CBC-MD5&lt;br /&gt;
    Accepted  TLSv1  40 bits   EXP-RC4-MD5&lt;br /&gt;
&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep  56 bits | grep Accepted&lt;br /&gt;
    Accepted  SSLv2  56 bits   DES-CBC-MD5&lt;br /&gt;
    Accepted  SSLv3  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  SSLv3  56 bits   DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   EDH-RSA-DES-CBC-SHA&lt;br /&gt;
    Accepted  TLSv1  56 bits   DES-CBC-SHA&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] testing for MD5 certificate ..&lt;br /&gt;
[*] sslscan 192.168.1.3:443 | grep MD5WithRSAEncryption&lt;br /&gt;
&lt;br /&gt;
[*] testing for SSLv3 Force Ciphering Bug/Renegotiation ..&lt;br /&gt;
[*] echo R | openssl s_client -connect 192.168.1.3:443 | grep DONE&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
RENEGOTIATING&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify error:num=18:self signed certificate&lt;br /&gt;
verify return:1&lt;br /&gt;
depth=0 /C=DE/ST=Berlin/L=Berlin/O=XAMPP/OU=XAMPP/CN=localhost/emailAddress=admin@localhost&lt;br /&gt;
verify return:1&lt;br /&gt;
DONE&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[*] done&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==White Box Test and example==&lt;br /&gt;
&lt;br /&gt;
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.&lt;br /&gt;
&lt;br /&gt;
'''Example:''' The following registry path in Microsoft Windows 2003 defines the ciphers available to the server:&lt;br /&gt;
&lt;br /&gt;
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\&lt;br /&gt;
&lt;br /&gt;
==Testing SSL certificate validity – client and server==&lt;br /&gt;
&lt;br /&gt;
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server)  is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate  match.&lt;br /&gt;
Remember to keep your browser updated: the preloaded CA certificates expire too, and each browser release ships a renewed set. Updating the browser is also important because a growing number of web sites require ciphers stronger than 40 or 56 bits.&lt;br /&gt;
&lt;br /&gt;
Let’s examine each check in more detail.&lt;br /&gt;
&lt;br /&gt;
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This happens most often because a web application relies on a certificate signed by a self-established CA. Whether this is to be considered a concern depends on several factors. For example, this may be fine for an Intranet environment (think of corporate web email being provided via https; here, obviously all users recognize the internal CA as a trusted CA). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, one which is  recognized by all the user base (and here we stop with our considerations; we won’t delve deeper in the implications of the trust model being used by digital certificates).&lt;br /&gt;
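The same notion of a preloaded trust store exists outside browsers. As a small illustration, Python's &lt;tt&gt;ssl&lt;/tt&gt; module can report how many CA certificates are in the platform store it uses; browsers ship their own lists, so the numbers you see will differ from any browser's:&lt;br /&gt;

```python
import ssl

# Build a default client context; with no cafile given it loads the
# platform's trusted CA certificates (the OpenSSL/OS store, not a
# browser's bundled list), then inspect what was loaded.
context = ssl.create_default_context()
stats = context.cert_store_stats()
print(stats)  # e.g. {'x509': 140, 'crl': 0, 'x509_ca': 140}
```

A certificate signed by a CA outside this store will fail verification, just as a browser warns for a CA outside its own list.&lt;br /&gt;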
&lt;br /&gt;
b) Certificates have an associated period of validity, therefore they may expire. Again, we are warned by the browser about this. A public service needs a temporally valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but has expired without being renewed.&lt;br /&gt;
&lt;br /&gt;
c) What if the name on the certificate and the name of the server do not match? If this happens, it might sound suspicious. For a number of reasons, this is not so rare to see. A system may host a number of name-based virtual hosts, which share the same IP address and are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake checks the server certificate before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, one of two techniques can be used: Server Name Indication (SNI), a TLS extension defined in [http://www.ietf.org/rfc/rfc3546.txt RFC 3546], or IP-based virtual servers. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.&lt;br /&gt;
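As an offline illustration of checks (b) and (c), the sketch below applies the validity-period and hostname checks to a certificate dictionary shaped like the one Python's &lt;tt&gt;ssl.SSLSocket.getpeercert()&lt;/tt&gt; returns. The sample data mirrors the mail.google.com certificate dump shown earlier; the function and helper names are our own, and a real client should let the TLS library do these checks:&lt;br /&gt;

```python
import ssl
import time

def cert_names(cert):
    """Collect DNS names from subjectAltName, falling back to the CN."""
    names = [v for (k, v) in cert.get('subjectAltName', ()) if k == 'DNS']
    if not names:
        for rdn in cert.get('subject', ()):
            for key, value in rdn:
                if key == 'commonName':
                    names.append(value)
    return names

def name_matches(pattern, hostname):
    """Minimal matcher: exact match, or a '*.' wildcard for one label."""
    pattern, hostname = pattern.lower(), hostname.lower()
    if pattern.startswith('*.'):
        parts = hostname.split('.', 1)
        return len(parts) == 2 and parts[1] == pattern[2:]
    return pattern == hostname

def check_certificate(cert, hostname, now=None):
    """Return a list of problems; empty means checks (b) and (c) passed."""
    now = time.time() if now is None else now
    problems = []
    # b) validity period, e.g. notAfter = 'Dec 18 23:59:59 2011 GMT'
    if ssl.cert_time_to_seconds(cert['notBefore']) > now:
        problems.append('certificate is not yet valid')
    if now > ssl.cert_time_to_seconds(cert['notAfter']):
        problems.append('certificate has expired')
    # c) the name of the site must match a name in the certificate
    if not any(name_matches(n, hostname) for n in cert_names(cert)):
        problems.append('hostname does not match certificate')
    return problems

# Sample data mirroring the certificate dump shown earlier
cert = {
    'subject': ((('commonName', 'mail.google.com'),),),
    'notBefore': 'Dec 18 00:00:00 2009 GMT',
    'notAfter': 'Dec 18 23:59:59 2011 GMT',
}
print(check_certificate(cert, 'mail.google.com'))  # ['certificate has expired']
```

Check (a), the chain of trust back to a known CA, is deliberately omitted: it requires the trust store and signature verification and is best left to the TLS library.&lt;br /&gt;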
&lt;br /&gt;
&lt;br /&gt;
===Black Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs, and certificates which do not match namewise with the site to which they should refer. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate – including the issuer, period of validity, encryption characteristics, etc.&lt;br /&gt;
&lt;br /&gt;
If the application requires a client certificate, you probably have installed one to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of the installed certificates.&lt;br /&gt;
&lt;br /&gt;
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is the usual https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the -sV command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.&lt;br /&gt;
&lt;br /&gt;
'''Examples'''&lt;br /&gt;
&lt;br /&gt;
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles on https sites whose certificates are inaccurate with respect to naming.&lt;br /&gt;
&lt;br /&gt;
The following screenshots refer to a regional site of a high-profile IT company.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Microsoft Internet Explorer.&amp;lt;/u&amp;gt; We are visiting an ''.it'' site and the certificate was issued to a ''.com ''site! Internet Explorer warns that the name on the certificate does not match the name of the site.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing IE Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;u&amp;gt;Warning issued by Mozilla Firefox.&amp;lt;/u&amp;gt; The message issued by Firefox is different – Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to because it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]&lt;br /&gt;
&lt;br /&gt;
===White Box Testing and examples===&lt;br /&gt;
&lt;br /&gt;
Examine the validity of the certificates used by the application at both server and client levels. The usage of certificates is primarily at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
'''Whitepapers'''&amp;lt;br&amp;gt;&lt;br /&gt;
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546) - http://www.ietf.org/rfc/rfc2246.txt&lt;br /&gt;
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1 - http://www.ietf.org/rfc/rfc2817.txt&lt;br /&gt;
* [3] RFC3546. Transport Layer Security (TLS) Extensions - http://www.ietf.org/rfc/rfc3546.txt&lt;br /&gt;
* [4] &amp;lt;u&amp;gt;www.verisign.net&amp;lt;/u&amp;gt; features various material on the topic&lt;br /&gt;
&lt;br /&gt;
'''Tools'''&lt;br /&gt;
&lt;br /&gt;
* https://www.ssllabs.com/ssldb/&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They usually report other information as well, such as the CA which issued the certificate. Remember that there is no unified notion of a “trusted CA”; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize the CA.&lt;br /&gt;
&lt;br /&gt;
* The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin “SSL certificate expiry”, plugin id 15901). This plugin will check certificates installed on the server.&lt;br /&gt;
&lt;br /&gt;
* Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).&lt;br /&gt;
&lt;br /&gt;
* You may also rely on specialized tools such as SSL Digger (http://www.mcafee.com/us/downloads/free-tools/ssldigger.aspx), or – for the command line oriented – experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (may be already available on *nix boxes, otherwise see www.openssl.org).&lt;br /&gt;
&lt;br /&gt;
* To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the Nessus vulnerability scanner can identify SSL-based services on arbitrary ports and run vulnerability checks on them regardless of whether they are configured on standard or non-standard ports.&lt;br /&gt;
&lt;br /&gt;
* In case you need to talk to an SSL service but your favourite tool doesn’t support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicate with the SSL service you need to reach.&lt;br /&gt;
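For instance, a minimal stunnel client-mode configuration might look like the following sketch (the service name, host name, and port numbers are placeholders for the example):&lt;br /&gt;

```ini
; stunnel client-mode sketch: local plain-text port tunneled to an SSL service
client = yes

[wrapped-service]
accept  = 127.0.0.1:8080
connect = target.example.com:443
```

Your non-SSL tool then talks to 127.0.0.1:8080, and stunnel carries the traffic over SSL to the target.&lt;br /&gt;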
&lt;br /&gt;
* ssl_tests, http://www.pentesterscripting.com/discovery/ssl_tests&lt;br /&gt;
&lt;br /&gt;
* Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way the browser will perform the check might be influenced by configuration settings that may not be evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.&lt;br /&gt;
&lt;br /&gt;
* [http://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet OWASP Transport Layer Protection Cheat Sheet]&lt;br /&gt;
&lt;br /&gt;
[[Category:Cryptographic Vulnerability]]&lt;br /&gt;
[[Category:SSL]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Clickjacking_Defense_Cheat_Sheet&amp;diff=147233</id>
		<title>Clickjacking Defense Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Clickjacking_Defense_Cheat_Sheet&amp;diff=147233"/>
				<updated>2013-03-09T07:04:15Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Improved flow&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Clickjacking Defense Introduction =&lt;br /&gt;
&lt;br /&gt;
This cheat sheet is focused on providing developer guidance on Clickjack/UI Redress attack prevention. For more information on the risk of Clickjacking, please visit [https://www.owasp.org/index.php/Clickjacking this page]. Additional references on clickjacking can be found [https://www.owasp.org/index.php/Clickjacking#References here].&lt;br /&gt;
&lt;br /&gt;
The most popular way to defend against clickjacking is to include some sort of &amp;quot;frame-breaking&amp;quot; functionality which prevents other web pages from framing the site you wish to defend. This cheat sheet will discuss two methods of implementing frame-breaking: first is X-FRAME-OPTIONS headers (used if the browser supports the functionality); and second is JavaScript frame-breaking code.&lt;br /&gt;
&lt;br /&gt;
= Defending with X-FRAME-OPTIONS response headers =&lt;br /&gt;
&lt;br /&gt;
The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a &amp;amp;lt;frame&amp;amp;gt; or &amp;amp;lt;iframe&amp;amp;gt;. Sites can use this to avoid clickjacking attacks, by ensuring that their content is not embedded into other sites.&lt;br /&gt;
&lt;br /&gt;
=== X-FRAME-OPTIONS Header Types  ===&lt;br /&gt;
&lt;br /&gt;
There are three types of X-FRAME-OPTIONS headers.&lt;br /&gt;
* The first is X-FRAME-OPTIONS &amp;lt;b&amp;gt;DENY&amp;lt;/b&amp;gt;, which prevents any domain from framing the content.&lt;br /&gt;
* The second option is X-FRAME-OPTIONS &amp;lt;b&amp;gt;SAMEORIGIN&amp;lt;/b&amp;gt;, which only allows the current site to frame the content.&lt;br /&gt;
* The third is the X-FRAME-OPTIONS &amp;lt;b&amp;gt;ALLOW-FROM&amp;lt;/b&amp;gt; 'sitename' header, which permits the specified 'sitename' to frame this page (e.g., ALLOW-FROM http&amp;amp;#58;//www.foo.com). The ALLOW-FROM option is a relatively recent addition (circa 2012) and may not be supported by all browsers yet.&lt;br /&gt;
&lt;br /&gt;
=== Browser Support ===&lt;br /&gt;
&lt;br /&gt;
The following browsers support X-FRAME-OPTIONS headers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;table border=&amp;quot;1&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt; &amp;lt;th&amp;gt;Browser&amp;lt;/th&amp;gt; &amp;lt;th&amp;gt;Lowest version&amp;lt;/th&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt; &amp;lt;td&amp;gt;Internet Explorer&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;8.0&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt; &amp;lt;td&amp;gt;Firefox (Gecko)&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;3.6.9 (1.9.2.9)&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt; &amp;lt;td&amp;gt;Opera&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;10.50&amp;lt;/td&amp;gt; &amp;lt;/tr&amp;gt; &amp;lt;tr&amp;gt; &amp;lt;td&amp;gt;Safari&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;4.0&amp;lt;/td&amp;gt;&amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;tr&amp;gt; &amp;lt;td&amp;gt;Chrome&amp;lt;/td&amp;gt; &amp;lt;td&amp;gt;4.1.249.1042&amp;lt;/td&amp;gt; &amp;lt;/tr&amp;gt;&lt;br /&gt;
&amp;lt;/table&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Reference: [https://developer.mozilla.org/en-US/docs/The_X-FRAME-OPTIONS_response_header https://developer.mozilla.org/en-US/docs/The_X-FRAME-OPTIONS_response_header]&lt;br /&gt;
&lt;br /&gt;
=== Implementation ===&lt;br /&gt;
&lt;br /&gt;
To implement this protection, you need to add the header to any page that you want to protect from being clickjacked. One way to do this is to add the header manually to every page.  A possibly simpler way is to implement a filter that automatically adds the header to every page.&lt;br /&gt;
&lt;br /&gt;
OWASP has an [[ClickjackFilter for Java EE|article and some code]] that provides all the details for implementing this in the Java EE environment.&lt;br /&gt;
&lt;br /&gt;
The SDL blog has posted an [http://blogs.msdn.com/sdl/archive/2009/02/05/clickjacking-defense-in-ie8.aspx article] covering how to implement this in a .NET environment.&lt;br /&gt;
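For other environments, the filter approach can be sketched generically. The following Python WSGI middleware is an illustrative example of our own (the names are invented; it is not taken from the OWASP or SDL articles) that appends the header to every response so no page is missed:&lt;br /&gt;

```python
def add_frame_options(app, value='DENY'):
    """Wrap a WSGI app so every response carries X-Frame-Options."""
    def middleware(environ, start_response):
        def start_with_header(status, headers, exc_info=None):
            headers = list(headers) + [('X-Frame-Options', value)]
            return start_response(status, headers, exc_info)
        return app(environ, start_with_header)
    return middleware

# Demonstration with a trivial application and a captured response
def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return [b'hello']

captured = {}
def fake_start_response(status, headers, exc_info=None):
    captured['status'], captured['headers'] = status, headers

protected = add_frame_options(app, 'SAMEORIGIN')
body = protected({}, fake_start_response)
print(captured['headers'])
# [('Content-Type', 'text/html'), ('X-Frame-Options', 'SAMEORIGIN')]
```

The same idea applies to servlet filters, ASP.NET HTTP modules, or web-server configuration: add the header in one central place rather than on each page by hand.&lt;br /&gt;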
&lt;br /&gt;
=== Limitations ===&lt;br /&gt;
&lt;br /&gt;
* '''Per-page policy specification''' &lt;br /&gt;
The policy needs to be specified for every page, which can complicate deployment. Providing the ability to enforce it for the entire site, at login time for instance, could simplify adoption.&lt;br /&gt;
&lt;br /&gt;
* '''Problems with multi-domain sites'''&lt;br /&gt;
The current implementation does not allow the webmaster to provide a whitelist of domains that are allowed to frame the page. While whitelisting can be dangerous, in some cases a webmaster might have no choice but to use more than one hostname.&lt;br /&gt;
&lt;br /&gt;
* '''Proxies'''&lt;br /&gt;
Web proxies are notorious for adding and stripping headers. If a web proxy strips the X-FRAME-OPTIONS header then the site loses its framing protection.&lt;br /&gt;
&lt;br /&gt;
= Clickjacking defense for legacy browsers =&lt;br /&gt;
&lt;br /&gt;
The following methodology will prevent a webpage from being framed even in legacy browsers.&lt;br /&gt;
&lt;br /&gt;
In the document HEAD element, add the following:&lt;br /&gt;
&lt;br /&gt;
First apply an ID to the style element itself:&lt;br /&gt;
&lt;br /&gt;
 &amp;amp;lt;style id=&amp;amp;quot;antiClickjack&amp;amp;quot;&amp;amp;gt;body{display:none !important;}&amp;amp;lt;/style&amp;amp;gt;&lt;br /&gt;
&lt;br /&gt;
And then delete that style by its ID immediately after in the script:&lt;br /&gt;
&lt;br /&gt;
 &amp;amp;lt;script type=&amp;amp;quot;text/javascript&amp;amp;quot;&amp;amp;gt;&lt;br /&gt;
    if (self === top) {&lt;br /&gt;
        var antiClickjack = document.getElementById(&amp;amp;quot;antiClickjack&amp;amp;quot;);&lt;br /&gt;
        antiClickjack.parentNode.removeChild(antiClickjack);&lt;br /&gt;
    } else {&lt;br /&gt;
        top.location = self.location;&lt;br /&gt;
    }&lt;br /&gt;
 &amp;amp;lt;/script&amp;amp;gt;&lt;br /&gt;
&lt;br /&gt;
This way, everything can be in the document HEAD and you only need one method/taglib in your API.&lt;br /&gt;
&lt;br /&gt;
Reference: [https://www.codemagi.com/blog/post/194 https://www.codemagi.com/blog/post/194]&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147232</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147232"/>
				<updated>2013-03-09T06:57:42Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added note on removing dependencies&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, integration, static analysis, and platform security. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects including Auto-configured projects, Makefile-based, Eclipse-based, and Xcode-based. It's important to address the gaps at configuration and build time because it's difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. You will use a debug build while developing, your continuous integration or build server will use test configurations, and you will ship release builds.&lt;br /&gt;
&lt;br /&gt;
1970's K&amp;amp;R code and one-size-fits-all flags are from a bygone era. Processes have evolved and matured to meet the challenges of a modern landscape, including threats. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user-supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofitting existing Autotools files.&lt;br /&gt;
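A GNU make sketch of the three configurations, using target-specific variables and the GCC flags discussed in the sections that follow (the source and program names are placeholders):&lt;br /&gt;

```make
# Sketch: three build configurations via GNU make target-specific flags
DEBUG_FLAGS   = -O0 -g3 -ggdb -DDEBUG=1
RELEASE_FLAGS = -O2 -g2 -DNDEBUG=1
TEST_FLAGS    = $(RELEASE_FLAGS) -Dprotected=public -Dprivate=public

debug:   CFLAGS += $(DEBUG_FLAGS)
release: CFLAGS += $(RELEASE_FLAGS)
test:    CFLAGS += $(TEST_FLAGS)

debug release test: program

program: main.c
	$(CC) $(CFLAGS) -o program main.c
```

Invoking &lt;tt&gt;make debug&lt;/tt&gt;, &lt;tt&gt;make release&lt;/tt&gt;, or &lt;tt&gt;make test&lt;/tt&gt; then builds the same sources with the configuration-appropriate flags.&lt;br /&gt;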
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other 'debugging and diagnostic' oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Disabling optimization improves debuggability, because optimizations often rearrange statements to improve instruction scheduling and remove unneeded code. You may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self-debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be completely and fully instrumented with asserts that: (1) validate and assert all program state relevant to a function or a method; (2) validate and assert all function parameters; and (3) validate and assert all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is useless behavior, so you should supply your own assert that raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but stack traces from production should be few and far between. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; since it's not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including, for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and the relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configuration includes a logging level of ten or ''maximum verbosity'', you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many object oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your program. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries. A well integrated library can complement your code, and a poorly written library can detract from your program. Because a stable library with required functionality can be elusive and it is tricky to integrate libraries, you should try to minimize dependencies and avoid third party libraries whenever possible.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing a library is practically non-existent. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it is popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLPort, your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
&lt;br /&gt;
Debug builds also present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If required, you might have to review the code or step through library code under the debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile as a security gate. If you find it painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive for support across multiple compilers and platforms since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall, -Wextra, -Wconversion (or -Wsign-conversion), -Wcast-align, -Wformat=2, -Wformat-security, -fno-common, -Wmissing-prototypes, -Wmissing-declarations, -Wstrict-prototypes, -Wstrict-overflow, and -Wtrampolines&amp;lt;/tt&amp;gt;. C++ presents additional opportunities under GCC, and the flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual, -Wreorder, -Wsign-promo, -Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a Microsoft platform, you should use: &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. If you don't use &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, Microsoft recommends using &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt; and enabling C4191, C4242, C4263, C4264, C4265, C4266, C4302, C4826, C4905, C4906, and C4928. Finally, &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need the Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'', ''[http://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx “Off By Default” Compiler Warnings in Visual C++]'', and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Android's GingerBreak exploit overwrote the Global Offset Table (GOT) of an ELF executable, and could have been avoided with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-fstack-protector-all&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-fstack-protector&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. If available, you should also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt; (the last two should be used in debug configurations). &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-exec heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SAFESEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR), data execution prevention (DEP), and stack cookies, and to thwart exception handler overwrites.&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Options Summary]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147231</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147231"/>
				<updated>2013-03-09T06:52:11Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added preamble before recommending against using Autotools (its sure to raise objections)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, integration, static analysis, and platform security. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Auto-configured, Makefile-based, Eclipse-based, and Xcode-based projects. It is important to address the gaps at configuration and build time because it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. You will use a debug build while developing, your continuous integration or build server will use test configurations, and you will ship release builds.&lt;br /&gt;
&lt;br /&gt;
1970's K&amp;amp;R code and one-size-fits-all flags are from a bygone era. Processes have evolved and matured to meet the challenges of a modern landscape, including threats. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user-supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofit existing Autotools files.&lt;br /&gt;
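As a minimal sketch (assuming a POSIX shell and a GNU-style toolchain; the file names and flag sets are illustrative), a hand-written makefile or wrapper script can keep the per-configuration flags in one place and select them explicitly:

```shell
# Select compiler flags per build configuration (debug, release, test).
CONFIG="${CONFIG:-debug}"

case "$CONFIG" in
  debug)   CFLAGS="-O0 -g3 -ggdb -DDEBUG" ;;
  release) CFLAGS="-O2 -g2 -DNDEBUG" ;;
  test)    CFLAGS="-O2 -g2 -DNDEBUG -Dprotected=public -Dprivate=public" ;;
  *)       echo "unknown configuration: $CONFIG" ; exit 1 ;;
esac

# Dry run: print the compile command a makefile target would issue.
echo "gcc $CFLAGS -c main.c -o main.o"
```

Each configuration then becomes an explicit make target (or IDE scheme) rather than an afterthought bolted onto a single flag set.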
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define the &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other 'debugging and diagnostic' oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Disabling optimization improves debuggability because optimization often rearranges statements to improve instruction scheduling and removes unneeded code. You may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self-debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be completely and fully instrumented with asserts that: (1) validate and assert all program state relevant to a function or a method; (2) validate and assert all function parameters; and (3) validate and assert all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is of little use, so you must supply your own assert that raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but such traces should be few and far between. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
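One common way to keep the symbols while shipping a stripped binary is the GNU binutils objcopy/strip flow. The sketch below prints the commands as a dry run (app is a placeholder executable name):

```shell
# Save full debug info to a separate file, strip the shipping binary,
# and record a debuglink so gdb can locate the symbols later.
APP="app"
SAVE_CMD="objcopy --only-keep-debug $APP $APP.debug"
STRIP_CMD="strip --strip-debug --strip-unneeded $APP"
LINK_CMD="objcopy --add-gnu-debuglink=$APP.debug $APP"

# Dry run: print the commands; app.debug is what you check into
# version control alongside the tagged build.
echo "$SAVE_CMD"
echo "$STRIP_CMD"
echo "$LINK_CMD"
```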
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; since it is not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including, for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configuration includes a logging level of ten or ''maximum verbosity'', you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
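The two flag changes above can be sketched for a make-based build as follows (a sketch; widget.cpp is a placeholder, and redefining protected/private via the preprocessor is technically undefined behavior, though widely used for test configurations):

```shell
# Test configuration: production-like optimization plus public visibility
# so the test suite can reach otherwise-private members and symbols.
TEST_CXXFLAGS="-O2 -g2 -Dprotected=public -Dprivate=public"
TEST_CXXFLAGS="$TEST_CXXFLAGS -fvisibility=default"

# Dry run: print the compile command a test target would issue.
echo "g++ $TEST_CXXFLAGS -c widget.cpp -o widget.o"
```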
Many object oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your program. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries. A well integrated library can complement your code, and a poorly written library can detract from your program.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing a library is practically non-existent. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it is popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLPort, your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
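For OpenSSL, the disabled features are passed as options to the config script. The sketch below is a dry run; option spellings vary somewhat across OpenSSL releases (for example, no-ssl2 versus -no-sslv2), so check the INSTALL notes for your version:

```shell
# Configure OpenSSL without compression, SSLv2 and SSLv3.
OPENSSL_OPTS="no-comp no-ssl2 no-ssl3"

# Dry run: print the configure command to execute in the OpenSSL tree.
echo "./config $OPENSSL_OPTS --prefix=/usr/local"
```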
&lt;br /&gt;
Debug builds also present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If required, you might have to review the code or step through library code under the debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile as a security gate. If you find it painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive for support across multiple compilers and platforms since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall, -Wextra, -Wconversion (or -Wsign-conversion), -Wcast-align, -Wformat=2, -Wformat-security, -fno-common, -Wmissing-prototypes, -Wmissing-declarations, -Wstrict-prototypes, -Wstrict-overflow, and -Wtrampolines&amp;lt;/tt&amp;gt;. C++ presents additional opportunities under GCC, and the flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual, -Wreorder, -Wsign-promo, -Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
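Collected into a variable, the C warning set might look like the following (a sketch; module.c is a placeholder, and -Wtrampolines requires a reasonably recent GCC):

```shell
# Warning baseline for C code, per the list above.
WARN="-Wall -Wextra -Wconversion -Wcast-align -Wformat=2 -Wformat-security"
WARN="$WARN -fno-common -Wmissing-prototypes -Wmissing-declarations"
WARN="$WARN -Wstrict-prototypes -Wstrict-overflow -Wtrampolines"
CFLAGS="-O2 -g2 $WARN"

# Dry run: print the compile command with warnings enabled.
echo "gcc $CFLAGS -c module.c -o module.o"
```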
&lt;br /&gt;
For a Microsoft platform, you should use: &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. If you don't use &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, Microsoft recommends using &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt; and enabling C4191, C4242, C4263, C4264, C4265, C4266, C4302, C4826, C4905, C4906, and C4928. Finally, &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need the Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'', ''[http://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx “Off By Default” Compiler Warnings in Visual C++]'', and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Android's GingerBreak exploit overwrote the Global Offset Table (GOT) of an ELF executable, and could have been avoided with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-fstack-protector-all&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-fstack-protector&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. If available, you should also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt; (the last two should be used in debug configurations). &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-exec heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;.&lt;br /&gt;
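Taken together, a hardened Linux compile-and-link line might look like the following (a sketch; prog.c is a placeholder, the -z options are passed through the gcc driver with -Wl, and _FORTIFY_SOURCE only takes effect at -O1 or higher):

```shell
# Position-independent executable with stack protection, fortified
# libc calls, a no-exec stack, and full RELRO with immediate binding.
HARDEN_CFLAGS="-O2 -fPIE -fstack-protector-all -D_FORTIFY_SOURCE=2"
HARDEN_LDFLAGS="-pie -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now"

# Dry run: print the full build command.
echo "gcc $HARDEN_CFLAGS prog.c $HARDEN_LDFLAGS -o prog"
```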
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SAFESEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR), data execution prevention (DEP), and stack cookies, and to thwart exception handler overwrites.&lt;br /&gt;
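For a Visual C++ build, those switches are split between the compiler and the linker. The sketch below prints the commands as a dry run (app.c is a placeholder; /SAFESEH applies to 32-bit links only, and cl/link themselves run on Windows):

```shell
# MSVC: /GS adds stack cookies (compiler); the linker switches enable
# ASLR (/DYNAMICBASE), DEP (/NXCOMPAT) and safe exception handlers.
CL_OPTS="/W4 /GS /c"
LINK_OPTS="/DYNAMICBASE /NXCOMPAT /SAFESEH"

echo "cl $CL_OPTS app.c"
echo "link $LINK_OPTS app.obj /OUT:app.exe"
```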
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Options Summary]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147229</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147229"/>
				<updated>2013-03-09T06:38:52Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Improved flow&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, integration, static analysis, and platform security. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Autotools-based, Makefile-based, Eclipse-based, and Xcode-based projects. It's important to address the gaps at configuration and build time because it's difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user-supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofit existing Autotools files.&lt;br /&gt;
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other debugging- and diagnostic-oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Disabling optimization improves debuggability because optimizations often rearrange statements to improve instruction scheduling and remove unneeded code. You may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self-debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be completely and fully instrumented with asserts that: (1) validate all program state relevant to a function or a method; (2) validate all function parameters; and (3) validate all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is unhelpful behavior, so you must supply your own assert that &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;s. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but production crashes should be few and far between. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;((void)0)&amp;lt;/tt&amp;gt; since it's not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including, for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configuration includes a logging level of ten or ''maximum verbosity'', you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many object oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your program. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries. A well integrated library can complement your code, and a poorly written library can detract from your program.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing of libraries is practically non-existent. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of a lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it's popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLport your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
&lt;br /&gt;
Debug builds also present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If required, you might have to review code or step library code under the debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; once the signed value is converted to unsigned under the C/C++ promotion rules. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile as a security gate. If you find it painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive to support multiple compilers and platforms, since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall, -Wextra, -Wconversion (or -Wsign-conversion), -Wcast-align, -Wformat=2, -Wformat-security, -fno-common, -Wmissing-prototypes, -Wmissing-declarations, -Wstrict-prototypes, -Wstrict-overflow, and -Wtrampolines&amp;lt;/tt&amp;gt;. C++ presents additional opportunities under GCC, and the flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual, -Wreorder, -Wsign-promo, -Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a Microsoft platform, you should use: &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. If you don't use &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, Microsoft recommends using &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt; and enabling C4191, C4242, C4263, C4264, C4265, C4266, C4302, C4826, C4905, C4906, and C4928. Finally, &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need the Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'', ''[http://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx “Off By Default” Compiler Warnings in Visual C++]'', and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Android's Gingerbreak overwrote the Global Offset Table (GOT) of the ELF binary, and could have been avoided with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-fstack-protector-all&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-fstack-protector&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. If available, you should also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt; (the last two belong in debug configurations, and in separate builds since the two sanitizers cannot be combined). &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-exec heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SafeSEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR) and data execution prevention (DEP), add stack cookies, and thwart exception handler overwrites.&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Options Summary]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147226</id>
		<title>Mobile Jailbreaking Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147226"/>
				<updated>2013-03-09T06:23:38Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added opening paragraph&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Dangers of Jailbreaking and Rooting Mobile Devices =&lt;br /&gt;
&lt;br /&gt;
==What is &amp;quot;jailbreaking&amp;quot; and &amp;quot;rooting&amp;quot;?==&lt;br /&gt;
&lt;br /&gt;
Jailbreaking and rooting are processes for gaining unauthorized access or elevated privileges on a system. The terms differ between operating systems, and the differences in terminology reflect the differences in security models used by the operating system vendors.&lt;br /&gt;
&lt;br /&gt;
For iOS, '''Jailbreaking''' is the process of modifying iOS system kernels to allow file system read and write access. Most jailbreaking tools (and exploits) remove the limitations and security features built by the manufacturer Apple (the &amp;quot;jail&amp;quot;) through the use of custom kernels, which make unauthorized modifications to the operating system.  Almost all jailbreaking tools allow users to run code not approved and signed by Apple. This allows users to install additional applications, extensions and patches without the control of Apple’s App Store.&lt;br /&gt;
&lt;br /&gt;
On Android, '''Rooting''' is the process of gaining administrative or privileged access on the Android OS. As the Android OS is based on the Linux kernel, rooting a device is analogous to gaining access to administrative, root user-equivalent, permissions on Linux. Unlike iOS, rooting is (usually) not required to run applications from outside the Android Market. Some carriers control this through operating system settings or device firmware. Rooting also enables the user to completely remove and replace the device's operating system.&lt;br /&gt;
&lt;br /&gt;
==Why do they occur?==&lt;br /&gt;
iOS: Many users are lured into jailbreaking to take advantage of apps made available through third party app sources, such as Cydia, which are otherwise banned or not approved by Apple. There is an inherent risk in installing such applications, as they are not quality controlled nor have they gone through Apple's application approval process. Hence, they may contain vulnerable or malicious code that could allow the device to be compromised. Alternately, jailbreaking can allow users to enhance some built-in functions on their device. For example, a jailbroken phone can be used with a different carrier than the one it was configured with, FaceTime can be used over a 3G connection, or the phone can be unlocked to be used internationally. More technically savvy users also perform jailbreaking to enable user interface customizations, preferences and features not available through the normal software interface. Typically, these functionalities are achieved by patching specific binaries in the operating system.&lt;br /&gt;
A debated purpose for jailbreaking in the iOS community is for installing pirated iOS applications. Jailbreaking proponents discourage this use, such as Cydia warning users of pirated software when they add a pirated software repository. However, repositories such as Hackulous promote pirated applications and the tools to pirate and distribute applications.&lt;br /&gt;
&lt;br /&gt;
Android: Rooting Android devices allows users to gain access to additional hardware rights, backup utilities and direct hardware access. Additionally, rooting allows users to remove the pre-installed &amp;quot;bloatware&amp;quot;, additional features that many carriers or manufacturers put onto devices, which can use considerable amounts of disk space and memory. Most users root their device to leverage a custom read only memory (ROM) developed by the Android Community, which brings distinctive capabilities that are not available through the official ROMs installed by the carriers. Custom ROMs also provide users an option to 'upgrade' the operating system and optimize the phone experience by giving users access to features, such as tethering, that are normally blocked or limited by carriers.&lt;br /&gt;
&lt;br /&gt;
==What are the common tools used?==&lt;br /&gt;
&lt;br /&gt;
iOS: Jailbreaking software can be categorized into two main groups:&lt;br /&gt;
#Tethered: Requires the device to be connected to a system in order to bypass the iBoot signature check for iOS devices.  The iOS device needs to be connected or tethered to a computer system every time it has to reboot in order to access the jailbreak application, such as redsn0w, and boot correctly. &lt;br /&gt;
#Un-tethered: Requires a connection only for the initial jailbreak process; afterwards all the software, such as sn0wbreeze and evasi0n, resides on the device for future un-tethered reboots, without losing the jailbreak or the functionality of the phone. &lt;br /&gt;
&lt;br /&gt;
Some common (but not all) iOS jailbreaking tools are listed below:&lt;br /&gt;
*Absinthe&lt;br /&gt;
*blackra1n&lt;br /&gt;
*Corona&lt;br /&gt;
*greenpois0n&lt;br /&gt;
*JailbreakMe&lt;br /&gt;
*limera1n&lt;br /&gt;
*PwnageTool&lt;br /&gt;
*redsn0w&lt;br /&gt;
*evasi0n&lt;br /&gt;
*sn0wbreeze&lt;br /&gt;
*Spirit&lt;br /&gt;
 &lt;br /&gt;
A more comprehensive list of jailbreaking tools for iOS, exploits and kernel patches can be found on the iPhoneWiki website. &lt;br /&gt;
&lt;br /&gt;
Android: Various rooting tools are available for Android. Tools and processes vary depending on the user’s device. The process is usually to:&lt;br /&gt;
#Unlock the boot loader.&lt;br /&gt;
#Install a rooting application and / or flash a custom ROM through the recovery mode. &lt;br /&gt;
&lt;br /&gt;
Not all of the above tasks are necessary, and different toolkits are available for device-specific rooting processes. Custom ROMs are based on the hardware being used; some examples are as follows:&lt;br /&gt;
&lt;br /&gt;
CyanogenMod ROMs are among the most popular aftermarket replacement firmware in the Android world. More comprehensive device-specific firmwares, flashing guides, rooting tools and patch details can be referenced from the homepage. &lt;br /&gt;
&lt;br /&gt;
ClockWorkMod is a custom recovery option for Android phones and tablets that allows you to perform several advanced recovery, restoration, installation and maintenance operations. Please refer to xda-developers for more details. &lt;br /&gt;
&lt;br /&gt;
==Why can it be dangerous?==&lt;br /&gt;
&lt;br /&gt;
The tools above can be broadly categorized in the following categories:&lt;br /&gt;
*Userland Exploits: Jailbroken access is only obtained within the user layer.  For instance, a user may have root access, but is not able to change the boot process. These exploits can be patched with a firmware update.&lt;br /&gt;
*iBoot Exploit: Jailbroken access to user level and boot process. iBoot exploits can be patched with a firmware update.&lt;br /&gt;
*Bootrom Exploits: Jailbroken access to user level and boot process. Bootrom exploits cannot be patched with a firmware update; a hardware update of the bootrom is required in such cases.&lt;br /&gt;
&lt;br /&gt;
Some high level risks for rooting or jailbreaking devices are as follows:&lt;br /&gt;
&lt;br /&gt;
===Technical Risks:===&lt;br /&gt;
#General Mobile&lt;br /&gt;
##Some jailbreaking methods leave SSH enabled with a well known default password (i.e. alpine) that attackers can use for Command &amp;amp; Control.&lt;br /&gt;
##The entire file system of a rooted or jailbroken device is vulnerable to a malicious user inserting or extracting files.  This vulnerability is exploited by many malware programs, including Droid Kung Fu, Droid Dream and Ikee. &lt;br /&gt;
##Credentials to sensitive applications, such as banking or corporate applications, can be stolen using key logging, sniffing or other malicious software and then transmitted via the internet connection. &lt;br /&gt;
#iOS&lt;br /&gt;
##Applications on a jailbroken device run as root outside of the iOS sandbox.  This can allow applications to access sensitive data contained in other apps or install malicious software negating sandboxing functionality. &lt;br /&gt;
##Jailbroken devices can allow a user to install and run self-signed applications. Since the apps do not go through the App Store, they are not reviewed by Apple. These apps may contain vulnerable or malicious code that can be used to exploit a device. &lt;br /&gt;
#Android&lt;br /&gt;
##Android users that change the permissions on their device to grant root access to applications increase security exposure to malicious applications and potential application flaws. &lt;br /&gt;
##3rd party Android application markets have been identified as hosting malicious applications with remote administration tool (RAT) capabilities.&lt;br /&gt;
&lt;br /&gt;
===Non-technical risks:===&lt;br /&gt;
#According to the United States Librarian of Congress (who issues Digital Millennium Copyright Act (DMCA) exemptions), jailbreaking or rooting of a smartphone is '''not''' deemed illegal in the US for persons who engage in noninfringing uses. The approval can provide some users with a false sense of safety and the impression that jailbreaking or rooting is harmless. It's noteworthy, however, that the Librarian did not approve jailbreaking of tablets. Please see ''[http://www.theinquirer.net/inquirer/news/2220251/us-rules-jailbreaking-tablets-is-illegal US rules jailbreaking tablets is illegal]'' for a layman's analysis.&lt;br /&gt;
&lt;br /&gt;
#Software updates cannot be immediately applied because doing so would remove the jailbreak.  This leaves the device vulnerable to known, unpatched software vulnerabilities. &lt;br /&gt;
#Users can be tricked into downloading malicious software. For example, malware commonly uses the following tactics to trick users into downloading software. &lt;br /&gt;
##Apps will often advertise that they provide additional functionality or remove ads from popular apps but also contain malicious code. &lt;br /&gt;
##Some apps will not have any malicious code as part of the initial version of the app but subsequent &amp;quot;Updates&amp;quot; will insert malicious code. &lt;br /&gt;
#Manufacturers have determined that jailbreaking or rooting is a breach of the terms of use for the device and therefore voids the warranty. This can be an issue for the user if the device needs hardware repair or technical support (Note: a device can be restored and therefore it is not a major issue, unless hardware damage otherwise covered by the warranty prevents restoration).&lt;br /&gt;
&lt;br /&gt;
==What controls can be used to protect against it?==&lt;br /&gt;
Before an organization chooses to implement a mobile solution in their environment, they should conduct a thorough risk assessment. This risk assessment should include an evaluation of the dangers posed by jailbroken or rooted devices, which are inherently more vulnerable to malicious applications or vulnerabilities such as those listed in the OWASP Mobile Security Top Ten Risks. Once this assessment has been completed, management can determine which risks to accept and which risks will require additional controls to mitigate. Below are a few examples of both technical and non-technical controls that an organization may use. &lt;br /&gt;
&lt;br /&gt;
===Technical Controls:===&lt;br /&gt;
 &lt;br /&gt;
Some of the detective controls to monitor for jailbroken or rooted devices include:&lt;br /&gt;
#Identify 3rd party app stores (e.g., Cydia).&lt;br /&gt;
#Attempt to identify modified kernels by comparing certain system files that the application would have access to on a non-jailbroken device to known good file hashes. This technique can serve as a good starting point for detection.&lt;br /&gt;
#Attempt to write a file outside of the application’s root directory.  The attempt should fail for non-jailbroken devices.&lt;br /&gt;
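The detective controls above can be sketched as a simple on-device indicator check. The following shell script is a hypothetical illustration only: the su binary paths and the Cydia location are assumptions, and production MDM agents rely on many more signals.&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical root/jailbreak indicator check (illustrative only).
# Assumed artifacts: common su binary locations left by Android rooting
# tools, the Cydia app bundle on jailbroken iOS, and a normally
# read-only /system partition.

check_device() {
  found=0

  # 1. Look for the su binary dropped by typical rooting tools.
  for path in /system/bin/su /system/xbin/su /sbin/su /data/local/bin/su; do
    if [ -f "$path" ]; then
      echo "indicator: su binary at $path"
      found=1
    fi
  done

  # 2. Look for a 3rd party app store (e.g., Cydia on jailbroken iOS).
  if [ -d /Applications/Cydia.app ]; then
    echo "indicator: Cydia installed"
    found=1
  fi

  # 3. Try to write outside the application sandbox; this should
  #    fail on a stock device.
  if touch /system/.rw_probe 2>/dev/null; then
    echo "indicator: /system is writable"
    rm -f /system/.rw_probe
    found=1
  fi

  if [ "$found" -eq 0 ]; then
    echo "no indicators found"
  fi
}

check_device
```

On a stock (non-rooted, non-jailbroken) device none of the probes should succeed, so the script reports that no indicators were found. Because attackers can hide these artifacts, comparing known system files against good hashes is suggested as a complementary check.&lt;br /&gt;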
&lt;br /&gt;
Note: Most Mobile Device Management (MDM) solutions can perform these checks but require an application to be installed on the device.&lt;br /&gt;
&lt;br /&gt;
===Non-Technical Controls:===&lt;br /&gt;
&lt;br /&gt;
Organizations must understand the following key points when thinking about mobile security:&lt;br /&gt;
#Perform a risk assessment to ensure that risks associated with mobile device use are appropriately identified, prioritized and mitigated to reduce or manage risk at levels acceptable to management.&lt;br /&gt;
#Review the application inventory on a frequent basis to identify applications posing significant risk to the mobility environment.&lt;br /&gt;
#Technology solutions such as Mobile Device Management (MDM) or Mobile Application Management (MAM) should be only one part of the overall security strategy.  High level considerations include:&lt;br /&gt;
##Policies and procedures.&lt;br /&gt;
##User awareness and user buy-in.&lt;br /&gt;
##Technical controls and platforms. &lt;br /&gt;
##Auditing, logging, and monitoring.&lt;br /&gt;
#While many organizations choose a Bring Your Own Device (BYOD) strategy, the risks and benefits need to be considered and addressed before such a strategy is put in place. For example, the organization may consider developing a support plan for the various devices and operating systems that could be introduced to the environment. Many organizations struggle with this since there are such a wide variety of devices, particularly Android devices. &lt;br /&gt;
#There is not a ‘one size fits all’ solution to mobile security.  Different levels of security controls should be employed based on the sensitivity of data that is collected, stored, or processed on a mobile device or through a mobile application.&lt;br /&gt;
#User awareness and user buy-in are key.  For consumers or customers, this could be a focus on privacy and how Personally Identifiable Information (PII) is handled. For employees, this could be a focus on Acceptable Use Agreements (AUA) as well as privacy for personal devices.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
Jailbreaking and rooting tools, resources and processes are constantly updated and have made the process easier than ever for end-users. Many users are lured to jailbreak or root their device in order to gain more control over the device, upgrade their operating systems or install packages normally unavailable through standard channels. While having these options may allow the user to utilize the device more effectively, many users do not understand that jailbreaking or rooting can potentially allow malware to bypass many of the device's built-in security features. The balance of user experience versus corporate security needs to be carefully considered, since all mobile platforms have seen an increase in malware attacks over the past year. Mobile devices now hold more personal and corporate data than ever before and have become a very appealing target for attackers. Overall, the best defense for an enterprise is to build an overarching mobile strategy that accounts for technical controls, non-technical controls and the people in the environment. Considerations should focus not only on solutions such as MDM, but also on policies and procedures around common issues of BYOD and user security awareness.&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors =&lt;br /&gt;
&lt;br /&gt;
Suktika Mukhopadhyay&amp;lt;br/&amp;gt;&lt;br /&gt;
Brandon Clark&amp;lt;br/&amp;gt;&lt;br /&gt;
Talha Tariq&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147207</id>
		<title>Mobile Jailbreaking Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147207"/>
				<updated>2013-03-09T05:41:21Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added &amp;quot; ... for persons who engage in noninfringing uses&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Dangers of Jailbreaking and Rooting Mobile Devices (Cheat Sheet) =&lt;br /&gt;
&lt;br /&gt;
==What is &amp;quot;jailbreaking&amp;quot; and &amp;quot;rooting&amp;quot;?==&lt;br /&gt;
&lt;br /&gt;
iOS: Jailbreaking is the process of modifying iOS system kernels to allow file system read and write access. Most jailbreaking tools (and exploits) remove the limitations and security features built in by the manufacturer Apple (the &amp;quot;jail&amp;quot;) through the use of custom kernels, which make unauthorized modifications to the operating system.  Almost all jailbreaking tools allow users to run code not approved and signed by Apple. This allows users to install additional applications, extensions and patches outside the control of Apple’s App Store.&lt;br /&gt;
&lt;br /&gt;
Android: Rooting is the process of gaining administrative or privileged access to the Android OS. As the Android OS is based on the Linux kernel, rooting a device is analogous to gaining administrative, root user-equivalent, permissions on Linux. Unlike iOS, rooting is (usually) not required to run applications obtained outside of the Android Market. Some carriers control this through operating system settings or device firmware. Rooting also enables the user to completely remove and replace the device's operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Why do they occur?==&lt;br /&gt;
iOS: Many users are lured into jailbreaking to take advantage of apps made available through third party app sources, such as Cydia, which are otherwise banned or not approved by Apple. There is an inherent risk in installing such applications, as they are not quality controlled nor have they gone through Apple's application approval process. Hence, they may contain vulnerable or malicious code that could allow the device to be compromised. Alternatively, jailbreaking can allow users to enhance some built-in functions on their device. For example, a jailbroken phone can be used with a different carrier than the one it was configured with, FaceTime can be used over a 3G connection, or the phone can be unlocked to be used internationally. More technically savvy users also perform jailbreaking to enable user interface customizations, preferences and features not available through the normal software interface. Typically, these functionalities are achieved by patching specific binaries in the operating system.&lt;br /&gt;
A debated purpose for jailbreaking in the iOS community is for installing pirated iOS applications. Jailbreaking proponents discourage this use, such as Cydia warning users of pirated software when they add a pirated software repository. However, repositories such as Hackulous promote pirated applications and the tools to pirate and distribute applications.&lt;br /&gt;
&lt;br /&gt;
Android: Rooting Android devices allows users to gain access to additional hardware rights, backup utilities and direct hardware access. Additionally, rooting allows users to remove the pre-installed &amp;quot;bloatware&amp;quot;, additional features that many carriers or manufacturers put onto devices, which can use considerable amounts of disk space and memory. Most users root their device to leverage a custom read only memory (ROM) developed by the Android Community, which brings distinctive capabilities that are not available through the official ROMs installed by the carriers. Custom ROMs also provide users an option to 'upgrade' the operating system and optimize the phone experience by giving users access to features, such as tethering, that are normally blocked or limited by carriers.&lt;br /&gt;
&lt;br /&gt;
==What are the common tools used?==&lt;br /&gt;
&lt;br /&gt;
iOS: Jailbreaking software can be categorized into two main groups:&lt;br /&gt;
#Tethered: Requires the device to be connected to a system in order to bypass the iBoot signature check for iOS devices.  The iOS device needs to be connected or tethered to a computer system every time it has to reboot in order to access the jailbreak application, such as redsn0w, and boot correctly. &lt;br /&gt;
#Un-tethered: Requires connection for the initial jailbreak process and then all the software, such as sn0wbreeze and evasi0n, is on the device for future un-tethered reboots, without losing the jailbreak or the functionality of the phone. &lt;br /&gt;
&lt;br /&gt;
Some common iOS jailbreaking tools (this is not an exhaustive list) are listed below:&lt;br /&gt;
*Absinthe&lt;br /&gt;
*blackra1n&lt;br /&gt;
*Corona&lt;br /&gt;
*greenpois0n&lt;br /&gt;
*JailbreakMe&lt;br /&gt;
*limera1n&lt;br /&gt;
*PwnageTool&lt;br /&gt;
*redsn0w&lt;br /&gt;
*evasi0n&lt;br /&gt;
*sn0wbreeze&lt;br /&gt;
*Spirit&lt;br /&gt;
 &lt;br /&gt;
A more comprehensive list of jailbreaking tools for iOS, exploits and kernel patches can be found on the iPhoneWiki website. &lt;br /&gt;
&lt;br /&gt;
Android: Various rooting tools are available for Android, and the tools and processes vary depending on the user’s device. The process is usually to:&lt;br /&gt;
#Unlock the boot loader.&lt;br /&gt;
#Install a rooting application and / or flash a custom ROM through the recovery mode. &lt;br /&gt;
&lt;br /&gt;
Not all of the above tasks are necessary, and different toolkits are available for device-specific rooting processes. Custom ROMs are based on the hardware being used; some examples are as follows:&lt;br /&gt;
&lt;br /&gt;
CyanogenMod is one of the most popular aftermarket replacement firmwares in the Android world. More comprehensive device-specific firmwares, flashing guides, rooting tools and patch details can be referenced from its homepage. &lt;br /&gt;
&lt;br /&gt;
ClockWorkMod is a custom recovery option for Android phones and tablets that allows you to perform several advanced recovery, restoration, installation and maintenance operations. Please refer to xda-developers for more details. &lt;br /&gt;
&lt;br /&gt;
==Why can it be dangerous?==&lt;br /&gt;
&lt;br /&gt;
The tools above can be broadly grouped into the following categories:&lt;br /&gt;
*Userland Exploits: Jailbroken access is only obtained within the user layer.  For instance, a user may have root access, but is not able to change the boot process. These exploits can be patched with a firmware update.&lt;br /&gt;
*iBoot Exploit: Jailbroken access to user level and boot process. iBoot exploits can be patched with a firmware update.&lt;br /&gt;
*Bootrom Exploits: Jailbroken access to user level and boot process. Bootrom exploits cannot be patched with a firmware update; a hardware update of the bootrom is required in such cases.&lt;br /&gt;
&lt;br /&gt;
Some high level risks for rooting or jailbreaking devices are as follows:&lt;br /&gt;
&lt;br /&gt;
===Technical Risks:===&lt;br /&gt;
#General Mobile&lt;br /&gt;
##Some jailbreaking methods leave SSH enabled with a well known default password (i.e. alpine) that attackers can use for Command &amp;amp; Control.&lt;br /&gt;
##The entire file system of a rooted or jailbroken device is vulnerable to a malicious user inserting or extracting files.  This vulnerability is exploited by many malware programs, including DroidKungFu, DroidDream and Ikee. &lt;br /&gt;
##Credentials to sensitive applications, such as banking or corporate applications, can be stolen using key logging, sniffing or other malicious software and then transmitted via the internet connection. &lt;br /&gt;
#iOS&lt;br /&gt;
##Applications on a jailbroken device run as root outside of the iOS sandbox.  This can allow applications to access sensitive data contained in other apps or install malicious software negating sandboxing functionality. &lt;br /&gt;
##Jailbroken devices can allow a user to install and run self-signed applications. Since the apps do not go through the App Store, they are not reviewed by Apple. These apps may contain vulnerable or malicious code that can be used to exploit a device. &lt;br /&gt;
#Android&lt;br /&gt;
##Android users that change the permissions on their device to grant root access to applications increase security exposure to malicious applications and potential application flaws. &lt;br /&gt;
##3rd party Android application markets have been identified as hosting malicious applications with remote administration tool (RAT) capabilities.&lt;br /&gt;
&lt;br /&gt;
===Non-technical risks:===&lt;br /&gt;
#According to the United States Librarian of Congress (who issues Digital Millennium Copyright Act (DMCA) exemptions), jailbreaking or rooting of a smartphone is '''not''' deemed illegal in the US for persons who engage in noninfringing uses. This approval can give some users a false sense of safety, suggesting that jailbreaking or rooting is harmless. It is noteworthy, however, that the Librarian does not approve jailbreaking of tablets. Please see ''[http://www.theinquirer.net/inquirer/news/2220251/us-rules-jailbreaking-tablets-is-illegal US rules jailbreaking tablets is illegal]'' for a layman's analysis.&lt;br /&gt;
&lt;br /&gt;
#Software updates cannot be immediately applied because doing so would remove the jailbreak.  This leaves the device vulnerable to known, unpatched software vulnerabilities. &lt;br /&gt;
#Users can be tricked into downloading malicious software. For example, malware commonly uses the following tactics:&lt;br /&gt;
##Apps will often advertise that they provide additional functionality or remove ads from popular apps but also contain malicious code. &lt;br /&gt;
##Some apps will not have any malicious code as part of the initial version of the app but subsequent &amp;quot;Updates&amp;quot; will insert malicious code. &lt;br /&gt;
#Manufacturers have determined that jailbreaking or rooting is a breach of the terms of use for the device and therefore voids the warranty. This can be an issue for the user if the device needs hardware repair or technical support (Note: a device can be restored and therefore it is not a major issue, unless hardware damage otherwise covered by the warranty prevents restoration).&lt;br /&gt;
&lt;br /&gt;
==What controls can be used to protect against it?==&lt;br /&gt;
Before an organization chooses to implement a mobile solution in its environment, it should conduct a thorough risk assessment. This risk assessment should include an evaluation of the dangers posed by jailbroken or rooted devices, which are inherently more vulnerable to malicious applications or vulnerabilities such as those listed in the OWASP Mobile Security Top Ten Risks. Once this assessment has been completed, management can determine which risks to accept and which risks will require additional controls to mitigate. Below are a few examples of both technical and non-technical controls that an organization may use. &lt;br /&gt;
&lt;br /&gt;
===Technical Controls:===&lt;br /&gt;
 &lt;br /&gt;
Some of the detective controls to monitor for jailbroken or rooted devices include:&lt;br /&gt;
#Identify 3rd party app stores (e.g., Cydia).&lt;br /&gt;
#Attempt to identify modified kernels by comparing certain system files that the application would have access to on a non-jailbroken device to known good file hashes. This technique can serve as a good starting point for detection.&lt;br /&gt;
#Attempt to write a file outside of the application’s root directory.  The attempt should fail for non-jailbroken devices.&lt;br /&gt;
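The detective controls above can be sketched as a simple on-device indicator check. The following shell script is a hypothetical illustration only: the su binary paths and the Cydia location are assumptions, and production MDM agents rely on many more signals.&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical root/jailbreak indicator check (illustrative only).
# Assumed artifacts: common su binary locations left by Android rooting
# tools, the Cydia app bundle on jailbroken iOS, and a normally
# read-only /system partition.

check_device() {
  found=0

  # 1. Look for the su binary dropped by typical rooting tools.
  for path in /system/bin/su /system/xbin/su /sbin/su /data/local/bin/su; do
    if [ -f "$path" ]; then
      echo "indicator: su binary at $path"
      found=1
    fi
  done

  # 2. Look for a 3rd party app store (e.g., Cydia on jailbroken iOS).
  if [ -d /Applications/Cydia.app ]; then
    echo "indicator: Cydia installed"
    found=1
  fi

  # 3. Try to write outside the application sandbox; this should
  #    fail on a stock device.
  if touch /system/.rw_probe 2>/dev/null; then
    echo "indicator: /system is writable"
    rm -f /system/.rw_probe
    found=1
  fi

  if [ "$found" -eq 0 ]; then
    echo "no indicators found"
  fi
}

check_device
```

On a stock (non-rooted, non-jailbroken) device none of the probes should succeed, so the script reports that no indicators were found. Because attackers can hide these artifacts, comparing known system files against good hashes is suggested as a complementary check.&lt;br /&gt;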
&lt;br /&gt;
Note: Most Mobile Device Management (MDM) solutions can perform these checks but require an application to be installed on the device.&lt;br /&gt;
&lt;br /&gt;
===Non-Technical Controls:===&lt;br /&gt;
&lt;br /&gt;
Organizations must understand the following key points when thinking about mobile security:&lt;br /&gt;
#Perform a risk assessment to ensure that risks associated with mobile device use are appropriately identified, prioritized and mitigated to reduce or manage risk at levels acceptable to management.&lt;br /&gt;
#Review the application inventory on a frequent basis to identify applications posing significant risk to the mobility environment.&lt;br /&gt;
#Technology solutions such as Mobile Device Management (MDM) or Mobile Application Management (MAM) should be only one part of the overall security strategy.  High level considerations include:&lt;br /&gt;
##Policies and procedures.&lt;br /&gt;
##User awareness and user buy-in.&lt;br /&gt;
##Technical controls and platforms. &lt;br /&gt;
##Auditing, logging, and monitoring.&lt;br /&gt;
#While many organizations choose a Bring Your Own Device (BYOD) strategy, the risks and benefits need to be considered and addressed before such a strategy is put in place. For example, the organization may consider developing a support plan for the various devices and operating systems that could be introduced to the environment. Many organizations struggle with this since there are such a wide variety of devices, particularly Android devices. &lt;br /&gt;
#There is not a ‘one size fits all’ solution to mobile security.  Different levels of security controls should be employed based on the sensitivity of data that is collected, stored, or processed on a mobile device or through a mobile application.&lt;br /&gt;
#User awareness and user buy-in are key.  For consumers or customers, this could be a focus on privacy and how Personally Identifiable Information (PII) is handled. For employees, this could be a focus on Acceptable Use Agreements (AUA) as well as privacy for personal devices.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
Jailbreaking and rooting tools, resources and processes are constantly updated and have made the process easier than ever for end-users. Many users are lured to jailbreak or root their device in order to gain more control over the device, upgrade their operating systems or install packages normally unavailable through standard channels. While having these options may allow the user to utilize the device more effectively, many users do not understand that jailbreaking or rooting can potentially allow malware to bypass many of the device's built-in security features. The balance of user experience versus corporate security needs to be carefully considered, since all mobile platforms have seen an increase in malware attacks over the past year. Mobile devices now hold more personal and corporate data than ever before and have become a very appealing target for attackers. Overall, the best defense for an enterprise is to build an overarching mobile strategy that accounts for technical controls, non-technical controls and the people in the environment. Considerations should focus not only on solutions such as MDM, but also on policies and procedures around common issues of BYOD and user security awareness.&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors =&lt;br /&gt;
&lt;br /&gt;
Suktika Mukhopadhyay&amp;lt;br/&amp;gt;&lt;br /&gt;
Brandon Clark&amp;lt;br/&amp;gt;&lt;br /&gt;
Talha Tariq&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147205</id>
		<title>Mobile Jailbreaking Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147205"/>
				<updated>2013-03-09T05:39:50Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: &amp;quot;deemed legal&amp;quot; -&amp;gt; &amp;quot;not deemed illegal&amp;quot;. The register letter stated &amp;quot;...circumvention of technological measures... does not apply to persons who engage in noninfring ing uses...&amp;quot;. &amp;quot;not illegal&amp;quot; seems closer to the register letter.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Dangers of Jailbreaking and Rooting Mobile Devices (Cheat Sheet) =&lt;br /&gt;
&lt;br /&gt;
==What is &amp;quot;jailbreaking&amp;quot; and &amp;quot;rooting&amp;quot;?==&lt;br /&gt;
&lt;br /&gt;
iOS: Jailbreaking is the process of modifying iOS system kernels to allow file system read and write access. Most jailbreaking tools (and exploits) remove the limitations and security features built in by the manufacturer Apple (the &amp;quot;jail&amp;quot;) through the use of custom kernels, which make unauthorized modifications to the operating system.  Almost all jailbreaking tools allow users to run code not approved and signed by Apple. This allows users to install additional applications, extensions and patches outside the control of Apple’s App Store.&lt;br /&gt;
&lt;br /&gt;
Android: Rooting is the process of gaining administrative or privileged access to the Android OS. As the Android OS is based on the Linux kernel, rooting a device is analogous to gaining administrative, root user-equivalent, permissions on Linux. Unlike iOS, rooting is (usually) not required to run applications obtained outside of the Android Market. Some carriers control this through operating system settings or device firmware. Rooting also enables the user to completely remove and replace the device's operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Why do they occur?==&lt;br /&gt;
iOS: Many users are lured into jailbreaking to take advantage of apps made available through third party app sources, such as Cydia, which are otherwise banned or not approved by Apple. There is an inherent risk in installing such applications, as they are not quality controlled nor have they gone through Apple's application approval process. Hence, they may contain vulnerable or malicious code that could allow the device to be compromised. Alternatively, jailbreaking can allow users to enhance some built-in functions on their device. For example, a jailbroken phone can be used with a different carrier than the one it was configured with, FaceTime can be used over a 3G connection, or the phone can be unlocked to be used internationally. More technically savvy users also perform jailbreaking to enable user interface customizations, preferences and features not available through the normal software interface. Typically, these functionalities are achieved by patching specific binaries in the operating system.&lt;br /&gt;
A debated purpose for jailbreaking in the iOS community is for installing pirated iOS applications. Jailbreaking proponents discourage this use, such as Cydia warning users of pirated software when they add a pirated software repository. However, repositories such as Hackulous promote pirated applications and the tools to pirate and distribute applications.&lt;br /&gt;
&lt;br /&gt;
Android: Rooting Android devices allows users to gain access to additional hardware rights, backup utilities and direct hardware access. Additionally, rooting allows users to remove the pre-installed &amp;quot;bloatware&amp;quot;, additional features that many carriers or manufacturers put onto devices, which can use considerable amounts of disk space and memory. Most users root their device to leverage a custom read only memory (ROM) developed by the Android Community, which brings distinctive capabilities that are not available through the official ROMs installed by the carriers. Custom ROMs also provide users an option to 'upgrade' the operating system and optimize the phone experience by giving users access to features, such as tethering, that are normally blocked or limited by carriers.&lt;br /&gt;
&lt;br /&gt;
==What are the common tools used?==&lt;br /&gt;
&lt;br /&gt;
iOS: Jailbreaking software can be categorized into two main groups:&lt;br /&gt;
#Tethered: Requires the device to be connected to a system in order to bypass the iBoot signature check for iOS devices.  The iOS device needs to be connected or tethered to a computer system every time it has to reboot in order to access the jailbreak application, such as redsn0w, and boot correctly. &lt;br /&gt;
#Un-tethered: Requires connection for the initial jailbreak process and then all the software, such as sn0wbreeze and evasi0n, is on the device for future un-tethered reboots, without losing the jailbreak or the functionality of the phone. &lt;br /&gt;
&lt;br /&gt;
Some common iOS jailbreaking tools (this is not an exhaustive list) are listed below:&lt;br /&gt;
*Absinthe&lt;br /&gt;
*blackra1n&lt;br /&gt;
*Corona&lt;br /&gt;
*greenpois0n&lt;br /&gt;
*JailbreakMe&lt;br /&gt;
*limera1n&lt;br /&gt;
*PwnageTool&lt;br /&gt;
*redsn0w&lt;br /&gt;
*evasi0n&lt;br /&gt;
*sn0wbreeze&lt;br /&gt;
*Spirit&lt;br /&gt;
 &lt;br /&gt;
A more comprehensive list of jailbreaking tools for iOS, exploits and kernel patches can be found on the iPhoneWiki website. &lt;br /&gt;
&lt;br /&gt;
Android: Various rooting tools are available for Android, and the tools and processes vary depending on the user’s device. The process is usually to:&lt;br /&gt;
#Unlock the boot loader.&lt;br /&gt;
#Install a rooting application and / or flash a custom ROM through the recovery mode. &lt;br /&gt;
&lt;br /&gt;
Not all of the above tasks are necessary, and different toolkits are available for device-specific rooting processes. Custom ROMs are based on the hardware being used; some examples are as follows:&lt;br /&gt;
&lt;br /&gt;
CyanogenMod is one of the most popular aftermarket replacement firmwares in the Android world. More comprehensive device-specific firmwares, flashing guides, rooting tools and patch details can be referenced from its homepage. &lt;br /&gt;
&lt;br /&gt;
ClockWorkMod is a custom recovery option for Android phones and tablets that allows you to perform several advanced recovery, restoration, installation and maintenance operations. Please refer to xda-developers for more details. &lt;br /&gt;
&lt;br /&gt;
==Why can it be dangerous?==&lt;br /&gt;
&lt;br /&gt;
The tools above can be broadly grouped into the following categories:&lt;br /&gt;
*Userland Exploits: Jailbroken access is only obtained within the user layer.  For instance, a user may have root access, but is not able to change the boot process. These exploits can be patched with a firmware update.&lt;br /&gt;
*iBoot Exploit: Jailbroken access to user level and boot process. iBoot exploits can be patched with a firmware update.&lt;br /&gt;
*Bootrom Exploits: Jailbroken access to user level and boot process. Bootrom exploits cannot be patched with a firmware update; a hardware update of the bootrom is required in such cases.&lt;br /&gt;
&lt;br /&gt;
Some high level risks for rooting or jailbreaking devices are as follows:&lt;br /&gt;
&lt;br /&gt;
===Technical Risks:===&lt;br /&gt;
#General Mobile&lt;br /&gt;
##Some jailbreaking methods leave SSH enabled with a well known default password (i.e. alpine) that attackers can use for Command &amp;amp; Control.&lt;br /&gt;
##The entire file system of a rooted or jailbroken device is vulnerable to a malicious user inserting or extracting files.  This vulnerability is exploited by many malware programs, including DroidKungFu, DroidDream and Ikee. &lt;br /&gt;
##Credentials to sensitive applications, such as banking or corporate applications, can be stolen using key logging, sniffing or other malicious software and then transmitted via the internet connection. &lt;br /&gt;
#iOS&lt;br /&gt;
##Applications on a jailbroken device run as root outside of the iOS sandbox.  This can allow applications to access sensitive data contained in other apps or install malicious software negating sandboxing functionality. &lt;br /&gt;
##Jailbroken devices can allow a user to install and run self-signed applications. Since the apps do not go through the App Store, they are not reviewed by Apple. These apps may contain vulnerable or malicious code that can be used to exploit a device. &lt;br /&gt;
#Android&lt;br /&gt;
##Android users that change the permissions on their device to grant root access to applications increase security exposure to malicious applications and potential application flaws. &lt;br /&gt;
##3rd party Android application markets have been identified as hosting malicious applications with remote administration tool (RAT) capabilities.&lt;br /&gt;
&lt;br /&gt;
===Non-technical risks:===&lt;br /&gt;
#According to the United States Librarian of Congress (who issues Digital Millennium Copyright Act (DMCA) exemptions), jailbreaking or rooting of a smartphone is '''not''' deemed illegal in the US. This approval can give some users a false sense of safety, suggesting that jailbreaking or rooting is harmless. It is noteworthy, however, that the Librarian does not approve jailbreaking of tablets. Please see ''[http://www.theinquirer.net/inquirer/news/2220251/us-rules-jailbreaking-tablets-is-illegal US rules jailbreaking tablets is illegal]'' for a layman's analysis.&lt;br /&gt;
&lt;br /&gt;
#Software updates cannot be immediately applied because doing so would remove the jailbreak.  This leaves the device vulnerable to known, unpatched software vulnerabilities. &lt;br /&gt;
#Users can be tricked into downloading malicious software. For example, malware commonly uses the following tactics:&lt;br /&gt;
##Apps will often advertise that they provide additional functionality or remove ads from popular apps but also contain malicious code. &lt;br /&gt;
##Some apps will not have any malicious code as part of the initial version of the app but subsequent &amp;quot;Updates&amp;quot; will insert malicious code. &lt;br /&gt;
#Manufacturers have determined that jailbreaking or rooting is a breach of the terms of use for the device and therefore voids the warranty. This can be an issue for the user if the device needs hardware repair or technical support (Note: a device can be restored and therefore it is not a major issue, unless hardware damage otherwise covered by the warranty prevents restoration).&lt;br /&gt;
&lt;br /&gt;
==What controls can be used to protect against it?==&lt;br /&gt;
Before an organization chooses to implement a mobile solution in its environment, it should conduct a thorough risk assessment. This risk assessment should include an evaluation of the dangers posed by jailbroken or rooted devices, which are inherently more vulnerable to malicious applications or vulnerabilities such as those listed in the OWASP Mobile Security Top Ten Risks. Once this assessment has been completed, management can determine which risks to accept and which risks will require additional controls to mitigate. Below are a few examples of both technical and non-technical controls that an organization may use. &lt;br /&gt;
&lt;br /&gt;
===Technical Controls:===&lt;br /&gt;
 &lt;br /&gt;
Some of the detective controls to monitor for jailbroken or rooted devices include:&lt;br /&gt;
#Identify 3rd party app stores (e.g., Cydia).&lt;br /&gt;
#Attempt to identify modified kernels by comparing certain system files that the application would have access to on a non-jailbroken device to known good file hashes. This technique can serve as a good starting point for detection.&lt;br /&gt;
#Attempt to write a file outside of the application’s root directory.  The attempt should fail for non-jailbroken devices.&lt;br /&gt;
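The detective controls above can be sketched as a simple on-device indicator check. The following shell script is a hypothetical illustration only: the su binary paths and the Cydia location are assumptions, and production MDM agents rely on many more signals.&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical root/jailbreak indicator check (illustrative only).
# Assumed artifacts: common su binary locations left by Android rooting
# tools, the Cydia app bundle on jailbroken iOS, and a normally
# read-only /system partition.

check_device() {
  found=0

  # 1. Look for the su binary dropped by typical rooting tools.
  for path in /system/bin/su /system/xbin/su /sbin/su /data/local/bin/su; do
    if [ -f "$path" ]; then
      echo "indicator: su binary at $path"
      found=1
    fi
  done

  # 2. Look for a 3rd party app store (e.g., Cydia on jailbroken iOS).
  if [ -d /Applications/Cydia.app ]; then
    echo "indicator: Cydia installed"
    found=1
  fi

  # 3. Try to write outside the application sandbox; this should
  #    fail on a stock device.
  if touch /system/.rw_probe 2>/dev/null; then
    echo "indicator: /system is writable"
    rm -f /system/.rw_probe
    found=1
  fi

  if [ "$found" -eq 0 ]; then
    echo "no indicators found"
  fi
}

check_device
```

On a stock (non-rooted, non-jailbroken) device none of the probes should succeed, so the script reports that no indicators were found. Because attackers can hide these artifacts, comparing known system files against good hashes is suggested as a complementary check.&lt;br /&gt;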
&lt;br /&gt;
Note: Most Mobile Device Management (MDM) solutions can perform these checks but require an application to be installed on the device.&lt;br /&gt;
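The three detective checks above can be sketched as follows. This is a minimal illustration in Python (production iOS detection code would normally be written in Objective-C or Swift); the suspicious paths and the probe location are illustrative assumptions, not an authoritative list.

```python
import hashlib
import os

# Illustrative paths that typically exist only on jailbroken iOS devices.
SUSPICIOUS_PATHS = [
    "/Applications/Cydia.app",
    "/Library/MobileSubstrate/MobileSubstrate.dylib",
]

def has_third_party_store(paths=SUSPICIOUS_PATHS):
    """Check 1: look for artifacts of third-party app stores such as Cydia."""
    return any(os.path.exists(p) for p in paths)

def file_matches_known_hash(path, known_sha256):
    """Check 2: compare a readable system file against a known-good hash.
    Returns False if the file is unreadable or does not match."""
    try:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() == known_sha256
    except OSError:
        return False

def can_write_outside_sandbox(probe="/private/jailbreak_probe.txt"):
    """Check 3: attempt to write outside the application's container.
    On a non-jailbroken device the write should fail."""
    try:
        with open(probe, "w") as f:
            f.write("probe")
        os.remove(probe)
        return True
    except OSError:
        return False

def looks_jailbroken():
    """Combine the detective checks; any positive result is suspicious."""
    return has_third_party_store() or can_write_outside_sandbox()
```

As the note above says, MDM agents typically perform equivalent checks on-device; none of these checks is conclusive on its own, since jailbreak tools actively try to hide these artifacts.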
&lt;br /&gt;
===Non-Technical Controls:===&lt;br /&gt;
&lt;br /&gt;
Organizations must understand the following key points when thinking about mobile security:&lt;br /&gt;
#Perform a risk assessment to ensure that risks associated with mobile device use are appropriately identified, prioritized and mitigated to reduce or manage risk at levels acceptable to management.&lt;br /&gt;
#Review the application inventory listing on a frequent basis to identify applications posing significant risk to the mobility environment.&lt;br /&gt;
#Technology solutions such as Mobile Device Management (MDM) or Mobile Application Management (MAM) should be only one part of the overall security strategy.  High level considerations include:&lt;br /&gt;
##Policies and procedures.&lt;br /&gt;
##User awareness and user buy-in.&lt;br /&gt;
##Technical controls and platforms. &lt;br /&gt;
##Auditing, logging, and monitoring.&lt;br /&gt;
#While many organizations choose a Bring Your Own Device (BYOD) strategy, the risks and benefits need to be considered and addressed before such a strategy is put in place. For example, the organization may consider developing a support plan for the various devices and operating systems that could be introduced to the environment. Many organizations struggle with this since there are such a wide variety of devices, particularly Android devices. &lt;br /&gt;
#There is not a ‘one size fits all’ solution to mobile security.  Different levels of security controls should be employed based on the sensitivity of data that is collected, stored, or processed on a mobile device or through a mobile application.&lt;br /&gt;
#User awareness and user buy-in are key.  For consumers or customers, this could be a focus on privacy and how Personally Identifiable Information (PII) is handled. For employees, this could be a focus on Acceptable Use Agreements (AUA) as well as privacy for personal devices.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
Jailbreaking and rooting tools, resources and processes are constantly updated and have made the process easier than ever for end-users. Many users are lured to jailbreak or root their device in order to gain more control over the device, upgrade their operating systems or install packages normally unavailable through standard channels. While having these options may allow the user to utilize the device more effectively, many users do not understand that jailbreaking or rooting can potentially allow malware to bypass many of the device's built-in security features. The balance of user experience versus corporate security needs to be carefully considered, since all mobile platforms have seen an increase in malware attacks over the past year. Mobile devices now hold more personal and corporate data than ever before and have become a very appealing target for attackers. Overall, the best defense for an enterprise is to build an overarching mobile strategy that accounts for technical controls, non-technical controls and the people in the environment. Considerations must focus not only on solutions such as MDM, but also on policies and procedures around common issues such as BYOD, and on user security awareness.&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors =&lt;br /&gt;
&lt;br /&gt;
Suktika Mukhopadhyay&amp;lt;br/&amp;gt;&lt;br /&gt;
Brandon Clark&amp;lt;br/&amp;gt;&lt;br /&gt;
Talha Tariq&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147202</id>
		<title>Mobile Jailbreaking Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147202"/>
				<updated>2013-03-09T05:18:31Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Clarified excemptions made by Congressal Librarian, cited reference&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Dangers of Jailbreaking and Rooting Mobile Devices (Cheat Sheet) =&lt;br /&gt;
&lt;br /&gt;
==What is &amp;quot;jailbreaking&amp;quot; and &amp;quot;rooting&amp;quot;?==&lt;br /&gt;
&lt;br /&gt;
iOS: Jailbreaking is the process of modifying iOS system kernels to allow file system read and write access. Most jailbreaking tools (and exploits) remove the limitations and security features built in by the manufacturer, Apple (the &amp;quot;jail&amp;quot;), through the use of custom kernels, which make unauthorized modifications to the operating system.  Almost all jailbreaking tools allow users to run code not approved and signed by Apple. This allows users to install additional applications, extensions and patches without the control of Apple’s App Store.&lt;br /&gt;
&lt;br /&gt;
Android: Rooting is the process of gaining administrative or privileged access for the Android OS. As the Android OS is based on the Linux kernel, rooting a device is analogous to gaining administrative, root user-equivalent, permissions on Linux. Unlike iOS, rooting is (usually) not required to run applications from outside the Android Market. Some carriers control this capability through operating system settings or device firmware. Rooting also enables the user to completely remove and replace the device's operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Why do they occur?==&lt;br /&gt;
iOS: Many users are lured into jailbreaking to take advantage of apps made available through third party app sources, such as Cydia, which are otherwise banned or not approved by Apple. There is an inherent risk in installing such applications, as they are not quality controlled nor have they gone through Apple's application review and approval process. Hence, they may contain vulnerable or malicious code that could allow the device to be compromised. Alternatively, jailbreaking can allow users to enhance some built-in functions on their device. For example, a jailbroken phone can be used with a different carrier than the one it was configured with, FaceTime can be used over a 3G connection, or the phone can be unlocked to be used internationally. More technically savvy users also perform jailbreaking to enable user interface customizations, preferences and features not available through the normal software interface. Typically, these functionalities are achieved by patching specific binaries in the operating system.&lt;br /&gt;
A debated purpose for jailbreaking in the iOS community is installing pirated iOS applications. Jailbreaking proponents discourage this use; for example, Cydia warns users about pirated software when they add a pirated software repository. However, repositories such as Hackulous promote pirated applications and the tools to pirate and distribute applications.&lt;br /&gt;
&lt;br /&gt;
Android: Rooting Android devices allows users to gain access to additional hardware rights, backup utilities and direct hardware access. Additionally, rooting allows users to remove the pre-installed &amp;quot;bloatware&amp;quot;, additional features that many carriers or manufacturers put onto devices, which can use considerable amounts of disk space and memory. Most users root their device to leverage a custom read only memory (ROM) developed by the Android Community, which brings distinctive capabilities that are not available through the official ROMs installed by the carriers. Custom ROMs also provide users an option to 'upgrade' the operating system and optimize the phone experience by giving users access to features, such as tethering, that are normally blocked or limited by carriers.&lt;br /&gt;
&lt;br /&gt;
==What are the common tools used?==&lt;br /&gt;
&lt;br /&gt;
iOS: Jailbreaking software can be categorized into two main groups:&lt;br /&gt;
#Tethered: Requires the device to be connected to a system in order to bypass the iBoot signature check for iOS devices.  The iOS device needs to be connected or tethered to a computer system every time it has to reboot in order to access the jailbreak application, such as redsn0w, and boot correctly. &lt;br /&gt;
#Un-tethered: Requires connection for the initial jailbreak process and then all the software, such as sn0wbreeze and evasi0n, is on the device for future un-tethered reboots, without losing the jailbreak or the functionality of the phone. &lt;br /&gt;
&lt;br /&gt;
Some common iOS jailbreaking tools are listed below (the list is not exhaustive):&lt;br /&gt;
*Absinthe&lt;br /&gt;
*blackra1n&lt;br /&gt;
*Corona&lt;br /&gt;
*greenpois0n&lt;br /&gt;
*JailbreakMe&lt;br /&gt;
*limera1n&lt;br /&gt;
*PwnageTool&lt;br /&gt;
*redsn0w&lt;br /&gt;
*evasi0n&lt;br /&gt;
*sn0wbreeze&lt;br /&gt;
*Spirit&lt;br /&gt;
 &lt;br /&gt;
A more comprehensive list of jailbreaking tools for iOS, exploits and kernel patches can be found on the iPhoneWiki website. &lt;br /&gt;
&lt;br /&gt;
Android: There are various rooting software available for Android. Tools and processes vary depending on the user’s device. The process is usually to:&lt;br /&gt;
#Unlock the boot loader.&lt;br /&gt;
#Install a rooting application and / or flash a custom ROM through the recovery mode. &lt;br /&gt;
&lt;br /&gt;
Not all of the above tasks are necessary, and different toolkits are available for device-specific rooting processes. Custom ROMs are based on the hardware being used; some examples are as follows:&lt;br /&gt;
&lt;br /&gt;
CyanogenMod is one of the most popular families of aftermarket replacement firmware in the Android world. More comprehensive device-specific firmwares, flashing guides, rooting tools and patch details can be found on its homepage. &lt;br /&gt;
&lt;br /&gt;
ClockWorkMod is a custom recovery option for Android phones and tablets that allows you to perform several advanced recovery, restoration, installation and maintenance operations. Please refer to xda-developers for more details. &lt;br /&gt;
&lt;br /&gt;
==Why can it be dangerous?==&lt;br /&gt;
&lt;br /&gt;
The tools above can be broadly grouped into the following categories:&lt;br /&gt;
*Userland Exploits: Jailbroken access is only obtained within the user layer.  For instance, a user may have root access, but is not able to change the boot process. These exploits can be patched with a firmware update.&lt;br /&gt;
*iBoot Exploit: Jailbroken access to user level and boot process. iBoot exploits can be patched with a firmware update.&lt;br /&gt;
*Bootrom Exploits: Jailbroken access to user level and boot process. Bootrom exploits cannot be patched with a firmware update; a hardware update of the bootrom is required in such cases.&lt;br /&gt;
&lt;br /&gt;
Some high level risks for rooting or jailbreaking devices are as follows:&lt;br /&gt;
&lt;br /&gt;
===Technical Risks:===&lt;br /&gt;
#General Mobile&lt;br /&gt;
##Some jailbreaking methods leave SSH enabled with a well known default password (i.e. alpine) that attackers can use for Command &amp;amp; Control.&lt;br /&gt;
##The entire file system of a rooted or jailbroken device is vulnerable to a malicious user inserting or extracting files.  This vulnerability is exploited by many malware programs, including Droid Kung Fu, Droid Dream and Ikee. &lt;br /&gt;
##Credentials to sensitive applications, such as banking or corporate applications, can be stolen using key logging, sniffing or other malicious software and then transmitted via the internet connection. &lt;br /&gt;
#iOS&lt;br /&gt;
##Applications on a jailbroken device run as root outside of the iOS sandbox.  This can allow applications to access sensitive data contained in other apps or install malicious software negating sandboxing functionality. &lt;br /&gt;
##Jailbroken devices can allow a user to install and run self-signed applications. Since the apps do not go through the App Store, they are not reviewed by Apple. These apps may contain vulnerable or malicious code that can be used to exploit a device. &lt;br /&gt;
#Android&lt;br /&gt;
##Android users that change the permissions on their device to grant root access to applications increase security exposure to malicious applications and potential application flaws. &lt;br /&gt;
##3rd party Android application markets have been identified as hosting malicious applications with remote administration tool (RAT) capabilities.&lt;br /&gt;
&lt;br /&gt;
===Non-technical risks:===&lt;br /&gt;
#According to the United States Librarian of Congress (who issues Digital Millennium Copyright Act (DMCA) exemptions), jailbreaking or rooting a smartphone is deemed 'legal' in the US. The exemption can provide some users with a false sense of safety, suggesting that jailbreaking or rooting is harmless. It is noteworthy, however, that the Librarian does not exempt jailbreaking of tablets. Please see ''[http://www.theinquirer.net/inquirer/news/2220251/us-rules-jailbreaking-tablets-is-illegal US rules jailbreaking tablets is illegal]'' for a layman's analysis.&lt;br /&gt;
&lt;br /&gt;
#Software updates cannot be immediately applied because doing so would remove the jailbreak.  This leaves the device vulnerable to known, unpatched software vulnerabilities. &lt;br /&gt;
#Users can be tricked into downloading malicious software. For example, malware commonly uses the following tactics to trick users into downloading software. &lt;br /&gt;
##Apps will often advertise that they provide additional functionality or remove ads from popular apps but also contain malicious code. &lt;br /&gt;
##Some apps will not have any malicious code as part of the initial version of the app but subsequent &amp;quot;Updates&amp;quot; will insert malicious code. &lt;br /&gt;
#Manufacturers have determined that jailbreaking or rooting is a breach of the terms of use for the device and therefore voids the warranty. This can be an issue for the user if the device needs hardware repair or technical support (Note: a device can be restored and therefore it is not a major issue, unless hardware damage otherwise covered by the warranty prevents restoration).&lt;br /&gt;
&lt;br /&gt;
==What controls can be used to protect against it?==&lt;br /&gt;
Before an organization chooses to implement a mobile solution in its environment, it should conduct a thorough risk assessment. This risk assessment should include an evaluation of the dangers posed by jailbroken or rooted devices, which are inherently more vulnerable to malicious applications or vulnerabilities such as those listed in the OWASP Mobile Security Top Ten Risks. Once this assessment has been completed, management can determine which risks to accept and which risks will require additional controls to mitigate. Below are a few examples of both technical and non-technical controls that an organization may use. &lt;br /&gt;
&lt;br /&gt;
===Technical Controls:===&lt;br /&gt;
 &lt;br /&gt;
Some of the detective controls to monitor for jailbroken or rooted devices include:&lt;br /&gt;
#Identify 3rd party app stores (e.g., Cydia).&lt;br /&gt;
#Attempt to identify modified kernels by comparing certain system files that the application would have access to on a non jailbroken device to known good file hashes. This technique can serve as a good starting point for detection.&lt;br /&gt;
#Attempt to write a file outside of the application’s root directory.  The attempt should fail for non-jailbroken devices.&lt;br /&gt;
&lt;br /&gt;
Note: Most Mobile Device Management (MDM) solutions can perform these checks but require an application to be installed on the device.&lt;br /&gt;
&lt;br /&gt;
===Non-Technical Controls:===&lt;br /&gt;
&lt;br /&gt;
Organizations must understand the following key points when thinking about mobile security:&lt;br /&gt;
#Perform a risk assessment to ensure that risks associated with mobile device use are appropriately identified, prioritized and mitigated to reduce or manage risk at levels acceptable to management.&lt;br /&gt;
#Review the application inventory listing on a frequent basis to identify applications posing significant risk to the mobility environment.&lt;br /&gt;
#Technology solutions such as Mobile Device Management (MDM) or Mobile Application Management (MAM) should be only one part of the overall security strategy.  High level considerations include:&lt;br /&gt;
##Policies and procedures.&lt;br /&gt;
##User awareness and user buy-in.&lt;br /&gt;
##Technical controls and platforms. &lt;br /&gt;
##Auditing, logging, and monitoring.&lt;br /&gt;
#While many organizations choose a Bring Your Own Device (BYOD) strategy, the risks and benefits need to be considered and addressed before such a strategy is put in place. For example, the organization may consider developing a support plan for the various devices and operating systems that could be introduced to the environment. Many organizations struggle with this since there are such a wide variety of devices, particularly Android devices. &lt;br /&gt;
#There is not a ‘one size fits all’ solution to mobile security.  Different levels of security controls should be employed based on the sensitivity of data that is collected, stored, or processed on a mobile device or through a mobile application.&lt;br /&gt;
#User awareness and user buy-in are key.  For consumers or customers, this could be a focus on privacy and how Personally Identifiable Information (PII) is handled. For employees, this could be a focus on Acceptable Use Agreements (AUA) as well as privacy for personal devices.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
Jailbreaking and rooting tools, resources and processes are constantly updated and have made the process easier than ever for end-users. Many users are lured to jailbreak or root their device in order to gain more control over the device, upgrade their operating systems or install packages normally unavailable through standard channels. While having these options may allow the user to utilize the device more effectively, many users do not understand that jailbreaking or rooting can potentially allow malware to bypass many of the device's built-in security features. The balance of user experience versus corporate security needs to be carefully considered, since all mobile platforms have seen an increase in malware attacks over the past year. Mobile devices now hold more personal and corporate data than ever before and have become a very appealing target for attackers. Overall, the best defense for an enterprise is to build an overarching mobile strategy that accounts for technical controls, non-technical controls and the people in the environment. Considerations must focus not only on solutions such as MDM, but also on policies and procedures around common issues such as BYOD, and on user security awareness.&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors =&lt;br /&gt;
&lt;br /&gt;
Suktika Mukhopadhyay&amp;lt;br/&amp;gt;&lt;br /&gt;
Brandon Clark&amp;lt;br/&amp;gt;&lt;br /&gt;
Talha Tariq&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147195</id>
		<title>Mobile Jailbreaking Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Mobile_Jailbreaking_Cheat_Sheet&amp;diff=147195"/>
				<updated>2013-03-09T04:59:54Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added the newest family member: evasi0n&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Dangers of Jailbreaking and Rooting Mobile Devices (Cheat Sheet) =&lt;br /&gt;
&lt;br /&gt;
==What is &amp;quot;jailbreaking&amp;quot; and &amp;quot;rooting&amp;quot;?==&lt;br /&gt;
&lt;br /&gt;
iOS: Jailbreaking is the process of modifying iOS system kernels to allow file system read and write access. Most jailbreaking tools (and exploits) remove the limitations and security features built in by the manufacturer, Apple (the &amp;quot;jail&amp;quot;), through the use of custom kernels, which make unauthorized modifications to the operating system.  Almost all jailbreaking tools allow users to run code not approved and signed by Apple. This allows users to install additional applications, extensions and patches without the control of Apple’s App Store.&lt;br /&gt;
&lt;br /&gt;
Android: Rooting is the process of gaining administrative or privileged access for the Android OS. As the Android OS is based on the Linux kernel, rooting a device is analogous to gaining administrative, root user-equivalent, permissions on Linux. Unlike iOS, rooting is (usually) not required to run applications from outside the Android Market. Some carriers control this capability through operating system settings or device firmware. Rooting also enables the user to completely remove and replace the device's operating system.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Why do they occur?==&lt;br /&gt;
iOS: Many users are lured into jailbreaking to take advantage of apps made available through third party app sources, such as Cydia, which are otherwise banned or not approved by Apple. There is an inherent risk in installing such applications, as they are not quality controlled nor have they gone through Apple's application review and approval process. Hence, they may contain vulnerable or malicious code that could allow the device to be compromised. Alternatively, jailbreaking can allow users to enhance some built-in functions on their device. For example, a jailbroken phone can be used with a different carrier than the one it was configured with, FaceTime can be used over a 3G connection, or the phone can be unlocked to be used internationally. More technically savvy users also perform jailbreaking to enable user interface customizations, preferences and features not available through the normal software interface. Typically, these functionalities are achieved by patching specific binaries in the operating system.&lt;br /&gt;
A debated purpose for jailbreaking in the iOS community is installing pirated iOS applications. Jailbreaking proponents discourage this use; for example, Cydia warns users about pirated software when they add a pirated software repository. However, repositories such as Hackulous promote pirated applications and the tools to pirate and distribute applications.&lt;br /&gt;
&lt;br /&gt;
Android: Rooting Android devices allows users to gain access to additional hardware rights, backup utilities and direct hardware access. Additionally, rooting allows users to remove the pre-installed &amp;quot;bloatware&amp;quot;, additional features that many carriers or manufacturers put onto devices, which can use considerable amounts of disk space and memory. Most users root their device to leverage a custom read only memory (ROM) developed by the Android Community, which brings distinctive capabilities that are not available through the official ROMs installed by the carriers. Custom ROMs also provide users an option to 'upgrade' the operating system and optimize the phone experience by giving users access to features, such as tethering, that are normally blocked or limited by carriers.&lt;br /&gt;
&lt;br /&gt;
==What are the common tools used?==&lt;br /&gt;
&lt;br /&gt;
iOS: Jailbreaking software can be categorized into two main groups:&lt;br /&gt;
#Tethered: Requires the device to be connected to a system in order to bypass the iBoot signature check for iOS devices.  The iOS device needs to be connected or tethered to a computer system every time it has to reboot in order to access the jailbreak application, such as redsn0w, and boot correctly. &lt;br /&gt;
#Un-tethered: Requires connection for the initial jailbreak process and then all the software, such as sn0wbreeze and evasi0n, is on the device for future un-tethered reboots, without losing the jailbreak or the functionality of the phone. &lt;br /&gt;
&lt;br /&gt;
Some common iOS jailbreaking tools are listed below (the list is not exhaustive):&lt;br /&gt;
*Absinthe&lt;br /&gt;
*blackra1n&lt;br /&gt;
*Corona&lt;br /&gt;
*greenpois0n&lt;br /&gt;
*JailbreakMe&lt;br /&gt;
*limera1n&lt;br /&gt;
*PwnageTool&lt;br /&gt;
*redsn0w&lt;br /&gt;
*sn0wbreeze&lt;br /&gt;
*Spirit&lt;br /&gt;
 &lt;br /&gt;
A more comprehensive list of jailbreaking tools for iOS, exploits and kernel patches can be found on the iPhoneWiki website. &lt;br /&gt;
&lt;br /&gt;
Android: There are various rooting software available for Android. Tools and processes vary depending on the user’s device. The process is usually to:&lt;br /&gt;
#Unlock the boot loader.&lt;br /&gt;
#Install a rooting application and / or flash a custom ROM through the recovery mode. &lt;br /&gt;
&lt;br /&gt;
Not all of the above tasks are necessary, and different toolkits are available for device-specific rooting processes. Custom ROMs are based on the hardware being used; some examples are as follows:&lt;br /&gt;
&lt;br /&gt;
CyanogenMod is one of the most popular families of aftermarket replacement firmware in the Android world. More comprehensive device-specific firmwares, flashing guides, rooting tools and patch details can be found on its homepage. &lt;br /&gt;
&lt;br /&gt;
ClockWorkMod is a custom recovery option for Android phones and tablets that allows you to perform several advanced recovery, restoration, installation and maintenance operations. Please refer to xda-developers for more details. &lt;br /&gt;
&lt;br /&gt;
==Why can it be dangerous?==&lt;br /&gt;
&lt;br /&gt;
The tools above can be broadly grouped into the following categories:&lt;br /&gt;
*Userland Exploits: Jailbroken access is only obtained within the user layer.  For instance, a user may have root access, but is not able to change the boot process. These exploits can be patched with a firmware update.&lt;br /&gt;
*iBoot Exploit: Jailbroken access to user level and boot process. iBoot exploits can be patched with a firmware update.&lt;br /&gt;
*Bootrom Exploits: Jailbroken access to user level and boot process. Bootrom exploits cannot be patched with a firmware update; a hardware update of the bootrom is required in such cases.&lt;br /&gt;
&lt;br /&gt;
Some high level risks for rooting or jailbreaking devices are as follows:&lt;br /&gt;
&lt;br /&gt;
===Technical Risks:===&lt;br /&gt;
#General Mobile&lt;br /&gt;
##Some jailbreaking methods leave SSH enabled with a well known default password (i.e. alpine) that attackers can use for Command &amp;amp; Control.&lt;br /&gt;
##The entire file system of a rooted or jailbroken device is vulnerable to a malicious user inserting or extracting files.  This vulnerability is exploited by many malware programs, including Droid Kung Fu, Droid Dream and Ikee. &lt;br /&gt;
##Credentials to sensitive applications, such as banking or corporate applications, can be stolen using key logging, sniffing or other malicious software and then transmitted via the internet connection. &lt;br /&gt;
#iOS&lt;br /&gt;
##Applications on a jailbroken device run as root outside of the iOS sandbox.  This can allow applications to access sensitive data contained in other apps or install malicious software negating sandboxing functionality. &lt;br /&gt;
##Jailbroken devices can allow a user to install and run self-signed applications. Since the apps do not go through the App Store, they are not reviewed by Apple. These apps may contain vulnerable or malicious code that can be used to exploit a device. &lt;br /&gt;
#Android&lt;br /&gt;
##Android users that change the permissions on their device to grant root access to applications increase security exposure to malicious applications and potential application flaws. &lt;br /&gt;
##3rd party Android application markets have been identified as hosting malicious applications with remote administration tool (RAT) capabilities.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Non-technical risks:===&lt;br /&gt;
#Under the current Digital Millennium Copyright Act (DMCA), jailbreaking or rooting is deemed 'legal' in the US, which can provide some users with a false sense of safety, suggesting that jailbreaking or rooting is harmless. Please refer to 'Rulemaking on Anticircumvention' for more details. &lt;br /&gt;
#Software updates cannot be immediately applied because doing so would remove the jailbreak.  This leaves the device vulnerable to known, unpatched software vulnerabilities. &lt;br /&gt;
#Users can be tricked into downloading malicious software. For example, malware commonly uses the following tactics to trick users into downloading software. &lt;br /&gt;
##Apps will often advertise that they provide additional functionality or remove ads from popular apps but also contain malicious code. &lt;br /&gt;
##Some apps will not have any malicious code as part of the initial version of the app but subsequent &amp;quot;Updates&amp;quot; will insert malicious code. &lt;br /&gt;
#Manufacturers have determined that jailbreaking or rooting is a breach of the terms of use for the device and therefore voids the warranty. This can be an issue for the user if the device needs hardware repair or technical support (Note: a device can be restored and therefore it is not a major issue, unless hardware damage otherwise covered by the warranty prevents restoration).&lt;br /&gt;
&lt;br /&gt;
==What controls can be used to protect against it?==&lt;br /&gt;
Before an organization chooses to implement a mobile solution in its environment, it should conduct a thorough risk assessment. This risk assessment should include an evaluation of the dangers posed by jailbroken or rooted devices, which are inherently more vulnerable to malicious applications or vulnerabilities such as those listed in the OWASP Mobile Security Top Ten Risks. Once this assessment has been completed, management can determine which risks to accept and which risks will require additional controls to mitigate. Below are a few examples of both technical and non-technical controls that an organization may use. &lt;br /&gt;
&lt;br /&gt;
===Technical Controls:===&lt;br /&gt;
 &lt;br /&gt;
Some of the detective controls to monitor for jailbroken or rooted devices include:&lt;br /&gt;
#Identify 3rd party app stores (e.g., Cydia).&lt;br /&gt;
#Attempt to identify modified kernels by comparing certain system files that the application would have access to on a non jailbroken device to known good file hashes. This technique can serve as a good starting point for detection.&lt;br /&gt;
#Attempt to write a file outside of the application’s root directory.  The attempt should fail for non-jailbroken devices.&lt;br /&gt;
&lt;br /&gt;
Note: Most Mobile Device Management (MDM) solutions can perform these checks but require an application to be installed on the device.&lt;br /&gt;
&lt;br /&gt;
===Non-Technical Controls:===&lt;br /&gt;
&lt;br /&gt;
Organizations must understand the following key points when thinking about mobile security:&lt;br /&gt;
#Perform a risk assessment to ensure that risks associated with mobile device use are appropriately identified, prioritized and mitigated to reduce or manage risk at levels acceptable to management.&lt;br /&gt;
#Review the application inventory listing on a frequent basis to identify applications posing significant risk to the mobility environment.&lt;br /&gt;
#Technology solutions such as Mobile Device Management (MDM) or Mobile Application Management (MAM) should be only one part of the overall security strategy.  High level considerations include:&lt;br /&gt;
##Policies and procedures.&lt;br /&gt;
##User awareness and user buy-in.&lt;br /&gt;
##Technical controls and platforms. &lt;br /&gt;
##Auditing, logging, and monitoring.&lt;br /&gt;
#While many organizations choose a Bring Your Own Device (BYOD) strategy, the risks and benefits need to be considered and addressed before such a strategy is put in place. For example, the organization may consider developing a support plan for the various devices and operating systems that could be introduced to the environment. Many organizations struggle with this since there are such a wide variety of devices, particularly Android devices. &lt;br /&gt;
#There is not a ‘one size fits all’ solution to mobile security.  Different levels of security controls should be employed based on the sensitivity of data that is collected, stored, or processed on a mobile device or through a mobile application.&lt;br /&gt;
#User awareness and user buy-in are key.  For consumers or customers, this could be a focus on privacy and how Personally Identifiable Information (PII) is handled. For employees, this could be a focus on Acceptable Use Agreements (AUA) as well as privacy for personal devices.&lt;br /&gt;
&lt;br /&gt;
==Conclusion==&lt;br /&gt;
&lt;br /&gt;
Jailbreaking and rooting tools, resources and processes are constantly updated and have made the process easier than ever for end-users. Many users are lured to jailbreak or root their device in order to gain more control over the device, upgrade their operating systems or install packages normally unavailable through standard channels. While having these options may allow the user to utilize the device more effectively, many users do not understand that jailbreaking or rooting can potentially allow malware to bypass many of the device's built-in security features. The balance of user experience versus corporate security needs to be carefully considered since all mobile platforms have seen an increase in malware attacks over the past year. Mobile devices now hold more personal and corporate data than ever before and have become a very appealing target for attackers. Overall, the best defense for an enterprise is to build an overarching mobile strategy that accounts for technical controls, non-technical controls and the people in the environment. Considerations should focus not only on solutions such as MDM, but also on policies and procedures around common BYOD issues and user security awareness.&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors =&lt;br /&gt;
&lt;br /&gt;
Suktika Mukhopadhyay&amp;lt;br/&amp;gt;&lt;br /&gt;
Brandon Clark&amp;lt;br/&amp;gt;&lt;br /&gt;
Talha Tariq&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147188</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147188"/>
				<updated>2013-03-09T04:55:08Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Fixed markup&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, integration, static analysis, and platform security. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects including Auto-configured projects, Makefile-based, Eclipse-based, and Xcode-based. It's important to address the gaps at configuration and build time because it's difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user-supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofitting existing Autotools files.&lt;br /&gt;
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other 'debugging and diagnostic' oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Forgoing optimizations improves debuggability because optimizations often rearrange statements to improve instruction scheduling and remove unneeded code. You may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self-debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be completely and fully instrumented with asserts that: (1) validate and assert all program state relevant to a function or a method; (2) validate and assert all function parameters; and (3) validate and assert all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is useless behavior, so you must supply your own assert that &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;s. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but crashes should be few and far between. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post-mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; since it's not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including, for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configuration includes a logging level of ten or ''maximum verbosity'', you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many object-oriented purists oppose testing private interfaces, but this is not about object-orientedness. This is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your program. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries. A well-integrated library can complement your code, and a poorly written library can detract from your program.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing of libraries is practically non-existent. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of the lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it's popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLport, your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
&lt;br /&gt;
Debug builds present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If required, you might have to review code or step through library code under the debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile as a security gate. If you find it's painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive to support multiple compilers and platforms, since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall -Wextra -Wconversion (or -Wsign-conversion) -Wcast-align -Wformat=2 -Wformat-security -fno-common -Wmissing-prototypes -Wmissing-declarations -Wstrict-prototypes -Wstrict-overflow -Wtrampolines&amp;lt;/tt&amp;gt;. C++ presents additional opportunities under GCC, and the flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual -Wreorder -Wsign-promo -Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a Microsoft platform, you should use: &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. If you don't use &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, Microsoft recommends using &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt; and enabling C4191, C4242, C4263, C4264, C4265, C4266, C4302, C4826, C4905, C4906, and C4928. Finally, &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'', ''[http://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx “Off By Default” Compiler Warnings in Visual C++]'', and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Android's Gingerbreak overwrote the Global Offset Table (GOT) in the ELF headers, and could have been avoided with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-fstack-protector-all&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-fstack-protector&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. If available, you should also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt; (the last two should be used in debug configurations). &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-exec heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SAFESEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR), data execution prevention (DEP), stack cookies, and protection against exception handler overwrites.&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Options Summary]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147184</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147184"/>
				<updated>2013-03-09T04:53:41Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Added Microsoft warnings C4191, C4242, C4263, C4264, C4265, C4266, C4302, C4826, C4905, C4906, and C4928&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, integration, static analysis, and platform security. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects including Auto-configured projects, Makefile-based, Eclipse-based, and Xcode-based. It's important to address the gaps at configuration and build time because it's difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user-supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofitting existing Autotools files.&lt;br /&gt;
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other 'debugging and diagnostic' oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Forgoing optimizations improves debuggability because optimizations often rearrange statements to improve instruction scheduling and remove unneeded code. You may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self-debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be completely and fully instrumented with asserts that: (1) validate and assert all program state relevant to a function or a method; (2) validate and assert all function parameters; and (3) validate and assert all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is useless behavior, so you must supply your own assert that &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;s. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but crashes should be few and far between. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post-mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; since it's not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including, for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configuration includes a logging level of ten or ''maximum verbosity'', you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all exported interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many object-oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your program. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries. A well integrated library can complement your code, and a poorly written library can detract from your program.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing a library is practically non-existent in most projects. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of a lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it's popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLport your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
&lt;br /&gt;
Debug builds present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as Clang's or GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If required, you might have to review code or step library code under the debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile a security gate. If you find it painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive for support across multiple compilers and platforms since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, many stability obstacles will have been removed from your code.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall -Wextra -Wconversion&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-Wsign-conversion&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-Wcast-align -Wformat=2 -Wformat-security -fno-common -Wmissing-prototypes -Wmissing-declarations -Wstrict-prototypes -Wstrict-overflow -Wtrampolines&amp;lt;/tt&amp;gt;. C++ presents additional opportunities under GCC, and the flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual -Wreorder -Wsign-promo -Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a Microsoft platform, you should use &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. If you don't use &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, Microsoft recommends using &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt; and enabling C4191, C4242, C4263, C4264, C4265, C4266, C4302, C4826, C4905, C4906, and C4928. Finally, &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need the Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'', ''[http://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx “Off By Default” Compiler Warnings in Visual C++]'', and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Android's Gingerbreak exploit overwrote the Global Offset Table (GOT) of the ELF executable, and could have been prevented with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-fstack-protector-all&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-fstack-protector&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. If available, you should also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt; (the last two should be used in debug configurations). &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-exec heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SAFESEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR), data execution prevention (DEP), and stack cookies, and to thwart exception handler overwrites.&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Options Summary]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147181</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147181"/>
				<updated>2013-03-09T04:45:04Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Code -&amp;gt; Program&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, integration, static analysis, and platform security. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Auto-configured, Makefile-based, Eclipse-based, and Xcode-based projects. It's important to address the gaps at configuration and build time because it's difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user-supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofit existing Autotools files.&lt;br /&gt;
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define the &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other 'debugging and diagnostic' oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Disabling optimizations improves debuggability because optimizations often rearrange statements to improve instruction scheduling and remove unneeded code; you may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self-debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be fully instrumented with asserts that: (1) validate and assert all program state relevant to a function or method; (2) validate and assert all function parameters; and (3) validate and assert all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions, which cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is useless behavior, so you must supply your own assert that raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but such occasions should be few and far between. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;, since it's not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including, for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configuration includes a logging level of ten or ''maximum verbosity'', you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all exported interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many object-oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your program. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries. A well integrated library can complement your code, and a poorly written library can detract from your program.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing a library is practically non-existent in most projects. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of a lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it's popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLport your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
&lt;br /&gt;
Debug builds present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as Clang's or GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If required, you might have to review code or step library code under the debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile a security gate. If you find it painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive for support across multiple compilers and platforms since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, many stability obstacles will have been removed from your code.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall -Wextra -Wconversion&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-Wsign-conversion&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-Wcast-align -Wformat=2 -Wformat-security -fno-common -Wmissing-prototypes -Wmissing-declarations -Wstrict-prototypes -Wstrict-overflow -Wtrampolines&amp;lt;/tt&amp;gt;. C++ presents additional opportunities under GCC, and the flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual -Wreorder -Wsign-promo -Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a Microsoft platform, you should use: &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'', ''[http://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx “Off By Default” Compiler Warnings in Visual C++]'', and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Android's Gingerbreak exploit overwrote the Global Offset Table (GOT) of the ELF executable, and could have been prevented with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-fstack-protector-all&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-fstack-protector&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. If available, you should also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt; (the last two should be used in debug configurations). &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-exec heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SAFESEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR), data execution prevention (DEP), and stack cookies, and to thwart exception handler overwrites.&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Options Summary]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147179</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147179"/>
				<updated>2013-03-09T04:41:22Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Improved references&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, integration, static analysis, and platform security. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Autoconf-based, Makefile-based, Eclipse-based, and Xcode-based projects. It's important to address the gaps at configuration and build time because it's difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user-supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofitting existing autotools files.&lt;br /&gt;
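As a minimal sketch of the advice above, a hand-written build script (or makefile) can model the three configurations itself. The variable name &amp;lt;tt&amp;gt;CONFIG&amp;lt;/tt&amp;gt; and the exact flag sets are illustrative, not prescribed by this cheat sheet:

```shell
# Select one of the three configurations; 'debug' is the default here.
CONFIG="${CONFIG:-debug}"

case "$CONFIG" in
  debug)   CFLAGS="-O0 -g3 -ggdb -DDEBUG" ;;
  release) CFLAGS="-O2 -g2 -DNDEBUG" ;;
  test)    CFLAGS="-O2 -g2 -DNDEBUG -Dprotected=public -Dprivate=public" ;;
  *)       echo "unknown configuration: $CONFIG" >&2; exit 1 ;;
esac

echo "CFLAGS for $CONFIG: $CFLAGS"
```

The same pattern extends to &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; and linker flags; the point is that the configuration is an explicit, first-class choice rather than something bolted onto autotools output.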
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other 'debugging and diagnostic' oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Disabling optimization improves debuggability because optimizations often rearrange statements to improve instruction scheduling and remove unneeded code. You may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self-debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be fully instrumented with asserts that: (1) validate all program state relevant to a function or a method; (2) validate all function parameters; and (3) validate all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is useless behavior, so you should supply your own assert that raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but such traces should be few and far between. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; since it's not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including, for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configuration includes a logging level of ten or ''maximum verbosity'', you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many object-oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your code. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries. A well-integrated library can complement your code, and a poorly written library can detract from your program.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing a library is practically non-existent. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it's popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLport, your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
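For the OpenSSL case, the configure step might look like the following sketch. Option spellings vary across OpenSSL releases, so check the release's INSTALL notes before relying on these exact tokens:

```shell
# Configure OpenSSL without compression, SSLv2, and SSLv3, then build.
# 'openssl-src' is a placeholder for the unpacked source directory.
cd openssl-src
./config -no-comp -no-sslv2 -no-sslv3
make
```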
&lt;br /&gt;
Debug builds present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If required, you might have to review code or step through library code under the debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile as a security gate. If you find it's painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive for support across multiple compilers and platforms since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall -Wextra -Wconversion (or -Wsign-conversion), -Wcast-align, -Wformat=2 -Wformat-security, -fno-common, -Wmissing-prototypes, -Wmissing-declarations, -Wstrict-prototypes, -Wstrict-overflow, and -Wtrampolines&amp;lt;/tt&amp;gt;. C++ presents additional opportunities under GCC, and the flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual, -Wreorder, -Wsign-promo, -Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
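Collected into shell variables for a makefile or configure invocation, the flags above might be set as follows. Trim the lists to what your compiler version actually accepts (the variable names are illustrative):

```shell
# Warning flags for C (and Objective C) translation units.
CWARN="-Wall -Wextra -Wconversion -Wcast-align -Wformat=2 -Wformat-security \
-fno-common -Wmissing-prototypes -Wmissing-declarations -Wstrict-prototypes \
-Wstrict-overflow -Wtrampolines"

# Warning flags for C++ translation units.
CXXWARN="-Wall -Wextra -Wconversion -Wcast-align -Wformat=2 -Wformat-security \
-fno-common -Wstrict-overflow -Woverloaded-virtual -Wreorder -Wsign-promo \
-Wnon-virtual-dtor"

CFLAGS="$CWARN"
CXXFLAGS="$CXXWARN"
echo "$CFLAGS"
```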
&lt;br /&gt;
For a Microsoft platform, you should use: &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'', ''[http://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx “Off By Default” Compiler Warnings in Visual C++]'', and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Android's GingerBreak exploit overwrote the Global Offset Table (GOT) of the ELF binary, and could have been avoided with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-fstack-protector-all&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-fstack-protector&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. If available, you should also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt; (the last two should be used in debug configurations). &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-exec heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SafeSEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR), data execution prevention (DEP), and stack cookies, and to thwart exception handler overwrites.&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Options Summary]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147177</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147177"/>
				<updated>2013-03-09T04:32:01Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Improved flow&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, integration, static analysis, and platform security. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Autoconf-based, Makefile-based, Eclipse-based, and Xcode-based projects. It's important to address the gaps at configuration and build time because it's difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user-supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofitting existing autotools files.&lt;br /&gt;
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other 'debugging and diagnostic' oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Disabling optimization improves debuggability because optimizations often rearrange statements to improve instruction scheduling and remove unneeded code. You may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self-debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be fully instrumented with asserts that: (1) validate all program state relevant to a function or a method; (2) validate all function parameters; and (3) validate all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is useless behavior, so you should supply your own assert that raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but such traces should be few and far between. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; since it's not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including, for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed by backup or sync. If your default configuration includes a logging level of ten or ''maximum verbosity'', you probably lack stability and are trying to track problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many object-oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when operating in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your code. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries. A well integrated library can complement your code, while a poorly written library can detract from your program.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing of libraries is practically non-existent in most projects. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of the lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it's popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLPort, your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
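As a sketch of the two examples above (option spellings vary across OpenSSL and STLPort releases, so verify against the install notes for your version):

```shell
# Configure OpenSSL without the defective features before building.
./config -no-comp -no-sslv2 -no-sslv3

# STLPort debug configuration: enable the extra diagnostics during
# development by defining the library's debug macros.
CXXFLAGS="-D_STLP_DEBUG=1 -D_STLP_USE_DEBUG_LIB=1 \
          -D_STLP_DEBUG_ALLOC=1 -D_STLP_DEBUG_UNINITIALIZED=1"
```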
&lt;br /&gt;
Debug builds present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If necessary, review the library's code or step through it under a debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile to be a security gate. If you find it painful to turn warnings on, you have likely been overlooking some of the finer details. In addition, you should strive to support multiple compilers and platforms, since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall, -Wextra, -Wconversion&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-Wsign-conversion&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-Wcast-align, -Wformat=2, -Wformat-security, -fno-common, -Wmissing-prototypes, -Wmissing-declarations, -Wstrict-prototypes, -Wstrict-overflow&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-Wtrampolines&amp;lt;/tt&amp;gt;. C++ presents additional opportunities under GCC, and the flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual, -Wreorder, -Wsign-promo, -Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
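Collected into a build variable, the C warning set above might look like this (&lt;tt&gt;program.c&lt;/tt&gt; is a placeholder name):

```shell
# One possible CFLAGS for a C program, collecting the warnings above.
CFLAGS="-Wall -Wextra -Wconversion -Wcast-align -Wformat=2 \
        -Wformat-security -fno-common -Wmissing-prototypes \
        -Wmissing-declarations -Wstrict-prototypes \
        -Wstrict-overflow -Wtrampolines"
gcc $CFLAGS -c program.c
```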
&lt;br /&gt;
For a Microsoft platform, you should use: &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Android's Gingerbreak overwrote the Global Offset Table (GOT) of the ELF binary, and could have been avoided with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-fstack-protector-all&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-fstack-protector&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. If available, you should also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt; (the last two should be used in debug configurations). &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-exec heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;.&lt;br /&gt;
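A hypothetical hardened release build line combining these flags might look like the following (&lt;tt&gt;myapp&lt;/tt&gt; is a placeholder; note the &lt;tt&gt;-Wl,&lt;/tt&gt; prefix when passing &lt;tt&gt;-z&lt;/tt&gt; options through the compiler driver, and that &lt;tt&gt;_FORTIFY_SOURCE&lt;/tt&gt; requires optimization):

```shell
# Hardened build of a Linux executable; debug-only sanitizers omitted.
gcc -O2 -fPIE -fstack-protector-all -D_FORTIFY_SOURCE=2 \
    -Wl,-z,noexecstack -Wl,-z,now -Wl,-z,relro \
    -pie -o myapp myapp.c
```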
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SafeSEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR), data execution prevention (DEP), and stack cookies, and to thwart exception handler overwrites.&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147162</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147162"/>
				<updated>2013-03-09T03:29:05Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Moved some platform security flags from 'static Anlysis' to 'Platform Security'; Improved flow&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++ and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects including Auto-configured projects, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based. It's important to address the gaps at configuration and build time because it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofitting existing auto tool files.&lt;br /&gt;
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other 'debugging and diagnostic' oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Disabling optimization improves debuggability because optimizations often rearrange statements to improve instruction scheduling and remove unneeded code, though you may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be fully instrumented with asserts that: (1) validate all program state relevant to a function or a method; (2) validate all function parameters; and (3) validate all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is useless behavior, so you must supply your own assert that raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap into the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but crashes should be few and far between. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; since it's not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented your code and can determine the point of first failure quickly and easily. Simply log the failure and relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed. If your default configuration includes a logging level of ten or ''Maximum Verbosity'', you are probably trying to track down problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want to be as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program, not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many object-oriented purists oppose testing private interfaces, but this is not about object orientation; it is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression tests. Since this is your line of business or area of expertise, you should have the business logic correct when used in a benign environment. A hostile or toxic environment is much more interesting, and that's where you want to know how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your code. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing of libraries is practically non-existent in most projects. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of the lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it's popular to lay blame on others, the bottom line is you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is integrated into your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLPort, your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
&lt;br /&gt;
Debug builds present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If necessary, review the library's code or step through it under a debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile to be a security gate. If you find it painful to turn warnings on, you have likely been overlooking some of the finer details. In addition, you should strive to support multiple compilers and platforms, since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall, -Wextra, -Wconversion&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-Wsign-conversion&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-Wcast-align, -Wformat=2, -Wformat-security, -fno-common, -Wmissing-prototypes, -Wmissing-declarations, -Wstrict-prototypes, -Wstrict-overflow&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-Wtrampolines&amp;lt;/tt&amp;gt;. C++ presents additional opportunities under GCC, and the flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual, -Wreorder, -Wsign-promo, -Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a Microsoft platform, you should use: &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Android's Gingerbreak overwrote the Global Offset Table (GOT) of the ELF binary, and could have been avoided with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-fstack-protector-all&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-fstack-protector&amp;lt;/tt&amp;gt;), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. If available, you should also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt; (the last two should be used in debug configurations). &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-exec heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SafeSEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR), data execution prevention (DEP), and stack cookies, and to thwart exception handler overwrites.&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147151</id>
		<title>C-Based Toolchain Hardening Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening_Cheat_Sheet&amp;diff=147151"/>
				<updated>2013-03-09T02:26:49Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: Created page with &amp;quot;C-Based Toolchain Hardening Cheat Sheet is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C lang...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening Cheat Sheet]] is a brief treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++ and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects including Auto-configured projects, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based. It's important to address the gaps at configuration and build time because it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
For those who would like a deeper treatment of the subject matter, please visit [[C-Based_Toolchain_Hardening|C-Based Toolchain Hardening]].&lt;br /&gt;
&lt;br /&gt;
== Actionable Items ==&lt;br /&gt;
&lt;br /&gt;
The [[C-Based Toolchain Hardening Cheat Sheet]] calls for the following actionable items:&lt;br /&gt;
&lt;br /&gt;
* Provide debug, release, and test configurations&lt;br /&gt;
* Provide an assert with useful behavior&lt;br /&gt;
* Configure code to take advantage of configurations&lt;br /&gt;
* Properly integrate third party libraries&lt;br /&gt;
* Use the compiler's built-in static analysis capabilities&lt;br /&gt;
* Integrate with platform security measures&lt;br /&gt;
&lt;br /&gt;
The remainder of this cheat sheet briefly explains the bulleted, actionable items. For a thorough treatment, please visit the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
== Build Configurations ==&lt;br /&gt;
&lt;br /&gt;
You should support three build configurations. First is ''Debug'', second is ''Release'', and third is ''Test''. One size does '''not''' fit all, and each speaks to a different facet of the engineering process. Because tools like Autoconf and Automake [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html do not support the notion of build configurations], you should prefer to work in an Integrated Development Environment (IDE) or write your makefiles so the desired targets are supported. In addition, Autoconf and Automake often ignore user supplied flags (it depends on the folks writing the various scripts and templates), so you might find it easier to write a makefile from scratch rather than retrofit the existing auto tool files.&lt;br /&gt;
&lt;br /&gt;
=== Debug Builds ===&lt;br /&gt;
&lt;br /&gt;
Debug is used during development, and the build assists you in finding problems in the code. During this phase, you develop your program and test integration with the third party libraries your program depends upon. To help with debugging and diagnostics, you should define the &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt; (if on a Windows platform) preprocessor macros and supply other debugging and diagnostic oriented flags to the compiler and linker. Additional preprocessor macros for selected libraries are offered in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
You should use the following for GCC when building for debug: &amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt;) and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt;. Disabling optimizations improves debuggability because optimizations often rearrange statements to improve instruction scheduling and remove unneeded code. You may need &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; to ensure some analysis is performed. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Asserts will help you write self debugging programs. The program will alert you to the point of first failure quickly and easily. Because asserts are so powerful, the code should be completely and fully instrumented with asserts that: (1) validate and assert all program state relevant to a function or a method; (2) validate and assert all function parameters; and (3) validate and assert all return values for functions or methods which return a value. Because of item (3), you should be very suspicious of void functions that cannot convey failures.&lt;br /&gt;
&lt;br /&gt;
Anywhere you have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation, you should have an assert. Anywhere you have an assert, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand. POSIX states that if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined, then &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html &amp;quot;shall write information about the particular call that failed on stderr and shall call abort&amp;quot;]. Calling abort during development is useless behavior, so you should supply your own assert that raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;. A Unix and Linux example of a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt;-based assert is provided in the [[C-Based_Toolchain_Hardening|full article]].&lt;br /&gt;
&lt;br /&gt;
Unlike other debugging and diagnostic methods - such as breakpoints and &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; - asserts stay in forever and become silent guardians. If you accidentally nudge something in an apparently unrelated code path, the assert will snap the debugger for you. The enduring coverage means debug code - with its additional diagnostics and instrumentation - is more highly valued than unadorned release code. If code is checked in that does not have the additional debugging and diagnostics, including full assertions, you should reject the check-in.&lt;br /&gt;
&lt;br /&gt;
=== Release Builds ===&lt;br /&gt;
&lt;br /&gt;
Release builds are diametrically opposed to debug configurations. In a release configuration, the program will be built for use in production. Your program is expected to operate correctly, securely and efficiently. The time for debugging and diagnostics is over, and your program will define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; to remove the supplemental information and behavior.&lt;br /&gt;
&lt;br /&gt;
A release configuration should also use &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. The optimizations will make it somewhat more difficult to make sense of a stack trace, but stack traces should be few and far between in production. The &amp;lt;tt&amp;gt;-g''N''&amp;lt;/tt&amp;gt; flag ensures debugging information is available for post mortem analysis. While you generate debugging information for release builds, you should strip the information before shipping and check the symbols into your version control system along with the tagged build.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will also remove asserts from your program by defining them to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;, since it is not acceptable to crash via &amp;lt;tt&amp;gt;abort&amp;lt;/tt&amp;gt; in production. You should not depend upon assert for crash report generation because those reports could contain sensitive information and may end up on foreign systems, including, for example, [http://msdn.microsoft.com/en-us/library/windows/hardware/gg487440.aspx Windows Error Reporting]. If you want a crash dump, you should generate it yourself in a controlled manner while ensuring no sensitive information is written or leaked.&lt;br /&gt;
&lt;br /&gt;
Release builds should also curtail logging. If you followed earlier guidance, you have properly instrumented code and can determine the point of first failure quickly and easily. Simply log the failure and the relevant parameters. Remove all &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; and similar calls because sensitive information might be logged to a system logger. Worse, the data in the logs might be egressed. If your default configuration includes a logging level of ten or ''Maximum Verbosity'', you are probably trying to track down problems in the field. That usually means your program or library is not ready for production.&lt;br /&gt;
&lt;br /&gt;
=== Test Builds ===&lt;br /&gt;
&lt;br /&gt;
A Test build is closely related to a release build. In this build configuration, you want the build as close to production as possible, so you should be using &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-O3&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g1&amp;lt;/tt&amp;gt;/&amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt;. You will run your suite of ''positive'' and ''negative'' tests against the test build.&lt;br /&gt;
&lt;br /&gt;
You will also want to exercise all functions or methods provided by the program, and not just the public interfaces, so everything should be made public. For example, all member functions (C++ classes), all selectors (Objective C), all methods (Java), and all interfaces (library or shared object) should be made available for testing. As such, you should:&lt;br /&gt;
&lt;br /&gt;
* Add &amp;lt;tt&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;&lt;br /&gt;
* Change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Many object oriented purists oppose testing private interfaces, but this is not about object orientation. It is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
You should also concentrate on negative tests. Positive self tests are relatively useless except for functional and regression testing. Since this is your line of business or area of expertise, you should have the business logic correct in a benign environment. A hostile or toxic environment is much more interesting, and that's where you learn how your library or program will fail in the field when under attack.&lt;br /&gt;
&lt;br /&gt;
== Library Integration ==&lt;br /&gt;
&lt;br /&gt;
You must properly integrate and utilize libraries in your code. Proper integration includes acceptance testing, configuring for your build system, identifying libraries you ''should'' be using, and correctly using the libraries.&lt;br /&gt;
&lt;br /&gt;
Acceptance testing a library is practically non-existent in most projects. The testing can be a simple code review or can include additional measures, such as negative self tests. If the library is defective or does not meet standards, you must fix it or reject the library. An example of the lack of acceptance testing is [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library], which resulted in [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525]. Another example is the tens to hundreds of millions of vulnerable embedded devices due to defects in &amp;lt;tt&amp;gt;libupnp&amp;lt;/tt&amp;gt;. While it's popular to lay blame on others, the bottom line is that you chose the library, so you are responsible for it.&lt;br /&gt;
&lt;br /&gt;
You must also ensure the library is properly configured in your build process. For example, the OpenSSL library should be configured '''without''' SSLv2, SSLv3 and compression since they are defective. That means &amp;lt;tt&amp;gt;config&amp;lt;/tt&amp;gt; should be executed with &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-no-sslv2&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-no-sslv3&amp;lt;/tt&amp;gt;. As an additional example, when using STLport your debug configuration should also define &amp;lt;tt&amp;gt;_STLP_DEBUG=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_USE_DEBUG_LIB=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_STLP_DEBUG_ALLOC=1&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;_STLP_DEBUG_UNINITIALIZED=1&amp;lt;/tt&amp;gt; because the library offers additional diagnostics during development.&lt;br /&gt;
&lt;br /&gt;
Debug builds present an opportunity to use additional libraries to help locate problems in the code. For example, you should be using a memory checker such as ''Debug Malloc Library (Dmalloc)'' during development. If you are not using Dmalloc, then ensure you have an equivalent checker, such as GCC 4.8's &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. This is one area where one size clearly does not fit all.&lt;br /&gt;
&lt;br /&gt;
Using a library properly is always difficult, especially when there is no documentation. Review any hardening documents available for the library, and be sure to visit the library's documentation to ensure proper API usage. If required, you might have to review code or step through library code under the debugger to ensure there are no bugs or undocumented features.&lt;br /&gt;
&lt;br /&gt;
== Static Analysis ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers do a fantastic job of generating object code from source code. The process creates a lot of additional information useful in analyzing code. Compilers use the analysis to offer programmers warnings to help detect problems in their code, but the catch is you have to ask for them. After you ask for them, you should take time to understand what the underlying issue is when a statement is flagged. For example, compilers will warn you when comparing a signed integer to an unsigned integer because &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after C/C++ promotion. At other times, you will need to back off some warnings to help separate the wheat from the chaff. For example, interface programming is a popular C++ paradigm, so &amp;lt;tt&amp;gt;-Wno-unused-parameter&amp;lt;/tt&amp;gt; will probably be helpful with C++ code.&lt;br /&gt;
&lt;br /&gt;
You should consider a clean compile a security gate. If you find it's painful to turn warnings on, then you have likely been overlooking some of the finer points in the details. In addition, you should strive to support multiple compilers and platforms, since each has its own personality (and interpretation of the C/C++ standards). By the time your core modules compile cleanly under Clang, GCC, ICC, and Visual Studio on the Linux and Windows platforms, your code will have many stability obstacles removed.&lt;br /&gt;
&lt;br /&gt;
When compiling programs with GCC, you should use the following flags to help detect errors in your programs. The options should be added to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; for a program with C source files, and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a program with C++ source files. Objective C developers should add their warnings to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt;: &amp;lt;tt&amp;gt;-Wall -Wextra -Wconversion (or -Wsign-conversion) -Wcast-align -Wformat=2 -Wformat-security -fno-common -fstack-protector-all (or -fstack-protector) -Wmissing-prototypes -Wmissing-declarations -Wstrict-prototypes -Wstrict-overflow -Wtrampolines&amp;lt;/tt&amp;gt;. If available, also use &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=2&amp;lt;/tt&amp;gt; (or &amp;lt;tt&amp;gt;_FORTIFY_SOURCE=1&amp;lt;/tt&amp;gt; on Android 4.2), &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=thread&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
C++ presents additional opportunities under GCC. The additional flags include &amp;lt;tt&amp;gt;-Woverloaded-virtual&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-Wreorder&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-Wsign-promo&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-Wnon-virtual-dtor&amp;lt;/tt&amp;gt; and possibly &amp;lt;tt&amp;gt;-Weffc++&amp;lt;/tt&amp;gt;. Finally, Objective C should include &amp;lt;tt&amp;gt;-Wstrict-selector-match&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-Wundeclared-selector&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
For a Microsoft platform, you should use: &amp;lt;tt&amp;gt;/W4&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/Wall&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/GS&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;/analyze&amp;lt;/tt&amp;gt; is Enterprise Code Analysis, which is freely available with the [http://www.microsoft.com/en-us/download/details.aspx?id=24826 Windows SDK for Windows Server 2008 and .NET Framework 3.5 SDK] (you don't need Visual Studio Enterprise edition).&lt;br /&gt;
&lt;br /&gt;
== Platform Security ==&lt;br /&gt;
&lt;br /&gt;
Integrating with platform security is essential to a defensive posture. Platform security will be your safety umbrella if someone discovers a bug with security implications - and you should always have it with you. For example, if your parser fails, then no-execute stacks and heaps can turn a 0-day into an annoying crash. Not integrating often leaves your users and customers vulnerable to malicious code. See, for example, ''[http://www.kb.cert.org/vuls/id/922681 Portable SDK for UPnP Devices (libupnp) contains multiple buffer overflows in SSDP]'' and ''[https://developer.pidgin.im/ticket/15209 Pidgin for Windows - Missing DEP and ASLR]''.&lt;br /&gt;
&lt;br /&gt;
When integrating with platform security on a Linux host, you should use the following flags: &amp;lt;tt&amp;gt;-fPIE&amp;lt;/tt&amp;gt; (compiler) and &amp;lt;tt&amp;gt;-pie&amp;lt;/tt&amp;gt; (linker), &amp;lt;tt&amp;gt;-z,noexecstack&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;-z,now&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;-z,nodlopen&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-z,nodump&amp;lt;/tt&amp;gt; might help in reducing an attacker's ability to load and manipulate a shared object. On Gentoo and other systems with no-execute heaps, you should also use &amp;lt;tt&amp;gt;-z,noexecheap&amp;lt;/tt&amp;gt;. While you may not be familiar with some of the flags, you are probably familiar with the effects of omitting them. For example, Gingerbreak overwrote the Global Offset Table (GOT) in the ELF headers, and could have been stopped with &amp;lt;tt&amp;gt;-z,relro&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Windows programs should include &amp;lt;tt&amp;gt;/dynamicbase&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/NXCOMPAT&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;/SafeSEH&amp;lt;/tt&amp;gt; to enable address space layout randomization (ASLR) and data execution prevention (DEP), and to thwart exception handler overwrites.&lt;br /&gt;
&lt;br /&gt;
For additional details on the GCC and Windows options and flags, see ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html GCC Options to Request or Suppress Warnings]'' and ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]''.&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat sheets ==&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=147124</id>
		<title>C-Based Toolchain Hardening</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=147124"/>
				<updated>2013-03-08T18:55:31Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening]] is a treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++ and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Auto-configured, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based projects. It's important to address the gaps at configuration and build time because it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
This is a prescriptive article, and it will not debate semantics or speculate on behavior. Some information, such as the C/C++ committee's motivation and pedigree for [https://groups.google.com/a/isocpp.org/forum/?fromgroups=#!topic/std-discussion/ak8e1mzBhGs &amp;quot;program diagnostics&amp;quot;, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;], appears to be lost like a tale in the Lord of the Rings. As such, the article will specify semantics (for example, the philosophy of 'debug' and 'release' build configurations), assign behaviors (for example, what an assert should do in 'debug' and 'release' build configurations), and present a position. If you find the posture too aggressive, then you should back off as required to suit your taste.&lt;br /&gt;
&lt;br /&gt;
A secure toolchain is not a silver bullet. It is one piece of an overall strategy in the engineering process to help ensure success. It will complement existing processes such as static analysis, dynamic analysis, secure coding, negative test suites, and the like. Tools such as Valgrind and Helgrind will still be needed. And a project will still require solid designs and architectures.&lt;br /&gt;
&lt;br /&gt;
The OWASP [http://code.google.com/p/owasp-esapi-cplusplus/source ESAPI C++] project eats its own dog food. Many of the examples you will see in this article come directly from the ESAPI C++ project.&lt;br /&gt;
&lt;br /&gt;
Finally, a cheat sheet is available for those who desire a terse treatment of the material. Please visit [[C-Based_Toolchain_Hardening_Cheat_Sheet|C-Based Toolchain Hardening Cheat Sheet]] for the abbreviated version.&lt;br /&gt;
&lt;br /&gt;
== Wisdom ==&lt;br /&gt;
&lt;br /&gt;
Code '''must''' be correct. It '''should''' be secure. It '''can''' be efficient.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Jon_Bentley Dr. Jon Bentley]: ''&amp;quot;If it doesn't have to be correct, I can make it as fast as you'd like it to be&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Gary_McGraw Dr. Gary McGraw]: ''&amp;quot;Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Configuration is your first opportunity to set the project up for success. Not only do you have to configure your project to meet reliability and security goals, you must also configure integrated libraries properly. You typically have three choices. First, you can use auto-configuration utilities if on Linux or Unix. Second, you can write a makefile by hand. This is predominant on Linux, Mac OS X, and Unix, but it applies to Windows as well. Finally, you can use an integrated development environment or IDE.&lt;br /&gt;
&lt;br /&gt;
=== Build Configurations ===&lt;br /&gt;
&lt;br /&gt;
At this stage in the process, you should concentrate on configuring for two builds: Debug and Release. Debug will be used for development and include full instrumentation. Release will be configured for production. The difference between the two settings is usually the ''optimization level'' and ''debug level''. A third build configuration is Test, and it's usually a special case of Release.&lt;br /&gt;
&lt;br /&gt;
For debug and release builds, the settings are typically diametrically opposed. Debug configurations have no optimizations and full debug information, while Release builds have optimizations and minimal to moderate debug information. In addition, debug code has full assertions and additional library integration, such as mudflap and malloc guards such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Test configuration is often a Release configuration that makes everything public for testing and builds a test harness. For example, all member functions (C++ classes) and all interfaces (library or shared object) should be made available for testing. Many object oriented purists oppose testing private interfaces, but this is not about object orientation. It is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
[http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8] introduced the &amp;lt;tt&amp;gt;-Og&amp;lt;/tt&amp;gt; optimization level. Note that it is only an optimization level, and a customary debug level via &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt; is still required.&lt;br /&gt;
&lt;br /&gt;
==== Debug Builds ====&lt;br /&gt;
&lt;br /&gt;
Debug builds are where developers spend most of their time when vetting problems, so this build should concentrate forces and tools, or be a 'force multiplier'. Though many do not realize it, debug code is more highly valued than release code because it's adorned with additional instrumentation. The debug instrumentation will cause a program to become nearly &amp;quot;self-debugging&amp;quot;, and help you catch mistakes such as bad parameters, failed API calls, and memory problems.&lt;br /&gt;
&lt;br /&gt;
Self-debugging code reduces your time spent troubleshooting and debugging. Reducing time under the debugger means you have more time for development and feature requests. If code is checked in without debug instrumentation, it should be fixed by adding instrumentation or rejected.&lt;br /&gt;
&lt;br /&gt;
For GCC, optimizations and debug symbolication are controlled through two switches: &amp;lt;tt&amp;gt;-O&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;. You should use the following as part of your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a minimal debug session:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-O0 -g3 -ggdb&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; turns off optimizations and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available. You may need to use &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; so some analysis is performed; otherwise, your debug build will be missing a number of warnings that the optimizer's analysis would surface. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debugging information is available for the debug session, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; includes extensions to help with a debug session under GDB. For completeness, Jan Krachtovil stated in a private email that &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; currently has no effect.&lt;br /&gt;
&lt;br /&gt;
Debug builds should also define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is not defined. &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; removes &amp;quot;program diagnostics&amp;quot; and has undesirable behavior and side effects, which are discussed below in more detail. The defines should be present for all code, not just the program. You use them for all code (your program and included libraries) because you need to know how the libraries fail too (remember, you take the bug report - not the third party library).&lt;br /&gt;
&lt;br /&gt;
In addition, you should also use other relevant flags, such as &amp;lt;tt&amp;gt;-fno-omit-frame-pointer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. Finally, you should also ensure your project includes additional diagnostic libraries, such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;. The additional flags and libraries are discussed below in more detail.&lt;br /&gt;
&lt;br /&gt;
==== Release Builds ====&lt;br /&gt;
&lt;br /&gt;
Release builds are what your customer receives. They are meant to be run on production hardware and servers, and they should be reliable, secure, and efficient. A stable release build is the product of the hard work and effort during development.&lt;br /&gt;
&lt;br /&gt;
You should use the following as part of &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for release builds:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-On -g2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O''n''&amp;lt;/tt&amp;gt; sets optimizations for speed or size (for example, &amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;), and &amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt; ensures debugging information is created.&lt;br /&gt;
&lt;br /&gt;
Debugging information should be stripped from the shipped binary and retained so that crash reports from the field can be symbolicated. While not desirable, debug information can be left in place without a performance penalty. See ''[http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]'' for details.&lt;br /&gt;
&lt;br /&gt;
Release builds should also define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is not defined. The time for debugging and diagnostics is over, so users get production code with full optimizations, no &amp;quot;program diagnostics&amp;quot;, and other efficiencies. If you can't optimize or you are performing excessive logging, it usually means the program is not ready for production.&lt;br /&gt;
&lt;br /&gt;
If you have been relying on an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and then a subsequent &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;, you have been abusing &amp;quot;program diagnostics&amp;quot; since it has no place in production code. If you want a memory dump, create one so users don't have to worry about secrets and other sensitive information being written to the filesystem and emailed in plain text.&lt;br /&gt;
&lt;br /&gt;
For Windows, you would use &amp;lt;tt&amp;gt;/Od&amp;lt;/tt&amp;gt; for debug builds; and &amp;lt;tt&amp;gt;/Ox&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/O2&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/Os&amp;lt;/tt&amp;gt; for release builds. See Microsoft's [http://msdn.microsoft.com/en-us/library/k1ack8f1.aspx /O Options (Optimize Code)] for details.&lt;br /&gt;
&lt;br /&gt;
==== Test Builds ====&lt;br /&gt;
&lt;br /&gt;
Test builds are used to provide heuristic validation by way of positive and negative test suites. Under a test configuration, all interfaces are tested to ensure they perform to specification and satisfaction. &amp;quot;Satisfaction&amp;quot; is subjective, but it should include no crashing and no trashing of your memory arena, even when faced with negative tests.&lt;br /&gt;
&lt;br /&gt;
Because all interfaces are tested (and not just the public ones), your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; should include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should also change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Nearly everyone gets a positive test right, so no more needs to be said. The negative self tests are much more interesting, and you should concentrate on trying to make your program fail so you can verify it fails gracefully. Remember, a bad guy is not going to be courteous when he attempts to cause your program to fail. And it's your project that takes egg on the face by way of a bug report or guest appearance on [http://www.grok.org.uk/full-disclosure/ Full Disclosure] or [http://www.securityfocus.com/archive Bugtraq] - not ''&amp;lt;nowiki&amp;gt;&amp;lt;some library&amp;gt;&amp;lt;/nowiki&amp;gt;'' you included.&lt;br /&gt;
&lt;br /&gt;
=== Auto Tools ===&lt;br /&gt;
&lt;br /&gt;
Auto configuration tools are popular on many Linux and Unix based systems, and the tools include ''Autoconf'', ''Automake'', ''config'', and ''Configure''. The tools work together to produce project files from scripts and template files. After the process completes, your project should be setup and ready to be made with &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When using auto configuration tools, there are a few files of interest worth mentioning. The files are part of the auto tools chain and include &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; and the various &amp;lt;tt&amp;gt;*.in&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;*.ac&amp;lt;/tt&amp;gt; (autoconf), and &amp;lt;tt&amp;gt;*.am&amp;lt;/tt&amp;gt; (automake) files. At times, you will have to open them, or the resulting makefiles, to tune the &amp;quot;stock&amp;quot; configuration.&lt;br /&gt;
&lt;br /&gt;
There are three downsides to the command line configuration tools in the toolchain: (1) they often ignore user requests, (2) they cannot create configurations, and (3) security is often not a goal.&lt;br /&gt;
&lt;br /&gt;
To demonstrate the first issue, configure your project with the following: &amp;lt;tt&amp;gt;configure CFLAGS=&amp;quot;-Wall -fPIE&amp;quot; CXXFLAGS=&amp;quot;-Wall -fPIE&amp;quot; LDFLAGS=&amp;quot;-pie&amp;quot;&amp;lt;/tt&amp;gt;. You will probably find the auto tools ignored your request, which means a command like the one below will not produce the expected results. As a workaround, you will have to open an &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; script, &amp;lt;tt&amp;gt;Makefile.in&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Makefile.am&amp;lt;/tt&amp;gt; and fix the configuration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ configure CFLAGS=&amp;quot;-Wall -Wextra -Wconversion -fPIE -Wno-unused-parameter&lt;br /&gt;
    -Wformat=2 -Wformat-security -fstack-protector-all -Wstrict-overflow&amp;quot;&lt;br /&gt;
    LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap -z,relro -z,now&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the second point, you will probably be disappointed to learn [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html Automake does not support the concept of configurations]. It's not entirely Autoconf's or Automake's fault - ''Make'' and its inability to detect changes is the underlying problem. Specifically, ''Make'' only [http://pubs.opengroup.org/onlinepubs/009695399/utilities/make.html checks modification times of prerequisites and targets], and does not check things like &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;. The net effect is you will not receive expected results when you issue &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make test&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, you will probably be disappointed to learn tools such as Autoconf and Automake miss many security related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings that serve the lowest common denominator.&lt;br /&gt;
&lt;br /&gt;
A recent discussion on the Automake mailing list illuminates the issue: ''[https://lists.gnu.org/archive/html/autoconf/2012-12/msg00038.html Enabling compiler warning flags]''. Attempts to improve default configurations were met with resistance and no action was taken. The resistance is often of the form, &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some useful warning&amp;gt;&amp;lt;/nowiki&amp;gt; also produces false positives&amp;quot; or &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some obscure platform&amp;gt;&amp;lt;/nowiki&amp;gt; does not support &amp;lt;nowiki&amp;gt;&amp;lt;established security feature&amp;gt;&amp;lt;/nowiki&amp;gt;&amp;quot;. It's noteworthy that David Wheeler, the author of ''[http://www.dwheeler.com/secure-programs/ Secure Programming for Linux and Unix HOWTO]'', was one of the folks trying to improve the posture.&lt;br /&gt;
&lt;br /&gt;
=== Makefiles ===&lt;br /&gt;
&lt;br /&gt;
Make is one of the earliest build systems, dating back to the 1970s. It's available on Linux, Mac OS X and Unix, so you will frequently encounter projects using it. Unfortunately, Make has a number of shortcomings (''[http://aegis.sourceforge.net/auug97.pdf Recursive Make Considered Harmful]'' and ''[http://www.conifersystems.com/whitepapers/gnu-make/ What’s Wrong With GNU make?]''), and can cause some discomfort. Despite the issues with Make, ESAPI C++ uses Make primarily for three reasons: first, it's omnipresent; second, it's easier to manage than the Auto Tools family; and third, &amp;lt;tt&amp;gt;libtool&amp;lt;/tt&amp;gt; was out of the question.&lt;br /&gt;
&lt;br /&gt;
Consider what happens when you type &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;. Each build requires different &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; due to optimizations and the level of debug support. In your makefile, you would extract the relevant target and set &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; similar to the following (taken from the [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/Makefile ESAPI C++ Makefile]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Makefile&lt;br /&gt;
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)&lt;br /&gt;
ifneq ($(DEBUG_GOALS),)&lt;br /&gt;
  WANT_DEBUG := 1&lt;br /&gt;
  WANT_TEST := 0&lt;br /&gt;
  WANT_RELEASE := 0&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_DEBUG),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
  ESAPI_CXXFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_RELEASE),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
  ESAPI_CXXFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_TEST),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
  ESAPI_CXXFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure &lt;br /&gt;
# user options follow our options, which gives the user's options precedence.&lt;br /&gt;
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(ESAPI_LDFLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make will first build the program in a debug configuration for a session under the debugger using a rule similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;%.o: %.cpp&lt;br /&gt;
        $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $&amp;lt; -o $@&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you want the release build, Make will do nothing because it considers everything up to date despite the fact &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; have changed. Hence, your program will actually be in a debug configuration and risk a &amp;lt;tt&amp;gt;SIGABRT&amp;lt;/tt&amp;gt; at runtime because debug instrumentation is present (recall &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; calls &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; on a failed expression when &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined). In essence, you have DoS'd yourself due to &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, many projects do not honor the user's command line. ESAPI C++ does its best to ensure a user's flags are honored via &amp;lt;tt&amp;gt;override&amp;lt;/tt&amp;gt; as shown above, but other projects do not. For example, consider a project that should be built with Position Independent Executable (PIE or ASLR) enabled and data execution prevention (DEP) enabled. Dismissing user settings combined with insecure out of the box settings (and not picking them up during auto-setup or auto-configure) means a program built with the following will likely have neither defense:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ make CFLAGS=&amp;quot;-fPIE&amp;quot; CXXFLAGS=&amp;quot;-fPIE&amp;quot; LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Defenses such as ASLR and DEP are especially important on Linux because [http://linux.die.net/man/5/elf Data Execution - not Prevention - is the norm].&lt;br /&gt;
&lt;br /&gt;
=== Integration ===&lt;br /&gt;
&lt;br /&gt;
Project level integration presents opportunities to harden your program or library with domain specific knowledge. For example, if the platform supports Position Independent Executables (PIE or ASLR) and data execution prevention (DEP), then you should integrate with it. Not doing so could result in exploitation. As a case in point, see KingCope's 0-days for MySQL in December, 2012 (CVE-2012-5579 and CVE-2012-5612, among others). Integration with platform security would have neutered a number of the 0-days.&lt;br /&gt;
&lt;br /&gt;
You also have the opportunity to include helpful libraries that are not needed for business logic support. For example, if you are working on a platform with [http://dmalloc.com DMalloc] or [http://code.google.com/p/address-sanitizer/ Address Sanitizer], you should probably use them in your debug builds. On Ubuntu, DMalloc is available from the package manager and can be installed with &amp;lt;tt&amp;gt;sudo apt-get install libdmalloc5&amp;lt;/tt&amp;gt;. On Apple platforms, it's available as a scheme option (see [[#Clang/Xcode|Clang/Xcode]] below). Address Sanitizer is available in [http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8 and above] for many platforms.&lt;br /&gt;
&lt;br /&gt;
In addition, project level integration is an opportunity to harden third party libraries you chose to include. Because you chose to include them, you and your users are responsible for them. If you or your users undergo an SP800-53 audit, third party libraries will be in scope because the supply chain is included (specifically, item SA-12, Supply Chain Protection). The audits are not limited to those in the US Federal arena - financial institutions perform reviews too. A perfect example of violating this guidance is [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525], which was due to [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library].&lt;br /&gt;
&lt;br /&gt;
Another example is including OpenSSL. You know (1) [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 is insecure], (2) [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 is insecure], and (3) [http://arstechnica.com/security/2012/09/crime-hijacks-https-sessions/ compression is insecure] (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ Configure darwin64-x86_64-cc -no-hw -no-engines -no-comp -no-shared -no-dso -no-sslv2 -no-sslv3 --openssldir=…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note Well'': you might want engines, especially on Ivy Bridge microarchitectures (3rd generation Intel Core i5 and i7 processors). To have OpenSSL use the processor's random number generator (via the &amp;lt;tt&amp;gt;rdrand&amp;lt;/tt&amp;gt; instruction), you will need to call OpenSSL's &amp;lt;tt&amp;gt;ENGINE_load_rdrand()&amp;lt;/tt&amp;gt; function and then &amp;lt;tt&amp;gt;ENGINE_set_default&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;ENGINE_METHOD_RAND&amp;lt;/tt&amp;gt;. See [http://wiki.opensslfoundation.com/index.php/Random_Numbers OpenSSL's Random Numbers] for details.&lt;br /&gt;
&lt;br /&gt;
If you configure without the switches, then you will likely have vulnerable code/libraries and risk failing an audit. If the program is a remote server, then the following command will reveal if compression is active on the channel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo &amp;quot;GET / HTTP/1.0&amp;quot; | openssl s_client -connect &amp;lt;nowiki&amp;gt;example.com:443&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;openssl s_client&amp;lt;/tt&amp;gt; will show that compression is enabled in the client. In fact, any symbol within the &amp;lt;tt&amp;gt;OPENSSL_NO_COMP&amp;lt;/tt&amp;gt; preprocessor macro will bear witness since &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; is translated into a &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; define.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ nm /usr/local/ssl/iphoneos/lib/libcrypto.a 2&amp;gt;/dev/null | egrep -i &amp;quot;(COMP_CTX_new|COMP_CTX_free)&amp;quot;&lt;br /&gt;
0000000000000110 T COMP_CTX_free&lt;br /&gt;
0000000000000000 T COMP_CTX_new&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even more egregious is the answer given to auditors who specifically ask about configurations and protocols: &amp;quot;we don't use weak/wounded/broken ciphers&amp;quot; or &amp;quot;we follow best practices.&amp;quot; The use of compression tells the auditor that you are using a wounded protocol in an insecure configuration and don't follow best practices. That will likely set off alarm bells, and ensure the auditor dives deeper on more items.&lt;br /&gt;
&lt;br /&gt;
== Preprocessor ==&lt;br /&gt;
&lt;br /&gt;
The preprocessor is crucial to setting up a project for success. The C committee provided one macro - &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; - and the macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing builtin facilities. This section will help you set up your projects to integrate well with other projects and ensure reliability and security.&lt;br /&gt;
&lt;br /&gt;
There are three topics to discuss when hardening the preprocessor. The first is well defined configurations which produce well defined behaviors, the second is useful behavior from assert, and the third is proper use of macros when integrating vendor code and third party libraries.&lt;br /&gt;
&lt;br /&gt;
=== Configurations ===&lt;br /&gt;
&lt;br /&gt;
To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code on live servers, and its behavior is requested via the C/C++ &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; macro. It's also the only macro observed by the C and C++ Committees and Posix. Diametrically opposed to release is Debug. While there is a compelling argument for &amp;lt;tt&amp;gt;!defined(NDEBUG)&amp;lt;/tt&amp;gt;, you should have an explicit macro for the configuration and that macro should be &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;. This is because vendors and outside libraries use a &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (or similar) macro for their configuration. For example, Carnegie Mellon's Mach kernel uses &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, Microsoft's CRT uses [http://msdn.microsoft.com/en-us/library/ww5t02fa%28v=vs.71%29.aspx &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt;], and Wind River Workbench uses &amp;lt;tt&amp;gt;DEBUG_MODE&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (Release) and &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (Debug), you have two additional cross products: both are defined or neither are defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/esapi/EsapiCommon.h ESAPI C++ EsapiCommon.h], which is the configuration file used by all source files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Only one or the other, but not both&lt;br /&gt;
#if (defined(DEBUG) || defined(_DEBUG)) &amp;amp;&amp;amp; (defined(NDEBUG) || defined(_NDEBUG))&lt;br /&gt;
# error Both DEBUG and NDEBUG are defined.&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
// The only time we switch to debug is when asked. NDEBUG or {nothing} results&lt;br /&gt;
// in release build (fewer surprises at runtime).&lt;br /&gt;
#if defined(DEBUG) || defined(_DEBUG)&lt;br /&gt;
# define ESAPI_BUILD_DEBUG 1&lt;br /&gt;
#else&lt;br /&gt;
# define ESAPI_BUILD_RELEASE 1&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is in effect, your code should receive full debug instrumentation, including the full force of assertions.&lt;br /&gt;
&lt;br /&gt;
=== ASSERT ===&lt;br /&gt;
&lt;br /&gt;
Asserts will help you create self-debugging code by helping you find the point of first failure quickly and easily. Asserts should be used throughout your program, including parameter validation, return value checking and program state. The &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; will silently guard your code through its lifetime. It will always be there, even when not debugging a specific component of a module. If you have thorough code coverage, you will spend less time debugging and more time developing because programs will debug themselves.&lt;br /&gt;
&lt;br /&gt;
To use asserts effectively, you should assert everything. That includes parameters upon entering a function, return values from function calls, and any program state. Everywhere you place an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation or checking, you should have an assert. Everywhere you have an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; for validation or checking, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand.&lt;br /&gt;
&lt;br /&gt;
If you are still debugging with &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;, then you have an opportunity for improvement. In the time it takes to write a &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; statement, you could have written an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;. Unlike a &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt;, which is often removed when no longer needed, the &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; stays active forever. Remember, this is all about finding the point of first failure quickly so you can spend your time doing other things.&lt;br /&gt;
&lt;br /&gt;
There is one problem with using asserts - [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html Posix states &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; should call &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;] on a failed expression if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined. When debugging, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will never be defined since you want the &amp;quot;program diagnostics&amp;quot; (quote from the Posix description). The behavior makes &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and its accompanying &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; completely useless for development. The result of &amp;quot;program diagnostics&amp;quot; calling &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; due to standard C/C++ behavior is disuse - developers simply don't use them. It's incredibly bad for the development community because self-debugging programs can help eradicate so many stability problems.&lt;br /&gt;
&lt;br /&gt;
Since self-debugging programs are so powerful, you will have to supply your own assert and signal handler with improved behavior. Your assert will exchange auto-aborting behavior for auto-debugging behavior. The auto-debugging facility will ensure the debugger snaps when a problem is detected, and you will find the point of first failure quickly and easily.&lt;br /&gt;
&lt;br /&gt;
ESAPI C++ supplies its own assert with the behavior described above. In the code below, &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt; raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; when in effect or it evaluates to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; in other cases.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// A debug assert which should be sprinkled liberally. This assert fires and then continues rather&lt;br /&gt;
// than calling abort(). Useful when examining negative test cases from the command line.&lt;br /&gt;
#if (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_STARNIX))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp) {                                    \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; std::endl;                                           \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) {                               \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; &amp;quot;: \&amp;quot;&amp;quot; &amp;lt;&amp;lt; (msg) &amp;lt;&amp;lt; &amp;quot;\&amp;quot;&amp;quot; &amp;lt;&amp;lt; std::endl;                \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#elif (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_WINDOWS))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      assert(exp)&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) assert(exp)&lt;br /&gt;
#else&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      ((void)(exp))&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) ((void)(exp))&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#if !defined(ASSERT)&lt;br /&gt;
#  define ASSERT(exp)     ESAPI_ASSERT1(exp)&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At program startup, a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; handler will be installed if one is not provided by another component:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct DebugTrapHandler&lt;br /&gt;
{&lt;br /&gt;
  DebugTrapHandler()&lt;br /&gt;
  {&lt;br /&gt;
    struct sigaction new_handler, old_handler;&lt;br /&gt;
&lt;br /&gt;
    do&lt;br /&gt;
      {&lt;br /&gt;
        int ret = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, NULL, &amp;amp;old_handler);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        // Don't step on another's handler&lt;br /&gt;
        if (old_handler.sa_handler != NULL) break;&lt;br /&gt;
&lt;br /&gt;
        new_handler.sa_handler = &amp;amp;DebugTrapHandler::NullHandler;&lt;br /&gt;
        new_handler.sa_flags = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigemptyset (&amp;amp;new_handler.sa_mask);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, &amp;amp;new_handler, NULL);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
      } while(0);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  static void NullHandler(int /*unused*/) { }&lt;br /&gt;
&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
// We specify a relatively low priority, to make sure we run before other CTORs&lt;br /&gt;
// http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes&lt;br /&gt;
static const DebugTrapHandler g_dummyHandler __attribute__ ((init_priority (110)));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a Windows platform, you would call &amp;lt;tt&amp;gt;_set_invalid_parameter_handler&amp;lt;/tt&amp;gt; (and possibly &amp;lt;tt&amp;gt;set_unexpected&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;set_terminate&amp;lt;/tt&amp;gt;) to install a new handler.&lt;br /&gt;
&lt;br /&gt;
Live hosts running production code should always define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (i.e., release configuration), which means they do not assert or auto-abort. Auto-abortion is not acceptable behavior, and anyone who asks for the behavior is completely abusing the functionality of &amp;quot;program diagnostics&amp;quot;. If a program wants a core dump, then it should create the dump rather than crashing.&lt;br /&gt;
&lt;br /&gt;
For more reading on asserting effectively, please see one of John Robbins' books, such as ''[http://www.amazon.com/dp/0735608865 Debugging Applications]''. John is a legendary bug slayer in Windows circles, and he will show you how to do nearly everything, from debugging a simple program to bug slaying in multithreaded programs.&lt;br /&gt;
&lt;br /&gt;
=== Additional Macros ===&lt;br /&gt;
&lt;br /&gt;
Additional macros include any macros needed to integrate properly and securely. It includes integrating the program with the platform (for example MFC or Cocoa/CocoaTouch) and libraries (for example, Crypto++ or OpenSSL). It can be a challenge because you have to have proficiency with your platform and all included libraries and frameworks. The list below illustrates the level of detail you will need when integrating.&lt;br /&gt;
&lt;br /&gt;
Though Boost is missing from the list, it appears to lack recommendations, additional debug diagnostics, and a hardening guide. See ''[http://stackoverflow.com/questions/14927033/boost-hardening-guide-preprocessor-macros BOOST Hardening Guide (Preprocessor Macros)]'' for details. In addition, Tim Day points to ''[http://boost.2283326.n4.nabble.com/boost-build-should-we-not-define-SECURE-SCL-0-by-default-for-all-msvc-toolsets-td2654710.html &amp;lt;nowiki&amp;gt;[boost.build] should we not define _SECURE_SCL=0 by default for all msvc toolsets&amp;lt;/nowiki&amp;gt;]'' for a recent discussion related to hardening (or lack thereof).&lt;br /&gt;
&lt;br /&gt;
In addition to the macros you should define, defining some macros and undefining others should be treated as a security related defect. Examples include &amp;lt;tt&amp;gt;-U_FORTIFY_SOURCE&amp;lt;/tt&amp;gt; on Linux; and &amp;lt;tt&amp;gt;_CRT_SECURE_NO_WARNINGS=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_SCL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_ATL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;STRSAFE_NO_DEPRECATE&amp;lt;/tt&amp;gt; on Windows.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Platform/Library!!Debug!!Release&lt;br /&gt;
|+ Table 1: Additional Platform/Library Macros&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;175pt&amp;quot;|All&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|DEBUG=1&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|NDEBUG=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Linux&lt;br /&gt;
|_GLIBCXX_DEBUG=1&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
|_FORTIFY_SOURCE=2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Android&lt;br /&gt;
|NDK_DEBUG=1&lt;br /&gt;
|_FORTIFY_SOURCE=1 (4.2 and above)&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define LOGI(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt logging)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Cocoa/CocoaTouch&lt;br /&gt;
|&lt;br /&gt;
|NS_BLOCK_ASSERTIONS=1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define NSLog(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt ASL)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SafeInt&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft&lt;br /&gt;
|_DEBUG=1, STRICT,&amp;lt;br&amp;gt;&lt;br /&gt;
_SECURE_SCL=1, _HAS_ITERATOR_DEBUGGING=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
|STRICT&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft ATL &amp;amp; MFC&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|STLPort&lt;br /&gt;
|_STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1&amp;lt;br&amp;gt;&lt;br /&gt;
_STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite&lt;br /&gt;
|SQLITE_DEBUG, SQLITE_MEMDEBUG&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLCipher&lt;br /&gt;
|Remove '''&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;''' from Debug builds (Xcode)&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_HAS_CODEC=1&lt;br /&gt;
|SQLITE_HAS_CODEC=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite/SQLCipher&lt;br /&gt;
|SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Be careful with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; when using pre-compiled libraries such as Boost from a distribution. There are ABI incompatibilities, and the result will likely be a crash. You will have to compile Boost with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; or omit &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; SQLite secure deletion zeroizes memory on destruction. Define as required, and always define it in US Federal environments since zeroization is required for FIPS 140-2, Level 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt; ''N'' is 0644 by default, which means everyone has some access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt; Force temporary tables into memory (no unencrypted data to disk).&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
##########################################&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Compiler and Linker ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities to help find mistakes early in the development process. The built-in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and catch a number of mistakes such as using an uninitialized variable or comparing a negative signed int and a positive unsigned int.&lt;br /&gt;
&lt;br /&gt;
As a concrete example, (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared because a side effect is &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! GCC and Visual Studio will not currently catch, for example, SQL injections and other tainted data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.&lt;br /&gt;
&lt;br /&gt;
Some in the development community resist static analysis or refute its results. For example, when static analysis warned the Linux kernel's &amp;lt;tt&amp;gt;sys_prctl&amp;lt;/tt&amp;gt; was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see ''[http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-11/msg08325.html &amp;lt;nowiki&amp;gt;[PATCH] Don't compare unsigned variable for &amp;lt;0 in sys_prctl()&amp;lt;/nowiki&amp;gt;]'' from the Linux Kernel mailing list.&lt;br /&gt;
&lt;br /&gt;
The following sections will detail steps for three platforms. First is a typical GNU Linux based distribution offering GCC and Binutils, second is Clang and Xcode, and third is modern Windows platforms.&lt;br /&gt;
&lt;br /&gt;
=== Distribution Hardening ===&lt;br /&gt;
&lt;br /&gt;
Before discussing GCC and Binutils, this is a good time to point out that some of the defenses discussed below are already present in a distribution. Unfortunately, it's design by committee, so what is present is usually only a mild variation of what is available (this way, everyone is mildly offended). For those who are worried purely about performance, you might be surprised to learn you have already taken the small performance hit without even knowing it.&lt;br /&gt;
&lt;br /&gt;
Linux and BSD distributions often apply some hardening without intervention via ''[http://gcc.gnu.org/onlinedocs/gcc/Spec-Files.html GCC Spec Files]''. If you are using Debian, Ubuntu, Linux Mint and family, see ''[http://wiki.debian.org/Hardening Debian Hardening]''. For Red Hat and Fedora systems, see ''[http://lists.fedoraproject.org/pipermail/devel-announce/2011-August/000821.html New hardened build support (coming) in F16]''. Gentoo users should visit ''[http://www.gentoo.org/proj/en/hardened/ Hardened Gentoo]''.&lt;br /&gt;
&lt;br /&gt;
You can see the settings being used by a distribution via &amp;lt;tt&amp;gt;gcc -dumpspecs&amp;lt;/tt&amp;gt;. In the Linux Mint 12 output below, -fstack-protector (but not -fstack-protector-all) is used by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -dumpspecs&lt;br /&gt;
…&lt;br /&gt;
*link_ssp: %{fstack-protector:}&lt;br /&gt;
&lt;br /&gt;
*ssp_default: %{!fno-stack-protector:%{!fstack-protector-all: %{!ffreestanding:%{!nostdlib:-fstack-protector}}}}&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The “SSP” above stands for Stack Smashing Protector. SSP is a reimplementation of Hiroaki Etoh's work on IBM's ProPolice stack protector. See Hiroaki Etoh's patch ''[http://gcc.gnu.org/ml/gcc-patches/2001-06/msg01753.html gcc stack-smashing protector]'' and IBM's ''[http://www.research.ibm.com/trl/projects/security/ssp/ GCC extension for protecting applications from stack-smashing attacks]'' for details.&lt;br /&gt;
&lt;br /&gt;
=== GCC/Binutils ===&lt;br /&gt;
&lt;br /&gt;
GCC (the compiler collection) and Binutils (the assemblers, linkers, and other tools) are separate projects that work together to produce a final executable. Both the compiler and linker offer options to help you write safer and more secure code. The linker will produce code which takes advantage of platform security features offered by the kernel and PaX, such as no-exec stacks and heaps (NX) and Position Independent Executable (PIE).&lt;br /&gt;
&lt;br /&gt;
The table below offers a set of compiler options to build your program. Static analysis warnings help catch mistakes early, while the linker options harden the executable at runtime. In the table below, “GCC” should be loosely taken as “non-ancient distributions.” While the GCC team considers 4.2 ancient, you will still encounter it on Apple and BSD platforms due to changes in GPL licensing around 2007. Refer to ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Option Summary]'', ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html Options to Request or Suppress Warnings]'' and ''[http://sourceware.org/binutils/docs-2.21/ld/Options.html Binutils (LD) Command Line Options]'' for usage details.&lt;br /&gt;
&lt;br /&gt;
Worthy of special mention are &amp;lt;tt&amp;gt;-fno-strict-overflow&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fwrapv&amp;lt;/tt&amp;gt;&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;. These flags ensure the compiler does not remove statements that result in overflow or wrap. If your program only runs correctly with the flags, it is likely violating the C/C++ rules on signed overflow. If the program depends on overflow or wrap behavior, you should consider using [http://code.google.com/p/safe-iop/ safe-iop] for C or David LeBlanc's [http://safeint.codeplex.com SafeInt] in C++.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, some of those settings can be verified with the [http://www.trapkit.de/tools/checksec.html Checksec] tool written by Tobias Klein. The &amp;lt;tt&amp;gt;checksec.sh&amp;lt;/tt&amp;gt; script is designed to test standard Linux OS and PaX security features being used by an application. See the [http://www.trapkit.de/tools/checksec.html Trapkit] web page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 2: GCC C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wall -Wextra&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;75pt&amp;quot;|GCC&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Enables many warnings (despite their names, all and extra do not turn on all warnings).&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wconversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may alter a value (includes -Wsign-conversion).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-conversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may change the sign of an integer value, such as assigning a signed integer to an unsigned integer (&amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion!).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wcast-align&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for a pointer cast to a type which has a different size, causing an invalid alignment and subsequent bus error on ARM processors.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wformat=2 -Wformat-security&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Increases warnings related to possible security defects, including incorrect format specifiers.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-common&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Prevent global variables being simultaneously defined in different object files.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fstack-protector or -fstack-protector-all&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Stack Smashing Protector (SSP). Improves stack layout and adds a guard to detect stack based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-omit-frame-pointer&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Improves backtraces for post-mortem analysis.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wmissing-prototypes and -Wmissing-declarations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a global function is defined without a prototype or declaration.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-prototypes&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a function is declared or defined without specifying the argument types.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-overflow&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.2&lt;br /&gt;
|Warn about optimizations taken due to &amp;lt;nowiki&amp;gt;[undefined]&amp;lt;/nowiki&amp;gt; signed integer overflow assumptions.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wtrampolines&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.3&lt;br /&gt;
|Warn about trampolines generated for pointers to nested functions. Trampolines require executable stacks.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=address&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/address-sanitizer/ AddressSanitizer], a fast memory error detector. Memory access instructions will be instrumented to help detect heap, stack, and global buffer overflows; as well as use-after-free bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=thread&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/data-race-test/wiki/ThreadSanitizer ThreadSanitizer], a fast data race detector. Memory access instructions will be instrumented to detect data race bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,nodlopen and -Wl,-z,nodump&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.10&lt;br /&gt;
|Reduces the ability of an attacker to load, manipulate, and dump shared objects.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,noexecstack and -Wl,-z,noexecheap&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.14&lt;br /&gt;
|Data Execution Prevention (DEP). ELF headers are marked with PT_GNU_STACK and PT_GNU_HEAP.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,relro&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Global Offset Table (GOT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,now&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Procedure Linkage Table (PLT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIC&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils&lt;br /&gt;
|Position Independent Code. Used for libraries and shared objects. Both -fPIC (compiler) and -shared (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIE&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.16&lt;br /&gt;
|Position Independent Executable (ASLR). Used for programs. Both -fPIE (compiler) and -pie (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Unlike Clang and -Weverything, GCC does not provide a switch to truly enable all warnings.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; -fstack-protector guards functions with high risk objects such as C strings, while -fstack-protector-all guards all objects.&lt;br /&gt;
&lt;br /&gt;
Additional C++ warnings which can be used include the following in Table 3. See ''[http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html GCC's Options Controlling C++ Dialect]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 3: GCC C++ Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Woverloaded-virtual&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn when a function declaration hides virtual functions from a base class. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wreorder&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when the order of member initializers given in the code does not match the order in which they must be executed.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-promo&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wnon-virtual-dtor&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when a class has virtual functions and an accessible non-virtual destructor.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Weffc++&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn about violations of style guidelines from Scott Meyers' ''[http://www.aristeia.com/books.html Effective C++, Second Edition]'' book.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Additional Objective-C warnings which are often useful include the following. See ''[http://gcc.gnu.org/onlinedocs/gcc/Objective_002dC-and-Objective_002dC_002b_002b-Dialect-Options.html Options Controlling Objective-C and Objective-C++ Dialects]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 4: GCC Objective C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wstrict-selector-match&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type id or Class.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wundeclared-selector&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn if a &amp;lt;tt&amp;gt;@selector(…)&amp;lt;/tt&amp;gt; expression referring to an undeclared selector is found. &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The use of aggressive warnings will produce spurious noise. The noise is a tradeoff - you can learn of potential problems at the cost of wading through some chaff. The following will help reduce spurious noise from the warning system:&lt;br /&gt;
&lt;br /&gt;
* -Wno-unused-parameter (GCC)&lt;br /&gt;
* -Wno-type-limits (GCC 4.3)&lt;br /&gt;
* -Wno-tautological-compare (Clang)&lt;br /&gt;
&lt;br /&gt;
Finally, a simple version-based Makefile example is shown below. This is different from the feature-based makefiles produced by autotools (which test for a particular feature and then define a symbol or configure a template file). Not all platforms use all options and flags. To address the issue, you can pursue one of two strategies: ship with a weakened posture by servicing the lowest common denominator, or ship with everything in force. In the latter case, those who don't have a feature available will edit the makefile to accommodate their installation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;CXX=g++&lt;br /&gt;
EGREP = egrep&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GCC_COMPILER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version')&lt;br /&gt;
GCC41_OR_LATER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version (4\.[1-9]|[5-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GNU_LD210_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[0-9]|2\.[2-9])')&lt;br /&gt;
GNU_LD214_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[4-9]|2\.[2-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC_COMPILER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wall -Wextra -Wconversion&lt;br /&gt;
  MY_CC_FLAGS += -Wformat=2 -Wformat-security&lt;br /&gt;
  MY_CC_FLAGS += -Wno-unused-parameter&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC41_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fstack-protector-all&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC42_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wstrict-overflow&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC43_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wtrampolines&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD210_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,nodlopen -z,nodump&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD214_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,noexecstack -z,noexecheap&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD215_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,relro -z,now&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD216_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fPIE&lt;br /&gt;
  MY_LD_FLAGS += -pie&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
# Use 'override' to honor the user's command line&lt;br /&gt;
override CFLAGS := $(MY_CC_FLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(MY_CC_FLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(MY_LD_FLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang/Xcode ===&lt;br /&gt;
&lt;br /&gt;
[http://clang.llvm.org Clang] and [http://llvm.org LLVM] have been aggressively developed since Apple lost its GPL compiler back in 2007 (due to Tivoization, which resulted in GPLv3). Since that time, a number of developers and Google have joined the effort. While Clang will consume most (all?) GCC/Binutils flags and switches, the project supports a number of its own options, including a static analyzer. In addition, Clang is relatively easy to build with additional diagnostics, such as Dr. John Regehr and Peng Li's [http://embed.cs.utah.edu/ioc/ Integer Overflow Checker (IOC)].&lt;br /&gt;
&lt;br /&gt;
IOC is incredibly useful, and has found bugs in a number of projects, including the Linux kernel (&amp;lt;tt&amp;gt;include/linux/bitops.h&amp;lt;/tt&amp;gt;, still unfixed), SQLite, PHP, Firefox (many still unfixed), LLVM, and Python. Future versions of Clang (Clang 3.3 and above) will allow you to enable the checks out of the box with &amp;lt;tt&amp;gt;-fsanitize=integer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=shift&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Clang options can be found at [http://clang.llvm.org/docs/UsersManual.html Clang Compiler User’s Manual]. Clang does include an option to turn on all warnings - &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt;. Use it with care, since you will get back a lot of noise along with the issues you missed, but use it regularly. For example, add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; for production builds and make non-spurious issues a quality gate. Under Xcode, simply add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to compiler warnings, both static analysis and additional security checks can be performed. Reading on Clang's static analysis capabilities can be found at [http://clang-analyzer.llvm.org Clang Static Analyzer]. Figure 1 below shows some of the security checks utilized by Xcode.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-11.png|thumb|450px|Figure 1: Clang/LLVM and Xcode options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a convenient Integrated Development Environment (IDE) for managing solutions and their settings. The section called “Visual Studio Options” discusses options which should be used with Visual Studio, and the section called “Project Properties” demonstrates incorporating those options into a solution's project.&lt;br /&gt;
&lt;br /&gt;
The table below lists the compiler and linker switches which should be used under Visual Studio. Refer to Howard and LeBlanc's Writing Secure Code (Microsoft Press) for a detailed discussion; or ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]'' in Security Briefs by Michael Howard. In the table below, “Visual Studio” refers to nearly all versions of the development environment, including Visual Studio 5.0 and 6.0.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, those settings can be verified with BinScope. BinScope is a verification tool from Microsoft that analyzes binaries to ensure that they have been built in compliance with Microsoft's Security Development Lifecycle (SDLC) requirements and recommendations. See the ''[https://www.microsoft.com/download/en/details.aspx?id=11910 BinScope Binary Analyzer]'' download page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 5: Visual Studio Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;150pt&amp;quot;|&amp;lt;nowiki&amp;gt;/W4&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;100pt&amp;quot;|Visual Studio&lt;br /&gt;
|width=&amp;quot;350pt&amp;quot;|Warning level 4, which includes most warnings.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/Wall&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Enable all warnings, including those off by default.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/GS&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Adds a security cookie (guard or canary) on the stack before the return address to detect stack-based buffer overflows.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/SafeSEH&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Safe structured exception handling to remediate SEH overwrites.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/analyze&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Enterprise code analysis (freely available with Windows SDK for Windows Server 2008 and .NET Framework 3.5).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/NXCOMPAT&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Data Execution Prevention (DEP).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/dynamicbase&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Address Space Layout Randomization (ASLR).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;strict_gs_check&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Aggressively applies stack protections to a source file to help detect some categories of stack based buffer overruns.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;See Jon Sturgeon's discussion of the switch at ''[https://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx Off By Default Compiler Warnings in Visual C++]''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;When using /GS, there are a number of circumstances which affect the inclusion of a security cookie. For example, the guard is not used if there is no buffer in the stack frame, optimizations are disabled, or the function is declared naked or contains inline assembly.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;tt&amp;gt;#pragma strict_gs_check(on)&amp;lt;/tt&amp;gt; should be used sparingly, but is recommended in high risk situations, such as when a source file parses input from the internet.&lt;br /&gt;
&lt;br /&gt;
=== Warn Suppression ===&lt;br /&gt;
&lt;br /&gt;
From the tables above, a lot of warnings have been enabled to help detect possible programming mistakes. The potential mistakes are detected by the compiler, which carries around a lot of contextual information during its code analysis phase. At times, you will receive spurious warnings because the compiler is not ''that'' smart. It's understandable and even a good thing (how would you like to be out of a job because a program writes its own programs?). At times you will have to learn how to work with the compiler's warning system to suppress warnings. Notice what was not said: turn off the warnings.&lt;br /&gt;
&lt;br /&gt;
Suppressing warnings placates the compiler for spurious noise so you can get to the issues that matter (you are separating the wheat from the chaff). This section will offer some hints and point out some potential minefields. First is an unused parameter (for example, &amp;lt;tt&amp;gt;argc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;argv&amp;lt;/tt&amp;gt;). Suppressing unused parameter warnings is especially helpful for C++ and interface programming, where parameters are often unused. For this warning, simply define an &amp;quot;UNUSED&amp;quot; macro and wrap the parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#define UNUSED_PARAMETER(x) ((void)x)&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    UNUSED_PARAMETER(argc);&lt;br /&gt;
    UNUSED_PARAMETER(argv);&lt;br /&gt;
    …&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A potential minefield lies near &amp;quot;comparing unsigned and signed&amp;quot; values, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will catch it for you. This is because C/C++ promotion rules state the signed value will be promoted to an unsigned value and then compared. That means &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! To fix this, you cannot blindly cast - you must first range test the value:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int x = GetX();&lt;br /&gt;
unsigned int y = GetY();&lt;br /&gt;
&lt;br /&gt;
ASSERT(x &amp;gt;= 0);&lt;br /&gt;
if(!(x &amp;gt;= 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? X is negative.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
if(static_cast&amp;lt;unsigned int&amp;gt;(x) &amp;gt; y)&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is greater than y&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
else&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is not greater than y&amp;quot; &amp;lt;&amp;lt; endl;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;x&amp;lt;/tt&amp;gt;. Just run the program and wait for it to tell you there is a problem. If there is a problem, the program will snap the debugger (and more importantly, not call a useless &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; as specified by Posix). It beats the snot out of &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; statements that are removed when no longer needed or that pollute the output.&lt;br /&gt;
&lt;br /&gt;
Another conversion problem you will encounter is conversion between types, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will also catch it for you. The following will always have an opportunity to fail, and should light up like a Christmas tree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct sockaddr_in addr;&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
addr.sin_port = htons(atoi(argv[2]));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following would probably serve you much better. Notice &amp;lt;tt&amp;gt;atoi&amp;lt;/tt&amp;gt; and friends are not used because they can silently fail. In addition, the code is instrumented so you don't need to waste a lot of time debugging potential problems:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;const char* cstr = GetPortString();&lt;br /&gt;
&lt;br /&gt;
ASSERT(cstr != NULL);&lt;br /&gt;
if(!(cstr != NULL))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port string is not valid.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
istringstream iss(cstr);&lt;br /&gt;
long long t = 0;&lt;br /&gt;
iss &amp;gt;&amp;gt; t;&lt;br /&gt;
&lt;br /&gt;
ASSERT(!(iss.fail()));&lt;br /&gt;
if(iss.fail())&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Failed to read port.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// Should this be a port above the reserved range ([0-1024] on Unix)?&lt;br /&gt;
ASSERT(t &amp;gt; 0);&lt;br /&gt;
if(!(t &amp;gt; 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too small&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
ASSERT(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max()));&lt;br /&gt;
if(!(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max())))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too large&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use port&lt;br /&gt;
unsigned short port = static_cast&amp;lt;unsigned short&amp;gt;(t);&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;. This code will continue checking conditions, years after being instrumented (assuming you wrote the code to read a config file early in the project). There's no need to remove the &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt;s as with &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; since they are silent guardians.&lt;br /&gt;
&lt;br /&gt;
Another useful suppression trick is to avoid ignoring return values. Not only is it useful to suppress the warning, it's required for correct code. For example, &amp;lt;tt&amp;gt;snprintf&amp;lt;/tt&amp;gt; will alert you to truncations through its return value. You should not make them silent truncations by ignoring the warning or casting to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;char path[PATH_MAX];&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int ret = snprintf(path, sizeof(path), &amp;quot;%s/%s&amp;quot;, GetDirectory(), GetObjectName());&lt;br /&gt;
ASSERT(ret != -1);&lt;br /&gt;
ASSERT(!(ret &amp;gt;= sizeof(path)));&lt;br /&gt;
&lt;br /&gt;
if(ret == -1 || ret &amp;gt;= sizeof(path))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Unable to build full object name&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use path&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem is pandemic, and not just boring user land programs. Projects which offer high integrity code, such as SELinux, suffer silent truncations. The following is from an approved SELinux patch even though a comment was made that it [http://permalink.gmane.org/gmane.comp.security.selinux/16845 suffered silent truncations in its &amp;lt;tt&amp;gt;security_compute_create_name&amp;lt;/tt&amp;gt; function] from &amp;lt;tt&amp;gt;compute_create.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int security_compute_create_raw(security_context_t scon,&lt;br /&gt;
                                security_context_t tcon,&lt;br /&gt;
                                security_class_t   tclass,&lt;br /&gt;
                                security_context_t * newcon)&lt;br /&gt;
{&lt;br /&gt;
  char path[PATH_MAX];&lt;br /&gt;
  char *buf;&lt;br /&gt;
  size_t size;&lt;br /&gt;
  int fd, ret;&lt;br /&gt;
&lt;br /&gt;
  if (!selinux_mnt) {&lt;br /&gt;
    errno = ENOENT;&lt;br /&gt;
    return -1;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  snprintf(path, sizeof path, &amp;quot;%s/create&amp;quot;, selinux_mnt);&lt;br /&gt;
  fd = open(path, O_RDWR);&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike other examples, the above code will not debug itself, and you will have to set breakpoints and trace calls to determine the point of first failure. (And the code above gambles that the truncated file does not exist or is not under an adversary's control by blindly performing the &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
== Runtime ==&lt;br /&gt;
&lt;br /&gt;
The previous sections concentrated on setting up your project for success. This section will examine additional hints for running with increased diagnostics and defenses. Not all platforms are created equal - on GNU Linux it is difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a program after compiling and static linking], while Windows allows post-build hardening through a download. Remember, the goal is to find the point of first failure quickly so you can improve the reliability and security of the code.&lt;br /&gt;
&lt;br /&gt;
=== Xcode ===&lt;br /&gt;
&lt;br /&gt;
Xcode offers additional [http://developer.apple.com/library/mac/#recipes/xcode_help-scheme_editor/Articles/SchemeDiagnostics.html Application Diagnostics] that can help find memory errors and object use problems. Schemes can be managed through the ''Product'' menu, the ''Scheme'' submenu, and then ''Edit Scheme''. From the editor, navigate to the ''Diagnostics'' tab. In the figure below, four additional instruments are enabled for the debugging cycle: Scribble guards, Edge guards, Malloc guards, and Zombies.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-1.png|thumb|450px|Figure 2: Xcode Memory Diagnostics]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is one caveat when using some of the guards: Apple only provides them for the simulator, not for devices. In the past, the guards were available for both devices and simulators.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a number of debugging aids for use during development. The aids are called [http://msdn.microsoft.com/en-us/library/d21c150d.aspx Managed Debugging Assistants (MDAs)]. You can find the MDAs on the ''Debug'' menu, then the ''Exceptions'' submenu. MDAs allow you to tune your debugging experience by, for example, filtering the exceptions on which the debugger should break. For more details, see Stephen Toub's ''[http://msdn.microsoft.com/en-us/magazine/cc163606.aspx Let The CLR Find Bugs For You With Managed Debugging Assistants]''.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-2.png|thumb|450px|Figure 3: Managed Debugging Assistants]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Finally, for runtime hardening, Microsoft has a helpful tool called EMET, the [http://support.microsoft.com/kb/2458544 Enhanced Mitigation Experience Toolkit]. It allows you to apply runtime hardening to an executable which was built without it. It's very useful for utilities and other programs that were built without an SDLC.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-3.png|thumb|450px|Figure 4: Windows and EMET]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=147122</id>
		<title>C-Based Toolchain Hardening</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=147122"/>
				<updated>2013-03-08T18:18:53Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening]] is a treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++ and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Auto-configured, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based projects. It's important to address the gaps at build time because on some platforms it's difficult to impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a distributed executable after the fact].&lt;br /&gt;
&lt;br /&gt;
This is a prescriptive article, and it will not debate semantics or speculate on behavior. Some information, such as the C/C++ committee's motivation and pedigree for [https://groups.google.com/a/isocpp.org/forum/?fromgroups=#!topic/std-discussion/ak8e1mzBhGs &amp;quot;program diagnostics&amp;quot;, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;], appears to be lost like a tale in the Lord of the Rings. As such, the article will specify semantics (for example, the philosophy of 'debug' and 'release' build configurations), assign behaviors (for example, what an assert should do in 'debug' and 'release' build configurations), and present a position. If you find the posture too aggressive, then back off as required to suit your taste.&lt;br /&gt;
&lt;br /&gt;
A secure toolchain is not a silver bullet. It is one piece of an overall strategy in the engineering process to help ensure success. It will complement existing processes such as static analysis, dynamic analysis, secure coding, negative test suites, and the like. Tools such as Valgrind and Helgrind will still be needed. And a project will still require solid designs and architectures.&lt;br /&gt;
&lt;br /&gt;
Finally, the OWASP [http://code.google.com/p/owasp-esapi-cplusplus/source ESAPI C++] project eats its own dog food. Many of the examples you will see in this article come directly from the ESAPI C++ project.&lt;br /&gt;
&lt;br /&gt;
== Wisdom ==&lt;br /&gt;
&lt;br /&gt;
Code '''must''' be correct. It '''should''' be secure. It '''can''' be efficient.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Jon_Bentley Dr. Jon Bentley]: ''&amp;quot;If it doesn't have to be correct, I can make it as fast as you'd like it to be&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Gary_McGraw Dr. Gary McGraw]: ''&amp;quot;Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Configuration is your first opportunity to set your project up for success. Not only do you have to configure your project to meet reliability and security goals, you must also configure integrated libraries properly. You typically have three choices. First, you can use auto-configuration utilities on Linux or Unix. Second, you can write a makefile by hand; this is predominant on Linux, Mac OS X, and Unix, but it applies to Windows as well. Finally, you can use an integrated development environment, or IDE.&lt;br /&gt;
&lt;br /&gt;
=== Build Configurations ===&lt;br /&gt;
&lt;br /&gt;
At this stage in the process, you should concentrate on configuring two builds: Debug and Release. Debug will be used for development and includes full instrumentation. Release will be configured for production. The difference between the two settings is usually ''optimization level'' and ''debug level''. A third build configuration is Test, and it's usually a special case of Release.&lt;br /&gt;
&lt;br /&gt;
For debug and release builds, the settings are typically diametrically opposed: debug configurations have no optimizations and full debug information, while release builds have optimizations and minimal to moderate debug information. In addition, debug code has full assertions and additional library integration, such as mudflap and malloc guards like &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Test configuration is often a Release configuration that makes everything public for testing and builds a test harness. For example, all member functions (C++ classes) and all interfaces (library or shared object) should be made available for testing. Many object-oriented purists oppose testing private interfaces, but this is not about object-orientedness; this is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
[http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8] introduced the &amp;lt;tt&amp;gt;-Og&amp;lt;/tt&amp;gt; optimization level. Note that it only controls optimization, and a customary debug level is still required via &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Debug Builds ====&lt;br /&gt;
&lt;br /&gt;
Debug builds are where developers spend most of their time when vetting problems, so this build should concentrate forces and tools, acting as a 'force multiplier'. Though many do not realize it, debug code is more valuable than release code because it is adorned with additional instrumentation. The debug instrumentation will cause a program to become nearly &amp;quot;self-debugging&amp;quot;, and help you catch mistakes such as bad parameters, failed API calls, and memory problems.&lt;br /&gt;
&lt;br /&gt;
Self-debugging code reduces your time spent troubleshooting and debugging. Reducing time under the debugger means you have more time for development and feature requests. If code is checked in without debug instrumentation, it should be fixed by adding instrumentation, or rejected.&lt;br /&gt;
&lt;br /&gt;
For GCC, optimizations and debug symbolication are controlled through two switches: &amp;lt;tt&amp;gt;-O&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;. You should use the following as part of your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a minimal debug session:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-O0 -g3 -ggdb&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; turns off optimizations, and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available for the debug session, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;. You may need to use &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; so some analysis is performed; otherwise, your debug build will be missing a number of warnings that are present in release builds. &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; includes extensions to help with a debug session under GDB. For completeness, Jan Krachtovil stated in a private email that &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; currently has no effect.&lt;br /&gt;
&lt;br /&gt;
Debug builds should also define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; and ensure &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is not defined. &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; removes &amp;quot;program diagnostics&amp;quot; and has undesirable behavior and side effects, which are discussed below in more detail. The defines should be present for all code, not just the program. You use them for all code (your program and included libraries) because you need to know how the libraries fail too (remember, you take the bug report - not the third-party library).&lt;br /&gt;
&lt;br /&gt;
In addition, you should also use other relevant flags, such as &amp;lt;tt&amp;gt;-fno-omit-frame-pointer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. Finally, you should also ensure your project includes additional diagnostic libraries, such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;. The additional flags and libraries are discussed below in more detail.&lt;br /&gt;
&lt;br /&gt;
==== Release Builds ====&lt;br /&gt;
&lt;br /&gt;
Release builds are what your customer receives. They are meant to be run on production hardware and servers, and they should be reliable, secure, and efficient. A stable release build is the product of the hard work and effort during development.&lt;br /&gt;
&lt;br /&gt;
For release builds, you should use the following as part of &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for release builds:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-On -g2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O''n''&amp;lt;/tt&amp;gt; sets optimizations for speed or size (for example, &amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;), and &amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt; ensures debugging information is created.&lt;br /&gt;
&lt;br /&gt;
Debugging information should be stripped from the shipped binary and retained for symbolicating crash reports from the field. While not desired, debug information can be left in place without a performance penalty. See ''[http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]'' for details.&lt;br /&gt;
&lt;br /&gt;
Release builds should also define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is not defined. The time for debugging and diagnostics is over, so users get production code with full optimizations, no &amp;quot;program diagnostics&amp;quot;, and other efficiencies. If you can't optimize, or you are performing excessive logging, it usually means the program is not ready for production.&lt;br /&gt;
&lt;br /&gt;
If you have been relying on an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and a subsequent &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;, you have been abusing &amp;quot;program diagnostics&amp;quot;, since they have no place in production code. If you want a memory dump, create one yourself so users don't have to worry about secrets and other sensitive information being written to the filesystem and emailed in plain text.&lt;br /&gt;
&lt;br /&gt;
For Windows, you would use &amp;lt;tt&amp;gt;/Od&amp;lt;/tt&amp;gt; for debug builds; and &amp;lt;tt&amp;gt;/Ox&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/O2&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/Os&amp;lt;/tt&amp;gt; for release builds. See Microsoft's [http://msdn.microsoft.com/en-us/library/k1ack8f1.aspx /O Options (Optimize Code)] for details.&lt;br /&gt;
&lt;br /&gt;
==== Test Builds ====&lt;br /&gt;
&lt;br /&gt;
Test builds are used to provide heuristic validation by way of positive and negative test suites. Under a test configuration, all interfaces are tested to ensure they perform to specification and satisfaction. &amp;quot;Satisfaction&amp;quot; is subjective, but it should include no crashing and no trashing of your memory arena, even when faced with negative tests.&lt;br /&gt;
&lt;br /&gt;
Because all interfaces are tested (and not just the public ones), your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; should include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should also change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Nearly everyone gets positive tests right, so no more needs to be said. The negative self-tests are much more interesting, and you should concentrate on trying to make your program fail so you can verify it fails gracefully. Remember, a bad guy is not going to be courteous when he attempts to cause your program to fail. And it's your project that gets egg on its face by way of a bug report or a guest appearance on [http://www.grok.org.uk/full-disclosure/ Full Disclosure] or [http://www.securityfocus.com/archive Bugtraq] - not ''&amp;lt;nowiki&amp;gt;&amp;lt;some library&amp;gt;&amp;lt;/nowiki&amp;gt;'' you included.&lt;br /&gt;
&lt;br /&gt;
=== Auto Tools ===&lt;br /&gt;
&lt;br /&gt;
Auto configuration tools are popular on many Linux and Unix based systems, and the tools include ''Autoconf'', ''Automake'', ''config'', and ''Configure''. The tools work together to produce project files from scripts and template files. After the process completes, your project should be set up and ready to be made with &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When using auto configuration tools, there are a few files of interest worth mentioning. The files are part of the auto tools chain and include &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; and the various &amp;lt;tt&amp;gt;*.in&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;*.ac&amp;lt;/tt&amp;gt; (autoconf), and &amp;lt;tt&amp;gt;*.am&amp;lt;/tt&amp;gt; (automake) files. At times, you will have to open them, or the resulting makefiles, to tune the &amp;quot;stock&amp;quot; configuration.&lt;br /&gt;
&lt;br /&gt;
There are three downsides to the command line configuration tools in the toolchain: (1) they often ignore user requests, (2) they cannot create configurations, and (3) security is often not a goal.&lt;br /&gt;
&lt;br /&gt;
To demonstrate the first issue, configure your project with the following: &amp;lt;tt&amp;gt;configure CFLAGS=&amp;quot;-Wall -fPIE&amp;quot; CXXFLAGS=&amp;quot;-Wall -fPIE&amp;quot; LDFLAGS=&amp;quot;-pie&amp;quot;&amp;lt;/tt&amp;gt;. You will probably find the auto tools ignored your request, which means commands like the one below will not produce the expected results. As a workaround, you will have to open the &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; scripts, &amp;lt;tt&amp;gt;Makefile.in&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;Makefile.am&amp;lt;/tt&amp;gt; and fix the configuration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ configure CFLAGS=&amp;quot;-Wall -Wextra -Wconversion -fPIE -Wno-unused-parameter&lt;br /&gt;
    -Wformat=2 -Wformat-security -fstack-protector-all -Wstrict-overflow&amp;quot;&lt;br /&gt;
    LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap -z,relro -z,now&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the second point, you will probably be disappointed to learn [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html Automake does not support the concept of configurations]. It's not entirely Autoconf's or Automake's fault - ''Make'' and its inability to detect changes is the underlying problem. Specifically, ''Make'' only [http://pubs.opengroup.org/onlinepubs/009695399/utilities/make.html checks modification times of prerequisites and targets], and does not check things like &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;. The net effect is that you will not receive the expected results when you issue &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make test&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;.&lt;br /&gt;
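&lt;br /&gt;
One workaround is to record the flags in a stamp file and make objects depend on it, so a change in flags forces a rebuild. Below is a sketch of the idea, not a drop-in fix; the file names are illustrative.&lt;br /&gt;

```make
# Sketch: make CFLAGS a prerequisite so 'make release' after 'make debug'
# actually rebuilds. 'force' runs every time; the stamp's timestamp moves
# only when the flags really differ.
.PHONY: force
flags.stamp: force
	@echo '$(CFLAGS)' | cmp -s - $@ 2>/dev/null || echo '$(CFLAGS)' > $@

%.o: %.cpp flags.stamp
	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $< -o $@
```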
&lt;br /&gt;
Finally, you will probably be disappointed to learn that tools such as Autoconf and Automake miss many security-related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings that serve the lowest common denominator.&lt;br /&gt;
&lt;br /&gt;
A recent discussion on the Automake mailing list illuminates the issue: ''[https://lists.gnu.org/archive/html/autoconf/2012-12/msg00038.html Enabling compiler warning flags]''. Attempts to improve default configurations were met with resistance, and no action was taken. The resistance is often of the form, &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some useful warning&amp;gt;&amp;lt;/nowiki&amp;gt; also produces false positives&amp;quot; or &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some obscure platform&amp;gt;&amp;lt;/nowiki&amp;gt; does not support &amp;lt;nowiki&amp;gt;&amp;lt;established security feature&amp;gt;&amp;lt;/nowiki&amp;gt;&amp;quot;. It's noteworthy that David Wheeler, the author of ''[http://www.dwheeler.com/secure-programs/ Secure Programming for Linux and Unix HOWTO]'', was one of the folks trying to improve the posture.&lt;br /&gt;
&lt;br /&gt;
=== Makefiles ===&lt;br /&gt;
&lt;br /&gt;
Make is one of the earliest build systems, dating back to the 1970s. It's available on Linux, Mac OS X, and Unix, so you will frequently encounter projects using it. Unfortunately, Make has a number of shortcomings (''[http://aegis.sourceforge.net/auug97.pdf Recursive Make Considered Harmful]'' and ''[http://www.conifersystems.com/whitepapers/gnu-make/ What’s Wrong With GNU make?]''), and can cause some discomfort. Despite the issues with Make, ESAPI C++ uses Make, primarily for three reasons: first, it's omnipresent; second, it's easier to manage than the Auto Tools family; and third, &amp;lt;tt&amp;gt;libtool&amp;lt;/tt&amp;gt; was out of the question.&lt;br /&gt;
&lt;br /&gt;
Consider what happens when you type &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;. Each build requires different &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; due to optimizations and the level of debug support. In your makefile, you would extract the relevant target and set &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; similar to the following (taken from the [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/Makefile ESAPI C++ Makefile]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Makefile&lt;br /&gt;
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)&lt;br /&gt;
ifneq ($(DEBUG_GOALS),)&lt;br /&gt;
  WANT_DEBUG := 1&lt;br /&gt;
  WANT_TEST := 0&lt;br /&gt;
  WANT_RELEASE := 0&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_DEBUG),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
  ESAPI_CXXFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_RELEASE),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
  ESAPI_CXXFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_TEST),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
  ESAPI_CXXFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure &lt;br /&gt;
# user options follow our options, which should give user option's a preference.&lt;br /&gt;
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(ESAPI_LDFLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make will first build the program in a debug configuration for a session under the debugger using a rule similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;%.o: %.cpp&lt;br /&gt;
        $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $&amp;lt; -o $@&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you want the release build, Make will do nothing because it considers everything up to date, despite the fact that &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; have changed. Hence, your program will actually be in a debug configuration and risk a &amp;lt;tt&amp;gt;SIGABRT&amp;lt;/tt&amp;gt; at runtime because debug instrumentation is present (recall &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; calls &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; when &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined). In essence, you have DoS'd yourself due to &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition, many projects do not honor the user's command line. ESAPI C++ does its best to ensure a user's flags are honored via &amp;lt;tt&amp;gt;override&amp;lt;/tt&amp;gt; as shown above, but other projects do not. For example, consider a project that should be built with Position Independent Executable (PIE or ASLR) and data execution prevention (DEP) enabled. Dismissing user settings, combined with insecure out-of-the-box settings (and not picking them up during auto-setup or auto-configure), means a program built with the following will likely have neither defense:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ make CFLAGS=&amp;quot;-fPIE&amp;quot; CXXFLAGS=&amp;quot;-fPIE&amp;quot; LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Defenses such as ASLR and DEP are especially important on Linux because [http://linux.die.net/man/5/elf Data Execution - not Prevention - is the norm].&lt;br /&gt;
&lt;br /&gt;
=== Integration ===&lt;br /&gt;
&lt;br /&gt;
Project-level integration presents opportunities to harden your program or library with domain-specific knowledge. For example, if the platform supports Position Independent Executables (PIE or ASLR) and data execution prevention (DEP), then you should integrate with them. The consequence of not doing so could be exploitation. As a case in point, see KingCope's 0-days for MySQL in December 2012 (CVE-2012-5579 and CVE-2012-5612, among others). Integration with platform security would have neutered a number of the 0-days.&lt;br /&gt;
&lt;br /&gt;
You also have the opportunity to include helpful libraries that are not needed for business logic support. For example, if you are working on a platform with [http://dmalloc.com DMalloc] or [http://code.google.com/p/address-sanitizer/ Address Sanitizer], you should probably use them in your debug builds. On Ubuntu, DMalloc is available from the package manager and can be installed with &amp;lt;tt&amp;gt;sudo apt-get install libdmalloc5&amp;lt;/tt&amp;gt;. For Apple platforms, it's available as a scheme option (see [[#Clang/Xcode|Clang/Xcode]] below). Address Sanitizer is available in [http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8 and above] for many platforms.&lt;br /&gt;
&lt;br /&gt;
In addition, project-level integration is an opportunity to harden third-party libraries you chose to include. Because you chose to include them, you and your users are responsible for them. If you or your users endure an SP800-53 audit, third-party libraries will be in scope because the supply chain is included (specifically, item SA-12, Supply Chain Protection). The audits are not limited to those in the US Federal arena - financial institutions perform reviews, too. A perfect example of violating this guidance is [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525], which was due to [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library].&lt;br /&gt;
&lt;br /&gt;
Another example is including OpenSSL. You know (1) [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 is insecure], (2) [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 is insecure], and (3) [http://arstechnica.com/security/2012/09/crime-hijacks-https-sessions/ compression is insecure] (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ Configure darwin64-x86_64-cc -no-hw -no-engines -no-comp -no-shared -no-dso -no-sslv2 -no-sslv3 --openssldir=…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note Well'': you might want engines, especially on Ivy Bridge microarchitectures (3rd generation Intel Core i5 and i7 processors). To have OpenSSL use the processor's random number generator (via the &amp;lt;tt&amp;gt;rdrand&amp;lt;/tt&amp;gt; instruction), you will need to call OpenSSL's &amp;lt;tt&amp;gt;ENGINE_load_rdrand()&amp;lt;/tt&amp;gt; function and then &amp;lt;tt&amp;gt;ENGINE_set_default&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;ENGINE_METHOD_RAND&amp;lt;/tt&amp;gt;. See [http://wiki.opensslfoundation.com/index.php/Random_Numbers OpenSSL's Random Numbers] for details.&lt;br /&gt;
&lt;br /&gt;
If you configure without the switches, then you will likely have vulnerable code/libraries and risk failing an audit. If the program is a remote server, then the following command will reveal if compression is active on the channel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo &amp;quot;GET / HTTP/1.0&amp;quot; | openssl s_client -connect &amp;lt;nowiki&amp;gt;example.com:443&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;openssl s_client&amp;lt;/tt&amp;gt; will show whether compression is enabled in the client. In fact, any symbol guarded by the &amp;lt;tt&amp;gt;OPENSSL_NO_COMP&amp;lt;/tt&amp;gt; preprocessor macro will bear witness, since &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; is translated into a &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; define.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ nm /usr/local/ssl/iphoneos/lib/libcrypto.a 2&amp;gt;/dev/null | egrep -i &amp;quot;(COMP_CTX_new|COMP_CTX_free)&amp;quot;&lt;br /&gt;
0000000000000110 T COMP_CTX_free&lt;br /&gt;
0000000000000000 T COMP_CTX_new&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even more egregious is the answer given to auditors who specifically ask about configurations and protocols: &amp;quot;we don't use weak/wounded/broken ciphers&amp;quot; or &amp;quot;we follow best practices.&amp;quot; The use of compression tells the auditor that you are using a wounded protocol in an insecure configuration, and that you don't follow best practices. That will likely set off alarm bells and ensure the auditor digs deeper into other items.&lt;br /&gt;
&lt;br /&gt;
== Preprocessor ==&lt;br /&gt;
&lt;br /&gt;
The preprocessor is crucial to setting up a project for success. The C committee provided one macro - &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; - and the macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing built-in facilities. This section will help you set up your projects to integrate well with other projects and to ensure reliability and security.&lt;br /&gt;
&lt;br /&gt;
There are three topics to discuss when hardening the preprocessor: the first is well-defined configurations which produce well-defined behaviors, the second is useful behavior from &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;, and the third is proper use of macros when integrating vendor code and third-party libraries.&lt;br /&gt;
&lt;br /&gt;
=== Configurations ===&lt;br /&gt;
&lt;br /&gt;
To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code on live servers, and its behavior is requested via the C/C++ &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; macro. It's also the only macro observed by the C and C++ committees and POSIX. Diametrically opposed to Release is Debug. While there is a compelling argument for &amp;lt;tt&amp;gt;!defined(NDEBUG)&amp;lt;/tt&amp;gt;, you should have an explicit macro for the configuration, and that macro should be &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;. This is because vendors and outside libraries use a &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (or similar) macro for their configurations. For example, Carnegie Mellon's Mach kernel uses &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, Microsoft's CRT uses [http://msdn.microsoft.com/en-us/library/ww5t02fa%28v=vs.71%29.aspx &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt;], and Wind River Workbench uses &amp;lt;tt&amp;gt;DEBUG_MODE&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (Release) and &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (Debug), you have two additional cross products: both defined or neither defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/esapi/EsapiCommon.h ESAPI C++ EsapiCommon.h], which is the configuration file used by all source files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Only one or the other, but not both&lt;br /&gt;
#if (defined(DEBUG) || defined(_DEBUG)) &amp;amp;&amp;amp; (defined(NDEBUG) || defined(_NDEBUG))&lt;br /&gt;
# error Both DEBUG and NDEBUG are defined.&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
// The only time we switch to debug is when asked. NDEBUG or {nothing} results&lt;br /&gt;
// in release build (fewer surprises at runtime).&lt;br /&gt;
#if defined(DEBUG) || defined(_DEBUG)&lt;br /&gt;
# define ESAPI_BUILD_DEBUG 1&lt;br /&gt;
#else&lt;br /&gt;
# define ESAPI_BUILD_RELEASE 1&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is in effect, your code should receive full debug instrumentation, including the full force of assertions.&lt;br /&gt;
&lt;br /&gt;
=== ASSERT ===&lt;br /&gt;
&lt;br /&gt;
Asserts will help you create self-debugging code by locating the point of first failure quickly and easily. Asserts should be used throughout your program, including parameter validation, return value checking, and program state. The &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; will silently guard your code through its lifetime. It will always be there, even when you are not debugging a specific component of a module. If you have thorough code coverage, you will spend less time debugging and more time developing because programs will debug themselves.&lt;br /&gt;
&lt;br /&gt;
To use asserts effectively, you should assert everything. That includes parameters upon entering a function, return values from function calls, and any program state. Everywhere you place an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation or checking, you should have an assert. Everywhere you have an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; for validation or checking, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand.&lt;br /&gt;
&lt;br /&gt;
If you are still using &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;'s, then you have an opportunity for improvement. In the time it takes to write a &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; statement, you could have written an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;. Unlike the &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt;, which are often removed when no longer needed, the &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; stays active forever. Remember, this is all about finding the point of first failure quickly so you can spend your time doing other things.&lt;br /&gt;
&lt;br /&gt;
There is one problem with using asserts - [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html POSIX states &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; should call &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;] if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined. When debugging, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will never be defined since you want the &amp;quot;program diagnostics&amp;quot; (quote from the POSIX description). The behavior makes &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and its accompanying &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; completely useless for development. The result of &amp;quot;program diagnostics&amp;quot; calling &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; due to standard C/C++ behavior is disuse - developers simply don't use them. It's incredibly bad for the development community because self-debugging programs can help eradicate so many stability problems.&lt;br /&gt;
&lt;br /&gt;
Since self-debugging programs are so powerful, you will have to supply your own assert and signal handler with improved behavior. Your assert will exchange auto-aborting behavior for auto-debugging behavior. The auto-debugging facility will ensure the debugger snaps when a problem is detected, so you will find the point of first failure quickly and easily.&lt;br /&gt;
&lt;br /&gt;
ESAPI C++ supplies its own assert with the behavior described above. In the code below, &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt; raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; when in effect; otherwise, it evaluates to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// A debug assert which should be sprinkled liberally. This assert fires and then continues rather&lt;br /&gt;
// than calling abort(). Useful when examining negative test cases from the command line.&lt;br /&gt;
#if (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_STARNIX))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp) {                                    \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; std::endl;                                           \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) {                               \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; &amp;quot;: \&amp;quot;&amp;quot; &amp;lt;&amp;lt; (msg) &amp;lt;&amp;lt; &amp;quot;\&amp;quot;&amp;quot; &amp;lt;&amp;lt; std::endl;                \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#elif (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_WINDOWS))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      assert(exp)&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) assert(exp)&lt;br /&gt;
#else&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      ((void)(exp))&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) ((void)(exp))&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#if !defined(ASSERT)&lt;br /&gt;
#  define ASSERT(exp)     ESAPI_ASSERT1(exp)&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At program startup, a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; handler will be installed if one is not provided by another component:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct DebugTrapHandler&lt;br /&gt;
{&lt;br /&gt;
  DebugTrapHandler()&lt;br /&gt;
  {&lt;br /&gt;
    struct sigaction new_handler, old_handler;&lt;br /&gt;
&lt;br /&gt;
    do&lt;br /&gt;
      {&lt;br /&gt;
        int ret = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, NULL, &amp;amp;old_handler);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        // Don't step on another's handler&lt;br /&gt;
        if (old_handler.sa_handler != NULL) break;&lt;br /&gt;
&lt;br /&gt;
        new_handler.sa_handler = &amp;amp;DebugTrapHandler::NullHandler;&lt;br /&gt;
        new_handler.sa_flags = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigemptyset (&amp;amp;new_handler.sa_mask);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, &amp;amp;new_handler, NULL);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
      } while(0);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  static void NullHandler(int /*unused*/) { }&lt;br /&gt;
&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
// We specify a relatively low priority, to make sure we run before other CTORs&lt;br /&gt;
// http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes&lt;br /&gt;
static const DebugTrapHandler g_dummyHandler __attribute__ ((init_priority (110)));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a Windows platform, you would call &amp;lt;tt&amp;gt;_set_invalid_parameter_handler&amp;lt;/tt&amp;gt; (and possibly &amp;lt;tt&amp;gt;set_unexpected&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;set_terminate&amp;lt;/tt&amp;gt;) to install a new handler.&lt;br /&gt;
&lt;br /&gt;
Live hosts running production code should always define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (i.e., the release configuration), which means they do not assert or auto-abort. Auto-aborting is not acceptable behavior, and anyone who asks for the behavior is completely abusing the functionality of &amp;quot;program diagnostics&amp;quot;. If a program wants a core dump, then it should create the dump rather than crashing.&lt;br /&gt;
&lt;br /&gt;
For more reading on asserting effectively, please see one of John Robbins' books, such as ''[http://www.amazon.com/dp/0735608865 Debugging Applications]''. John is a legendary bug slayer in Windows circles, and he will show you how to do nearly everything, from debugging a simple program to bug slaying in multithreaded programs.&lt;br /&gt;
&lt;br /&gt;
=== Additional Macros ===&lt;br /&gt;
&lt;br /&gt;
Additional macros include any macros needed to integrate properly and securely. This includes integrating the program with the platform (for example, MFC or Cocoa/CocoaTouch) and with libraries (for example, Crypto++ or OpenSSL). It can be a challenge because you need proficiency with your platform and all included libraries and frameworks. The list below illustrates the level of detail you will need when integrating.&lt;br /&gt;
&lt;br /&gt;
Boost is missing from the list because it appears to lack recommendations, additional debug diagnostics, and a hardening guide. See ''[http://stackoverflow.com/questions/14927033/boost-hardening-guide-preprocessor-macros BOOST Hardening Guide (Preprocessor Macros)]'' for details. In addition, Tim Day points to ''[http://boost.2283326.n4.nabble.com/boost-build-should-we-not-define-SECURE-SCL-0-by-default-for-all-msvc-toolsets-td2654710.html &amp;lt;nowiki&amp;gt;[boost.build] should we not define _SECURE_SCL=0 by default for all msvc toolsets&amp;lt;/nowiki&amp;gt;]'' for a recent discussion related to hardening (or the lack thereof).&lt;br /&gt;
&lt;br /&gt;
In addition to what you should define, defining some macros and undefining others should be treated as a security-related defect. For example, &amp;lt;tt&amp;gt;-U_FORTIFY_SOURCE&amp;lt;/tt&amp;gt; on Linux, and &amp;lt;tt&amp;gt;_CRT_SECURE_NO_WARNINGS=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_SCL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_ATL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;STRSAFE_NO_DEPRECATE&amp;lt;/tt&amp;gt; on Windows.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Platform/Library!!Debug!!Release&lt;br /&gt;
|+ Table 1: Additional Platform/Library Macros&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;175pt&amp;quot;|All&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|DEBUG=1&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|NDEBUG=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Linux&lt;br /&gt;
|_GLIBCXX_DEBUG=1&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
|_FORTIFY_SOURCE=2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Android&lt;br /&gt;
|NDK_DEBUG=1&lt;br /&gt;
|_FORTIFY_SOURCE=1 (4.2 and above)&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define LOGI(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt logging)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Cocoa/CocoaTouch&lt;br /&gt;
|&lt;br /&gt;
|NS_BLOCK_ASSERTIONS=1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define NSLog(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt ASL)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SafeInt&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft&lt;br /&gt;
|_DEBUG=1, STRICT,&amp;lt;br&amp;gt;&lt;br /&gt;
_SECURE_SCL=1, _HAS_ITERATOR_DEBUGGING=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
|STRICT&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft ATL &amp;amp; MFC&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|STLPort&lt;br /&gt;
|_STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1&amp;lt;br&amp;gt;&lt;br /&gt;
_STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite&lt;br /&gt;
|SQLITE_DEBUG, SQLITE_MEMDEBUG&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLCipher&lt;br /&gt;
|Remove '''&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;''' from Debug builds (Xcode)&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_HAS_CODEC=1&lt;br /&gt;
|SQLITE_HAS_CODEC=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite/SQLCipher&lt;br /&gt;
|SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Be careful with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; when using pre-compiled libraries such as Boost from a distribution. There are ABI incompatibilities, and the result will likely be a crash. You will have to compile Boost with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; or omit &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; SQLite secure deletion zeroizes memory on destruction. Define as required, and always define it for US Federal deployments since zeroization is required for FIPS 140-2, Level 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt; ''N'' is 0644 by default, which means everyone has some access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt; Force temporary tables into memory (no unencrypted data to disk).&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
##########################################&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Compiler and Linker ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities to help find mistakes early in the development process. The built-in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and catch a number of mistakes, such as using an uninitialized variable or comparing a negative signed int and a positive unsigned int.&lt;br /&gt;
&lt;br /&gt;
As a concrete example (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared, because a side effect is &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! GCC and Visual Studio will not currently catch, for example, SQL injections and other tainted-data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.&lt;br /&gt;
&lt;br /&gt;
Some in the development community resist static analysis or dispute its results. For example, when static analysis warned the Linux kernel's &amp;lt;tt&amp;gt;sys_prctl&amp;lt;/tt&amp;gt; was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see ''[http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-11/msg08325.html &amp;lt;nowiki&amp;gt;[PATCH] Don't compare unsigned variable for &amp;lt;0 in sys_prctl()&amp;lt;/nowiki&amp;gt;]'' from the Linux Kernel mailing list.&lt;br /&gt;
&lt;br /&gt;
The following sections will detail steps for three platforms. First is a typical GNU Linux based distribution offering GCC and Binutils, second is Clang and Xcode, and third is modern Windows platforms.&lt;br /&gt;
&lt;br /&gt;
=== Distribution Hardening ===&lt;br /&gt;
&lt;br /&gt;
Before discussing GCC and Binutils, it is a good time to point out that some of the defenses discussed below are already present in a distribution. Unfortunately, it's design by committee, so what is present is usually only a mild variation of what is available (this way, everyone is mildly offended). For those who are purely worried about performance, you might be surprised to learn you have already taken the small performance hit without even knowing it.&lt;br /&gt;
&lt;br /&gt;
Linux and BSD distributions often apply some hardening without intervention via ''[http://gcc.gnu.org/onlinedocs/gcc/Spec-Files.html GCC Spec Files]''. If you are using Debian, Ubuntu, Linux Mint and family, see ''[http://wiki.debian.org/Hardening Debian Hardening]''. For Red Hat and Fedora systems, see ''[http://lists.fedoraproject.org/pipermail/devel-announce/2011-August/000821.html New hardened build support (coming) in F16]''. Gentoo users should visit ''[http://www.gentoo.org/proj/en/hardened/ Hardened Gentoo]''.&lt;br /&gt;
&lt;br /&gt;
You can see the settings being used by a distribution via &amp;lt;tt&amp;gt;gcc -dumpspecs&amp;lt;/tt&amp;gt;. In the Linux Mint 12 output below, -fstack-protector (but not -fstack-protector-all) is used by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -dumpspecs&lt;br /&gt;
…&lt;br /&gt;
*link_ssp: %{fstack-protector:}&lt;br /&gt;
&lt;br /&gt;
*ssp_default: %{!fno-stack-protector:%{!fstack-protector-all: %{!ffreestanding:%{!nostdlib:-fstack-protector}}}}&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The “SSP” above stands for Stack Smashing Protector. SSP is a reimplementation of Hiroaki Etoh's work on IBM's ProPolice stack protector. See Hiroaki Etoh's patch ''[http://gcc.gnu.org/ml/gcc-patches/2001-06/msg01753.html gcc stack-smashing protector]'' and IBM's ''[http://www.research.ibm.com/trl/projects/security/ssp/ GCC extension for protecting applications from stack-smashing attacks]'' for details.&lt;br /&gt;
&lt;br /&gt;
=== GCC/Binutils ===&lt;br /&gt;
&lt;br /&gt;
GCC (the compiler collection) and Binutils (the assemblers, linkers, and other tools) are separate projects that work together to produce a final executable. Both the compiler and linker offer options to help you write safer and more secure code. The linker will produce code which takes advantage of platform security features offered by the kernel and PaX, such as no-exec stacks and heaps (NX) and Position Independent Executable (PIE).&lt;br /&gt;
&lt;br /&gt;
The table below offers a set of compiler options to build your program. Static analysis warnings help catch mistakes early, while the linker options harden the executable at runtime. In the table below, “GCC” should be loosely taken as “non-ancient distributions.” While the GCC team considers 4.2 ancient, you will still encounter it on Apple and BSD platforms due to changes in GPL licensing around 2007. Refer to ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Option Summary]'', ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html Options to Request or Suppress Warnings]'' and ''[http://sourceware.org/binutils/docs-2.21/ld/Options.html Binutils (LD) Command Line Options]'' for usage details.&lt;br /&gt;
&lt;br /&gt;
Especially noteworthy are &amp;lt;tt&amp;gt;-fno-strict-overflow&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fwrapv&amp;lt;/tt&amp;gt;&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;. The flags ensure the compiler does not remove statements that result in overflow or wrap. If your program only runs correctly with the flags, it is likely violating C/C++ rules on signed overflow and is illegal. If the program depends on overflow or wrapping behavior, you should consider using [http://code.google.com/p/safe-iop/ safe-iop] for C or David LeBlanc's [http://safeint.codeplex.com SafeInt] in C++.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, some of those settings can be verified with the [http://www.trapkit.de/tools/checksec.html Checksec] tool written by Tobias Klein. The &amp;lt;tt&amp;gt;checksec.sh&amp;lt;/tt&amp;gt; script is designed to test standard Linux OS and PaX security features being used by an application. See the [http://www.trapkit.de/tools/checksec.html Trapkit] web page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 2: GCC C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wall -Wextra&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;75t&amp;quot;|GCC&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Enables many warnings (despite their names, all and extra do not turn on all warnings).&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wconversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may alter a value (includes -Wsign-conversion).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-conversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may change the sign of an integer value, such as assigning a signed integer to an unsigned integer (&amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion!).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wcast-align&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for a pointer cast to a type which has a different size, causing an invalid alignment and subsequent bus error on ARM processors.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wformat=2 -Wformat-security&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Increases warnings related to possible security defects, including incorrect format specifiers.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-common&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Prevent global variables being simultaneously defined in different object files.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fstack-protector or -fstack-protector-all&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Stack Smashing Protector (SSP). Improves stack layout and adds a guard to detect stack based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-omit-frame-pointer&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Improves backtraces for post-mortem analysis&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wmissing-prototypes and -Wmissing-declarations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a global function is defined without a prototype or declaration.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-prototypes&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a function is declared or defined without specifying the argument types.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-overflow&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.2&lt;br /&gt;
|Warn about optimizations taken due to &amp;lt;nowiki&amp;gt;[undefined]&amp;lt;/nowiki&amp;gt; signed integer overflow assumptions.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wtrampolines&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.3&lt;br /&gt;
|Warn about trampolines generated for pointers to nested functions. Trampolines require executable stacks.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=address&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/address-sanitizer/ AddressSanitizer], a fast memory error detector. Memory access instructions will be instrumented to help detect heap, stack, and global buffer overflows; as well as use-after-free bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=thread&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/data-race-test/wiki/ThreadSanitizer ThreadSanitizer], a fast data race detector. Memory access instructions will be instrumented to detect data race bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,nodlopen and -Wl,-z,nodump&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.10&lt;br /&gt;
|Reduces the ability of an attacker to load, manipulate, and dump shared objects.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,noexecstack and -Wl,-z,noexecheap&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.14&lt;br /&gt;
|Data Execution Prevention (DEP). ELF headers are marked with PT_GNU_STACK and PT_GNU_HEAP.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,relro&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Global Offset Table (GOT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,now&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Procedure Linkage Table (PLT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIC&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils&lt;br /&gt;
|Position Independent Code. Used for libraries and shared objects. Both -fPIC (compiler) and -shared (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIE&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.16&lt;br /&gt;
|Position Independent Executable (ASLR). Used for programs. Both -fPIE (compiler) and -pie (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Unlike Clang and -Weverything, GCC does not provide a switch to truly enable all warnings.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; -fstack-protector guards functions with high-risk objects such as C strings, while -fstack-protector-all guards all objects.&lt;br /&gt;
&lt;br /&gt;
Additional C++ warnings which can be used include the following in Table 3. See ''[http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html GCC's Options Controlling C++ Dialect]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 3: GCC C++ Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Woverloaded-virtual&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn when a function declaration hides virtual functions from a base class. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wreorder&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when the order of member initializers given in the code does not match the order in which they must be executed.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-promo&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wnon-virtual-dtor&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when a class has virtual functions and an accessible non-virtual destructor.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Weffc++&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn about violations of style guidelines from Scott Meyers' ''[http://www.aristeia.com/books.html Effective C++, Second Edition]'' book.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
And additional Objective C warnings which are often useful include the following. See ''[http://gcc.gnu.org/onlinedocs/gcc/Objective_002dC-and-Objective_002dC_002b_002b-Dialect-Options.html Options Controlling Objective-C and Objective-C++ Dialects]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 4: GCC Objective C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wstrict-selector-match&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type id or Class.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wundeclared-selector&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn if a &amp;lt;tt&amp;gt;@selector(…)&amp;lt;/tt&amp;gt; expression referring to an undeclared selector is found. &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The use of aggressive warnings will produce spurious noise. The noise is a tradeoff - you can learn of potential problems at the cost of wading through some chaff. The following will help reduce spurious noise from the warning system:&lt;br /&gt;
&lt;br /&gt;
* -Wno-unused-parameter (GCC)&lt;br /&gt;
* -Wno-type-limits (GCC 4.3)&lt;br /&gt;
* -Wno-tautological-compare (Clang)&lt;br /&gt;
&lt;br /&gt;
Finally, a simple version-based Makefile example is shown below. This is different from the feature-based makefiles produced by Autotools (which test for a particular feature and then define a symbol or configure a template file). Not all platforms use all options and flags. To address the issue, you can pursue one of two strategies: you can ship with a weakened posture by servicing the lowest common denominator, or you can ship with everything in force. In the latter case, those who don't have a feature available will edit the makefile to accommodate their installation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;CXX=g++&lt;br /&gt;
EGREP = egrep&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GCC_COMPILER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version')&lt;br /&gt;
GCC41_OR_LATER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version (4\.[1-9]|[5-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GNU_LD210_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[0-9]|2\.[2-9])')&lt;br /&gt;
GNU_LD214_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[4-9]|2\.[2-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC_COMPILER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wall -Wextra -Wconversion&lt;br /&gt;
  MY_CC_FLAGS += -Wformat=2 -Wformat-security&lt;br /&gt;
  MY_CC_FLAGS += -Wno-unused-parameter&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC41_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fstack-protector-all&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC42_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wstrict-overflow&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC43_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wtrampolines&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD210_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,nodlopen -z,nodump&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD214_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,noexecstack -z,noexecheap&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD215_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,relro -z,now&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD216_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fPIE&lt;br /&gt;
  MY_LD_FLAGS += -pie&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
# Use 'override' to honor the user's command line&lt;br /&gt;
override CFLAGS := $(MY_CC_FLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(MY_CC_FLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(MY_LD_FLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang/Xcode ===&lt;br /&gt;
&lt;br /&gt;
[http://clang.llvm.org Clang] and [http://llvm.org LLVM] have been aggressively developed since Apple lost its GPL compiler back in 2007 (due to Tivoization, which resulted in GPLv3). Since that time, a number of developers and Google have joined the effort. While Clang will consume most (all?) GCC/Binutils flags and switches, the project supports a number of its own options, including a static analyzer. In addition, Clang is relatively easy to build with additional diagnostics, such as Dr. John Regehr and Peng Li's [http://embed.cs.utah.edu/ioc/ Integer Overflow Checker (IOC)].&lt;br /&gt;
&lt;br /&gt;
IOC is incredibly useful, and has found bugs in a number of projects, including the Linux kernel (&amp;lt;tt&amp;gt;include/linux/bitops.h&amp;lt;/tt&amp;gt;, still unfixed), SQLite, PHP, Firefox (many still unfixed), LLVM, and Python. Future versions of Clang (Clang 3.3 and above) will allow you to enable the checks out of the box with &amp;lt;tt&amp;gt;-fsanitize=integer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=shift&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Clang options can be found in the [http://clang.llvm.org/docs/UsersManual.html Clang Compiler User’s Manual]. Clang does include an option to turn on all warnings - &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt;. Use it with care (it will return a lot of noise), but use it regularly, since it will also surface issues you missed. For example, add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; for production builds and make non-spurious issues a quality gate. Under Xcode, simply add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to compiler warnings, both static analysis and additional security checks can be performed. Documentation on Clang's static analysis capabilities can be found at the [http://clang-analyzer.llvm.org Clang Static Analyzer] site. Figure 1 below shows some of the security checks utilized by Xcode.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-11.png|thumb|450px|Figure 1: Clang/LLVM and Xcode options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a convenient Integrated Development Environment (IDE) for managing solutions and their settings. The section called “Visual Studio Options” discusses options which should be used with Visual Studio, and the section called “Project Properties” demonstrates incorporating those options into a solution's project.&lt;br /&gt;
&lt;br /&gt;
The table below lists the compiler and linker switches which should be used under Visual Studio. Refer to Howard and LeBlanc's ''Writing Secure Code'' (Microsoft Press) for a detailed discussion; or ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]'' in Security Briefs by Michael Howard. In the table below, “Visual Studio” refers to nearly all versions of the development environment, including Visual Studio 5.0 and 6.0.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, those settings can be verified with BinScope. BinScope is a verification tool from Microsoft that analyzes binaries to ensure that they have been built in compliance with Microsoft's Security Development Lifecycle (SDLC) requirements and recommendations. See the ''[https://www.microsoft.com/download/en/details.aspx?id=11910 BinScope Binary Analyzer]'' download page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 5: Visual Studio Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;150pt&amp;quot;|&amp;lt;nowiki&amp;gt;/W4&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;100pt&amp;quot;|Visual Studio&lt;br /&gt;
|width=&amp;quot;350pt&amp;quot;|Warning level 4, which includes most warnings.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/Wall&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Enable all warnings, including those off by default.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/GS&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Adds a security cookie (guard or canary) on the stack before the return address to detect stack-based buffer overflows.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/SafeSEH&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Safe structured exception handling to remediate SEH overwrites.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/analyze&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Enterprise code analysis (freely available with Windows SDK for Windows Server 2008 and .NET Framework 3.5).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/NXCOMPAT&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Data Execution Prevention (DEP).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/dynamicbase&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Address Space Layout Randomization (ASLR).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;strict_gs_check&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Aggressively applies stack protections to a source file to help detect some categories of stack based buffer overruns.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;See Jon Sturgeon's discussion of the switch at ''[https://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx Off By Default Compiler Warnings in Visual C++]''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;When using /GS, there are a number of circumstances which affect the inclusion of a security cookie. For example, the guard is not used if there is no buffer in the stack frame, optimizations are disabled, or the function is declared naked or contains inline assembly.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;tt&amp;gt;#pragma strict_gs_check(on)&amp;lt;/tt&amp;gt; should be used sparingly, but is recommend in high risk situations, such as when a source file parses input from the internet.&lt;br /&gt;
&lt;br /&gt;
=== Warn Suppression ===&lt;br /&gt;
&lt;br /&gt;
From the tables above, a lot of warnings have been enabled to help detect possible programming mistakes. The potential mistakes are detected by the compiler, which carries around a lot of contextual information during its code analysis phase. At times, you will receive spurious warnings because the compiler is not ''that'' smart. It's understandable and even a good thing (how would you like to be out of a job because a program writes its own programs?). At times you will have to learn how to work with the compiler's warning system to suppress warnings. Notice what was not said: turn off the warnings.&lt;br /&gt;
&lt;br /&gt;
Suppressing warnings placates the compiler for spurious noise so you can get to the issues that matter (you are separating the wheat from the chaff). This section will offer some hints and point out some potential minefields. First is an unused parameter (for example, &amp;lt;tt&amp;gt;argc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;argv&amp;lt;/tt&amp;gt;). Suppressing unused parameter warnings is especially helpful for C++ and interface programming, where parameters are often unused. For this warning, simply define an &amp;quot;UNUSED&amp;quot; macro and wrap the parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#define UNUSED_PARAMETER(x) ((void)x)&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    UNUSED_PARAMETER(argc);&lt;br /&gt;
    UNUSED_PARAMETER(argv);&lt;br /&gt;
    …&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A potential minefield lies near &amp;quot;comparing unsigned and signed&amp;quot; values, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will catch it for you. This is because C/C++ promotion rules state the signed value will be promoted to an unsigned value and then compared. That means &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! To fix this, you cannot blindly cast - you must first range test the value:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int x = GetX();&lt;br /&gt;
unsigned int y = GetY();&lt;br /&gt;
&lt;br /&gt;
ASSERT(x &amp;gt;= 0);&lt;br /&gt;
if(!(x &amp;gt;= 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? X is negative.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
if(static_cast&amp;lt;unsigned int&amp;gt;(x) &amp;gt; y)&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is greater than y&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
else&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is not greater than y&amp;quot; &amp;lt;&amp;lt; endl;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;x&amp;lt;/tt&amp;gt;. Just run the program and wait for it to tell you there is a problem. If there is a problem, the program will snap the debugger (and more importantly, not call a useless &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; as specified by Posix). It beats the snot out of &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; statements that are removed when no longer needed or that pollute output.&lt;br /&gt;
&lt;br /&gt;
Another conversion problem you will encounter is conversion between types, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will also catch it for you. The following will always have an opportunity to fail, and should light up like a Christmas tree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct sockaddr_in addr;&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
addr.sin_port = htons(atoi(argv[2]));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following would probably serve you much better. Notice &amp;lt;tt&amp;gt;atoi&amp;lt;/tt&amp;gt; and friends are not used because they can silently fail. In addition, the code is instrumented so you don't need to waste a lot of time debugging potential problems:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;const char* cstr = GetPortString();&lt;br /&gt;
&lt;br /&gt;
ASSERT(cstr != NULL);&lt;br /&gt;
if(!(cstr != NULL))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port string is not valid.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
istringstream iss(cstr);&lt;br /&gt;
long long t = 0;&lt;br /&gt;
iss &amp;gt;&amp;gt; t;&lt;br /&gt;
&lt;br /&gt;
ASSERT(!(iss.fail()));&lt;br /&gt;
if(iss.fail())&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Failed to read port.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// Should this be a port above the reserved range ([0-1024] on Unix)?&lt;br /&gt;
ASSERT(t &amp;gt; 0);&lt;br /&gt;
if(!(t &amp;gt; 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too small&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
ASSERT(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max()));&lt;br /&gt;
if(!(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max())))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too large&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use port&lt;br /&gt;
unsigned short port = static_cast&amp;lt;unsigned short&amp;gt;(t);&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;. This code will continue checking conditions, years after being instrumented (assuming you wrote the code to read a config file early in the project). There's no need to remove the &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt;s as with &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; since they are silent guardians.&lt;br /&gt;
&lt;br /&gt;
Another useful suppression trick is to avoid ignoring return values. Not only is it useful to suppress the warning, it's required for correct code. For example, &amp;lt;tt&amp;gt;snprintf&amp;lt;/tt&amp;gt; will alert you to truncations through its return value. You should not make them silent truncations by ignoring the warning or casting to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;char path[PATH_MAX];&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int ret = snprintf(path, sizeof(path), &amp;quot;%s/%s&amp;quot;, GetDirectory(), GetObjectName());&lt;br /&gt;
ASSERT(ret != -1);&lt;br /&gt;
ASSERT(!(ret &amp;gt;= static_cast&amp;lt;int&amp;gt;(sizeof(path))));&lt;br /&gt;
&lt;br /&gt;
if(ret == -1 || ret &amp;gt;= static_cast&amp;lt;int&amp;gt;(sizeof(path)))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Unable to build full object name&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use path&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem is pandemic, and not just in boring user land programs. Projects which offer high integrity code, such as SELinux, suffer silent truncations. The following is from an approved SELinux patch, even though a comment was made that it [http://permalink.gmane.org/gmane.comp.security.selinux/16845 suffered silent truncations in its &amp;lt;tt&amp;gt;security_compute_create_name&amp;lt;/tt&amp;gt; function] from &amp;lt;tt&amp;gt;compute_create.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int security_compute_create_raw(security_context_t scon,&lt;br /&gt;
                                security_context_t tcon,&lt;br /&gt;
                                security_class_t   tclass,&lt;br /&gt;
                                security_context_t * newcon)&lt;br /&gt;
{&lt;br /&gt;
  char path[PATH_MAX];&lt;br /&gt;
  char *buf;&lt;br /&gt;
  size_t size;&lt;br /&gt;
  int fd, ret;&lt;br /&gt;
&lt;br /&gt;
  if (!selinux_mnt) {&lt;br /&gt;
    errno = ENOENT;&lt;br /&gt;
    return -1;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  snprintf(path, sizeof path, &amp;quot;%s/create&amp;quot;, selinux_mnt);&lt;br /&gt;
  fd = open(path, O_RDWR);&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike other examples, the above code will not debug itself, and you will have to set breakpoints and trace calls to determine the point of first failure. (And the code above gambles that the truncated file does not exist or is not under an adversary's control by blindly performing the &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
== Runtime ==&lt;br /&gt;
&lt;br /&gt;
The previous sections concentrated on setting up your project for success. This section will examine additional hints for running with increased diagnostics and defenses. Not all platforms are created equal - on GNU Linux it is difficult to impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a program after compiling and static linking], while Windows allows post-build hardening through a download. Remember, the goal is to find the point of first failure quickly so you can improve the reliability and security of the code.&lt;br /&gt;
&lt;br /&gt;
=== Xcode ===&lt;br /&gt;
&lt;br /&gt;
Xcode offers additional [http://developer.apple.com/library/mac/#recipes/xcode_help-scheme_editor/Articles/SchemeDiagnostics.html Application Diagnostics] that can help find memory errors and object use problems. Schemes can be managed through the ''Product'' menu item, ''Scheme'' submenu item, and then ''Edit''. From the editor, navigate to the ''Diagnostics'' tab. In the figure below, four additional instruments are enabled for the debugging cycle: Scribble guards, Edge guards, Malloc guards, and Zombies.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-1.png|thumb|450px|Figure 2: Xcode Memory Diagnostics]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is one caveat with using some of the guards: Apple only provides them for the simulator, and not a device. In the past, the guards were available for both devices and simulators.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a number of debugging aids for use during development. The aids are called [http://msdn.microsoft.com/en-us/library/d21c150d.aspx Managed Debugging Assistants (MDAs)]. You can find the MDAs on the ''Debug'' menu, then the ''Exceptions'' submenu. MDAs allow you to tune your debugging experience by, for example, filtering the exceptions for which the debugger should snap. For more details, see Stephen Toub's ''[http://msdn.microsoft.com/en-us/magazine/cc163606.aspx Let The CLR Find Bugs For You With Managed Debugging Assistants]''.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-2.png|thumb|450px|Figure 3: Managed Debugging Assistants]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Finally, for runtime hardening, Microsoft has a helpful tool called EMET, the [http://support.microsoft.com/kb/2458544 Enhanced Mitigation Experience Toolkit]. It allows you to apply runtime hardening to an executable which was built without it, and is very useful for utilities and other programs that were built without an SDLC.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-3.png|thumb|450px|Figure 4: Windows and EMET]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=147119</id>
		<title>C-Based Toolchain Hardening</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=147119"/>
				<updated>2013-03-08T18:00:17Z</updated>
		
		<summary type="html">&lt;p&gt;Jeffrey Walton: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening]] is a treatment of project settings that will help you deliver reliable and secure code when using the C, C++ and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++ and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Auto-configured projects and Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based projects. It's important to address the gaps at build time because it's difficult to impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
This is a prescriptive article, and it will not debate semantics or speculate on behavior. Some information, such as the C/C++ committee's motivation and pedigree for [https://groups.google.com/a/isocpp.org/forum/?fromgroups=#!topic/std-discussion/ak8e1mzBhGs &amp;quot;program diagnostics&amp;quot;, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;], appears to be lost like a tale in the Lord of the Rings. As such, the article will specify semantics (for example, the philosophy of 'debug' and 'release' build configurations), assign behaviors (for example, what an assert should do in 'debug' and 'release' build configurations), and present a position. If you find the posture is too aggressive, then you should back off as required to suit your taste.&lt;br /&gt;
&lt;br /&gt;
A secure toolchain is not a silver bullet. It is one piece of an overall strategy in the engineering process to help ensure success. It will complement existing processes such as static analysis, dynamic analysis, secure coding, negative test suites, and the like. Tools such as Valgrind and Helgrind will still be needed. And a project will still require solid designs and architectures.&lt;br /&gt;
&lt;br /&gt;
Finally, the OWASP [http://code.google.com/p/owasp-esapi-cplusplus/source ESAPI C++] project eats its own dog food. Many of the examples you will see in this article come directly from the ESAPI C++ project.&lt;br /&gt;
&lt;br /&gt;
== Wisdom ==&lt;br /&gt;
&lt;br /&gt;
Code '''must''' be correct. It '''should''' be secure. It '''can''' be efficient.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Jon_Bentley Dr. Jon Bentley]: ''&amp;quot;If it doesn't have to be correct, I can make it as fast as you'd like it to be&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Gary_McGraw Dr. Gary McGraw]: ''&amp;quot;Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Configuration is the first opportunity to set your project up for success. Not only do you have to configure your project to meet reliability and security goals, you must also configure integrated libraries properly. You typically have three choices. First, you can use auto-configuration utilities if on Linux or Unix. Second, you can write a makefile by hand. This is predominant on Linux, Mac OS X, and Unix, but it applies to Windows as well. Finally, you can use an integrated development environment or IDE.&lt;br /&gt;
&lt;br /&gt;
=== Build Configurations ===&lt;br /&gt;
&lt;br /&gt;
At this stage in the process, you should concentrate on configuring for two builds: Debug and Release. Debug will be used for development and include full instrumentation. Release will be configured for production. The difference between the two settings is usually ''optimization level'' and ''debug level''. A third build configuration is Test, and its usually a special case of Release.&lt;br /&gt;
&lt;br /&gt;
For debug and release builds, the settings are typically diametrically opposed. Debug configurations have no optimizations and full debug information; while Release builds have optimizations and minimal to moderate debug information. In addition, debug code has full assertions and additional library integration, such as mudflaps and malloc guards such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The Test configuration is often a Release configuration that makes everything public for testing and builds a test harness. For example, all member functions (C++ classes) and all interfaces (libraries or shared objects) should be made available for testing. Many object oriented purists oppose testing private interfaces, but this is not about object oriented-ness. This is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
[http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8] introduced an optimization level of &amp;lt;tt&amp;gt;-Og&amp;lt;/tt&amp;gt;. Note that it only sets the optimization level, and a customary debug level via &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt; is still required.&lt;br /&gt;
&lt;br /&gt;
==== Debug Builds ====&lt;br /&gt;
&lt;br /&gt;
Debug builds are where developers spend most of their time when vetting problems, so this build should concentrate forces and tools or be a 'force multiplier'. Though many do not realize it, debug code is more highly valued than release code because it is adorned with additional instrumentation. The debug instrumentation will cause a program to become nearly &amp;quot;self-debugging&amp;quot;, and help you catch mistakes such as bad parameters, failed API calls, and memory problems.&lt;br /&gt;
&lt;br /&gt;
Self-debugging code reduces your time spent troubleshooting and debugging. Reducing time under the debugger means you have more time for development and feature requests. If code is checked in without debug instrumentation, it should be fixed by adding instrumentation or rejected.&lt;br /&gt;
&lt;br /&gt;
For GCC, optimizations and debug symbolication are controlled through two switches: &amp;lt;tt&amp;gt;-O&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;. You should use the following as part of your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a minimal debug session:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-O0 -g3 -ggdb&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; turns off optimizations and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debug information is available. You may need to use &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; so some analysis is performed; otherwise, your debug build will be missing a number of warnings not present in release builds. &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debugging information is available for the debug session, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;. &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; includes extensions to help with a debug session under GDB. For completeness, Jan Krachtovil stated in a private email that &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; currently has no effect.&lt;br /&gt;
&lt;br /&gt;
Debug builds should also define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is not defined. &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; removes &amp;quot;program diagnostics&amp;quot; and has undesirable behavior and side effects, which are discussed below in more detail. The defines should be present for all code, and not just the program. You use them for all code (your program and included libraries) because you need to know how the libraries fail too (remember, you take the bug report - not the third party library).&lt;br /&gt;
&lt;br /&gt;
In addition, you should also use other relevant flags, such as &amp;lt;tt&amp;gt;-fno-omit-frame-pointer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=address&amp;lt;/tt&amp;gt;. Finally, you should also ensure your project includes additional diagnostic libraries, such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;. The additional flags and libraries are discussed below in more detail.&lt;br /&gt;
&lt;br /&gt;
==== Release Builds ====&lt;br /&gt;
&lt;br /&gt;
Release builds are what your customer receives. They are meant to be run on production hardware and servers, and they should be reliable, secure, and efficient. A stable release build is the product of the hard work and effort during development.&lt;br /&gt;
&lt;br /&gt;
For release builds, you should use the following as part of &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for release builds:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-On -g2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O''n''&amp;lt;/tt&amp;gt; sets optimizations for speed or size (for example, &amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;), and &amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt; ensures debugging information is created.&lt;br /&gt;
&lt;br /&gt;
Debugging information should be stripped from the executable and retained, in case symbolication of a crash report from the field is needed. While not desired, debug information can be left in place without a performance penalty. See ''[http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]'' for details.&lt;br /&gt;
&lt;br /&gt;
Release builds should also define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is not defined. The time for debugging and diagnostics is over, so users get production code with full optimizations, no &amp;quot;program diagnostics&amp;quot;, and other efficiencies. If you can't optimize or you are performing excessive logging, it usually means the program is not ready for production.&lt;br /&gt;
&lt;br /&gt;
If you have been relying on an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and then a subsequent &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;, you have been abusing &amp;quot;program diagnostics&amp;quot; since it has no place in production code. If you want a memory dump, create one so users don't have to worry about secrets and other sensitive information being written to the filesystem and emailed in plain text.&lt;br /&gt;
&lt;br /&gt;
For Windows, you would use &amp;lt;tt&amp;gt;/Od&amp;lt;/tt&amp;gt; for debug builds; and &amp;lt;tt&amp;gt;/Ox&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/O2&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/Os&amp;lt;/tt&amp;gt; for release builds. See Microsoft's [http://msdn.microsoft.com/en-us/library/k1ack8f1.aspx /O Options (Optimize Code)] for details.&lt;br /&gt;
&lt;br /&gt;
==== Test Builds ====&lt;br /&gt;
&lt;br /&gt;
Test builds are used to provide heuristic validation by way of positive and negative test suites. Under a test configuration, all interfaces are tested to ensure they perform to specification and satisfaction. &amp;quot;Satisfaction&amp;quot; is subjective, but it should include no crashing and no trashing of your memory arena, even when faced with negative tests.&lt;br /&gt;
&lt;br /&gt;
Because all interfaces are tested (and not just the public ones), your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; should include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should also change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Nearly everyone gets the positive tests right, so no more needs to be said. The negative self tests are much more interesting, and you should concentrate on trying to make your program fail so you can verify it fails gracefully. Remember, a bad guy is not going to be courteous when he attempts to cause your program to fail. And it's your project that takes egg on the face by way of a bug report or a guest appearance on [http://www.grok.org.uk/full-disclosure/ Full Disclosure] or [http://www.securityfocus.com/archive Bugtraq] - not ''&amp;lt;nowiki&amp;gt;&amp;lt;some library&amp;gt;&amp;lt;/nowiki&amp;gt;'' you included.&lt;br /&gt;
&lt;br /&gt;
=== Auto Tools ===&lt;br /&gt;
&lt;br /&gt;
Auto configuration tools are popular on many Linux and Unix based systems, and the tools include ''Autoconf'', ''Automake'', ''config'', and ''Configure''. The tools work together to produce project files from scripts and template files. After the process completes, your project should be set up and ready to be made with &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When using auto configuration tools, there are a few files of interest worth mentioning. The files are part of the auto tools chain and include &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; and the various &amp;lt;tt&amp;gt;*.in&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;*.ac&amp;lt;/tt&amp;gt; (autoconf), and &amp;lt;tt&amp;gt;*.am&amp;lt;/tt&amp;gt; (automake) files. At times, you will have to open them, or the resulting makefiles, to tune the &amp;quot;stock&amp;quot; configuration.&lt;br /&gt;
&lt;br /&gt;
There are three downsides to the command line configuration tools in the toolchain: (1) they often ignore user requests, (2) they cannot create configurations, and (3) security is often not a goal.&lt;br /&gt;
&lt;br /&gt;
To demonstrate the first issue, configure your project with the following: &amp;lt;tt&amp;gt;configure CFLAGS=&amp;quot;-Wall -fPIE&amp;quot; CXXFLAGS=&amp;quot;-Wall -fPIE&amp;quot; LDFLAGS=&amp;quot;-pie&amp;quot;&amp;lt;/tt&amp;gt;. You will probably find the auto tools ignored your request, which means a command like the one below will not produce the expected results. As a workaround, you will have to open the &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; scripts, &amp;lt;tt&amp;gt;Makefile.in&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;Makefile.am&amp;lt;/tt&amp;gt;, and fix the configuration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ configure CFLAGS=&amp;quot;-Wall -Wextra -Wconversion -fPIE -Wno-unused-parameter&lt;br /&gt;
    -Wformat=2 -Wformat-security -fstack-protector-all -Wstrict-overflow&amp;quot;&lt;br /&gt;
    LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap -z,relro -z,now&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the second point, you will probably be disappointed to learn [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html Automake does not support the concept of configurations]. It's not entirely Autoconf's or Automake's fault - ''Make'' and its inability to detect changes is the underlying problem. Specifically, ''Make'' only [http://pubs.opengroup.org/onlinepubs/009695399/utilities/make.html checks modification times of prerequisites and targets], and does not check things like &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;. The net effect is you will not receive the expected results when you issue &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make test&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, you will probably be disappointed to learn tools such as Autoconf and Automake miss many security related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings that serve the lowest common denominator.&lt;br /&gt;
&lt;br /&gt;
A recent discussion on the Automake mailing list illuminates the issue: ''[https://lists.gnu.org/archive/html/autoconf/2012-12/msg00038.html Enabling compiler warning flags]''. Attempts to improve default configurations were met with resistance and no action was taken. The resistance is often of the form, &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some useful warning&amp;gt;&amp;lt;/nowiki&amp;gt; also produces false positives&amp;quot; or &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some obscure platform&amp;gt;&amp;lt;/nowiki&amp;gt; does not support &amp;lt;nowiki&amp;gt;&amp;lt;established security feature&amp;gt;&amp;lt;/nowiki&amp;gt;&amp;quot;. It's noteworthy that David Wheeler, the author of ''[http://www.dwheeler.com/secure-programs/ Secure Programming for Linux and Unix HOWTO]'', was one of the folks trying to improve the posture.&lt;br /&gt;
&lt;br /&gt;
=== Makefiles ===&lt;br /&gt;
&lt;br /&gt;
Make is one of the earliest build systems, dating back to the 1970s. It's available on Linux, Mac OS X and Unix, so you will frequently encounter projects using it. Unfortunately, Make has a number of shortcomings (''[http://aegis.sourceforge.net/auug97.pdf Recursive Make Considered Harmful]'' and ''[http://www.conifersystems.com/whitepapers/gnu-make/ What’s Wrong With GNU make?]''), and can cause some discomfort. Despite the issues with Make, ESAPI C++ uses Make primarily for three reasons: first, it's omnipresent; second, it's easier to manage than the Auto Tools family; and third, &amp;lt;tt&amp;gt;libtool&amp;lt;/tt&amp;gt; was out of the question.&lt;br /&gt;
&lt;br /&gt;
Consider what happens when you (1) type &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt;, and then (2) type &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;. Each build requires different &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; due to optimizations and the level of debug support. In your makefile, you would extract the relevant target and set &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; similar to below (taken from the [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/Makefile ESAPI C++ Makefile]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Makefile&lt;br /&gt;
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)&lt;br /&gt;
ifneq ($(DEBUG_GOALS),)&lt;br /&gt;
  WANT_DEBUG := 1&lt;br /&gt;
  WANT_TEST := 0&lt;br /&gt;
  WANT_RELEASE := 0&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_DEBUG),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
  ESAPI_CXXFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_RELEASE),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
  ESAPI_CXXFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_TEST),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
  ESAPI_CXXFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure &lt;br /&gt;
# user options follow our options, which should give user option's a preference.&lt;br /&gt;
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(ESAPI_LDFLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make will first build the program in a debug configuration for a session under the debugger using a rule similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;%.o : %.cpp&lt;br /&gt;
        $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $&amp;lt; -o $@&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you want the release build, Make will do nothing because it considers everything up to date despite the fact &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; have changed. Hence, your program will actually be in a debug configuration and risk a &amp;lt;tt&amp;gt;SIGABRT&amp;lt;/tt&amp;gt; at runtime because debug instrumentation is present (recall &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; calls &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; when &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined). In essence, you have DoS'd yourself due to &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
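&lt;br /&gt;
One well-known workaround (a sketch, and not part of the ESAPI C++ makefile) is to record the flags in a stamp file that every object depends on, so a configuration change forces a rebuild:&lt;br /&gt;
&lt;br /&gt;
```make
# Sketch (not part of the ESAPI C++ makefile): record the flags in a stamp
# file and make objects depend on it, so a configuration change forces a
# rebuild even though Make only compares modification times.
FLAGS_STAMP := .flags-$(shell echo '$(CFLAGS) $(CXXFLAGS)' | cksum | cut -d' ' -f1)

$(FLAGS_STAMP):
	rm -f .flags-*
	touch $@

%.o: %.cpp $(FLAGS_STAMP)
	$(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $< -o $@
```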
&lt;br /&gt;
In addition, many projects do not honor the user's command line. ESAPI C++ does its best to ensure a user's flags are honored via &amp;lt;tt&amp;gt;override&amp;lt;/tt&amp;gt; as shown above, but other projects do not. For example, consider a project that should be built with Position Independent Executable (PIE or ASLR) enabled and data execution prevention (DEP) enabled. Dismissing user settings combined with insecure out of the box settings (and not picking them up during auto-setup or auto-configure) means a program built with the following will likely have neither defense:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ make CFLAGS=&amp;quot;-fPIE&amp;quot; CXXFLAGS=&amp;quot;-fPIE&amp;quot; LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Defenses such as ASLR and DEP are especially important on Linux because [http://linux.die.net/man/5/elf Data Execution - not Prevention - is the norm].&lt;br /&gt;
&lt;br /&gt;
=== Integration ===&lt;br /&gt;
&lt;br /&gt;
Project level integration presents opportunities to harden your program or library with domain specific knowledge. For example, if the platform supports Position Independent Executables (PIE or ASLR) and data execution prevention (DEP), then you should integrate with them. Not doing so could result in exploitation. As a case in point, see KingCope's 0-days for MySQL in December, 2012 (CVE-2012-5579 and CVE-2012-5612, among others). Integration with platform security would have neutered a number of the 0-days.&lt;br /&gt;
&lt;br /&gt;
You also have the opportunity to include helpful libraries that are not needed for business logic support. For example, if you are working on a platform with [http://dmalloc.com DMalloc] or [http://code.google.com/p/address-sanitizer/ Address Sanitizer], you should probably use them in your debug builds. On Ubuntu, DMalloc is available from the package manager and can be installed with &amp;lt;tt&amp;gt;sudo apt-get install libdmalloc5&amp;lt;/tt&amp;gt;. On Apple platforms, it's available as a scheme option (see [[#Clang/Xcode|Clang/Xcode]] below). Address Sanitizer is available in [http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8 and above] for many platforms.&lt;br /&gt;
&lt;br /&gt;
In addition, project level integration is an opportunity to harden the third party libraries you chose to include. Because you chose to include them, you and your users are responsible for them. If you or your users endure an SP800-53 audit, third party libraries will be in scope because the supply chain is included (specifically, item SA-12, Supply Chain Protection). The audits are not limited to those in the US Federal arena - financial institutions perform reviews too. A perfect example of violating this guidance is [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525], which was due to [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library].&lt;br /&gt;
&lt;br /&gt;
Another example is including OpenSSL. You know (1) [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 is insecure], (2) [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 is insecure], and (3) [http://arstechnica.com/security/2012/09/crime-hijacks-https-sessions/ compression is insecure] (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ Configure darwin64-x86_64-cc -no-hw -no-engines -no-comp -no-shared -no-dso -no-sslv2 -no-sslv3 --openssldir=…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note Well'': you might want engines, especially on Ivy Bridge microarchitectures (3rd generation Intel Core i5 and i7 processors). To have OpenSSL use the processor's random number generator (via the &amp;lt;tt&amp;gt;rdrand&amp;lt;/tt&amp;gt; instruction), you will need to call OpenSSL's &amp;lt;tt&amp;gt;ENGINE_load_rdrand()&amp;lt;/tt&amp;gt; function and then &amp;lt;tt&amp;gt;ENGINE_set_default&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;ENGINE_METHOD_RAND&amp;lt;/tt&amp;gt;. See [http://wiki.opensslfoundation.com/index.php/Random_Numbers OpenSSL's Random Numbers] for details.&lt;br /&gt;
&lt;br /&gt;
If you configure without the switches, then you will likely have vulnerable code/libraries and risk failing an audit. If the program is a remote server, then the following command will reveal if compression is active on the channel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo &amp;quot;GET / HTTP1.0&amp;quot; | openssl s_client -connect &amp;lt;nowiki&amp;gt;example.com:443&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;openssl s_client&amp;lt;/tt&amp;gt; will also show whether compression is enabled in the client. In fact, any symbol guarded by the &amp;lt;tt&amp;gt;OPENSSL_NO_COMP&amp;lt;/tt&amp;gt; preprocessor macro will bear witness, since &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; is translated into a &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; define.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ nm /usr/local/ssl/iphoneos/lib/libcrypto.a 2&amp;gt;/dev/null | egrep -i &amp;quot;(COMP_CTX_new|COMP_CTX_free)&amp;quot;&lt;br /&gt;
0000000000000110 T COMP_CTX_free&lt;br /&gt;
0000000000000000 T COMP_CTX_new&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even more egregious is the answer given to auditors who specifically ask about configurations and protocols: &amp;quot;we don't use weak/wounded/broken ciphers&amp;quot; or &amp;quot;we follow best practices.&amp;quot; The use of compression tells the auditor that you are using a wounded protocol in an insecure configuration and that you don't follow best practices. That will likely set off alarm bells, and ensure the auditor digs deeper into other items.&lt;br /&gt;
&lt;br /&gt;
== Preprocessor ==&lt;br /&gt;
&lt;br /&gt;
The preprocessor is crucial to setting up a project for success. The C committee provided one macro - &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; - and the macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing built-in facilities. This section will help you set up your projects to integrate well with other projects and to ensure reliability and security.&lt;br /&gt;
&lt;br /&gt;
There are three topics to discuss when hardening the preprocessor. The first is well defined configurations which produce well defined behaviors, the second is useful behavior from assert, and the third is proper use of macros when integrating vendor code and third party libraries.&lt;br /&gt;
&lt;br /&gt;
=== Configurations ===&lt;br /&gt;
&lt;br /&gt;
To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code on live servers, and its behavior is requested via the C/C++ &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; macro. It's also the only macro observed by the C and C++ committees and Posix. Diametrically opposed to Release is Debug. While there is a compelling argument for &amp;lt;tt&amp;gt;!defined(NDEBUG)&amp;lt;/tt&amp;gt;, you should have an explicit macro for the configuration, and that macro should be &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;. This is because vendors and outside libraries use a &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (or similar) macro for their configuration. For example, Carnegie Mellon's Mach kernel uses &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, Microsoft's CRT uses [http://msdn.microsoft.com/en-us/library/ww5t02fa%28v=vs.71%29.aspx &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt;], and Wind River Workbench uses &amp;lt;tt&amp;gt;DEBUG_MODE&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (Release) and &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (Debug), you have two additional cross products: both are defined or neither are defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/esapi/EsapiCommon.h ESAPI C++ EsapiCommon.h], which is the configuration file used by all source files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Only one or the other, but not both&lt;br /&gt;
#if (defined(DEBUG) || defined(_DEBUG)) &amp;amp;&amp;amp; (defined(NDEBUG) || defined(_NDEBUG))&lt;br /&gt;
# error Both DEBUG and NDEBUG are defined.&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
// The only time we switch to debug is when asked. NDEBUG or {nothing} results&lt;br /&gt;
// in release build (fewer surprises at runtime).&lt;br /&gt;
#if defined(DEBUG) || defined(_DEBUG)&lt;br /&gt;
# define ESAPI_BUILD_DEBUG 1&lt;br /&gt;
#else&lt;br /&gt;
# define ESAPI_BUILD_RELEASE 1&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is in effect, your code should receive full debug instrumentation, including the full force of assertions.&lt;br /&gt;
&lt;br /&gt;
=== ASSERT ===&lt;br /&gt;
&lt;br /&gt;
Asserts will help you create self-debugging code by helping you find the point of first failure quickly and easily. Asserts should be used throughout your program, including parameter validation, return value checking and program state. The &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; will silently guard your code through its lifetime. It will always be there, even when not debugging a specific component of a module. If you have thorough code coverage, you will spend less time debugging and more time developing because programs will debug themselves.&lt;br /&gt;
&lt;br /&gt;
To use asserts effectively, you should assert everything. That includes parameters upon entering a function, return values from function calls, and any program state. Everywhere you place an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation or checking, you should have an assert. Everywhere you have an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; for validation or checking, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand.&lt;br /&gt;
&lt;br /&gt;
If you are still using &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;'s, then you have an opportunity for improvement. In the time it takes for you to write a &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; statement, you could have written an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;. Unlike the &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; which are often removed when no longer needed, the &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; stays active forever. Remember, this is all about finding the point of first failure quickly so you can spend your time doing other things.&lt;br /&gt;
&lt;br /&gt;
There is one problem with using asserts - [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html Posix states &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; should call &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;] if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined. When debugging, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will never be defined since you want the &amp;quot;program diagnostics&amp;quot; (quote from the Posix description). The behavior makes &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and its accompanying &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; completely useless for development. The result of &amp;quot;program diagnostics&amp;quot; calling &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; due to standard C/C++ behavior is disuse - developers simply don't use them. It's incredibly bad for the development community because self-debugging programs can help eradicate so many stability problems.&lt;br /&gt;
&lt;br /&gt;
Since self-debugging programs are so powerful, you will have to supply your own assert and signal handler with improved behavior. Your assert will exchange auto-aborting behavior for auto-debugging behavior. The auto-debugging facility will ensure the debugger snaps when a problem is detected, so you will find the point of first failure quickly and easily.&lt;br /&gt;
&lt;br /&gt;
ESAPI C++ supplies its own assert with the behavior described above. In the code below, &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt; raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; when in effect or it evaluates to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; in other cases.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// A debug assert which should be sprinkled liberally. This assert fires and then continues rather&lt;br /&gt;
// than calling abort(). Useful when examining negative test cases from the command line.&lt;br /&gt;
#if (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_STARNIX))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp) {                                    \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; std::endl;                                           \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) {                               \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; &amp;quot;: \&amp;quot;&amp;quot; &amp;lt;&amp;lt; (msg) &amp;lt;&amp;lt; &amp;quot;\&amp;quot;&amp;quot; &amp;lt;&amp;lt; std::endl;                \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#elif (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_WINDOWS))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      assert(exp)&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) assert(exp)&lt;br /&gt;
#else&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      ((void)(exp))&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) ((void)(exp))&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#if !defined(ASSERT)&lt;br /&gt;
#  define ASSERT(exp)     ESAPI_ASSERT1(exp)&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At program startup, a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; handler will be installed if one is not provided by another component:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct DebugTrapHandler&lt;br /&gt;
{&lt;br /&gt;
  DebugTrapHandler()&lt;br /&gt;
  {&lt;br /&gt;
    struct sigaction new_handler, old_handler;&lt;br /&gt;
&lt;br /&gt;
    do&lt;br /&gt;
      {&lt;br /&gt;
        int ret = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, NULL, &amp;amp;old_handler);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        // Don't step on another's handler&lt;br /&gt;
        if (old_handler.sa_handler != NULL) break;&lt;br /&gt;
&lt;br /&gt;
        new_handler.sa_handler = &amp;amp;DebugTrapHandler::NullHandler;&lt;br /&gt;
        new_handler.sa_flags = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigemptyset (&amp;amp;new_handler.sa_mask);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, &amp;amp;new_handler, NULL);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
      } while(0);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  static void NullHandler(int /*unused*/) { }&lt;br /&gt;
&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
// We specify a relatively low priority, to make sure we run before other CTORs&lt;br /&gt;
// http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes&lt;br /&gt;
static const DebugTrapHandler g_dummyHandler __attribute__ ((init_priority (110)));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a Windows platform, you would call &amp;lt;tt&amp;gt;_set_invalid_parameter_handler&amp;lt;/tt&amp;gt; (and possibly &amp;lt;tt&amp;gt;set_unexpected&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;set_terminate&amp;lt;/tt&amp;gt;) to install a new handler.&lt;br /&gt;
&lt;br /&gt;
Live hosts running production code should always define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (i.e., release configuration), which means they do not assert or auto-abort. Auto-aborting is not acceptable behavior, and anyone who asks for the behavior is completely abusing the functionality of &amp;quot;program diagnostics&amp;quot;. If a program wants a core dump, then it should create the dump rather than crashing.&lt;br /&gt;
&lt;br /&gt;
For more reading on asserting effectively, please see one of John Robbins' books, such as ''[http://www.amazon.com/dp/0735608865 Debugging Applications]''. John is a legendary bug slayer in Windows circles, and he will show you how to do nearly everything, from debugging a simple program to bug slaying in multithreaded programs.&lt;br /&gt;
&lt;br /&gt;
=== Additional Macros ===&lt;br /&gt;
&lt;br /&gt;
Additional macros include any macros needed to integrate properly and securely. It includes integrating the program with the platform (for example MFC or Cocoa/CocoaTouch) and libraries (for example, Crypto++ or OpenSSL). It can be a challenge because you have to have proficiency with your platform and all included libraries and frameworks. The list below illustrates the level of detail you will need when integrating.&lt;br /&gt;
&lt;br /&gt;
Boost is missing from the list because it appears to lack recommendations, additional debug diagnostics, and a hardening guide. See ''[http://stackoverflow.com/questions/14927033/boost-hardening-guide-preprocessor-macros BOOST Hardening Guide (Preprocessor Macros)]'' for details. In addition, Tim Day points to ''[http://boost.2283326.n4.nabble.com/boost-build-should-we-not-define-SECURE-SCL-0-by-default-for-all-msvc-toolsets-td2654710.html &amp;lt;nowiki&amp;gt;[boost.build] should we not define _SECURE_SCL=0 by default for all msvc toolsets&amp;lt;/nowiki&amp;gt;]'' for a recent discussion related to hardening (or lack thereof).&lt;br /&gt;
&lt;br /&gt;
In addition to knowing what you should define, you should treat defining some macros and undefining others as a security related defect. Examples include &amp;lt;tt&amp;gt;-U_FORTIFY_SOURCE&amp;lt;/tt&amp;gt; on Linux; and &amp;lt;tt&amp;gt;_CRT_SECURE_NO_WARNINGS=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_SCL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_ATL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;STRSAFE_NO_DEPRECATE&amp;lt;/tt&amp;gt; on Windows.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Platform/Library!!Debug!!Release&lt;br /&gt;
|+ Table 1: Additional Platform/Library Macros&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;175pt&amp;quot;|All&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|DEBUG=1&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|NDEBUG=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Linux&lt;br /&gt;
|_GLIBCXX_DEBUG=1&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
|_FORTIFY_SOURCE=2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Android&lt;br /&gt;
|NDK_DEBUG=1&lt;br /&gt;
|_FORTIFY_SOURCE=1 (4.2 and above)&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define LOGI(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt logging)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Cocoa/CocoaTouch&lt;br /&gt;
|&lt;br /&gt;
|NS_BLOCK_ASSERTIONS=1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define NSLog(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt ASL)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SafeInt&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft&lt;br /&gt;
|_DEBUG=1, STRICT,&amp;lt;br&amp;gt;&lt;br /&gt;
_SECURE_SCL=1, _HAS_ITERATOR_DEBUGGING=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
|STRICT&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft ATL &amp;amp; MFC&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|STLPort&lt;br /&gt;
|_STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1&amp;lt;br&amp;gt;&lt;br /&gt;
_STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite&lt;br /&gt;
|SQLITE_DEBUG, SQLITE_MEMDEBUG&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLCipher&lt;br /&gt;
|Remove '''&amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;''' from Debug builds (Xcode)&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_HAS_CODEC=1&lt;br /&gt;
|SQLITE_HAS_CODEC=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite/SQLCipher&lt;br /&gt;
|SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Be careful with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; when using pre-compiled libraries such as Boost from a distribution. There are ABI incompatibilities, and the result will likely be a crash. You will have to compile Boost with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; or omit &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; SQLite secure deletion zeroizes memory on destruction. Define as required, and always define it in the US Federal arena since zeroization is required for FIPS 140-2, Level 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt; ''N'' is 0644 by default, which means everyone has some access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt; Force temporary tables into memory (no unencrypted data to disk).&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
##########################################&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Compiler and Linker ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities to help find mistakes early in the development process. The built in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and catch a number of mistakes such as using an uninitialized variable or comparing a negative signed int and a positive unsigned int.&lt;br /&gt;
&lt;br /&gt;
As a concrete example, (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared because a side effect is &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! GCC and Visual Studio will not currently catch, for example, SQL injections and other tainted data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.&lt;br /&gt;
&lt;br /&gt;
Some in the development community resist static analysis or refute its results. For example, when static analysis warned the Linux kernel's &amp;lt;tt&amp;gt;sys_prctl&amp;lt;/tt&amp;gt; was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see ''[http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-11/msg08325.html &amp;lt;nowiki&amp;gt;[PATCH] Don't compare unsigned variable for &amp;lt;0 in sys_prctl()&amp;lt;/nowiki&amp;gt;]'' from the Linux Kernel mailing list.&lt;br /&gt;
&lt;br /&gt;
The following sections will detail steps for three platforms. First is a typical GNU Linux based distribution offering GCC and Binutils, second is Clang and Xcode, and third is modern Windows platforms.&lt;br /&gt;
&lt;br /&gt;
=== Distribution Hardening ===&lt;br /&gt;
&lt;br /&gt;
Before discussing GCC and Binutils, it is a good time to point out that some of the defenses discussed below are already present in a distribution. Unfortunately, it's design by committee, so what is present is usually only a mild variation of what is available (this way, everyone is mildly offended). For those who are purely worried about performance, you might be surprised to learn you have already taken the small performance hit without even knowing.&lt;br /&gt;
&lt;br /&gt;
Linux and BSD distributions often apply some hardening without intervention via ''[http://gcc.gnu.org/onlinedocs/gcc/Spec-Files.html GCC Spec Files]''. If you are using Debian, Ubuntu, Linux Mint and family, see ''[http://wiki.debian.org/Hardening Debian Hardening]''. For Red Hat and Fedora systems, see ''[http://lists.fedoraproject.org/pipermail/devel-announce/2011-August/000821.html New hardened build support (coming) in F16]''. Gentoo users should visit ''[http://www.gentoo.org/proj/en/hardened/ Hardened Gentoo]''.&lt;br /&gt;
&lt;br /&gt;
You can see the settings being used by a distribution via &amp;lt;tt&amp;gt;gcc -dumpspecs&amp;lt;/tt&amp;gt;. In the Linux Mint 12 output below, -fstack-protector (but not -fstack-protector-all) is used by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -dumpspecs&lt;br /&gt;
…&lt;br /&gt;
*link_ssp: %{fstack-protector:}&lt;br /&gt;
&lt;br /&gt;
*ssp_default: %{!fno-stack-protector:%{!fstack-protector-all: %{!ffreestanding:%{!nostdlib:-fstack-protector}}}}&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The “SSP” above stands for Stack Smashing Protector. SSP is a reimplementation of Hiroaki Etoh's work on IBM's ProPolice stack protector. See Hiroaki Etoh's patch ''[http://gcc.gnu.org/ml/gcc-patches/2001-06/msg01753.html gcc stack-smashing protector]'' and IBM's ''[http://www.research.ibm.com/trl/projects/security/ssp/ GCC extension for protecting applications from stack-smashing attacks]'' for details.&lt;br /&gt;
&lt;br /&gt;
=== GCC/Binutils ===&lt;br /&gt;
&lt;br /&gt;
GCC (the compiler collection) and Binutils (the assemblers, linkers, and other tools) are separate projects that work together to produce a final executable. Both the compiler and linker offer options to help you write safer and more secure code. The linker will produce code which takes advantage of platform security features offered by the kernel and PaX, such as no-exec stacks and heaps (NX) and Position Independent Executable (PIE).&lt;br /&gt;
&lt;br /&gt;
The table below offers a set of compiler options to build your program. Static analysis warnings help catch mistakes early, while the linker options harden the executable at runtime. In the table below, “GCC” should be loosely taken as “non-ancient distributions.” While the GCC team considers 4.2 ancient, you will still encounter it on Apple and BSD platforms due to changes in GPL licensing around 2007. Refer to ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Option Summary]'', ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html Options to Request or Suppress Warnings]'' and ''[http://sourceware.org/binutils/docs-2.21/ld/Options.html Binutils (LD) Command Line Options]'' for usage details.&lt;br /&gt;
&lt;br /&gt;
Worthy of special mention are &amp;lt;tt&amp;gt;-fno-strict-overflow&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fwrapv&amp;lt;/tt&amp;gt;&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;. The flags ensure the compiler does not remove statements that result in overflow or wrap. If your program only runs correctly with the flags, it likely violates the C/C++ rules on overflow and is illegal. If the program depends on overflow or wrapping behavior, you should consider using [http://code.google.com/p/safe-iop/ safe-iop] for C or David LeBlanc's [http://safeint.codeplex.com SafeInt] for C++.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, some of those settings can be verified with the [http://www.trapkit.de/tools/checksec.html Checksec] tool written by Tobias Klein. The &amp;lt;tt&amp;gt;checksec.sh&amp;lt;/tt&amp;gt; script is designed to test standard Linux OS and PaX security features being used by an application. See the [http://www.trapkit.de/tools/checksec.html Trapkit] web page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 2: GCC C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wall -Wextra&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;75pt&amp;quot;|GCC&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Enables many warnings (despite their names, all and extra do not turn on all warnings).&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wconversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may alter a value (includes -Wsign-conversion).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-conversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may change the sign of an integer value, such as assigning a signed integer to an unsigned integer (&amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion!).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wcast-align&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for a pointer cast to a type which has a different size, causing an invalid alignment and subsequent bus error on ARM processors.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wformat=2 -Wformat-security&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Increases warnings related to possible security defects, including incorrect format specifiers.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-common&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Prevent global variables being simultaneously defined in different object files.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fstack-protector or -fstack-protector-all&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Stack Smashing Protector (SSP). Improves stack layout and adds a guard to detect stack based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-omit-frame-pointer&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Improves backtraces for post-mortem analysis&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wmissing-prototypes and -Wmissing-declarations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a global function is defined without a prototype or declaration.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-prototypes&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a function is declared or defined without specifying the argument types.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-overflow&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.2&lt;br /&gt;
|Warn about optimizations taken due to &amp;lt;nowiki&amp;gt;[undefined]&amp;lt;/nowiki&amp;gt; signed integer overflow assumptions.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wtrampolines&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.3&lt;br /&gt;
|Warn about trampolines generated for pointers to nested functions. Trampolines require executable stacks.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=address&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/address-sanitizer/ AddressSanitizer], a fast memory error detector. Memory access instructions will be instrumented to help detect heap, stack, and global buffer overflows; as well as use-after-free bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=thread&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/data-race-test/wiki/ThreadSanitizer ThreadSanitizer], a fast data race detector. Memory access instructions will be instrumented to detect data race bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,nodlopen and -Wl,-z,nodump&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.10&lt;br /&gt;
|Reduces the ability of an attacker to load, manipulate, and dump shared objects.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,noexecstack and -Wl,-z,noexecheap&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.14&lt;br /&gt;
|Data Execution Prevention (DEP). ELF headers are marked with PT_GNU_STACK and PT_GNU_HEAP.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,relro&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Global Offset Table (GOT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,now&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Procedure Linkage Table (PLT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIC&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils&lt;br /&gt;
|Position Independent Code. Used for libraries and shared objects. Both -fPIC (compiler) and -shared (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIE&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.16&lt;br /&gt;
|Position Independent Executable (ASLR). Used for programs. Both -fPIE (compiler) and -pie (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Unlike Clang and -Weverything, GCC does not provide a switch to truly enable all warnings.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; -fstack-protector guards functions with high risk objects such as C strings, while -fstack-protector-all guards all objects.&lt;br /&gt;
&lt;br /&gt;
Additional C++ warnings which can be used include the following in Table 3. See ''[http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html GCC's Options Controlling C++ Dialect]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 3: GCC C++ Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Woverloaded-virtual&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn when a function declaration hides virtual functions from a base class. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wreorder&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when the order of member initializers given in the code does not match the order in which they must be executed.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-promo&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wnon-virtual-dtor&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when a class has virtual functions and an accessible non-virtual destructor.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Weffc++&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn about violations of style guidelines from Scott Meyers' ''[http://www.aristeia.com/books.html Effective C++, Second Edition]'' book.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Additional Objective-C warnings which are often useful include the following. See ''[http://gcc.gnu.org/onlinedocs/gcc/Objective_002dC-and-Objective_002dC_002b_002b-Dialect-Options.html Options Controlling Objective-C and Objective-C++ Dialects]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 4: GCC Objective C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wstrict-selector-match&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type id or Class.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wundeclared-selector&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn if a &amp;lt;tt&amp;gt;@selector(…)&amp;lt;/tt&amp;gt; expression referring to an undeclared selector is found. &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The use of aggressive warnings will produce spurious noise. The noise is a tradeoff - you can learn of potential problems at the cost of wading through some chaff. The following will help reduce spurious noise from the warning system:&lt;br /&gt;
&lt;br /&gt;
* -Wno-unused-parameter (GCC)&lt;br /&gt;
* -Wno-type-limits (GCC 4.3)&lt;br /&gt;
* -Wno-tautological-compare (Clang)&lt;br /&gt;
&lt;br /&gt;
Finally, a simple version-based Makefile example is shown below. This differs from the feature-based makefiles produced by Autotools (which test for a particular feature and then define a symbol or configure a template file). Not all platforms support all options and flags. To address the issue you can pursue one of two strategies: ship with a weakened posture by servicing the lowest common denominator, or ship with everything in force. In the latter case, those who don't have a feature available will edit the makefile to accommodate their installation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;CXX=g++&lt;br /&gt;
EGREP = egrep&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GCC_COMPILER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version')&lt;br /&gt;
GCC41_OR_LATER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version (4\.[1-9]|[5-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GNU_LD210_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[0-9]|2\.[2-9])')&lt;br /&gt;
GNU_LD214_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[4-9]|2\.[2-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC_COMPILER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wall -Wextra -Wconversion&lt;br /&gt;
  MY_CC_FLAGS += -Wformat=2 -Wformat-security&lt;br /&gt;
  MY_CC_FLAGS += -Wno-unused-parameter&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC41_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fstack-protector-all&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC42_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wstrict-overflow&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC43_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wtrampolines&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD210_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,nodlopen -z,nodump&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD214_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,noexecstack -z,noexecheap&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD215_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,relro -z,now&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD216_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fPIE&lt;br /&gt;
  MY_LD_FLAGS += -pie&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
# Use 'override' to honor the user's command line&lt;br /&gt;
override CFLAGS := $(MY_CC_FLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(MY_CC_FLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(MY_LD_FLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang/Xcode ===&lt;br /&gt;
&lt;br /&gt;
[http://clang.llvm.org Clang] and [http://llvm.org LLVM] have been aggressively developed since Apple lost its GPL compiler back in 2007 (due to Tivoization, which resulted in GPLv3). Since that time, a number of developers and Google have joined the effort. While Clang will consume most (all?) GCC/Binutils flags and switches, the project supports a number of its own options, including a static analyzer. In addition, Clang is relatively easy to build with additional diagnostics, such as Dr. John Regehr and Peng Li's [http://embed.cs.utah.edu/ioc/ Integer Overflow Checker (IOC)].&lt;br /&gt;
&lt;br /&gt;
IOC is incredibly useful, and has found bugs in a number of projects, including the Linux Kernel (&amp;lt;tt&amp;gt;include/linux/bitops.h&amp;lt;/tt&amp;gt;, still unfixed), SQLite, PHP, Firefox (many still unfixed), LLVM, and Python. Future versions of Clang (Clang 3.3 and above) will allow you to enable the checks out of the box with &amp;lt;tt&amp;gt;-fsanitize=integer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=shift&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Clang options can be found in the [http://clang.llvm.org/docs/UsersManual.html Clang Compiler User’s Manual]. Clang does include an option to turn on all warnings - &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt;. Use it with care, but use it regularly: along with a lot of noise, it will surface issues you missed. For example, add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; for production builds and make non-spurious issues a quality gate. Under Xcode, simply add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to compiler warnings, both static analysis and additional security checks can be performed. Details of Clang's static analysis capabilities can be found at the [http://clang-analyzer.llvm.org Clang Static Analyzer] page. Figure 1 below shows some of the security checks utilized by Xcode.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-11.png|thumb|450px|Figure 1: Clang/LLVM and Xcode options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a convenient Integrated Development Environment (IDE) for managing solutions and their settings. The section called “Visual Studio Options” discusses options which should be used with Visual Studio, and the section called “Project Properties” demonstrates incorporating those options into a solution's project.&lt;br /&gt;
&lt;br /&gt;
The table below lists the compiler and linker switches which should be used under Visual Studio. Refer to Howard and LeBlanc's Writing Secure Code (Microsoft Press) for a detailed discussion; or ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]'' in Security Briefs by Michael Howard. In the table below, “Visual Studio” refers to nearly all versions of the development environment, including Visual Studio 5.0 and 6.0.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, those settings can be verified with BinScope. BinScope is a verification tool from Microsoft that analyzes binaries to ensure that they have been built in compliance with Microsoft's Security Development Lifecycle (SDLC) requirements and recommendations. See the ''[https://www.microsoft.com/download/en/details.aspx?id=11910 BinScope Binary Analyzer]'' download page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 5: Visual Studio Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;150pt&amp;quot;|&amp;lt;nowiki&amp;gt;/W4&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;100pt&amp;quot;|Visual Studio&lt;br /&gt;
|width=&amp;quot;350pt&amp;quot;|Warning level 4, which includes most warnings.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/Wall&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Enable all warnings, including those off by default.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/GS&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Adds a security cookie (guard or canary) on the stack before the return address to detect stack based buffer overflows.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/SafeSEH&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Safe structured exception handling to remediate SEH overwrites.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/analyze&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Enterprise code analysis (freely available with Windows SDK for Windows Server 2008 and .NET Framework 3.5).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/NXCOMPAT&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Data Execution Prevention (DEP).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/dynamicbase&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Address Space Layout Randomization (ASLR).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;strict_gs_check&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Aggressively applies stack protections to a source file to help detect some categories of stack based buffer overruns.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;See Jon Sturgeon's discussion of the switch at ''[https://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx Off By Default Compiler Warnings in Visual C++]''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;When using /GS, there are a number of circumstances which affect the inclusion of a security cookie. For example, the guard is not used if there is no buffer in the stack frame, optimizations are disabled, or the function is declared naked or contains inline assembly.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;tt&amp;gt;#pragma strict_gs_check(on)&amp;lt;/tt&amp;gt; should be used sparingly, but is recommended in high risk situations, such as when a source file parses input from the internet.&lt;br /&gt;
&lt;br /&gt;
=== Warn Suppression ===&lt;br /&gt;
&lt;br /&gt;
From the tables above, a lot of warnings have been enabled to help detect possible programming mistakes. The potential mistakes are detected by the compiler, which carries around a lot of contextual information during its code analysis phase. At times, you will receive spurious warnings because the compiler is not ''that'' smart. It's understandable and even a good thing (how would you like to be out of a job because a program writes its own programs?). At times you will have to learn how to work with the compiler's warning system to suppress warnings. Notice what was not said: turn off the warnings.&lt;br /&gt;
&lt;br /&gt;
Suppressing warnings placates the compiler for spurious noise so you can get to the issues that matter (you are separating the wheat from the chaff). This section will offer some hints and point out some potential minefields. First is an unused parameter (for example, &amp;lt;tt&amp;gt;argc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;argv&amp;lt;/tt&amp;gt;). Suppressing unused parameter warnings is especially helpful for C++ and interface programming, where parameters are often unused. For this warning, simply define an &amp;quot;UNUSED&amp;quot; macro and wrap the parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#define UNUSED_PARAMETER(x) ((void)x)&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    UNUSED_PARAMETER(argc);&lt;br /&gt;
    UNUSED_PARAMETER(argv);&lt;br /&gt;
    …&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A potential minefield lies near &amp;quot;comparing unsigned and signed&amp;quot; values, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will catch it for you. This is because C/C++ promotion rules state the signed value will be promoted to an unsigned value and then compared. That means &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! To fix this, you cannot blindly cast - you must first range test the value:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int x = GetX();&lt;br /&gt;
unsigned int y = GetY();&lt;br /&gt;
&lt;br /&gt;
ASSERT(x &amp;gt;= 0);&lt;br /&gt;
if(!(x &amp;gt;= 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? X is negative.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
if(static_cast&amp;lt;unsigned int&amp;gt;(x) &amp;gt; y)&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is greater than y&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
else&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is not greater than y&amp;quot; &amp;lt;&amp;lt; endl;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;x&amp;lt;/tt&amp;gt;. Just run the program and wait for it to tell you there is a problem. If there is a problem, the program will snap the debugger (and more importantly, not call a useless &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; as specified by Posix). It beats the snot out of &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; statements that are removed when no longer needed or pollute outputs.&lt;br /&gt;
&lt;br /&gt;
Another conversion problem you will encounter is conversion between types, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will also catch it for you. The following will always have an opportunity to fail, and should light up like a Christmas tree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct sockaddr_in addr;&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
addr.sin_port = htons(atoi(argv[2]));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following would probably serve you much better. Notice &amp;lt;tt&amp;gt;atoi&amp;lt;/tt&amp;gt; and friends are not used because they can silently fail. In addition, the code is instrumented so you don't need to waste a lot of time debugging potential problems:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;const char* cstr = GetPortString();&lt;br /&gt;
&lt;br /&gt;
ASSERT(cstr != NULL);&lt;br /&gt;
if(!(cstr != NULL))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port string is not valid.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
istringstream iss(cstr);&lt;br /&gt;
long long t = 0;&lt;br /&gt;
iss &amp;gt;&amp;gt; t;&lt;br /&gt;
&lt;br /&gt;
ASSERT(!(iss.fail()));&lt;br /&gt;
if(iss.fail())&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Failed to read port.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// Should this be a port above the reserved range ([0-1024] on Unix)?&lt;br /&gt;
ASSERT(t &amp;gt; 0);&lt;br /&gt;
if(!(t &amp;gt; 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too small&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
ASSERT(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max()));&lt;br /&gt;
if(!(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max())))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too large&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use port&lt;br /&gt;
unsigned short port = static_cast&amp;lt;unsigned short&amp;gt;(t);&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;. This code will continue checking conditions, years after being instrumented (assuming you wrote the code to read a config file early in the project). There's no need to remove the &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt;s as with &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; since they are silent guardians.&lt;br /&gt;
&lt;br /&gt;
Another useful suppression trick is to avoid ignoring return values. Not only is it useful to suppress the warning, it's required for correct code. For example, &amp;lt;tt&amp;gt;snprintf&amp;lt;/tt&amp;gt; will alert you to truncations through its return value. You should not make them silent truncations by ignoring the warning or casting to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;char path[PATH_MAX];&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int ret = snprintf(path, sizeof(path), &amp;quot;%s/%s&amp;quot;, GetDirectory(), GetObjectName());&lt;br /&gt;
ASSERT(ret != -1);&lt;br /&gt;
ASSERT(!(ret &amp;gt;= sizeof(path)));&lt;br /&gt;
&lt;br /&gt;
if(ret == -1 || ret &amp;gt;= sizeof(path))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Unable to build full object name&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use path&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem is pandemic, and not limited to boring user land programs. Projects which offer high integrity code, such as SELinux, suffer silent truncations. The following is from an approved SELinux patch, even though a comment was made that it [http://permalink.gmane.org/gmane.comp.security.selinux/16845 suffered silent truncations in its &amp;lt;tt&amp;gt;security_compute_create_name&amp;lt;/tt&amp;gt; function] from &amp;lt;tt&amp;gt;compute_create.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int security_compute_create_raw(security_context_t scon,&lt;br /&gt;
                                security_context_t tcon,&lt;br /&gt;
                                security_class_t   tclass,&lt;br /&gt;
                                security_context_t * newcon)&lt;br /&gt;
{&lt;br /&gt;
  char path[PATH_MAX];&lt;br /&gt;
  char *buf;&lt;br /&gt;
  size_t size;&lt;br /&gt;
  int fd, ret;&lt;br /&gt;
&lt;br /&gt;
  if (!selinux_mnt) {&lt;br /&gt;
    errno = ENOENT;&lt;br /&gt;
    return -1;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  snprintf(path, sizeof path, &amp;quot;%s/create&amp;quot;, selinux_mnt);&lt;br /&gt;
  fd = open(path, O_RDWR);&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike the other examples, the above code will not debug itself, and you will have to set breakpoints and trace calls to determine the point of first failure. (And by blindly performing the &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt;, the code gambles that the truncated path does not name a file that exists or is under an adversary's control.)&lt;br /&gt;
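&lt;br /&gt;
As a hedged sketch of the fix (error handling simplified, and &amp;lt;tt&amp;gt;selinux_mnt&amp;lt;/tt&amp;gt; taken as a plain parameter here rather than libselinux's global), the path building above could report truncation instead of hiding it:&lt;br /&gt;

```c
#include <stdio.h>
#include <errno.h>
#include <limits.h>

#ifndef PATH_MAX
#  define PATH_MAX 4096
#endif

/* Sketch: build the selinuxfs "create" path, failing loudly on an
 * encoding error or truncation instead of blindly open()ing a
 * possibly-truncated name. Returns 0 on success, -1 with errno set. */
static int build_create_path(const char *selinux_mnt,
                             char *path, size_t pathlen)
{
    int ret;

    if (!selinux_mnt) {
        errno = ENOENT;
        return -1;
    }

    ret = snprintf(path, pathlen, "%s/create", selinux_mnt);
    if (ret < 0 || (size_t)ret >= pathlen) {
        errno = ENAMETOOLONG;   /* report the truncation, don't hide it */
        return -1;
    }
    return 0;
}
```

Only after &amp;lt;tt&amp;gt;build_create_path&amp;lt;/tt&amp;gt; succeeds is it safe to pass &amp;lt;tt&amp;gt;path&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt;.&lt;br /&gt;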
&lt;br /&gt;
== Runtime ==&lt;br /&gt;
&lt;br /&gt;
The previous sections concentrated on setting up your project for success. This section examines additional hints for running with increased diagnostics and defenses. Not all platforms are created equal: on GNU/Linux it is difficult to impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening to a program after compiling and static linking], while Windows allows post-build hardening through a downloadable tool. Remember, the goal is to find the point of first failure quickly so you can improve the reliability and security of the code.&lt;br /&gt;
&lt;br /&gt;
=== Xcode ===&lt;br /&gt;
&lt;br /&gt;
Xcode offers additional [http://developer.apple.com/library/mac/#recipes/xcode_help-scheme_editor/Articles/SchemeDiagnostics.html Application Diagnostics] that can help find memory errors and object use problems. Schemes can be managed through the ''Product'' menu, ''Scheme'' submenu, and then ''Edit Scheme''. From the editor, navigate to the ''Diagnostics'' tab. In the figure below, four additional instruments are enabled for the debugging cycle: Scribble guards, Edge guards, Malloc guards, and Zombies.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-1.png|thumb|450px|Figure 2: Xcode Memory Diagnostics]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is one caveat with using some of the guards: Apple only provides them for the simulator, not for devices. In the past, the guards were available for both devices and simulators.&lt;br /&gt;
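&lt;br /&gt;
The same diagnostics can also be enabled from the command line on OS X through Apple's documented malloc and Foundation debugging environment variables (&amp;lt;tt&amp;gt;MyApp&amp;lt;/tt&amp;gt; below is a placeholder for your program's binary):&lt;br /&gt;

```shell
# Apple's documented malloc/Foundation debugging variables; MyApp is a
# placeholder binary name.
export MallocScribble=1      # scribble 0x55 over freed memory
export MallocGuardEdges=1    # guard pages around large allocations
export NSZombieEnabled=YES   # keep deallocated Objective-C objects as zombies
./MyApp
```

This configuration fragment is useful for command-line tools and test harnesses that are not launched through Xcode's scheme editor.&lt;br /&gt;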
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a number of debugging aides for use during development. The aides are called [http://msdn.microsoft.com/en-us/library/d21c150d.aspx Managed Debugging Assistants (MDAs)]. You can find the MDAs on the ''Debug'' menu, then the ''Exceptions'' submenu. MDAs allow you to tune your debugging experience by, for example, filtering the exceptions on which the debugger should break. For more details, see Stephen Toub's ''[http://msdn.microsoft.com/en-us/magazine/cc163606.aspx Let The CLR Find Bugs For You With Managed Debugging Assistants]''.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-2.png|thumb|450px|Figure 3: Managed Debugging Assistants]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Finally, for runtime hardening, Microsoft has a helpful tool called EMET, the [http://support.microsoft.com/kb/2458544 Enhanced Mitigation Experience Toolkit]. EMET allows you to apply runtime hardening to an executable that was built without it. It's very useful for utilities and other programs that were built without an SDLC.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-3.png|thumb|450px|Figure 4: Windows and EMET]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;/div&gt;</summary>
		<author><name>Jeffrey Walton</name></author>	</entry>

	</feed>