<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wichers</id>
		<title>OWASP - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://wiki.owasp.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Wichers"/>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php/Special:Contributions/Wichers"/>
		<updated>2026-05-17T01:31:30Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.27.2</generator>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=256308</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=256308"/>
				<updated>2019-12-12T13:22:16Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Commercial Tools Of This Type */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Static_Code_Analysis | Source code analysis]] tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during development itself, this is a powerful point in the life cycle at which to employ such tools, as they provide immediate feedback to developers on issues they may be introducing as they write code. This immediate feedback is very useful, especially compared to finding vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL injection flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: it must support your programming language, but once it does, language support is not usually a key differentiator.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to set up and use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per org, per app, per line of code analyzed. Consulting licenses are frequently different than end user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;The tools listed below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them here.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct it.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino, and Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [https://discotek.ca/deepdive.xhtml Deep Dive] - Byte code analysis tool for discovering vulnerabilities in Java deployments (Ear, War, Jar).&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security-specific plugin for SpotBugs that significantly improves SpotBugs's ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] - Scans C and C++&lt;br /&gt;
* [https://github.com/golangci/golangci-lint GolangCI-Lint] - A Go linters aggregator. One of the linters is [https://github.com/securego/gosec gosec (Go Security)], which is off by default but can easily be enabled.&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more. ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits on .NET applications. It will find SQL injections, LDAP injections, XXE, cryptography weakness, XSS and more.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PreFast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last update 2006.&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://pyre-check.org/ Pyre] - A performant type-checker for Python 3, that also has [https://pyre-check.org/docs/static-analysis.html limited security/data flow analysis] capabilities.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS Open Source is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [https://discotek.ca/sinktank.xhtml Sink Tank] - Byte code static code analyzer for performing source/sink (taint) analysis.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
&lt;br /&gt;
[https://docs.gitlab.com/ee/user/application_security/sast/index.html#supported-languages-and-frameworks GitLab has built free SAST scanning for a number of languages natively into GitLab. You might be able to use that, or at least identify a free SAST tool for the language you need from that list].&lt;br /&gt;
&lt;br /&gt;
An even broader list of free static analysis tools (not just for security) for lots of different languages is here called: [https://endler.dev/awesome-static-analysis/ Awesome Static Analysis]&lt;br /&gt;
&lt;br /&gt;
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation; has capability to generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST and mobile security testing as well as OpenSource library known vulnerability detection as a cloud service. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source AppScan Source] (IBM)&lt;br /&gt;
* [https://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://bugscout.io/en/ bugScout] (Nalbatech, Formally Buguroo)&lt;br /&gt;
* [https://www.castsoftware.com/products/application-intelligence-platform CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages. [https://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards AIP's security specific coverage is here].&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [https://www.grammatech.com/products/codesonar CodeSonar] (GrammaTech) - A tool that supports C, C++, Java, and C# and maps findings against the OWASP Top 10 vulnerabilities.&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] (Contrast Security) - Contrast performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis. It provides code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [https://www.microfocus.com/en-us/products/static-code-analysis-sast Fortify] (Micro Focus, Formally HP)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection] (Hdiv Security) - Hdiv performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis. It provides code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis KlocWork] (KlocWork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ PT Application Inspector] combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation for high accuracy rate with minimum false positives; has a unique capability to generate special test queries (exploits) to verify detected vulnerabilities during SAST analysis; integrates with CI/CD, VCS, etc. PT AI helps to easily understand, verify, and fix flaws; has a simple UI; is highly automated and easy to use. Supported languages are Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others.&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as a Visual Studio IDE extension, Azure DevOps extension, and Command Line (CLI) executable.&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://www.reshiftsecurity.com reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to give a prediction on false positives. Supports Java with future support for NodeJS and JavaScript planned for sometime in 2019.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for Java and PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code for insecure coding and configurations automatically as an IDE plugin for Eclipse, IntelliJ, Visual Studio, etc. Supports Java, .NET, PHP, and JavaScript&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) - Seeker performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis with simulated attacks. It provides code-level results without relying on static analysis.&lt;br /&gt;
* [https://smartdecscanner.com/ SmartDec Scanner] (SmartDec) Capable of identifying vulnerabilities and backdoors (undocumented features) in over 30 programming languages by analyzing source code or executables, without requiring debug info.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java and Scala for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
* [[Free for Open Source Application Security Tools]] - This page lists the Commercial Source Code Analysis Tools (SAST) we know of that are free for Open Source&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=255911</id>
		<title>Benchmark</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=255911"/>
				<updated>2019-11-02T21:25:34Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Main = &lt;br /&gt;
 &amp;lt;div style=&amp;quot;width:100%;height:100px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Lab_big.jpg|link=OWASP_Project_Stages#tab.3DLab_Projects]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== OWASP Benchmark Project  ==&lt;br /&gt;
The OWASP Benchmark for Security Automation (OWASP Benchmark) is a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services (henceforth simply referred to as 'tools'). Without the ability to measure these tools, it is difficult to understand their strengths and weaknesses, and compare them to each other. Each version of the OWASP Benchmark contains thousands of test cases that are fully runnable and exploitable, each of which maps to the appropriate CWE number for that vulnerability.&lt;br /&gt;
&lt;br /&gt;
You can use the OWASP Benchmark with [[Source_Code_Analysis_Tools | Static Application Security Testing (SAST)]] tools, [[:Category:Vulnerability_Scanning_Tools | Dynamic Application Security Testing (DAST)]] tools like OWASP [[ZAP]] and Interactive Application Security Testing (IAST) tools. Benchmark is implemented in Java.  Future versions may expand to include other languages.&lt;br /&gt;
&lt;br /&gt;
==Benchmark Project Scoring Philosophy==&lt;br /&gt;
&lt;br /&gt;
Security tools (SAST, DAST, and IAST) are amazing when they find a complex vulnerability in your code.  But with widespread misunderstanding of the specific vulnerabilities automated tools cover, end users are often left with a false sense of security.&lt;br /&gt;
&lt;br /&gt;
We are on a quest to measure just how good these tools are at discovering and properly diagnosing security problems in applications. We rely on the [http://en.wikipedia.org/wiki/Receiver_operating_characteristic long history] of military and medical evaluation of detection technology as a foundation for our research. Therefore, the test suite tests both real and fake vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
There are four possible test outcomes in the Benchmark:&lt;br /&gt;
&lt;br /&gt;
# Tool correctly identifies a real vulnerability (True Positive - TP)&lt;br /&gt;
# Tool fails to identify a real vulnerability (False Negative - FN)&lt;br /&gt;
# Tool correctly ignores a false alarm (True Negative - TN)&lt;br /&gt;
# Tool fails to ignore a false alarm (False Positive - FP)&lt;br /&gt;
&lt;br /&gt;
We can learn a lot about a tool from these four metrics. Consider a tool that simply flags every line of code as vulnerable. This tool will perfectly identify all vulnerabilities!  But it will also have 100% false positives and thus adds no value.  Similarly, consider a tool that reports absolutely nothing. This tool will have zero false positives, but will also identify zero real vulnerabilities and is also worthless. You can even imagine a tool that flips a coin to decide whether to report whether each test case contains a vulnerability. The result would be 50% true positives and 50% false positives.  We need a way to distinguish valuable security tools from these trivial ones.&lt;br /&gt;
&lt;br /&gt;
The line that connects all these points, from 0,0 to 100,100, roughly translates to &amp;quot;random guessing.&amp;quot; The ultimate measure of a security tool is how much better it can do than this line. The diagram below shows how we evaluate security tools against the Benchmark.&lt;br /&gt;
&lt;br /&gt;
[[File:Wbe guide.png]]&lt;br /&gt;
&lt;br /&gt;
A point plotted on this chart provides a visual indication of how well a tool did considering both the True Positives the tool reported, as well as the False Positives it reported. We also want to compute an individual score for that point in the range 0 - 100, which we call the Benchmark Accuracy Score.&lt;br /&gt;
&lt;br /&gt;
The Benchmark Accuracy Score is essentially a [https://en.wikipedia.org/wiki/Youden%27s_J_statistic Youden Index], which is a standard way of summarizing the accuracy of a set of tests. Youden's index is one of the oldest measures of diagnostic accuracy. It is a global measure of test performance, used to evaluate the overall discriminative power of a diagnostic procedure and to compare it with other tests. Youden's index is calculated by subtracting 1 from the sum of a test's sensitivity and specificity, expressed as fractions rather than percentages: (sensitivity + specificity) - 1. For a test with poor diagnostic accuracy, Youden's index equals 0; for a perfect test, it equals 1.&lt;br /&gt;
&lt;br /&gt;
  So for example, if a tool has a True Positive Rate (TPR) of .98 (i.e., 98%) &lt;br /&gt;
    and False Positive Rate (FPR) of .05 (i.e., 5%)&lt;br /&gt;
  Sensitivity = TPR (.98)&lt;br /&gt;
  Specificity = 1-FPR (.95)&lt;br /&gt;
  So the Youden Index is (.98+.95) - 1 = .93&lt;br /&gt;
  &lt;br /&gt;
  And this would equate to a Benchmark score of 93 (since we normalize this to the range 0 - 100)&lt;br /&gt;
&lt;br /&gt;
On the graph, the Benchmark Score is the length of the line from the point down to the diagonal “guessing” line. Note that a Benchmark score can be negative if the point falls below that line, which occurs when the False Positive Rate is higher than the True Positive Rate.&lt;br /&gt;
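The score computation above can be sketched as a small Python helper (hypothetical code for illustration only; it is not part of the Benchmark's own tooling):

```python
def benchmark_score(tp, fn, tn, fp):
    """Benchmark Accuracy Score: the Youden index (TPR - FPR), scaled to 0-100."""
    tpr = tp / (tp + fn)   # True Positive Rate (sensitivity)
    fpr = fp / (fp + tn)   # False Positive Rate (1 - specificity)
    return round((tpr - fpr) * 100)

# Worked example from the text: TPR = .98, FPR = .05 -> score 93
print(benchmark_score(tp=98, fn=2, tn=95, fp=5))    # 93

# A trivial tool that flags every test case: TPR = FPR = 1 -> score 0
print(benchmark_score(tp=100, fn=0, tn=0, fp=100))  # 0
```

As the second call shows, the "flag everything" and "report nothing" tools from the thought experiment above both land on the guessing line and score 0, which is exactly why the score is measured as distance from that line rather than raw true-positive counts.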
&lt;br /&gt;
==Benchmark Validity==&lt;br /&gt;
&lt;br /&gt;
The Benchmark tests are not exactly like real applications. The tests are derived from coding patterns observed in real applications, but the majority of them are considerably '''simpler''' than real applications. That is, most real world applications will be considerably harder to successfully analyze than the OWASP Benchmark Test Suite. Although the tests are based on real code, it is possible that some tests may have coding patterns that don't occur frequently in real code.&lt;br /&gt;
&lt;br /&gt;
Remember, we are trying to test the capabilities of the tools and make them explicit, so that users can make informed decisions about what tools to use, how to use them, and what results to expect.  This is exactly aligned with the OWASP mission to make application security visible.&lt;br /&gt;
&lt;br /&gt;
==Generating Benchmark Scores==&lt;br /&gt;
&lt;br /&gt;
Anyone can use this Benchmark to evaluate vulnerability detection tools. The basic steps are:&lt;br /&gt;
# Download the Benchmark from GitHub&lt;br /&gt;
# Run your tools against the Benchmark&lt;br /&gt;
# Run the BenchmarkScore tool on the reports from your tools&lt;br /&gt;
&lt;br /&gt;
That's it!&lt;br /&gt;
&lt;br /&gt;
Full details on how to do this are at the bottom of the page on the Quick_Start tab.&lt;br /&gt;
&lt;br /&gt;
We encourage vendors, open source tool developers, and end users to verify their application security tools against the Benchmark. In order to ensure that the results are fair and useful, we ask that you follow a few simple rules when publishing results. We won't recognize any results that aren't easily reproducible, so please include:&lt;br /&gt;
&lt;br /&gt;
# A description of the default “out-of-the-box” installation, version numbers, etc…&lt;br /&gt;
# Any and all configuration, tailoring, onboarding, etc… performed to make the tool run&lt;br /&gt;
# Any and all changes to default security rules, tests, or checks used to achieve the results&lt;br /&gt;
# Easily reproducible steps to run the tool&lt;br /&gt;
&lt;br /&gt;
== Reporting Format==&lt;br /&gt;
&lt;br /&gt;
The Benchmark includes tools to interpret raw tool output, compare it to the expected results, and generate summary charts and graphs. We use the following table format in order to capture all the information generated during the evaluation.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Security Category&lt;br /&gt;
! TP&lt;br /&gt;
! FN&lt;br /&gt;
! TN&lt;br /&gt;
! FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total&lt;br /&gt;
! TPR&lt;br /&gt;
! FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Score&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| General security category for test cases.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positives''': Tests with real vulnerabilities that were correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Negative''': Tests with real vulnerabilities that were not correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Negative''': Tests with fake vulnerabilities that were correctly not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive''':Tests with fake vulnerabilities that were incorrectly reported as vulnerable by the tool.&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Total test cases for this category.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positive Rate''': TP / ( TP + FN ) - Also referred to as Recall or Sensitivity, as defined at [https://en.wikipedia.org/wiki/Precision_and_recall Wikipedia].&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive Rate''': FP / ( FP + TN ).&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Normalized distance from the “guessing” line: TPR - FPR.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Command Injection&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Etc...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | &lt;br /&gt;
! Total TP&lt;br /&gt;
! Total FN&lt;br /&gt;
! Total TN&lt;br /&gt;
! Total FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total TC&lt;br /&gt;
! Average TPR&lt;br /&gt;
! Average FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Average Score&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Code Repo and Build/Run Instructions ==&lt;br /&gt;
&lt;br /&gt;
See the '''Getting Started''' and '''Getting, Building, and Running the Benchmark''' sections on the Quick Start tab.&lt;br /&gt;
&lt;br /&gt;
==Licensing==&lt;br /&gt;
&lt;br /&gt;
The OWASP Benchmark is free to use under the [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0].&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
[https://lists.owasp.org/mailman/listinfo/owasp-benchmark-project OWASP Benchmark Mailing List]&lt;br /&gt;
&lt;br /&gt;
== Project Leaders ==&lt;br /&gt;
&lt;br /&gt;
[https://www.owasp.org/index.php/User:Wichers Dave Wichers] [mailto:dave.wichers@owasp.org @]&lt;br /&gt;
&lt;br /&gt;
== Project References ==&lt;br /&gt;
* [https://www.mir-swamp.org/#packages/public Software Assurance Marketplace (SWAMP) - set of curated packages to test tools against]&lt;br /&gt;
* [http://samate.nist.gov/Other_Test_Collections.html SAMATE List of Test Collections]&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
* [http://samate.nist.gov/SARD/testsuite.php NSA's Juliet for Java]&lt;br /&gt;
* [http://sectoolmarket.com/ The Web Application Vulnerability Scanner Evaluation Project (WAVSEP)]&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;padding-left:25px;width:200px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
All test code and project files can be downloaded from [https://github.com/OWASP/benchmark OWASP GitHub].&lt;br /&gt;
&lt;br /&gt;
== Project Intro Video ==&lt;br /&gt;
&lt;br /&gt;
[[File:BenchmarkPodcastTitlePage.jpg|200px|link=https://www.youtube.com/watch?v=HQP8dwc3jJA&amp;amp;index=5&amp;amp;list=PLGB2s-U5FSWOmEStMt3JqlMFJvRYqeVW5]]&lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
* LOOKING FOR VOLUNTEERS!! - We are looking for individuals and organizations to join and make this a much more community driven project, including additional coleaders to help take this project to the next level. Contributors could work on things like new test cases, additional tool scorecard generators, adding support for languages beyond Java, and a host of other improvements. Please contact [mailto:dave.wichers@owasp.org me] if you are interested in contributing at any level.&lt;br /&gt;
* June 5, 2016 - Benchmark Version 1.2 Released&lt;br /&gt;
* Sep 24, 2015 - Benchmark introduced to broader OWASP community at [https://appsecusa2015.sched.org/event/3r9k/using-the-owasp-benchmark-to-assess-automated-vulnerability-analysis-tools AppSec USA]&lt;br /&gt;
* Aug 27, 2015 - U.S. Dept. of Homeland Security (DHS) is financially supporting the Benchmark project.&lt;br /&gt;
* Aug 15, 2015 - Benchmark Version 1.2beta Released with full DAST Support. Checkmarx and ZAP scorecard generators also released.&lt;br /&gt;
* July 10, 2015 - Benchmark Scorecard generator and open source scorecards released&lt;br /&gt;
* May 23, 2015 - Benchmark Version 1.1 Released&lt;br /&gt;
* April 15, 2015 - Benchmark Version 1.0 Released&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; rowspan=&amp;quot;2&amp;quot;| [[File:Owasp-incubator-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Incubator_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Test Cases =&lt;br /&gt;
&lt;br /&gt;
Version 1.0 of the Benchmark was published on April 15, 2015 and had 20,983 test cases. On May 23, 2015, version 1.1 of the Benchmark was released. The 1.1 release improves on the previous version by making sure that there are both true positives and false positives in every vulnerability area. Version 1.2 was released on June 5, 2016 (and the 1.2beta August 15, 2015).&lt;br /&gt;
&lt;br /&gt;
Version 1.2 and later of the Benchmark is a fully executable web application, which means it is scannable by any kind of vulnerability detection tool. The 1.2 release has been limited to slightly fewer than 3,000 test cases to make it easier for DAST tools to scan (so scans don't take as long, run out of memory, or blow up the size of the tool's database). The 1.2 release covers the same vulnerability areas as 1.1; we added a few Spring database SQL Injection tests, but that's it. The bulk of the work was making each test case actually run correctly AND be fully exploitable, and then building a working UI on top of the test cases to turn them into a real running application.&lt;br /&gt;
&lt;br /&gt;
You can still download Benchmark version 1.1 by cloning the release marked with the Git tag '1.1'.&lt;br /&gt;
&lt;br /&gt;
The test case areas and quantities for the Benchmark releases are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Vulnerability Area&lt;br /&gt;
! # of Tests in v1.1&lt;br /&gt;
! # of Tests in v1.2&lt;br /&gt;
! CWE Number&lt;br /&gt;
|-&lt;br /&gt;
| [[Command Injection]]&lt;br /&gt;
| 2708&lt;br /&gt;
| 251&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/78.html 78]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Cryptography&lt;br /&gt;
| 1440&lt;br /&gt;
| 246&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/327.html 327]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Hashing&lt;br /&gt;
| 1421&lt;br /&gt;
| 236&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/328.html 328]&lt;br /&gt;
|-&lt;br /&gt;
| [[LDAP injection | LDAP Injection]]&lt;br /&gt;
| 736&lt;br /&gt;
| 59&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/90.html 90]&lt;br /&gt;
|-&lt;br /&gt;
| [[Path Traversal]]&lt;br /&gt;
| 2630&lt;br /&gt;
| 268&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/22.html 22]&lt;br /&gt;
|-&lt;br /&gt;
| Secure Cookie Flag&lt;br /&gt;
| 416&lt;br /&gt;
| 67&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/614.html 614]&lt;br /&gt;
|-&lt;br /&gt;
| [[SQL Injection]]&lt;br /&gt;
| 3529&lt;br /&gt;
| 504&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/89.html 89]&lt;br /&gt;
|-&lt;br /&gt;
| [[Trust Boundary Violation]]&lt;br /&gt;
| 725&lt;br /&gt;
| 126&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/501.html 501]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Randomness&lt;br /&gt;
| 3640&lt;br /&gt;
| 493&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/330.html 330]&lt;br /&gt;
|-&lt;br /&gt;
| [[XPATH Injection]]&lt;br /&gt;
| 347&lt;br /&gt;
| 35&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/643.html 643]&lt;br /&gt;
|-&lt;br /&gt;
| [[XSS]] (Cross-Site Scripting)&lt;br /&gt;
| 3449&lt;br /&gt;
| 455&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/79.html 79]&lt;br /&gt;
|-&lt;br /&gt;
| Total Test Cases&lt;br /&gt;
| 21,041&lt;br /&gt;
| 2,740&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Each Benchmark version comes with a spreadsheet that lists every test case, the vulnerability category, the CWE number, and the expected result (true finding/false positive). Look for the file: expectedresults-VERSION#.csv in the project root directory.&lt;br /&gt;
&lt;br /&gt;
Every test case is:&lt;br /&gt;
* a servlet or JSP (currently they are all servlets, but we plan to add JSPs)&lt;br /&gt;
* either a true vulnerability or a false positive for a single issue&lt;br /&gt;
&lt;br /&gt;
The Benchmark is intended to help determine how well analysis tools correctly analyze a broad array of application and framework behavior, including:&lt;br /&gt;
&lt;br /&gt;
* HTTP request and response problems&lt;br /&gt;
* Simple and complex data flow&lt;br /&gt;
* Simple and complex control flow&lt;br /&gt;
* Popular frameworks&lt;br /&gt;
* Inversion of control&lt;br /&gt;
* Reflection&lt;br /&gt;
* Class loading&lt;br /&gt;
* Annotations&lt;br /&gt;
* Popular UI technologies (particularly JavaScript frameworks)&lt;br /&gt;
&lt;br /&gt;
Not all of these are tested by the Benchmark yet, but future enhancements are intended to provide more coverage of these areas.&lt;br /&gt;
&lt;br /&gt;
Additional future enhancements could cover:&lt;br /&gt;
* All vulnerability types in the [[Top10 | OWASP Top 10]]&lt;br /&gt;
* Does the tool find flaws in libraries?&lt;br /&gt;
* Does the tool find flaws spanning custom code and libraries?&lt;br /&gt;
* Does the tool handle web services? REST, XML, GWT, etc.&lt;br /&gt;
* Does the tool work with different app servers? Java platforms?&lt;br /&gt;
&lt;br /&gt;
== Example Test Case ==&lt;br /&gt;
&lt;br /&gt;
Each test case is a simple Java EE servlet. BenchmarkTest00001 in version 1.0 of the Benchmark was an LDAP Injection test with the following metadata in the accompanying BenchmarkTest00001.xml file:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;test-metadata&amp;gt;&lt;br /&gt;
    &amp;lt;category&amp;gt;ldapi&amp;lt;/category&amp;gt;&lt;br /&gt;
    &amp;lt;test-number&amp;gt;00001&amp;lt;/test-number&amp;gt;&lt;br /&gt;
    &amp;lt;vulnerability&amp;gt;true&amp;lt;/vulnerability&amp;gt;&lt;br /&gt;
    &amp;lt;cwe&amp;gt;90&amp;lt;/cwe&amp;gt;&lt;br /&gt;
  &amp;lt;/test-metadata&amp;gt;&lt;br /&gt;
&lt;br /&gt;
BenchmarkTest00001.java in the OWASP Benchmark 1.0 simply reads in all the cookie values, looks for a cookie named &amp;quot;foo&amp;quot;, and uses the value of this cookie when performing an LDAP query. Here's the code for BenchmarkTest00001.java:&lt;br /&gt;
&lt;br /&gt;
  package org.owasp.benchmark.testcode;&lt;br /&gt;
  &lt;br /&gt;
  import java.io.IOException;&lt;br /&gt;
  &lt;br /&gt;
  import javax.servlet.ServletException;&lt;br /&gt;
  import javax.servlet.annotation.WebServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
  import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
  &lt;br /&gt;
  @WebServlet(&amp;quot;/BenchmarkTest00001&amp;quot;)&lt;br /&gt;
  public class BenchmarkTest00001 extends HttpServlet {&lt;br /&gt;
  	&lt;br /&gt;
  	private static final long serialVersionUID = 1L;&lt;br /&gt;
  	&lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		doPost(request, response);&lt;br /&gt;
  	}&lt;br /&gt;
  &lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		// some code&lt;br /&gt;
  &lt;br /&gt;
  		javax.servlet.http.Cookie[] cookies = request.getCookies();&lt;br /&gt;
  		&lt;br /&gt;
  		String param = null;&lt;br /&gt;
  		boolean foundit = false;&lt;br /&gt;
  		if (cookies != null) {&lt;br /&gt;
  			for (javax.servlet.http.Cookie cookie : cookies) {&lt;br /&gt;
  				if (cookie.getName().equals(&amp;quot;foo&amp;quot;)) {&lt;br /&gt;
  					param = cookie.getValue();&lt;br /&gt;
  					foundit = true;&lt;br /&gt;
  				}&lt;br /&gt;
  			}&lt;br /&gt;
  			if (!foundit) {&lt;br /&gt;
  				// no cookie found in collection&lt;br /&gt;
  				param = &amp;quot;&amp;quot;;&lt;br /&gt;
  			}&lt;br /&gt;
  		} else {&lt;br /&gt;
  			// no cookies&lt;br /&gt;
  			param = &amp;quot;&amp;quot;;&lt;br /&gt;
  		}&lt;br /&gt;
  		&lt;br /&gt;
  		try {&lt;br /&gt;
  			javax.naming.directory.DirContext dc = org.owasp.benchmark.helpers.Utils.getDirContext();&lt;br /&gt;
  			Object[] filterArgs = {&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;};&lt;br /&gt;
  			dc.search(&amp;quot;name&amp;quot;, param, filterArgs, new javax.naming.directory.SearchControls());&lt;br /&gt;
  		} catch (javax.naming.NamingException e) {&lt;br /&gt;
  			throw new ServletException(e);&lt;br /&gt;
  		}&lt;br /&gt;
  	}&lt;br /&gt;
  }&lt;br /&gt;
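&lt;br /&gt;
To make the test case above concrete, here is a hedged, hypothetical client sketch (not part of the Benchmark itself) that sends a classic LDAP filter-injection probe in the 'foo' cookie. The URL assumes a locally running Benchmark instance (only versions 1.2+ actually run as a web app), and the class name and payload are illustrative only.&lt;br /&gt;
&lt;br /&gt;
```java
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical probe against a servlet like BenchmarkTest00001: the "foo"
// cookie value flows unmodified into the LDAP search filter.
public class Ldapi00001Probe {

    // Builds the cookie header an attacker would send; "*)(uid=*" is a
    // classic LDAP filter-injection probe value.
    static String cookieHeader() {
        return "foo=" + "*)(uid=*";
    }

    public static void main(String[] args) throws Exception {
        if (args.length > 0 && args[0].equals("--send")) {
            // Only attempt a real request when explicitly asked to,
            // since this requires a locally running Benchmark instance.
            URL url = new URL("https://localhost:8443/benchmark/BenchmarkTest00001");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Cookie", cookieHeader());
            System.out.println("HTTP " + conn.getResponseCode());
        } else {
            System.out.println(cookieHeader());
        }
    }
}
```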
&lt;br /&gt;
= Test Case Details =&lt;br /&gt;
&lt;br /&gt;
The following describes situations in the Benchmark that have come up for debate as to the validity/accuracy of the test cases in these scenarios. &lt;br /&gt;
&lt;br /&gt;
== Cookies as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 and early versions of the 1.2beta included test cases that used cookies as a source of data flowing into XSS vulnerabilities. The Benchmark treated these tests as false positives because the Benchmark team figured you'd need an XSS vulnerability in the first place to set the cookie value, so it wasn't fair/reasonable to consider an XSS vulnerability whose data source is a cookie value as actually exploitable. However, we got feedback from some tool vendors, like Fortify, Burp, and Arachni, who disagreed with this analysis and felt that cookies are, in fact, a valid source of attack for XSS vulnerabilities. Given that there are good arguments on both sides of this safe vs. unsafe question, we decided on Aug 25, 2015 to simply remove those test cases from the Benchmark. If, in the future, we decide who is right, we may add such test cases back in.&lt;br /&gt;
&lt;br /&gt;
== Headers as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Similarly, the Benchmark team believes that the names of headers aren't a valid source of XSS attack, for the same reason we thought cookie values aren't: it would require an XSS vulnerability to be exploited in the first place to set them. In fact, we feel this argument is much stronger for header names than for cookie values. Right now, the Benchmark doesn't include any header names as sources for XSS test cases, but we plan to add them and mark them as false positives in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
The Benchmark does use header values as sources for some XSS test cases, but only 'referer' is treated as a valid XSS source (i.e., a true positive), because other header values are not viable XSS sources. Other headers are, of course, valid sources for other attack vectors, like SQL Injection or Command Injection.&lt;br /&gt;
&lt;br /&gt;
== False Positive Scenario: Static Values Passed to Unsafe (Weak) Sinks ==&lt;br /&gt;
&lt;br /&gt;
The Benchmark has MANY test cases where unsafe data flows in from the browser, but that data is replaced with static content as it goes through the propagators in that specific test case. This static (safe) data then flows to the sink, which may be a weak/unsafe sink, for example, an unsafely constructed SQL statement. The Benchmark treats those test cases as false positives because there is absolutely no way for that weakness to be exploited. The NSA Juliet SAST benchmark treats such test cases exactly the same way, as false positives. We do recognize that there are weaknesses in those test cases, even though they aren't exploitable.&lt;br /&gt;
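&lt;br /&gt;
A minimal sketch of the pattern just described (the class and method names are hypothetical, not an actual Benchmark test case): tainted input is overwritten with a constant by a propagator before it reaches a string-concatenated SQL sink, so the sink looks unsafe but cannot actually be exploited.&lt;br /&gt;
&lt;br /&gt;
```java
// Illustrative only: a "static value to unsafe sink" false-positive shape.
public class StaticValueToUnsafeSink {

    // Hypothetical propagator: discards its tainted argument and returns a
    // constant, so attacker-controlled data never survives this step.
    static String propagate(String tainted) {
        return "safe_constant";
    }

    // Unsafe-looking sink: a string-concatenated SQL statement. A purely
    // syntactic check flags this line, but with a constant input there is
    // no way to exploit it.
    static String buildQuery(String value) {
        return "SELECT * FROM users WHERE name = '" + value + "'";
    }

    public static void main(String[] args) {
        String tainted = args.length > 0 ? args[0] : "' OR '1'='1";
        // The attacker's payload is replaced before it reaches the sink.
        System.out.println(buildQuery(propagate(tainted)));
    }
}
```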
&lt;br /&gt;
Some SAST tool vendors feel it is appropriate to point out those weaknesses, and that's fine. However, if a tool points those weaknesses out and does not distinguish them from truly exploitable vulnerabilities, then the Benchmark treats those findings as false positives. If the tool allows a user to differentiate these non-exploitable weaknesses from exploitable vulnerabilities, then the Benchmark scorecard generator can use that information to filter out these extra findings (along with any other similarly marked findings) so they don't count against that tool when calculating its Benchmark score. In the real world, it's important for analysts to be able to filter out such findings if they only have time to deal with the most critical, actually exploitable, vulnerabilities. If a tool doesn't make it easy for an analyst to distinguish the two situations, it is doing the analyst a disservice.&lt;br /&gt;
&lt;br /&gt;
This issue doesn't affect DAST tools: they only report what appears to them to be exploitable, so it has no effect on their scores.&lt;br /&gt;
&lt;br /&gt;
If you are a SAST tool vendor or user, you believe the Benchmark scorecard generator is counting such findings against a tool, and there is a way to tell them apart, please let the project team know so the scorecard generator can be adjusted to not count those findings against the tool. The Benchmark project's goal is to generate the most fair and accurate results it can. If such an adjustment is made to how a scorecard is generated for a tool, we plan to document that this was done, and explain how others could perform the same filtering within that tool in order to get the same focused set of results.&lt;br /&gt;
&lt;br /&gt;
== Dead Code ==&lt;br /&gt;
&lt;br /&gt;
Some SAST tools point out weaknesses in dead code because such code might eventually end up being used and serve as a bad coding example (think cut/paste of code). We think this is fine/appropriate. However, there is no dead code in the OWASP Benchmark (at least not intentionally), so dead code should not be causing any tool to report unnecessary false positives.&lt;br /&gt;
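&lt;br /&gt;
For illustration only (again, the Benchmark itself contains no intentional dead code), here is a hypothetical example of a weakness sitting inside a branch that can never execute, which some SAST tools will still flag:&lt;br /&gt;
&lt;br /&gt;
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Illustrative only: a weak-hash weakness (CWE-328) in a dead branch.
public class DeadCodeExample {

    static String hash(String input) throws java.security.NoSuchAlgorithmException {
        if (false) { // dead branch: this never executes
            // weak hash algorithm a SAST tool may still report
            MessageDigest.getInstance("MD5");
        }
        // the live code path uses a strong hash
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(input.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(hash("hello"));
    }
}
```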
&lt;br /&gt;
= Tool Support/Results =&lt;br /&gt;
&lt;br /&gt;
The results for 5 free tools, PMD, FindBugs, FindBugs with the FindSecBugs plugin, SonarQube, and OWASP ZAP, are available against version 1.2 of the Benchmark here: https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html. We've included multiple versions of FindSecBugs' and ZAP's results so you can see the improvements they are making in finding vulnerabilities in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
We have Benchmark results for all the following tools, but haven't publicly released the results for any commercial tools. However, we included a 'Commercial Average' page, which includes a summary of results for 6 commercial SAST tools along with anonymized versions of each SAST tool's scorecard.&lt;br /&gt;
&lt;br /&gt;
The Benchmark can generate results for the following tools: &lt;br /&gt;
&lt;br /&gt;
'''Free Static Application Security Testing (SAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://pmd.github.io/ PMD] (which really has no security rules) - .xml results file&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - .xml results file (Note: FindBugs hasn't been updated since 2015. Use SpotBugs instead (see below))&lt;br /&gt;
* [https://www.sonarqube.org/downloads/ SonarQube] - .xml results file&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - .xml results file. This is the successor to FindBugs.&lt;br /&gt;
* SpotBugs with the [http://find-sec-bugs.github.io/ FindSecurityBugs plugin] - .xml results file&lt;br /&gt;
&lt;br /&gt;
Note: We looked into supporting [http://checkstyle.sourceforge.net/ Checkstyle] but it has no security rules, just like PMD. The [http://fb-contrib.sourceforge.net/ fb-contrib] FindBugs plugin doesn't have any security rules either. We did test [http://errorprone.info/ Error Prone], and found that it does report some use of [http://errorprone.info/bugpattern/InsecureCipherMode insecure ciphers (CWE-327)], but that's it.&lt;br /&gt;
&lt;br /&gt;
'''Commercial SAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.castsoftware.com/products/application-intelligence-platform CAST Application Intelligence Platform (AIP)] - .xml results file&lt;br /&gt;
* [https://www.checkmarx.com/products/static-application-security-testing/ Checkmarx CxSAST] - .xml results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source IBM AppScan Source (Standalone and Cloud)] - .ozasmt or .xml results file&lt;br /&gt;
* [https://juliasoft.com/solutions/julia-for-security/ Julia Analyzer] - .xml results file&lt;br /&gt;
* [https://www.kiuwan.com/code-security-sast/ Kiuwan Code Security] - .threadfix results file&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/static-code-analysis-sast/overview Micro Focus (Formerly HPE) Fortify (On-Demand and stand-alone versions)] - .fpr results file&lt;br /&gt;
* [https://www.parasoft.com/products/jtest/ Parasoft Jtest] - .xml results file&lt;br /&gt;
* [https://semmle.com/lgtm Semmle LGTM] - .sarif results file&lt;br /&gt;
* [https://www.shiftleft.io/product/ ShiftLeft SAST] - .sl results file (Benchmark specific format. Ask vendor how to generate this)&lt;br /&gt;
* [https://snappycodeaudit.com/category/static-code-analysis Snappycode Audit's SnappyTick Source Edition (SAST)] - .xml results file&lt;br /&gt;
* [https://www.sourcemeter.com/features/ SourceMeter] - .txt results file of ALL results from VulnerabilityHunter&lt;br /&gt;
* [https://www.synopsys.com/content/dam/synopsys/sig-assets/datasheets/SAST-Coverity-datasheet.pdf Synopsys Static Analysis (Formerly Coverity Code Advisor) (On-Demand and stand-alone versions)] - .json results file (You can scan Benchmark w/Coverity for free. See: https://scan.coverity.com/)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] - .xml results file&lt;br /&gt;
* [https://www.veracode.com/products/binary-static-analysis-sast Veracode SAST] - .xml results file&lt;br /&gt;
* [https://www.rigs-it.com/xanitizer/ XANITIZER] - .xml results file ([https://www.rigs-it.com/wp-content/uploads/2018/03/howtosetupxanitizerforowaspbenchmarkproject.pdf Their white paper on how to set up Xanitizer to scan the Benchmark.]) (Free trial available)&lt;br /&gt;
&lt;br /&gt;
We are looking for results for other commercial static analysis tools like: [https://www.grammatech.com/products/codesonar Grammatech CodeSonar], [https://www.roguewave.com/products-services/klocwork RogueWave's Klocwork], etc. If you have a license for any static analysis tool not already listed above, and can run it on the Benchmark and send us the results file, that would be very helpful.&lt;br /&gt;
&lt;br /&gt;
The free SAST tools come bundled with the Benchmark so you can run them yourselves. If you have a license for any commercial SAST tool, you can also run them against the Benchmark. Just put your results files in the /results folder of the project, and then run the BenchmarkScore script for your platform (.sh / .bat) and it will generate a scorecard in the /scorecard directory for all the tools you have results for that are currently supported.&lt;br /&gt;
&lt;br /&gt;
'''Free Dynamic Application Security Testing (DAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
Note: While we support scorecard generators for these Free and Commercial DAST tools, we haven't been able to get a full/clean run against the Benchmark from most of these tools. As such, some of these scorecard generators might still need some work to properly reflect their results. If you notice any problems, let us know.&lt;br /&gt;
&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - .xml results file&lt;br /&gt;
** To generate .xml, run: ./bin/arachni_reporter &amp;quot;Your_AFR_Results_Filename.afr&amp;quot; --reporter=xml:outfile=Benchmark1.2-Arachni.xml&lt;br /&gt;
* [https://www.owasp.org/index.php/ZAP OWASP ZAP] - .xml results file. To generate a complete ZAP XML results file so you can generate a valid scorecard, make sure you:&lt;br /&gt;
** Tools &amp;gt; Options &amp;gt; Alerts - and set the Max alert instances to a high value, such as 500.&lt;br /&gt;
** Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
'''Commercial DAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.acunetix.com/vulnerability-scanner/ Acunetix Web Vulnerability Scanner (WVS)] - .xml results file (Generated using [https://www.acunetix.com/resources/wvs7manual.pdf command line interface (see Chapter 10.)] /ExportXML switch)&lt;br /&gt;
* [https://portswigger.net/burp Burp Pro] - .xml results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/appscan-standard IBM AppScan] - .xml results file&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/webinspect-dynamic-analysis-dast/overview Micro Focus (Formerly HPE) WebInspect] - .xml results file&lt;br /&gt;
* [https://www.netsparker.com/web-vulnerability-scanner/ Netsparker] - .xml results file&lt;br /&gt;
* [https://www.qualys.com/apps/web-app-scanning/ Qualys Web App Scanner] - .xml results file&lt;br /&gt;
* [https://www.rapid7.com/products/appspider/ Rapid7 AppSpider] - .xml results file&lt;br /&gt;
&lt;br /&gt;
If you have access to other DAST Tools, PLEASE RUN THEM FOR US against the Benchmark, and send us the results file so we can build a scorecard generator for that tool.&lt;br /&gt;
&lt;br /&gt;
'''Commercial Interactive Application Security Testing (IAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] - .zip results file (You can scan Benchmark w/Contrast for free. See: https://www.contrastsecurity.com/contrast-community-edition)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection (IAST)] - .hlg results file&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/security-testing/interactive-application-security-testing.html Seeker IAST] - .csv results file&lt;br /&gt;
&lt;br /&gt;
'''Commercial Hybrid Analysis Application Security Testing Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.iappsecure.com/products.html Fusion Lite Insight] - .xml results file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It may be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.'''&lt;br /&gt;
&lt;br /&gt;
The project has automated test harnesses for these vulnerability detection tools, so we can repeatably run the tools against each version of the Benchmark and automatically produce scorecards in our desired format.&lt;br /&gt;
&lt;br /&gt;
We want to test as many tools as possible against the Benchmark. If you are:&lt;br /&gt;
&lt;br /&gt;
* A tool vendor and want to participate in the project&lt;br /&gt;
* Someone who wants to help score a free tool against the project&lt;br /&gt;
* Someone who has a license to a commercial tool and the terms of the license allow you to publish tool results, and you want to participate&lt;br /&gt;
&lt;br /&gt;
please let [mailto:dave.wichers@owasp.org me] know!&lt;br /&gt;
&lt;br /&gt;
= Quick Start =&lt;br /&gt;
&lt;br /&gt;
==What is in the Benchmark?==&lt;br /&gt;
The Benchmark is a Java Maven project. Its primary component is thousands of test cases (e.g., BenchmarkTest00001.java), each of which is a single Java servlet that contains a single vulnerability (either a true positive or a false positive). The vulnerabilities currently span about a dozen different types and are expected to expand significantly in the future.&lt;br /&gt;
&lt;br /&gt;
An expectedresults.csv file is published with each version of the Benchmark (e.g., expectedresults-1.1.csv), and it lists the expected result for each test case. Here's what the first two rows of this file look like for version 1.1 of the Benchmark:&lt;br /&gt;
&lt;br /&gt;
 # test name		category	real vulnerability	CWE	Benchmark version: 1.1	2015-05-22&lt;br /&gt;
 BenchmarkTest00001	crypto		TRUE			327&lt;br /&gt;
&lt;br /&gt;
This simply means that the first test case is a crypto test case (use of a weak cryptographic algorithm), that it is a real vulnerability (as opposed to a false positive), and that this issue maps to CWE 327. It also indicates this expected results file is for Benchmark version 1.1 (produced May 22, 2015). There is a row in this file for each of the tens of thousands of test cases in the Benchmark. Each time a new version of the Benchmark is published, a new corresponding results file is generated, and each test case can be completely different from one version to the next.&lt;br /&gt;
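&lt;br /&gt;
As a sketch of how a tool or script might consume one of these rows, assuming simple comma-separated columns of test name, category, real vulnerability, and CWE (the class and field names here are illustrative, not part of the Benchmark's own code):&lt;br /&gt;
&lt;br /&gt;
```java
// Illustrative parser for one expected-results row.
public class ExpectedResultsRow {
    final String testName;
    final String category;
    final boolean realVulnerability;
    final int cwe;

    ExpectedResultsRow(String csvRow) {
        // columns: test name, category, real vulnerability, CWE
        String[] cols = csvRow.split(",");
        this.testName = cols[0].trim();
        this.category = cols[1].trim();
        this.realVulnerability = Boolean.parseBoolean(cols[2].trim());
        this.cwe = Integer.parseInt(cols[3].trim());
    }

    public static void main(String[] args) {
        ExpectedResultsRow row =
                new ExpectedResultsRow("BenchmarkTest00001, crypto, TRUE, 327");
        // prints: BenchmarkTest00001 -> CWE 327 (true positive)
        System.out.println(row.testName + " -> CWE " + row.cwe
                + (row.realVulnerability ? " (true positive)" : " (false positive)"));
    }
}
```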
&lt;br /&gt;
The Benchmark also comes with a bunch of different utilities, commands, and prepackaged open source security analysis tools, all of which can be executed through Maven goals, including:&lt;br /&gt;
&lt;br /&gt;
* Open source vulnerability detection tools to be run against the Benchmark&lt;br /&gt;
* A scorecard generator, which computes a scorecard for each of the tools you have results files for.&lt;br /&gt;
&lt;br /&gt;
==What Can You Do With the Benchmark?==&lt;br /&gt;
* Compile all the software in the Benchmark project (e.g., mvn compile)&lt;br /&gt;
* Run a static vulnerability analysis tool (SAST) against the Benchmark test case code&lt;br /&gt;
&lt;br /&gt;
* Scan a running version of the Benchmark with a dynamic application security testing tool (DAST)&lt;br /&gt;
** Instructions on how to run it are provided below&lt;br /&gt;
&lt;br /&gt;
* Generate scorecards for each of the tools you have results files for&lt;br /&gt;
** See the Tool Support/Results page for the list of tools the Benchmark supports generating scorecards for&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
Before downloading or using the Benchmark, make sure you have the following installed and configured properly:&lt;br /&gt;
&lt;br /&gt;
 GIT: http://git-scm.com/ or https://github.com/&lt;br /&gt;
 Maven: https://maven.apache.org/  (Version: 3.2.3 or newer works.)&lt;br /&gt;
 Java: http://www.oracle.com/technetwork/java/javase/downloads/index.html (Java 7 or 8) (64-bit)&lt;br /&gt;
&lt;br /&gt;
==Getting, Building, and Running the Benchmark==&lt;br /&gt;
&lt;br /&gt;
To download and build everything:&lt;br /&gt;
&lt;br /&gt;
 $ git clone https://github.com/OWASP/benchmark &lt;br /&gt;
 $ cd benchmark&lt;br /&gt;
 $ mvn compile   (This compiles it)&lt;br /&gt;
 $ runBenchmark.sh/.bat - This compiles and runs it.&lt;br /&gt;
&lt;br /&gt;
Then navigate to https://localhost:8443/benchmark/ to go to its home page. The app uses a self-signed SSL certificate, so you'll get a security warning when you hit the home page.&lt;br /&gt;
&lt;br /&gt;
Note: We have set the Benchmark app to use up to 6 GB of RAM, which it may need when it is fully scanned by a DAST scanner. The DAST tool probably also requires 3+ GB of RAM, so we recommend a 16 GB machine if you are going to attempt a full DAST scan, and at least 4 GB (ideally 8 GB) if you are just going to play around with the running Benchmark app.&lt;br /&gt;
&lt;br /&gt;
== Using a VM instead ==&lt;br /&gt;
We have several pre-built VMs, and instructions on how to build one, that you can use instead:&lt;br /&gt;
&lt;br /&gt;
* Docker: A Dockerfile is checked into the project [https://github.com/OWASP/Benchmark/blob/master/VMs/Dockerfile here]. This Dockerfile should automatically produce a Docker image with the latest Benchmark project files. After you have Docker installed, cd to /VMs, then run: &lt;br /&gt;
 ./buildDockerImage.sh --&amp;gt; This builds the Docker Benchmark VM (This will take a WHILE)&lt;br /&gt;
 docker images  --&amp;gt; You should see the new benchmark:latest image in the list provided&lt;br /&gt;
 # The Benchmark Docker Image only has to be created once. &lt;br /&gt;
&lt;br /&gt;
 To run the Benchmark in your Docker VM, just run:&lt;br /&gt;
   ./runDockerImage.sh  --&amp;gt; This pulls in any updates to Benchmark since the Image was built, builds everything, and starts a remotely accessible Benchmark web app.&lt;br /&gt;
 If successful, you should see this at the end:&lt;br /&gt;
   [INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]&lt;br /&gt;
   [INFO] Press Ctrl-C to stop the container...&lt;br /&gt;
 Then simply navigate to: https://localhost:8443/benchmark from the machine you are running Docker on&lt;br /&gt;
 &lt;br /&gt;
 Or if you want to access from a different machine:&lt;br /&gt;
  docker-machine ls (in a different terminal) --&amp;gt; To get IP Docker VM is exporting (e.g., tcp://192.168.99.100:2376)&lt;br /&gt;
  Navigate to: https://192.168.99.100:8443/benchmark in your browser (using the above IP as an example)&lt;br /&gt;
&lt;br /&gt;
* Amazon Web Services (AWS) - Here's how you set up the Benchmark on an AWS VM:&lt;br /&gt;
 sudo yum install git&lt;br /&gt;
 sudo yum install maven&lt;br /&gt;
 sudo yum install mvn&lt;br /&gt;
 sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo yum install -y apache-maven&lt;br /&gt;
 git clone https://github.com/OWASP/benchmark&lt;br /&gt;
 cd benchmark&lt;br /&gt;
 chmod 755 *.sh&lt;br /&gt;
 ./runBenchmark.sh -- to run it locally on the VM.&lt;br /&gt;
 ./runRemoteAccessibleBenchmark.sh -- to run it so it can be accessed outside the VM (on port 8443).&lt;br /&gt;
&lt;br /&gt;
==Running Free Static Analysis Tools Against the Benchmark==&lt;br /&gt;
There are scripts for running each of the free SAST vulnerability detection tools included with the Benchmark against the Benchmark test cases. On Linux, you might have to make them executable (e.g., chmod 755 *.sh) before you can run them.&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for PMD:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runPMD.sh (Linux) or .\scripts\runPMD.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindBugs.sh (Linux) or .\scripts\runFindBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs with the FindSecBugs plugin:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindSecBugs.sh (Linux) or .\scripts\runFindSecBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
In each case, the script will generate a results file and put it in the /results directory. For example: &lt;br /&gt;
&lt;br /&gt;
 Benchmark_1.2-findbugs-v3.0.1-1026.xml&lt;br /&gt;
&lt;br /&gt;
This results file name is carefully constructed to convey the following: it is a results file against OWASP Benchmark version 1.2, FindBugs was the analysis tool, it was version 3.0.1 of FindBugs, and the analysis took 1026 seconds to run.&lt;br /&gt;
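&lt;br /&gt;
For illustration, here is a hedged sketch of how that naming convention could be decoded programmatically. The class and regular expression are hypothetical, not part of the Benchmark's scorecard generator:&lt;br /&gt;
&lt;br /&gt;
```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative decoder for Benchmark_<version>-<tool>-v<toolversion>-<seconds>.xml
public class ResultsFileName {
    static final Pattern NAME =
            Pattern.compile("Benchmark_([\\d.]+)-([A-Za-z]+)-v([\\d.]+)-(\\d+)\\.xml");

    static String describe(String filename) {
        Matcher m = NAME.matcher(filename);
        if (!m.matches()) return "unrecognized";
        return "Benchmark " + m.group(1) + ", tool " + m.group(2)
                + " v" + m.group(3) + ", " + m.group(4) + "s scan time";
    }

    public static void main(String[] args) {
        // prints: Benchmark 1.2, tool findbugs v3.0.1, 1026s scan time
        System.out.println(describe("Benchmark_1.2-findbugs-v3.0.1-1026.xml"));
    }
}
```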
&lt;br /&gt;
NOTE: If you create a results file yourself, by running a commercial tool for example, you can add the version # and the compute time to the filename just like this, and the Benchmark scorecard generator will pick this information up and include it in the generated scorecard. If you don't, depending on what metadata is included in the tool results, the scorecard generator might do this automatically anyway.&lt;br /&gt;
&lt;br /&gt;
==Generating Scorecards==&lt;br /&gt;
The scorecard generation application BenchmarkScore is included with the Benchmark. It parses the output files generated by any of the supported security tools run against the Benchmark, compares them against the expected results, and produces a set of web pages that detail the accuracy and speed of each tool. For the list of currently supported tools, see the Tools Support/Results tab. If you are using a tool that is not yet supported, simply send us a results file from that tool and we'll write a parser for it and add it to the supported tools list.&lt;br /&gt;
&lt;br /&gt;
The following command will compute a Benchmark scorecard for all the results files in the '''/results''' directory. The generated scorecard is put into the '''/scorecard''' directory.&lt;br /&gt;
&lt;br /&gt;
 createScorecard.sh (Linux) or createScorecard.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
An example of a real scorecard for some open source tools is provided at the top of the Tool Support/Results tab so you can see what one looks like.&lt;br /&gt;
&lt;br /&gt;
We recommend including the Benchmark version number in any results file name, in order to help prevent mismatches between the expected results and the actual results files.  A tool will not score well against the wrong expected results.&lt;br /&gt;
&lt;br /&gt;
===Customizing Your Scorecard Generation===&lt;br /&gt;
&lt;br /&gt;
The createScorecard scripts are very simple. They only have one line. Here's what the 1.2 version looks like:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.2.csv results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This Maven command runs the BenchmarkScore application with two parameters. The first is the Benchmark expected results file to compare the tool results against. The second is the name of the directory that contains all the results from tools run against that version of the Benchmark. If you have tool results older than the current version of the Benchmark (1.1 results, for example), then you would do something like this instead:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To keep things organized, we actually put the expected results file inside the same results folder for that version of the Benchmark, so our command looks like this:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;1.1_results/expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In all cases, the generated scorecard is put in the /scorecard folder.&lt;br /&gt;
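The version-specific pieces of the commands above can be captured in a tiny helper. A sketch (the variable names are ours, and the command is only echoed so you can inspect it before running it):&lt;br /&gt;

```shell
# Build the scorecard command for a given Benchmark version, assuming the
# layout described above: results and expected results both live in
# <version>_results/ (an assumption matching the example in the text).
version="1.1"
dir="${version}_results"
expected="${dir}/expectedresults-${version}.csv"
cmd="mvn validate -Pbenchmarkscore -Dexec.args=\"${expected} ${dir}\""
echo "$cmd"   # inspect it, then copy/paste or eval it
```

Keeping the expected results file next to the tool results this way makes version mismatches much harder to cause.&lt;br /&gt;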
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful whom you distribute them to. Each tool has its own license defining when any results it produces can be released or made public. It is likely against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.''' For this very reason, the Benchmark project isn't releasing such results itself.&lt;br /&gt;
&lt;br /&gt;
= Tool Scanning Tips =&lt;br /&gt;
&lt;br /&gt;
People frequently have difficulty scanning the Benchmark with various tools, for reasons including the size of the Benchmark application and its codebase and the complexity of the tools involved. Here is some guidance for some of the tools we have used to scan the Benchmark. If you've learned any tricks for getting better or easier results from a particular tool against the Benchmark, let us know or update this page directly.&lt;br /&gt;
&lt;br /&gt;
== Generic Tips ==&lt;br /&gt;
&lt;br /&gt;
Because of the size of the Benchmark, you may need to give your tool more memory before it starts the scan. If it's a Java-based tool, you can pass it more memory like this:&lt;br /&gt;
&lt;br /&gt;
 -Xmx4G (This gives the Java application 4 GB of memory)&lt;br /&gt;
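For the Maven-driven scripts bundled with the Benchmark, one common way to pass such a flag (an assumption; consult your own tool's documentation for its equivalent) is the MAVEN_OPTS environment variable, which Maven reads when it starts the JVM:&lt;br /&gt;

```shell
# Give any Maven-launched JVM a 4 GB heap before running a scan script.
# MAVEN_OPTS is read by Maven itself; non-Maven launchers have their own knobs.
export MAVEN_OPTS="-Xmx4G"
echo "JVM options for Maven: $MAVEN_OPTS"
```

Set it in the same terminal session, before invoking the scan script.&lt;br /&gt;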
&lt;br /&gt;
== SAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Checkmarx ===&lt;br /&gt;
&lt;br /&gt;
The Checkmarx SAST Tool (CxSAST) is ready to scan the OWASP Benchmark out-of-the-box. &lt;br /&gt;
Please note that the OWASP Benchmark “hides” some vulnerabilities in dead code areas, for example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
if (0&amp;gt;1)&lt;br /&gt;
{&lt;br /&gt;
  //vulnerable code&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
By default, CxSAST will find these vulnerabilities since Checkmarx believes that including dead code in the scan results is a SAST best practice. &lt;br /&gt;
&lt;br /&gt;
Checkmarx's experience shows that security experts expect to find these types of code vulnerabilities and demand that their developers fix them. However, the OWASP Benchmark considers the flagging of these vulnerabilities to be False Positives, thereby lowering Checkmarx's overall score. &lt;br /&gt;
&lt;br /&gt;
Therefore, in order to receive an OWASP score untainted by dead code, re-configure CxSAST as follows:&lt;br /&gt;
# Open the CxAudit client for editing Java queries.&lt;br /&gt;
# Override the &amp;quot;Find_Dead_Code&amp;quot; query.&lt;br /&gt;
# Add the commented text of the original query to the new override query.&lt;br /&gt;
# Save the queries.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark, and it's fully configured. Simply run the script ./scripts/runFindBugs.sh (or .bat). If you want to run a different version of FindBugs, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs with FindSecBugs ===&lt;br /&gt;
&lt;br /&gt;
[http://h3xstream.github.io/find-sec-bugs/ FindSecurityBugs] is a great plugin for FindBugs that significantly improves FindBugs' ability to find security issues. We include this free tool in the Benchmark, and it's fully configured. Simply run the script ./scripts/runFindSecBugs.sh (or .bat). If you want to run a different version of FindSecBugs, just change the version number of the findsecbugs-plugin artifact in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Micro Focus (Formerly HP) Fortify ===&lt;br /&gt;
&lt;br /&gt;
If you are using the Audit Workbench, you can give it more memory and make sure you invoke it in 64-bit mode by doing this:&lt;br /&gt;
&lt;br /&gt;
  set AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  export AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  auditworkbench -64&lt;br /&gt;
&lt;br /&gt;
We found it easier to use Fortify's Maven support to scan the Benchmark, and to do it in two phases: translate, then scan. We did something like this:&lt;br /&gt;
&lt;br /&gt;
  Translate Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_17.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx2G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:clean&lt;br /&gt;
  mvn sca:translate&lt;br /&gt;
&lt;br /&gt;
  Scan Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_4.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx10G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:scan&lt;br /&gt;
&lt;br /&gt;
=== PMD ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark, and it's fully configured. Simply run the script ./scripts/runPMD.sh (or .bat). If you want to run a different version of PMD, just change its version number in the Benchmark pom.xml file. (NOTE: PMD doesn't find any security issues. We include it because it's interesting to know that it doesn't.)&lt;br /&gt;
&lt;br /&gt;
=== SonarQube ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark, and it's mostly configured. But it's a bit tricky, because SonarQube requires two parts. There is a standalone scanner for Java, and then there is a web application that accepts the results and in turn can produce the results file required by the Benchmark scorecard generator for SonarQube. Running the script runSonarQube.sh (or .bat) will generate the results, but if the SonarQube web application isn't running where the runSonarQube script expects it to be, the script will fail.&lt;br /&gt;
&lt;br /&gt;
If you want to run a different version of SonarQube, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Xanitizer ===&lt;br /&gt;
&lt;br /&gt;
The vendor has written their own guide to [http://www.rigs-it.net/opendownloads/whitepapers/HowToSetUpXanitizerForOWASPBenchmarkProject.pdf How to Set Up Xanitizer for OWASP Benchmark].&lt;br /&gt;
&lt;br /&gt;
== DAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Burp Pro ===&lt;br /&gt;
&lt;br /&gt;
You must use Burp Pro v1.6.29 or greater to scan the Benchmark, because earlier versions of Burp Pro did not honor the path attribute on cookies. This issue was fixed in the v1.6.29 release.&lt;br /&gt;
&lt;br /&gt;
To scan, first spider the entire Benchmark, and then select the /Benchmark URL and actively scan that branch. You can skip all the .html pages and any other pages that Burp says have no parameters.&lt;br /&gt;
&lt;br /&gt;
NOTE: We have been unable to simply run Burp Pro against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get Burp Pro to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
=== OWASP ZAP ===&lt;br /&gt;
&lt;br /&gt;
ZAP may require additional memory to be able to scan the Benchmark. To configure the amount of memory:&lt;br /&gt;
* Tools --&amp;gt; Options --&amp;gt; JVM: Recommend setting to: -Xmx2048m (or larger). (Then restart ZAP).&lt;br /&gt;
&lt;br /&gt;
To run ZAP against Benchmark:&lt;br /&gt;
# Because Benchmark uses Cookies and Headers as sources of attack for many test cases: Tools --&amp;gt; Options --&amp;gt; Active Scan Input Vectors: Then check the HTTP Headers, All Requests, and Cookie Data checkboxes and hit OK&lt;br /&gt;
# Click on Show All Tabs button (if spider tab isn't visible)&lt;br /&gt;
# Go to Spider tab (the black spider) and click on New Scan button&lt;br /&gt;
# Enter: https://localhost:8443/benchmark/  into the 'Starting Point' box and hit 'Start Scan'&lt;br /&gt;
#* Do this again. For some reason it takes 2 passes with the Spider before it stops finding more Benchmark endpoints.&lt;br /&gt;
# When Spider completes, click on 'benchmark' folder in Site Map, right click and select: 'Attack --&amp;gt; Active Scan'&lt;br /&gt;
#* It will take several hours (3+) to complete (it's actually likely to simply freeze before completing the scan - see the NOTE below)&lt;br /&gt;
&lt;br /&gt;
For a faster active scan, you can:&lt;br /&gt;
* Disable the ZAP DB log (in ZAP 2.5.0+):&lt;br /&gt;
** Disable it via Options / Database / Recover Log&lt;br /&gt;
** Set it on the command line using &amp;quot;-config database.recoverylog=false&amp;quot;&lt;br /&gt;
* Disable unnecessary plugins / Technologies: When you launch the Active Scan&lt;br /&gt;
** On the Policy tab, disable all plugins except: XSS (Reflected), Path Traversal, SQLi, OS Command Injection&lt;br /&gt;
** Go to the Technology tab, disable everything, and enable only: MySQL, YOUR_OS, Tomcat&lt;br /&gt;
** Note: This second performance improvement is a bit like cheating, as you wouldn't do this for a normal site scan. You'd want to leave all of this enabled in case these other plugins/technologies help find more issues. So a fair performance comparison of ZAP to other tools would leave all of this on.&lt;br /&gt;
&lt;br /&gt;
To generate the ZAP XML results file so you can generate its scorecard:&lt;br /&gt;
* Tools &amp;gt; Options &amp;gt; Alerts - and set the Max alert instances to a high value, such as 500.&lt;br /&gt;
* Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
NOTE: Similar to Burp, we can't simply run ZAP against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get ZAP to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
Things we tried that didn't improve the score:&lt;br /&gt;
* AJAX Spider - the traditional spider appears to find all (or 99%) of the test cases so the AJAX Spider does not appear to be needed against Benchmark v1.2&lt;br /&gt;
* XSS (Persistent) - There are 3 of these plugins that run by default. There aren't any stored XSS in Benchmark, so you can disable these plugins for a faster scan.&lt;br /&gt;
* DOM XSS Plugin - This is an optional plugin that didn't seem to find any additional XSS issues. There aren't any DOM-specific XSS issues in Benchmark v1.2, so this is not surprising.&lt;br /&gt;
&lt;br /&gt;
== IAST Tools ==&lt;br /&gt;
&lt;br /&gt;
Interactive Application Security Testing (IAST) tools work differently than scanners.  IAST tools monitor an application as it runs to identify application vulnerabilities using context from inside the running application. Typically these tools run continuously, immediately notifying users of vulnerabilities, but you can also get a full report of an entire application. To do this, we simply run the Benchmark application with an IAST agent and use a crawler to hit all the pages.&lt;br /&gt;
&lt;br /&gt;
=== Contrast Assess ===&lt;br /&gt;
&lt;br /&gt;
To use Contrast Assess, we simply add the Java agent to the Benchmark environment and run the BenchmarkCrawler. The entire process should only take a few minutes. We provide a few scripts, which simply add the -javaagent:contrast.jar flag to the Benchmark launch configuration. We have tested on MacOS, Ubuntu, and Windows. Be sure your VM has at least 4 GB of memory.&lt;br /&gt;
&lt;br /&gt;
* Ensure your environment has Java, Maven, and git installed, then build the Benchmark project&lt;br /&gt;
   '''$ git clone https://github.com/OWASP/Benchmark.git'''&lt;br /&gt;
   '''$ cd Benchmark'''&lt;br /&gt;
   '''$ mvn compile'''&lt;br /&gt;
&lt;br /&gt;
* Download a licensed copy of the Contrast Assess Java Agent (contrast.jar) from your Contrast TeamServer account and put it in the /Benchmark/tools/Contrast directory.&lt;br /&gt;
   '''$ cp ~/Downloads/contrast.jar tools/Contrast'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 1, launch the Benchmark application and wait until it starts&lt;br /&gt;
   '''$ cd tools/Contrast'''&lt;br /&gt;
   '''$ ./runBenchmark_wContrast.sh''' (.bat on Windows)&lt;br /&gt;
   '''[INFO] Scanning for projects...'''&lt;br /&gt;
   '''[INFO]'''&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
   '''[INFO] Building OWASP Benchmark Project 1.2'''&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
   '''[INFO]'''&lt;br /&gt;
   '''...'''&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]'''&lt;br /&gt;
   '''[INFO] Press Ctrl-C to stop the container...'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, launch the crawler and wait a minute or two for the crawl to complete.&lt;br /&gt;
   '''$ ./runCrawler.sh''' (.bat on Windows)&lt;br /&gt;
&lt;br /&gt;
* A Contrast report has been generated in /Benchmark/tools/Contrast/working/contrast.log. This report will be automatically copied (and renamed with a version number) to the /Benchmark/results directory.&lt;br /&gt;
   '''$ more tools/Contrast/working/contrast.log'''&lt;br /&gt;
   '''2016-04-22 12:29:29,716 [main b] INFO - Contrast Runtime Engine'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Copyright (C) 2012'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Pat. 8,458,789 B2'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Contrast Security, Inc.'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - All Rights Reserved'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - https://www.contrastsecurity.com/'''&lt;br /&gt;
   '''...'''&lt;br /&gt;
&lt;br /&gt;
* Press Ctrl-C to stop the Benchmark in Terminal 1. Note: on Windows, select &amp;quot;N&amp;quot; when asked &amp;quot;Terminate batch job (Y/N)?&amp;quot;&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x is stopped'''&lt;br /&gt;
   '''Copying Contrast report to results directory'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, generate scorecards in /Benchmark/scorecard&lt;br /&gt;
   '''$ ./createScorecards.sh''' (.bat on Windows)&lt;br /&gt;
   '''Analyzing results from Benchmark_1.2-Contrast.log'''&lt;br /&gt;
   '''Actual results file generated: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.csv'''&lt;br /&gt;
   '''Report written to: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
* Open the Benchmark Scorecard in your browser&lt;br /&gt;
   '''/Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
=== Hdiv Detection ===&lt;br /&gt;
&lt;br /&gt;
Hdiv has written their own instructions on how to run the detection component of their product on the Benchmark here: https://hdivsecurity.com/docs/features/benchmark/#how-to-run-hdiv-in-owasp-benchmark-project. You'll see that these instructions involve using the same crawler used to exercise all the test cases in the Benchmark, just like Contrast above.&lt;br /&gt;
&lt;br /&gt;
= RoadMap =&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.0 - Released April 15, 2015 - This initial release included over 20,000 test cases in 11 different vulnerability categories. As this initial version was not a runnable application, it was only suitable for assessing static analysis tools (SAST).&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 - Released May 23, 2015 - This update fixed some inaccurate test cases, and made sure that every vulnerability area included both True Positives and False Positives.&lt;br /&gt;
&lt;br /&gt;
Benchmark Scorecard Generator - Released July 10, 2015 - The ability to automatically and repeatably produce a scorecard of how well tools do against the Benchmark was released for most of the SAST tools supported by the Benchmark. Scorecards present graphical as well as statistical data on how well a tool does against the Benchmark down to the level of detail of how exactly it did against each individual test in the Benchmark. [https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html Here are the latest public scorecards].  Support for producing scorecards for additional tools is being added all the time and the current full set is documented on the '''Tool Support/Results''' and '''Quick Start''' tabs of this wiki.&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.2beta - Released Aug 15, 2015 - The first release of a fully runnable version of the Benchmark, to support assessing all types of vulnerability detection and prevention technologies, including DAST, IAST, RASP, WAFs, etc. This involved creating a user interface for every test case and enhancing each test case to make sure it's actually exploitable, not just using something that is theoretically weak. This release has under 3,000 test cases, to make it practical to scan the entire Benchmark with a DAST tool in a reasonable amount of time on commodity hardware.&lt;br /&gt;
&lt;br /&gt;
Benchmark 1.2 - Released June 5, 2016 -  Based on feedback from a number of DAST tool developers, and other vendors as well, we made the Benchmark more realistic in a number of ways to facilitate external DAST scanning, and also made the Benchmark more resilient against attack so it could properly survive various DAST vulnerability detection and exploit verification techniques.&lt;br /&gt;
&lt;br /&gt;
Plans for Benchmark 1.3:&lt;br /&gt;
&lt;br /&gt;
While we don't have hard and fast rules of exactly what we are going to do next, enhancements in the following areas are planned for the next release:&lt;br /&gt;
&lt;br /&gt;
* Add new vulnerability categories (e.g., XXE, Hibernate Injection)&lt;br /&gt;
* Add support for popular server side Java frameworks (e.g., Spring)&lt;br /&gt;
* Add web services test cases&lt;br /&gt;
&lt;br /&gt;
We are also starting to work on the ability to score WAFs/RASPs and other defensive technology against Benchmark.&lt;br /&gt;
&lt;br /&gt;
= FAQ =&lt;br /&gt;
&lt;br /&gt;
==1. How are the scores computed for the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
Each test case has a single vulnerability of a specific type. It's either a real vulnerability (True Positive) or not (a False Positive). We document all the test cases for each version of the Benchmark in the expectedresults-VERSION#.csv file (e.g., expectedresults-1.1.csv). This file lists the test case name, the CWE type of the vulnerability, and whether it is a True Positive or not. The Benchmark supports scorecard generators for computing exactly how a tool did when analyzing a version of the Benchmark. The full list of supported tools is on the Tools Support/Results tab. For each tool there is a parser that can parse that tool's native results format (usually XML). For each test case, the parser simply looks to see whether the tool reported a vulnerability of the expected type against the test case source code file (for SAST) or the test case URL (for DAST/IAST). If it did, and the test case is a True Positive, the tool gets credit for finding it. If the test case is a False Positive and the tool reports that type of finding, it's recorded as a False Positive. If the tool didn't report that type of vulnerability for a test case, it gets either a False Negative or a True Negative, as appropriate. After calculating all of the individual test case results, a scorecard is generated providing a chart and statistics for that tool across all the vulnerability categories, and pages are also created comparing different tools to each other in each vulnerability category (if multiple tools are being scored together).&lt;br /&gt;
&lt;br /&gt;
A detailed file explaining exactly how that tool did against each individual test case in that version of the Benchmark is produced as part of scorecard generation, and is available via the Actual Results link on each tool's scorecard page. (e.g., Benchmark_v1.1_Scorecard_for_FindBugs.csv).&lt;br /&gt;
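The counting described above boils down to simple rate arithmetic per vulnerability category. As a rough sketch (the exact formulas live in the BenchmarkScore source; the four counts here are made up for illustration), a tool's category score is commonly summarized as its true positive rate minus its false positive rate:&lt;br /&gt;

```shell
# Hypothetical counts for one vulnerability category.
tp=50; fn=10; fp=20; tn=40

# Compute the true positive rate, false positive rate, and the resulting
# "TPR minus FPR" style score (a value between -100 and 100).
awk -v tp=$tp -v fn=$fn -v fp=$fp -v tn=$tn 'BEGIN {
  tpr = tp / (tp + fn)     # fraction of real vulnerabilities reported
  fpr = fp / (fp + tn)     # fraction of safe test cases flagged anyway
  printf "TPR=%.2f FPR=%.2f score=%.1f\n", tpr, fpr, (tpr - fpr) * 100
}'
```

A tool that flags everything gets TPR = FPR = 1 and scores 0, which is why blindly reporting all findings doesn't help a tool's score.&lt;br /&gt;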
&lt;br /&gt;
==2. What if the tool I'm using doesn't have a scorecard generator for it?==&lt;br /&gt;
&lt;br /&gt;
Send us the results file! We'll be happy to create a parser for that tool so it's supported.&lt;br /&gt;
&lt;br /&gt;
==3. What if a tool finds other unexpected vulnerabilities?==&lt;br /&gt;
&lt;br /&gt;
We are sure there are vulnerabilities we didn't intend to be there and we are eliminating them as we find them. If you find some, let us know and we'll fix them too. We are primarily focused on unintentional vulnerabilities in the categories of vulnerabilities the Benchmark currently supports, since that is what is actually measured.&lt;br /&gt;
&lt;br /&gt;
Right now, two types of vulnerabilities that get reported are ignored by the scorecard generator:&lt;br /&gt;
# Vulnerabilities in categories not yet supported&lt;br /&gt;
# Vulnerabilities of a type that is supported, but reported in test cases not of that type&lt;br /&gt;
&lt;br /&gt;
In the case of #2, false positives reported in unexpected areas are also ignored, which is primarily a DAST problem. Right now those false positives are completely ignored, but we are thinking about including them in the false positive score in some fashion. We just haven't decided how yet.&lt;br /&gt;
&lt;br /&gt;
==4. How should I configure my tool to scan the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
All tools support various levels of configuration in order to improve their results. The Benchmark project, in general, is trying to '''compare out of the box capabilities of tools'''. However, if a few simple tweaks to a tool can improve that tool's score, that's fine. We'd like to understand what those simple tweaks are, and document them here, so others can repeat those tests in exactly the same way. For example, just turn on the 'test cookies and headers' flag, which is off by default. Or turn on the 'advanced' scan, so it will work harder and find more vulnerabilities. It's simple things like this we are talking about, not an extensive effort to teach the tool about the app, or 'expert' configuration of the tool.&lt;br /&gt;
&lt;br /&gt;
So, if you know of some simple tweaks to improve a tool's results, let us know what they are and we'll document them here so everyone can benefit and make it easier to do apples to apples comparisons. And we'll link to that guidance once we start documenting it, but we don't have any such guidance right now.&lt;br /&gt;
&lt;br /&gt;
==5. I'm having difficulty scanning the Benchmark with a DAST tool. How can I get it to work?==&lt;br /&gt;
&lt;br /&gt;
We've run into two primary issues that give DAST tools problems.&lt;br /&gt;
&lt;br /&gt;
a) The Benchmark Generates Lots of Cookies&lt;br /&gt;
&lt;br /&gt;
The Burp team pointed out a cookie bug in the 1.2beta Benchmark. Each Weak Randomness test case generates its own cookie, one per test case. This caused the creation of so many cookies that servers would eventually start returning 400 errors, because too many cookies were being submitted in a request. This was fixed in the Aug 27, 2015 update to the Benchmark by setting the path attribute for each of these cookies to the path of that individual test case. Now at most one of these cookies should be submitted with each request, eliminating the 'too many cookies' problem. However, if a DAST tool doesn't honor this path attribute, it may continue to send too many cookies, making the Benchmark unscannable for that tool. Burp Pro prior to 1.6.29 had this issue, but it was fixed in the 1.6.29 release.&lt;br /&gt;
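For illustration, a cookie scoped the way described above looks roughly like this (the test case name and value are hypothetical; only the shape of the path attribute matters):&lt;br /&gt;

```shell
# Print an illustrative Set-Cookie header: the path attribute limits the
# cookie to a single test case URL, so a client that honors it sends the
# cookie only on requests to that one endpoint.
printf 'Set-Cookie: BenchmarkTest00123=someValue; path=/benchmark/BenchmarkTest00123\n'
```

A client that ignores the path attribute would instead send every such cookie on every request, reintroducing the 400-error problem.&lt;br /&gt;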
&lt;br /&gt;
b) The Benchmark is a BIG Application&lt;br /&gt;
&lt;br /&gt;
Yes. It is, so you might have to give your scanner more memory than it normally uses by default in order to successfully scan the entire Benchmark. Please consult your tool vendor's documentation on how to give it more memory.&lt;br /&gt;
&lt;br /&gt;
Your machine itself might not have enough memory in the first place. For example, we were not able to successfully scan the 1.2beta with OWASP ZAP on a machine with only 8 GB of RAM. So you might need a more powerful machine, or a cloud-provided machine, to successfully scan the Benchmark with certain DAST tools. You may have similar problems with SAST tools against large versions of the Benchmark, like the 1.1 release.&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
&lt;br /&gt;
The following people, organizations, and many others, have contributed to this project and their contributions are much appreciated!&lt;br /&gt;
&lt;br /&gt;
* Lots of Vendors - Many vendors have provided us with either trial licenses we can use, or they have run their tools themselves and either sent us results files, or written and contributed scorecard generators for their tool. Many have also provided valuable feedback so we can make the Benchmark more accurate and more realistic.&lt;br /&gt;
* Juan Gama - Development of initial release and continued support&lt;br /&gt;
* Ken Prole - Assistance with automated scorecard development using CodeDx&lt;br /&gt;
* Nick Sanidas - Development of initial release&lt;br /&gt;
* Denim Group - Contribution of scan results to facilitate scorecard development&lt;br /&gt;
* Tasos Laskos - Significant feedback on the DAST version of the Benchmark&lt;br /&gt;
* Ann Campbell - From SonarSource - for fixing our SonarQube results parser&lt;br /&gt;
* Dhiraj Mishra - OWASP Member - contributed SQLi/XSS fuzz vectors as initial contribution towards adding support for WAF/RASP scoring&lt;br /&gt;
&lt;br /&gt;
[[File:CWE_Logo.jpeg|link=https://cwe.mitre.org/]] - The CWE project for providing a mapping mechanism to easily map test cases to issues found by vulnerability detection tools.&lt;br /&gt;
&lt;br /&gt;
We are looking for volunteers. Please contact [mailto:dave.wichers@owasp.org Dave Wichers] if you are interested in contributing new test cases, tool results run against the benchmark, or anything else.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=255697</id>
		<title>Free for Open Source Application Security Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=255697"/>
				<updated>2019-10-24T15:41:47Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* IAST Tools */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Introduction ==&lt;br /&gt;
OWASP's mission is to help the world improve the security of its software. One of the best ways OWASP can do that is to help Open Source developers improve the software they are producing that everyone else relies on. As such, the following lists of '''automated vulnerability detection tools''' that are '''free for open source''' projects have been gathered together here to raise awareness of their availability.&lt;br /&gt;
&lt;br /&gt;
We would encourage open source projects to use the following types of tools to improve the security and quality of their code:&lt;br /&gt;
* Static Application Security Testing ([[SAST]]) Tools &lt;br /&gt;
* Dynamic Application Security Testing ([[DAST]]) Tools - (Primarily for web apps)&lt;br /&gt;
* Interactive Application Security Testing (IAST) Tools - (Primarily for web apps and web APIs)&lt;br /&gt;
* Keeping Open Source libraries up-to-date (to avoid [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]])&lt;br /&gt;
* Static Code Quality Tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' &amp;lt;b&amp;gt;OWASP does not endorse any of the Vendors or Scanning Tools by listing them below. They are simply listed if we believe they are free for use by open source projects. We have made every effort to provide this information as accurately as possible. If you are the vendor of a free for open source tool and think this information is incomplete or incorrect, please send an e-mail to dave.wichers (at) owasp.org and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free for Open Source Tools ==&lt;br /&gt;
Tools that are free for open source projects in each of the above categories are listed below.&lt;br /&gt;
&lt;br /&gt;
=== SAST Tools ===&lt;br /&gt;
OWASP already maintains a page of known SAST tools: [[Source Code Analysis Tools]], which includes a list of those that are &amp;quot;Open Source or Free Tools Of This Type&amp;quot;. Any such tools could certainly be used. One such cloud service that looks promising is:&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM.com] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Go (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
In addition, we are aware of the following commercial SAST tools that are free for Open Source projects:&lt;br /&gt;
* [https://scan.coverity.com/ Coverity Scan Static Analysis] - Can be lashed into Travis-CI so it's done automatically with online resources. Supports over a dozen programming languages as documented here in the section [https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html Comprehensive support for these programming languages and frameworks].&lt;br /&gt;
* [https://www.reshiftsecurity.com reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to predict false positives. Supports Java, with support for NodeJS and JavaScript planned for sometime in 2019. Per the Pricing section of its site, it is free for public repositories.&lt;br /&gt;
&lt;br /&gt;
=== DAST Tools ===&lt;br /&gt;
If your project has a web application component, we recommend running automated scans against it to look for vulnerabilities. OWASP maintains a page of known DAST Tools: [[:Category:Vulnerability Scanning Tools|Vulnerability Scanning Tools]], and the '''Licence''' column on this page indicates which of those tools have free capabilities. Our primary recommendation is to use one of these:&lt;br /&gt;
* [[OWASP Zed Attack Proxy Project|OWASP ZAP]] - A full featured free and open source DAST tool that includes both automated scanning for vulnerabilities and tools to assist expert manual web app pen testing.&lt;br /&gt;
** The ZAP team has also been working hard to make it easier to integrate ZAP into your CI/CD pipeline. (e.g., here's a [https://www.we45.com/blog/how-to-integrate-zap-into-jenkins-ci-pipeline-we45-blog blog post on how to integrate ZAP with Jenkins]).&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - Arachni is a commercially supported scanner, but it's free for most use cases, including scanning open source projects.&lt;br /&gt;
We are not aware of any other commercial grade tools that offer their full featured DAST product free for open source projects.&lt;br /&gt;
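The CI/CD integration mentioned for ZAP above can be sketched as a pipeline job that runs ZAP's baseline scan from its official Docker image. This is a minimal illustration only; the job name, CI syntax, and target URL are placeholders you would adapt to your own pipeline:

```yaml
# Hypothetical GitLab-CI-style job running the OWASP ZAP baseline scan.
# Assumes a Docker-capable runner; https://example.org is a placeholder target.
zap-baseline:
  image: owasp/zap2docker-stable
  script:
    # zap-baseline.py spiders the target for a short time, passively scans it,
    # and exits non-zero when warnings or failures are found (failing the job).
    - zap-baseline.py -t https://example.org
```

The baseline scan only passively analyzes responses, so it is generally safe to run against a test deployment on every commit; a full active scan is more intrusive and is usually run less frequently.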
&lt;br /&gt;
=== IAST Tools ===&lt;br /&gt;
IAST tools are typically geared to analyze Web Applications and Web APIs, but that is vendor specific. There may be IAST products that can perform good security analysis on non-web applications as well.&lt;br /&gt;
&lt;br /&gt;
We are aware of only one IAST Tool that is free after registration at this time:&lt;br /&gt;
* [https://www.contrastsecurity.com/contrast-community-edition Contrast Community Edition (CE)] - Fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java and .NET only.&lt;br /&gt;
&lt;br /&gt;
=== Open Source Software (OSS) Security Tools ===&lt;br /&gt;
OSS refers to the open source libraries or components that application developers leverage to quickly develop new applications and add features to existing apps. Gartner refers to the analysis of the security of these components as software composition analysis (SCA). So OSS Analysis and SCA are the same thing.&lt;br /&gt;
&lt;br /&gt;
OWASP recommends that all software projects generally try to keep the libraries they use as up-to-date as possible to reduce the likelihood of [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]]. There are two recommended approaches for this:&lt;br /&gt;
&lt;br /&gt;
==== Keeping Your Libraries Updated ====&lt;br /&gt;
Using the latest version of each library is recommended because security issues are frequently fixed 'silently' by the component maintainer. By silently, we mean without publishing a [https://cve.mitre.org/ CVE] for the security fix.&lt;br /&gt;
* [https://www.mojohaus.org/versions-maven-plugin/ Maven Versions plugin]&lt;br /&gt;
** For Maven projects, this plugin can generate a report of all dependencies used and which of them have upgrades available, either as a direct report or as part of the overall project documentation via: mvn site.&lt;br /&gt;
* Dependabot - https://dependabot.com/&lt;br /&gt;
** A GitHub-only service that creates pull requests to keep your dependencies up-to-date. It automatically generates a pull request for each dependency you can upgrade, which you can then ignore or accept as you like. It supports many languages.&lt;br /&gt;
** Recommended for all open source projects maintained on GitHub!&lt;br /&gt;
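The Maven Versions plugin mentioned above can also be run on a schedule; here is a minimal sketch of a nightly CI job that produces the update report. The job and image names are placeholders; the two goals shown are the plugin's standard update-report goals:

```yaml
# Hypothetical nightly CI job (generic YAML) reporting available dependency updates.
# Assumes a Maven project; the image name is a placeholder.
dependency-updates:
  image: maven:3-jdk-8
  script:
    # Lists each dependency alongside any newer versions available in the repositories
    - mvn versions:display-dependency-updates
    # The same check for the plugins the build itself uses
    - mvn versions:display-plugin-updates
```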
&lt;br /&gt;
==== Detecting Known Vulnerable Components ====&lt;br /&gt;
As an alternative to, or in addition to, trying to keep all your components up-to-date, a project can specifically monitor whether any of the components it uses have known vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
Free tools of this type:&lt;br /&gt;
* OWASP maintains its own open source tool, [[OWASP Dependency Check]], which is free for anyone to use.&lt;br /&gt;
* GitHub: Security alerts for vulnerable dependencies - https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/&lt;br /&gt;
** A native GitHub feature that reports known vulnerable dependencies in your GitHub projects. Supports: Java, .NET, JavaScript, Ruby, and Python. Your GitHub projects are automatically signed up for this service.&lt;br /&gt;
Commercial tools of this type that are free for open source:&lt;br /&gt;
* Contrast Community Edition (CE) (mentioned earlier) also has both Known Vulnerable Component detection and Available Updates reporting for OSS. CE supports Java only.&lt;br /&gt;
* Snyk - https://www.snyk.io - Supports Node.js, Ruby, Java, Python, Scala, Golang, .NET, PHP - Latest list here: https://snyk.io/docs&lt;br /&gt;
** A Commercial tool that identifies vulnerable components and integrates with numerous CI/CD pipelines. It is free for open source: https://snyk.io/plans&lt;br /&gt;
** If you don't want to grant Snyk write access to your repo (since it can auto-create pull requests), you can use the Command Line Interface (CLI) instead. See: https://snyk.io/docs/using-snyk. If you do this and want it to be free, you have to configure Snyk so it knows the project is open source: https://support.snyk.io/snyk-cli/how-can-i-set-a-snyk-cli-project-as-open-source&lt;br /&gt;
*** Another benefit of using the Snyk CLI is that it won't auto-create pull requests for you (which can make these 'issues' more public than you might prefer).&lt;br /&gt;
** They also provide detailed information and remediation guidance for known vulnerabilities here: https://snyk.io/vuln&lt;br /&gt;
* SourceClear - https://www.sourceclear.com/ - Supports: Java, Ruby, JavaScript, Python, Objective-C, Go, PHP&lt;br /&gt;
** They have a free trial right from their [https://www.sourceclear.com/ home page]. When the 30 day trial expires, it converts into a free &amp;quot;Personal Account&amp;quot; per: &amp;quot;Upgrade at any time to get the features that matter most to you, or choose the Personal plan when your trial ends.&amp;quot; Personal Account described here: https://www.sourceclear.com/pricing/&lt;br /&gt;
** They also make their component vulnerability data (for publicly known vulns) free to search: https://www.sourceclear.com/vulnerability-database/search#_ (Very useful when trying to research a particular library)&lt;br /&gt;
* WhiteSource Bolt - Supports 200+ programming languages. https://www.whitesourcesoftware.com/&lt;br /&gt;
** Azure version: https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt&lt;br /&gt;
** GitHub version: https://github.com/apps/whitesource-bolt-for-github Available starting in Nov. 2018.&lt;br /&gt;
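CLI-based tools like Snyk above can also run as a pipeline step. A minimal sketch, assuming SNYK_TOKEN is configured as a CI secret and the project is a Node.js app (the job name and image are placeholders):

```yaml
# Hypothetical CI job running the Snyk CLI against the project manifest.
snyk-test:
  image: node:10
  script:
    - npm install -g snyk
    # 'snyk test' checks dependencies against Snyk's vulnerability database and
    # exits non-zero when known-vulnerable components are found, failing the build.
    - snyk test
```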
&lt;br /&gt;
=== Code Quality tools ===&lt;br /&gt;
Quality has a significant correlation to security. As such, we recommend open source projects also consider using good code quality tools. A few that we are aware of are:&lt;br /&gt;
* SpotBugs (https://github.com/spotbugs/spotbugs) - Open source code quality tool for Java&lt;br /&gt;
** This is the actively maintained fork of FindBugs, so if you use FindBugs, you should switch to SpotBugs.&lt;br /&gt;
** SpotBugs users should add the FindSecBugs plugin (http://find-sec-bugs.github.io/) to their SpotBugs setup, as it significantly improves on the very basic security checking native to SpotBugs.&lt;br /&gt;
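As a sketch of the SpotBugs + FindSecBugs pairing recommended above, a Gradle build might wire the detector plugin in as follows (version numbers are illustrative; check each project's site for current releases):

```groovy
// Hypothetical build.gradle fragment adding FindSecBugs to a SpotBugs setup.
plugins {
    id 'com.github.spotbugs' version '2.0.0'
}
repositories {
    mavenCentral()
}
dependencies {
    // FindSecBugs contributes security-focused detectors on top of
    // SpotBugs' own (mostly quality-oriented) checks.
    spotbugsPlugins 'com.h3xstream.findsecbugs:findsecbugs-plugin:1.10.1'
}
```

With this in place, the SpotBugs analysis tasks the plugin registers (e.g., for the main source set) include the FindSecBugs security detectors automatically.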
&lt;br /&gt;
* SonarQube (https://www.sonarqube.org/)&lt;br /&gt;
** This is a commercially supported, very popular, free (and commercial) code quality tool. It includes most, if not all, of the FindSecBugs security rules, plus many more for quality, and offers a free online CI setup you can run against your open source projects. SonarQube supports numerous languages: https://www.sonarqube.org/features/multi-languages/&lt;br /&gt;
&lt;br /&gt;
Please let us know if you are aware of any other high quality application security tools that are free for open source (or simply add them to this page). We are particularly interested in identifying and listing commercial tools that are free for open source, as they tend to be more capable and easier to use than open source (free) tools. If you are aware of any missing from this list, please add them, or let us know (dave.wichers (at) owasp.org) and we'll confirm they are free and add them for you. Please encourage your favorite commercial tool vendor to make their tool free for open source projects as well!&lt;br /&gt;
&lt;br /&gt;
Finally, please forward this page to the open source projects you rely on and encourage them to use these free tools!&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=255696</id>
		<title>Free for Open Source Application Security Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=255696"/>
				<updated>2019-10-24T15:40:42Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Free for Open Source Tools */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Introduction ==&lt;br /&gt;
OWASP's mission is to help the world improve the security of its software. One of the best ways OWASP can do that is to help Open Source developers improve the software they are producing that everyone else relies on. As such, the following lists of '''automated vulnerability detection tools''' that are '''free for open source''' projects have been gathered together here to raise awareness of their availability.&lt;br /&gt;
&lt;br /&gt;
We would encourage open source projects to use the following types of tools to improve the security and quality of their code:&lt;br /&gt;
* Static Application Security Testing ([[SAST]]) Tools &lt;br /&gt;
* Dynamic Application Security Testing ([[DAST]]) Tools - (Primarily for web apps)&lt;br /&gt;
* Interactive Application Security Testing (IAST) Tools - (Primarily for web apps and web APIs)&lt;br /&gt;
* Keeping Open Source libraries up-to-date (to avoid [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]])&lt;br /&gt;
* Static Code Quality Tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' &amp;lt;b&amp;gt;OWASP does not endorse any of the Vendors or Scanning Tools by listing them below. They are simply listed if we believe they are free for use by open source projects. We have made every effort to provide this information as accurately as possible. If you are the vendor of a free for open source tool and think this information is incomplete or incorrect, please send an e-mail to dave.wichers (at) owasp.org and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free for Open Source Tools ==&lt;br /&gt;
Tools that are free for open source projects in each of the above categories are listed below.&lt;br /&gt;
&lt;br /&gt;
=== SAST Tools ===&lt;br /&gt;
OWASP already maintains a page of known SAST tools: [[Source Code Analysis Tools]], which includes a list of those that are &amp;quot;Open Source or Free Tools Of This Type&amp;quot;. Any such tools could certainly be used. One such cloud service that looks promising is:&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM.com] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Go (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
In addition, we are aware of the following commercial SAST tools that are free for Open Source projects:&lt;br /&gt;
* [https://scan.coverity.com/ Coverity Scan Static Analysis] - Can be integrated with Travis-CI so scans run automatically using online resources. Supports over a dozen programming languages, as documented in the section [https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html Comprehensive support for these programming languages and frameworks].&lt;br /&gt;
* [https://www.reshiftsecurity.com reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to predict false positives. Supports Java, with support for NodeJS and JavaScript planned for sometime in 2019. Per the Pricing section of its site, it is free for public repositories.&lt;br /&gt;
&lt;br /&gt;
=== DAST Tools ===&lt;br /&gt;
If your project has a web application component, we recommend running automated scans against it to look for vulnerabilities. OWASP maintains a page of known DAST Tools: [[:Category:Vulnerability Scanning Tools|Vulnerability Scanning Tools]], and the '''Licence''' column on this page indicates which of those tools have free capabilities. Our primary recommendation is to use one of these:&lt;br /&gt;
* [[OWASP Zed Attack Proxy Project|OWASP ZAP]] - A full featured free and open source DAST tool that includes both automated scanning for vulnerabilities and tools to assist expert manual web app pen testing.&lt;br /&gt;
** The ZAP team has also been working hard to make it easier to integrate ZAP into your CI/CD pipeline. (e.g., here's a [https://www.we45.com/blog/how-to-integrate-zap-into-jenkins-ci-pipeline-we45-blog blog post on how to integrate ZAP with Jenkins]).&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - Arachni is a commercially supported scanner, but it's free for most use cases, including scanning open source projects.&lt;br /&gt;
We are not aware of any other commercial grade tools that offer their full featured DAST product free for open source projects.&lt;br /&gt;
&lt;br /&gt;
=== IAST Tools ===&lt;br /&gt;
IAST tools are typically geared to analyze Web Applications and Web APIs, but that is vendor specific. There may be IAST products that can perform good security analysis on non-web applications as well.&lt;br /&gt;
&lt;br /&gt;
We are aware of only one IAST Tool that is free after registration at this time:&lt;br /&gt;
* [https://www.contrastsecurity.com/contrast-community-edition Contrast Community Edition (CE)] - Fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java only.&lt;br /&gt;
&lt;br /&gt;
=== Open Source Software (OSS) Security Tools ===&lt;br /&gt;
OSS refers to the open source libraries or components that application developers leverage to quickly develop new applications and add features to existing apps. Gartner refers to the analysis of the security of these components as software composition analysis (SCA). So OSS Analysis and SCA are the same thing.&lt;br /&gt;
&lt;br /&gt;
OWASP recommends that all software projects generally try to keep the libraries they use as up-to-date as possible to reduce the likelihood of [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]]. There are two recommended approaches for this:&lt;br /&gt;
&lt;br /&gt;
==== Keeping Your Libraries Updated ====&lt;br /&gt;
Using the latest version of each library is recommended because security issues are frequently fixed 'silently' by the component maintainer. By silently, we mean without publishing a [https://cve.mitre.org/ CVE] for the security fix.&lt;br /&gt;
* [https://www.mojohaus.org/versions-maven-plugin/ Maven Versions plugin]&lt;br /&gt;
** For Maven projects, this plugin can generate a report of all dependencies used and which of them have upgrades available, either as a direct report or as part of the overall project documentation via: mvn site.&lt;br /&gt;
* Dependabot - https://dependabot.com/&lt;br /&gt;
** A GitHub-only service that creates pull requests to keep your dependencies up-to-date. It automatically generates a pull request for each dependency you can upgrade, which you can then ignore or accept as you like. It supports many languages.&lt;br /&gt;
** Recommended for all open source projects maintained on GitHub!&lt;br /&gt;
&lt;br /&gt;
==== Detecting Known Vulnerable Components ====&lt;br /&gt;
As an alternative to, or in addition to, trying to keep all your components up-to-date, a project can specifically monitor whether any of the components it uses have known vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
Free tools of this type:&lt;br /&gt;
* OWASP maintains its own open source tool, [[OWASP Dependency Check]], which is free for anyone to use.&lt;br /&gt;
* GitHub: Security alerts for vulnerable dependencies - https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/&lt;br /&gt;
** A native GitHub feature that reports known vulnerable dependencies in your GitHub projects. Supports: Java, .NET, JavaScript, Ruby, and Python. Your GitHub projects are automatically signed up for this service.&lt;br /&gt;
Commercial tools of this type that are free for open source:&lt;br /&gt;
* Contrast Community Edition (CE) (mentioned earlier) also has both Known Vulnerable Component detection and Available Updates reporting for OSS. CE supports Java only.&lt;br /&gt;
* Snyk - https://www.snyk.io - Supports Node.js, Ruby, Java, Python, Scala, Golang, .NET, PHP - Latest list here: https://snyk.io/docs&lt;br /&gt;
** A Commercial tool that identifies vulnerable components and integrates with numerous CI/CD pipelines. It is free for open source: https://snyk.io/plans&lt;br /&gt;
** If you don't want to grant Snyk write access to your repo (since it can auto-create pull requests), you can use the Command Line Interface (CLI) instead. See: https://snyk.io/docs/using-snyk. If you do this and want it to be free, you have to configure Snyk so it knows the project is open source: https://support.snyk.io/snyk-cli/how-can-i-set-a-snyk-cli-project-as-open-source&lt;br /&gt;
*** Another benefit of using the Snyk CLI is that it won't auto-create pull requests for you (which can make these 'issues' more public than you might prefer).&lt;br /&gt;
** They also provide detailed information and remediation guidance for known vulnerabilities here: https://snyk.io/vuln&lt;br /&gt;
* SourceClear - https://www.sourceclear.com/ - Supports: Java, Ruby, JavaScript, Python, Objective-C, Go, PHP&lt;br /&gt;
** They have a free trial right from their [https://www.sourceclear.com/ home page]. When the 30 day trial expires, it converts into a free &amp;quot;Personal Account&amp;quot; per: &amp;quot;Upgrade at any time to get the features that matter most to you, or choose the Personal plan when your trial ends.&amp;quot; Personal Account described here: https://www.sourceclear.com/pricing/&lt;br /&gt;
** They also make their component vulnerability data (for publicly known vulns) free to search: https://www.sourceclear.com/vulnerability-database/search#_ (Very useful when trying to research a particular library)&lt;br /&gt;
* WhiteSource Bolt - Supports 200+ programming languages. https://www.whitesourcesoftware.com/&lt;br /&gt;
** Azure version: https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt&lt;br /&gt;
** GitHub version: https://github.com/apps/whitesource-bolt-for-github Available starting in Nov. 2018.&lt;br /&gt;
&lt;br /&gt;
=== Code Quality tools ===&lt;br /&gt;
Quality has a significant correlation to security. As such, we recommend open source projects also consider using good code quality tools. A few that we are aware of are:&lt;br /&gt;
* SpotBugs (https://github.com/spotbugs/spotbugs) - Open source code quality tool for Java&lt;br /&gt;
** This is the actively maintained fork of FindBugs, so if you use FindBugs, you should switch to SpotBugs.&lt;br /&gt;
** SpotBugs users should add the FindSecBugs plugin (http://find-sec-bugs.github.io/) to their SpotBugs setup, as it significantly improves on the very basic security checking native to SpotBugs.&lt;br /&gt;
&lt;br /&gt;
* SonarQube (https://www.sonarqube.org/)&lt;br /&gt;
** This is a commercially supported, very popular, free (and commercial) code quality tool. It includes most, if not all, of the FindSecBugs security rules, plus many more for quality, and offers a free online CI setup you can run against your open source projects. SonarQube supports numerous languages: https://www.sonarqube.org/features/multi-languages/&lt;br /&gt;
&lt;br /&gt;
Please let us know if you are aware of any other high quality application security tools that are free for open source (or simply add them to this page). We are particularly interested in identifying and listing commercial tools that are free for open source, as they tend to be more capable and easier to use than open source (free) tools. If you are aware of any missing from this list, please add them, or let us know (dave.wichers (at) owasp.org) and we'll confirm they are free and add them for you. Please encourage your favorite commercial tool vendor to make their tool free for open source projects as well!&lt;br /&gt;
&lt;br /&gt;
Finally, please forward this page to the open source projects you rely on and encourage them to use these free tools!&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=255695</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=255695"/>
				<updated>2019-10-24T15:38:23Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Open Source or Free Tools Of This Type */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Static_Code_Analysis | Source code analysis]] tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during the software development phase itself, this is a powerful phase within the development life cycle to employ such tools, as it provides immediate feedback to the developer on issues they might be introducing into the code during code development itself. This immediate feedback is very useful, especially when compared to finding vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL Injection Flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: Must support your programming language, but not usually a key factor once it does.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to setup/use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per org, per app, per line of code analyzed. Consulting licenses are frequently different than end user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
Disclaimer: &amp;lt;b&amp;gt;The tools listed in the tables below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them in the table below.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think that this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source code vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino, and Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [https://discotek.ca/deepdive.xhtml Deep Dive] - Byte code analysis tool for discovering vulnerabilities in Java deployments (Ear, War, Jar).&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security specific plugin for SpotBugs that significantly improves SpotBugs's ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] - Scans C and C++&lt;br /&gt;
* [https://github.com/golangci/golangci-lint GolangCI-Lint] - A Go Linters aggregator - One of the Linters is [https://github.com/securego/gosec gosec (Go Security)], which is off by default but can easily be enabled.&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more.  ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits on .NET applications. It will find SQL injections, LDAP injections, XXE, cryptography weakness, XSS and more.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PREfast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last updated in 2006.&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://pyre-check.org/ Pyre] - A performant type-checker for Python 3, that also has [https://pyre-check.org/docs/static-analysis.html limited security/data flow analysis] capabilities.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS Open Source is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [https://discotek.ca/sinktank.xhtml Sink Tank] - Byte code static code analyzer for performing source/sink (taint) analysis.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
&lt;br /&gt;
[https://docs.gitlab.com/ee/user/application_security/sast/index.html#supported-languages-and-frameworks GitLab has integrated a free SAST tool for a number of different languages natively into GitLab. So you might be able to use that, or at least identify a free SAST tool for the language you need from that list].&lt;br /&gt;
&lt;br /&gt;
An even broader list of free static analysis tools (not just for security) for lots of different languages is here called: [https://endler.dev/awesome-static-analysis/ Awesome Static Analysis]&lt;br /&gt;
&lt;br /&gt;
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation; has capability to generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST, and mobile security testing, as well as known-vulnerability detection for open source libraries, as a cloud service. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source AppScan Source] (IBM)&lt;br /&gt;
* [https://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://bugscout.io/en/ bugScout] (Nalbatech, Formally Buguroo)&lt;br /&gt;
* [https://www.castsoftware.com/products/application-intelligence-platform CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages. [https://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards AIP's security specific coverage is here].&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [https://www.grammatech.com/products/codesonar CodeSonar] (GrammaTech) - Supports C, C++, Java, and C# and maps findings to the OWASP Top 10 vulnerabilities.&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] (Contrast Security) - Performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis to provide code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [https://www.microfocus.com/en-us/products/static-code-analysis-sast Fortify] (Micro Focus, Formally HP)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection] (Hdiv Security) - Performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis to provide code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis KlocWork] (KlocWork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as a Visual Studio IDE extension, Azure DevOps extension, and Command Line (CLI) executable.&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://www.reshiftsecurity.com reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to give a prediction on false positives. Supports Java with future support for NodeJS and JavaScript planned for sometime in 2019.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for Java and PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code for insecure coding and configurations automatically, as an IDE plugin for Eclipse, IntelliJ, and Visual Studio. Supports Java, .NET, PHP, and JavaScript.&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) - Performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis with simulated attacks to provide code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java and Scala for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
* [[Free for Open Source Application Security Tools]] - This page lists the Commercial Source Code Analysis Tools (SAST) we know of that are free for Open Source&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=254840</id>
		<title>Benchmark</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=254840"/>
				<updated>2019-09-22T22:18:46Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Add Semmle LGTM Scorecard Generation Support to Tools tab&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Main = &lt;br /&gt;
 &amp;lt;div style=&amp;quot;width:100%;height:100px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Lab_big.jpg|link=OWASP_Project_Stages#tab.3DLab_Projects]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== OWASP Benchmark Project  ==&lt;br /&gt;
The OWASP Benchmark for Security Automation (OWASP Benchmark) is a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services (henceforth simply referred to as 'tools'). Without the ability to measure these tools, it is difficult to understand their strengths and weaknesses, and compare them to each other. Each version of the OWASP Benchmark contains thousands of test cases that are fully runnable and exploitable, each of which maps to the appropriate CWE number for that vulnerability.&lt;br /&gt;
&lt;br /&gt;
You can use the OWASP Benchmark with [[Source_Code_Analysis_Tools | Static Application Security Testing (SAST)]] tools, [[:Category:Vulnerability_Scanning_Tools | Dynamic Application Security Testing (DAST)]] tools like OWASP [[ZAP]] and Interactive Application Security Testing (IAST) tools. Benchmark is implemented in Java.  Future versions may expand to include other languages.&lt;br /&gt;
&lt;br /&gt;
==Benchmark Project Scoring Philosophy==&lt;br /&gt;
&lt;br /&gt;
Security tools (SAST, DAST, and IAST) are amazing when they find a complex vulnerability in your code.  But with widespread misunderstanding of the specific vulnerabilities automated tools cover, end users are often left with a false sense of security.&lt;br /&gt;
&lt;br /&gt;
We are on a quest to measure just how good these tools are at discovering and properly diagnosing security problems in applications. We rely on the [http://en.wikipedia.org/wiki/Receiver_operating_characteristic long history] of military and medical evaluation of detection technology as a foundation for our research. Therefore, the test suite tests both real and fake vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
There are four possible test outcomes in the Benchmark:&lt;br /&gt;
&lt;br /&gt;
# Tool correctly identifies a real vulnerability (True Positive - TP)&lt;br /&gt;
# Tool fails to identify a real vulnerability (False Negative - FN)&lt;br /&gt;
# Tool correctly ignores a false alarm (True Negative - TN)&lt;br /&gt;
# Tool fails to ignore a false alarm (False Positive - FP)&lt;br /&gt;
&lt;br /&gt;
We can learn a lot about a tool from these four metrics. Consider a tool that simply flags every line of code as vulnerable. This tool will perfectly identify all vulnerabilities!  But it will also have 100% false positives and thus adds no value.  Similarly, consider a tool that reports absolutely nothing. This tool will have zero false positives, but will also identify zero real vulnerabilities and is just as worthless.  You can even imagine a tool that flips a coin for each test case to decide whether to report it as vulnerable. The result would be roughly 50% true positives and 50% false positives.  We need a way to distinguish valuable security tools from these trivial ones.&lt;br /&gt;
&lt;br /&gt;
The line connecting all these points, from (0,0) to (100,100), roughly translates to &amp;quot;random guessing.&amp;quot; The ultimate measure of a security tool is how much better it can do than this line.  The diagram below shows how we evaluate security tools against the Benchmark.&lt;br /&gt;
&lt;br /&gt;
[[File:Wbe guide.png]]&lt;br /&gt;
&lt;br /&gt;
A point plotted on this chart provides a visual indication of how well a tool did considering both the True Positives the tool reported, as well as the False Positives it reported. We also want to compute an individual score for that point in the range 0 - 100, which we call the Benchmark Accuracy Score.&lt;br /&gt;
&lt;br /&gt;
The Benchmark Accuracy Score is essentially a [https://en.wikipedia.org/wiki/Youden%27s_J_statistic Youden Index], a standard way of summarizing the accuracy of a set of tests. Youden's index is one of the oldest measures of diagnostic accuracy. It is a global measure of test performance, used to evaluate the overall discriminative power of a diagnostic procedure and to compare it with other tests. Youden's index is calculated by subtracting 1 from the sum of a test's sensitivity and specificity, expressed as fractions rather than percentages: (sensitivity + specificity) - 1. For a test with poor diagnostic accuracy, Youden's index equals 0; for a perfect test, it equals 1.&lt;br /&gt;
&lt;br /&gt;
  So for example, if a tool has a True Positive Rate (TPR) of .98 (i.e., 98%) &lt;br /&gt;
    and False Positive Rate (FPR) of .05 (i.e., 5%)&lt;br /&gt;
  Sensitivity = TPR (.98)&lt;br /&gt;
  Specificity = 1-FPR (.95)&lt;br /&gt;
  So the Youden Index is (.98+.95) - 1 = .93&lt;br /&gt;
  &lt;br /&gt;
  And this would equate to a Benchmark score of 93 (since we normalize this to the range 0 - 100)&lt;br /&gt;
&lt;br /&gt;
On the graph, the Benchmark Score is the length of the line from the point down to the diagonal “guessing” line. Note that a Benchmark score can actually be negative if the point is below the line, which happens when the False Positive Rate is higher than the True Positive Rate.&lt;br /&gt;
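The score arithmetic above can be sketched in a few lines. This is a minimal illustration only, not the project's official BenchmarkScore tool:&lt;br /&gt;

```python
# Minimal sketch of the Benchmark Accuracy Score described above.
# Illustrative only; official scoring is done by the BenchmarkScore tool.

def benchmark_score(tpr, fpr):
    """Youden index normalized to the range 0-100.

    tpr and fpr are fractions, e.g. 0.98 and 0.05.
    """
    sensitivity = tpr            # TPR
    specificity = 1.0 - fpr      # 1 - FPR
    youden = (sensitivity + specificity) - 1.0   # equals tpr - fpr
    return round(100.0 * youden)

print(benchmark_score(0.98, 0.05))  # the worked example above: 93
print(benchmark_score(0.50, 0.50))  # the coin-flip tool: 0
```

A negative return value corresponds to a point below the “guessing” line.&lt;br /&gt;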
&lt;br /&gt;
==Benchmark Validity==&lt;br /&gt;
&lt;br /&gt;
The Benchmark tests are not exactly like real applications. The tests are derived from coding patterns observed in real applications, but the majority of them are considerably '''simpler''' than real applications. That is, most real world applications will be considerably harder to successfully analyze than the OWASP Benchmark Test Suite. Although the tests are based on real code, it is possible that some tests may have coding patterns that don't occur frequently in real code.&lt;br /&gt;
&lt;br /&gt;
Remember, we are trying to test the capabilities of the tools and make them explicit, so that users can make informed decisions about what tools to use, how to use them, and what results to expect.  This is exactly aligned with the OWASP mission to make application security visible.&lt;br /&gt;
&lt;br /&gt;
==Generating Benchmark Scores==&lt;br /&gt;
&lt;br /&gt;
Anyone can use this Benchmark to evaluate vulnerability detection tools. The basic steps are:&lt;br /&gt;
# Download the Benchmark from GitHub&lt;br /&gt;
# Run your tools against the Benchmark&lt;br /&gt;
# Run the BenchmarkScore tool on the reports from your tools&lt;br /&gt;
&lt;br /&gt;
That's it!&lt;br /&gt;
&lt;br /&gt;
Full details on how to do this are at the bottom of the page on the Quick_Start tab.&lt;br /&gt;
&lt;br /&gt;
We encourage vendors, open source tool projects, and end users to verify their application security tools against the Benchmark. To ensure that the results are fair and useful, we ask that you follow a few simple rules when publishing results; we won't recognize any results that aren't easily reproducible. Please include:&lt;br /&gt;
&lt;br /&gt;
# A description of the default “out-of-the-box” installation, version numbers, etc…&lt;br /&gt;
# Any and all configuration, tailoring, onboarding, etc… performed to make the tool run&lt;br /&gt;
# Any and all changes to default security rules, tests, or checks used to achieve the results&lt;br /&gt;
# Easily reproducible steps to run the tool&lt;br /&gt;
&lt;br /&gt;
== Reporting Format==&lt;br /&gt;
&lt;br /&gt;
The Benchmark includes tools to interpret raw tool output, compare it to the expected results, and generate summary charts and graphs. We use the following table format in order to capture all the information generated during the evaluation.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Security Category&lt;br /&gt;
! TP&lt;br /&gt;
! FN&lt;br /&gt;
! TN&lt;br /&gt;
! FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total&lt;br /&gt;
! TPR&lt;br /&gt;
! FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Score&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| General security category for test cases.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positives''': Tests with real vulnerabilities that were correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Negative''': Tests with real vulnerabilities that were not correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Negative''': Tests with fake vulnerabilities that were correctly not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive''': Tests with fake vulnerabilities that were incorrectly reported as vulnerable by the tool.&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Total test cases for this category.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positive Rate''': TP / ( TP + FN ) - Also referred to as Recall or Sensitivity, as defined at [https://en.wikipedia.org/wiki/Precision_and_recall Wikipedia].&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive Rate''': FP / ( FP + TN ).&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Normalized distance from the “guess line”: TPR - FPR.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Command Injection&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Etc...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | &lt;br /&gt;
! Total TP&lt;br /&gt;
! Total FN&lt;br /&gt;
! Total TN&lt;br /&gt;
! Total FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total TC&lt;br /&gt;
! Average TPR&lt;br /&gt;
! Average FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Average Score&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Code Repo and Build/Run Instructions ==&lt;br /&gt;
&lt;br /&gt;
See the '''Getting Started''' and '''Getting, Building, and Running the Benchmark''' sections on the Quick Start tab.&lt;br /&gt;
&lt;br /&gt;
==Licensing==&lt;br /&gt;
&lt;br /&gt;
The OWASP Benchmark is free to use under the [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0].&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
[https://lists.owasp.org/mailman/listinfo/owasp-benchmark-project OWASP Benchmark Mailing List]&lt;br /&gt;
&lt;br /&gt;
== Project Leaders ==&lt;br /&gt;
&lt;br /&gt;
[https://www.owasp.org/index.php/User:Wichers Dave Wichers] [mailto:dave.wichers@owasp.org @]&lt;br /&gt;
&lt;br /&gt;
== Project References ==&lt;br /&gt;
* [https://www.mir-swamp.org/#packages/public Software Assurance Marketplace (SWAMP) - set of curated packages to test tools against]&lt;br /&gt;
* [http://samate.nist.gov/Other_Test_Collections.html SAMATE List of Test Collections]&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
* [http://samate.nist.gov/SARD/testsuite.php NSA's Juliet for Java]&lt;br /&gt;
* [http://sectoolmarket.com/ The Web Application Vulnerability Scanner Evaluation Project (WAVSEP)]&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;padding-left:25px;width:200px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
All test code and project files can be downloaded from [https://github.com/OWASP/benchmark OWASP GitHub].&lt;br /&gt;
&lt;br /&gt;
== Project Intro Video ==&lt;br /&gt;
&lt;br /&gt;
[[File:BenchmarkPodcastTitlePage.jpg|200px|link=https://www.youtube.com/watch?v=HQP8dwc3jJA&amp;amp;index=5&amp;amp;list=PLGB2s-U5FSWOmEStMt3JqlMFJvRYqeVW5]]&lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
* LOOKING FOR VOLUNTEERS!! - We are looking for individuals and organizations to join and make this a much more community-driven project, including additional co-leaders to help take this project to the next level. Contributors could work on things like new test cases, additional tool scorecard generators, adding support for languages beyond Java, and a host of other improvements. Please contact [mailto:dave.wichers@owasp.org me] if you are interested in contributing at any level.&lt;br /&gt;
* June 5, 2016 - Benchmark Version 1.2 Released&lt;br /&gt;
* Sep 24, 2015 - Benchmark introduced to broader OWASP community at [https://appsecusa2015.sched.org/event/3r9k/using-the-owasp-benchmark-to-assess-automated-vulnerability-analysis-tools AppSec USA]&lt;br /&gt;
* Aug 27, 2015 - U.S. Dept. of Homeland Security (DHS) is financially supporting the Benchmark project.&lt;br /&gt;
* Aug 15, 2015 - Benchmark Version 1.2beta Released with full DAST Support. Checkmarx and ZAP scorecard generators also released.&lt;br /&gt;
* July 10, 2015 - Benchmark Scorecard generator and open source scorecards released&lt;br /&gt;
* May 23, 2015 - Benchmark Version 1.1 Released&lt;br /&gt;
* April 15, 2015 - Benchmark Version 1.0 Released&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; rowspan=&amp;quot;2&amp;quot;| [[File:Owasp-incubator-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Incubator_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Test Cases =&lt;br /&gt;
&lt;br /&gt;
Version 1.0 of the Benchmark was published on April 15, 2015 and had 20,983 test cases. On May 23, 2015, version 1.1 of the Benchmark was released. The 1.1 release improves on the previous version by making sure that there are both true positives and false positives in every vulnerability area. Version 1.2 was released on June 5, 2016 (and the 1.2beta August 15, 2015).&lt;br /&gt;
&lt;br /&gt;
Version 1.2 and later of the Benchmark is a fully executable web application, which means it is scannable by any kind of vulnerability detection tool. The 1.2 release is limited to slightly fewer than 3,000 test cases to make it easier for DAST tools to scan (so scans don't take as long, and the tools don't run out of memory or blow up the size of their databases). The 1.2 release covers the same vulnerability areas that 1.1 covers; we added a few Spring database SQL Injection tests, but that's it. The bulk of the work was turning each test case into something that actually runs correctly AND is fully exploitable, and then generating a working UI on top of the test cases to turn them into a real running application.&lt;br /&gt;
&lt;br /&gt;
You can still download Benchmark version 1.1 by cloning the repository and checking out the Git tag '1.1'.&lt;br /&gt;
&lt;br /&gt;
The test case areas and quantities for the Benchmark releases are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Vulnerability Area&lt;br /&gt;
! # of Tests in v1.1&lt;br /&gt;
! # of Tests in v1.2&lt;br /&gt;
! CWE Number&lt;br /&gt;
|-&lt;br /&gt;
| [[Command Injection]]&lt;br /&gt;
| 2708&lt;br /&gt;
| 251&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/78.html 78]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Cryptography&lt;br /&gt;
| 1440&lt;br /&gt;
| 246&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/327.html 327]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Hashing&lt;br /&gt;
| 1421&lt;br /&gt;
| 236&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/328.html 328]&lt;br /&gt;
|-&lt;br /&gt;
| [[LDAP injection | LDAP Injection]]&lt;br /&gt;
| 736&lt;br /&gt;
| 59&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/90.html 90]&lt;br /&gt;
|-&lt;br /&gt;
| [[Path Traversal]]&lt;br /&gt;
| 2630&lt;br /&gt;
| 268&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/22.html 22]&lt;br /&gt;
|-&lt;br /&gt;
| Secure Cookie Flag&lt;br /&gt;
| 416&lt;br /&gt;
| 67&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/614.html 614]&lt;br /&gt;
|-&lt;br /&gt;
| [[SQL Injection]]&lt;br /&gt;
| 3529&lt;br /&gt;
| 504&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/89.html 89]&lt;br /&gt;
|-&lt;br /&gt;
| [[Trust Boundary Violation]]&lt;br /&gt;
| 725&lt;br /&gt;
| 126&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/501.html 501]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Randomness&lt;br /&gt;
| 3640&lt;br /&gt;
| 493&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/330.html 330]&lt;br /&gt;
|-&lt;br /&gt;
| [[XPATH Injection]]&lt;br /&gt;
| 347&lt;br /&gt;
| 35&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/643.html 643]&lt;br /&gt;
|-&lt;br /&gt;
| [[XSS]] (Cross-Site Scripting)&lt;br /&gt;
| 3449&lt;br /&gt;
| 455&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/79.html 79]&lt;br /&gt;
|-&lt;br /&gt;
| Total Test Cases&lt;br /&gt;
| 21,041&lt;br /&gt;
| 2,740&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Each Benchmark version comes with a spreadsheet that lists every test case, the vulnerability category, the CWE number, and the expected result (true finding/false positive). Look for the file: expectedresults-VERSION#.csv in the project root directory.&lt;br /&gt;
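As a rough illustration of how a scorecard generator uses this spreadsheet, the sketch below compares a tool's flagged test cases against the expected results and derives the four metrics and a score. The CSV column names here (test_name, category, real, cwe) are assumptions made for illustration; consult the actual expectedresults-VERSION#.csv file for its real layout.&lt;br /&gt;

```python
# Hypothetical sketch of scoring tool output against expected results.
# The column names are assumed for illustration; the real file layout
# is defined by the OWASP Benchmark project.
import csv
import io

def score_results(expected_csv_text, flagged_tests):
    """expected_csv_text: CSV text with a header row including
    test_name and real. flagged_tests: set of test names the tool
    reported as vulnerable."""
    tp = fn = tn = fp = 0
    for row in csv.DictReader(io.StringIO(expected_csv_text)):
        is_real = row["real"].strip().lower() == "true"
        flagged = row["test_name"] in flagged_tests
        if is_real and flagged:
            tp += 1          # correctly reported a real vulnerability
        elif is_real:
            fn += 1          # missed a real vulnerability
        elif flagged:
            fp += 1          # reported a fake vulnerability
        else:
            tn += 1          # correctly ignored a fake vulnerability
    tpr = tp / (tp + fn) if (tp + fn) else 0.0
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    return {"TP": tp, "FN": fn, "TN": tn, "FP": fp,
            "score": round(100 * (tpr - fpr))}
```

For example, given one real and one fake test case, a tool that flags only the real one scores TP=1, TN=1, and a score of 100.&lt;br /&gt;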
&lt;br /&gt;
Every test case is:&lt;br /&gt;
* a servlet or JSP (currently they are all servlets, but we plan to add JSPs)&lt;br /&gt;
* either a true vulnerability or a false positive for a single issue&lt;br /&gt;
&lt;br /&gt;
The Benchmark is intended to help determine how well analysis tools correctly analyze a broad array of application and framework behavior, including:&lt;br /&gt;
&lt;br /&gt;
* HTTP request and response problems&lt;br /&gt;
* Simple and complex data flow&lt;br /&gt;
* Simple and complex control flow&lt;br /&gt;
* Popular frameworks&lt;br /&gt;
* Inversion of control&lt;br /&gt;
* Reflection&lt;br /&gt;
* Class loading&lt;br /&gt;
* Annotations&lt;br /&gt;
* Popular UI technologies (particularly JavaScript frameworks)&lt;br /&gt;
&lt;br /&gt;
Not all of these are tested by the Benchmark yet, but future enhancements are intended to provide more coverage of these areas.&lt;br /&gt;
&lt;br /&gt;
Additional future enhancements could cover:&lt;br /&gt;
* All vulnerability types in the [[Top10 | OWASP Top 10]]&lt;br /&gt;
* Does the tool find flaws in libraries?&lt;br /&gt;
* Does the tool find flaws spanning custom code and libraries?&lt;br /&gt;
* Does the tool handle web services? REST, XML, GWT, etc…&lt;br /&gt;
* Does the tool work with different app servers? Java platforms?&lt;br /&gt;
&lt;br /&gt;
== Example Test Case ==&lt;br /&gt;
&lt;br /&gt;
Each test case is a simple Java EE servlet. BenchmarkTest00001 in version 1.0 of the Benchmark was an LDAP Injection test with the following metadata in the accompanying BenchmarkTest00001.xml file:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;test-metadata&amp;gt;&lt;br /&gt;
    &amp;lt;category&amp;gt;ldapi&amp;lt;/category&amp;gt;&lt;br /&gt;
    &amp;lt;test-number&amp;gt;00001&amp;lt;/test-number&amp;gt;&lt;br /&gt;
    &amp;lt;vulnerability&amp;gt;true&amp;lt;/vulnerability&amp;gt;&lt;br /&gt;
    &amp;lt;cwe&amp;gt;90&amp;lt;/cwe&amp;gt;&lt;br /&gt;
  &amp;lt;/test-metadata&amp;gt;&lt;br /&gt;
&lt;br /&gt;
BenchmarkTest00001.java in the OWASP Benchmark 1.0 simply reads in all the cookie values, looks for a cookie named &amp;quot;foo&amp;quot;, and uses the value of this cookie when performing an LDAP query. Here's the code for BenchmarkTest00001.java:&lt;br /&gt;
&lt;br /&gt;
  package org.owasp.benchmark.testcode;&lt;br /&gt;
  &lt;br /&gt;
  import java.io.IOException;&lt;br /&gt;
  &lt;br /&gt;
  import javax.servlet.ServletException;&lt;br /&gt;
  import javax.servlet.annotation.WebServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
  import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
  &lt;br /&gt;
  @WebServlet(&amp;quot;/BenchmarkTest00001&amp;quot;)&lt;br /&gt;
  public class BenchmarkTest00001 extends HttpServlet {&lt;br /&gt;
  	&lt;br /&gt;
  	private static final long serialVersionUID = 1L;&lt;br /&gt;
  	&lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		doPost(request, response);&lt;br /&gt;
  	}&lt;br /&gt;
  &lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		// some code&lt;br /&gt;
  &lt;br /&gt;
  		javax.servlet.http.Cookie[] cookies = request.getCookies();&lt;br /&gt;
  		&lt;br /&gt;
  		String param = null;&lt;br /&gt;
  		boolean foundit = false;&lt;br /&gt;
  		if (cookies != null) {&lt;br /&gt;
  			for (javax.servlet.http.Cookie cookie : cookies) {&lt;br /&gt;
  				if (cookie.getName().equals(&amp;quot;foo&amp;quot;)) {&lt;br /&gt;
  					param = cookie.getValue();&lt;br /&gt;
  					foundit = true;&lt;br /&gt;
  				}&lt;br /&gt;
  			}&lt;br /&gt;
  			if (!foundit) {&lt;br /&gt;
  				// no cookie found in collection&lt;br /&gt;
  				param = &amp;quot;&amp;quot;;&lt;br /&gt;
  			}&lt;br /&gt;
  		} else {&lt;br /&gt;
  			// no cookies&lt;br /&gt;
  			param = &amp;quot;&amp;quot;;&lt;br /&gt;
  		}&lt;br /&gt;
  		&lt;br /&gt;
  		try {&lt;br /&gt;
  			javax.naming.directory.DirContext dc = org.owasp.benchmark.helpers.Utils.getDirContext();&lt;br /&gt;
  			Object[] filterArgs = {&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;};&lt;br /&gt;
  			dc.search(&amp;quot;name&amp;quot;, param, filterArgs, new javax.naming.directory.SearchControls());&lt;br /&gt;
  		} catch (javax.naming.NamingException e) {&lt;br /&gt;
  			throw new ServletException(e);&lt;br /&gt;
  		}&lt;br /&gt;
  	}&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
= Test Case Details =&lt;br /&gt;
&lt;br /&gt;
The following describes situations in the Benchmark that have come up for debate as to the validity/accuracy of the test cases in these scenarios. &lt;br /&gt;
&lt;br /&gt;
== Cookies as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 and early versions of the 1.2beta included test cases that used cookies as a source of data that flowed into XSS vulnerabilities. The Benchmark treated these tests as False Positives because the Benchmark team figured that you'd have to use an XSS vulnerability in the first place to set the cookie value, and so it wasn't fair/reasonable to consider an XSS vulnerability whose data source was a cookie value as actually exploitable. However, we got feedback from some tool vendors, like Fortify, Burp, and Arachni, that they disagreed with this analysis and felt that, in fact, cookies were a valid source of attack against XSS vulnerabilities. Given that there are good arguments on both sides of this safe vs. unsafe question, we decided on Aug 25, 2015 to simply remove those test cases from the Benchmark. If, in the future, we decide who is right, we may add such test cases back in.&lt;br /&gt;
&lt;br /&gt;
== Headers as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Similarly, the Benchmark team believes that the names of headers aren't a valid source of XSS attack, for the same reason we thought cookie values aren't: it would require an XSS vulnerability to be exploited in the first place to set them. In fact, we feel this argument is even stronger for header names than for cookie values. Right now, the Benchmark doesn't include any header names as sources for XSS test cases, but we plan to add them and mark them as false positives in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
The Benchmark does use header values as sources for some XSS test cases, but only the 'Referer' header is treated as a valid XSS source (i.e., a true positive), because other header values are not viable XSS sources. Other headers are, of course, valid sources for other attack vectors, like SQL Injection or Command Injection.&lt;br /&gt;
&lt;br /&gt;
== False Positive Scenario: Static Values Passed to Unsafe (Weak) Sinks ==&lt;br /&gt;
&lt;br /&gt;
The Benchmark has MANY test cases where unsafe data flows in from the browser, but that data is replaced with static content as it passes through the propagators in that specific test case. This static (safe) data then flows to the sink, which may be a weak/unsafe sink, for example an unsafely constructed SQL statement. The Benchmark treats those test cases as false positives because there is absolutely no way for that weakness to be exploited. The NSA Juliet SAST benchmark treats such test cases exactly the same way, as false positives. We do recognize that there are weaknesses in those test cases, even though they aren't exploitable.&lt;br /&gt;
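&lt;br /&gt;
As a concrete illustration of this pattern, here is a minimal, hypothetical sketch (not an actual Benchmark test case; the class and method names are invented) of tainted input being overwritten with a constant before it reaches an unsafely concatenated SQL string:&lt;br /&gt;

```java
// Hypothetical sketch of the "static value into a weak sink" pattern.
// Not a real Benchmark test case; names are invented for illustration.
public class StaticSinkExample {

    // Propagator: accepts tainted input but discards it,
    // returning a hard-coded (safe) value instead.
    public static String propagate(String tainted) {
        String value = tainted;
        value = "constantValue"; // the tainted data is replaced here
        return value;
    }

    // Weak sink: string-concatenated SQL. A bad coding pattern,
    // but unexploitable when the input is the constant above.
    public static String buildQuery(String input) {
        return "SELECT * FROM users WHERE name = '" + input + "'";
    }
}
```

A dataflow-aware tool should see that no attacker-controlled data can ever reach the sink here, even though the sink itself is written unsafely.&lt;br /&gt;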
&lt;br /&gt;
Some SAST tool vendors feel it is appropriate to point out those weaknesses, and that's fine. However, if a tool points those weaknesses out and does not distinguish them from truly exploitable vulnerabilities, then the Benchmark treats those findings as false positives. If the tool allows a user to differentiate these non-exploitable weaknesses from exploitable vulnerabilities, then the Benchmark scorecard generator can use that information to filter out these extra findings (along with any other similarly marked findings) so they don't count against that tool when calculating its Benchmark score. In the real world, it's important for analysts to be able to filter out such findings if they only have time to deal with the most critical, actually exploitable, vulnerabilities. A tool that doesn't make it easy for an analyst to distinguish the two situations is doing the analyst a disservice.&lt;br /&gt;
&lt;br /&gt;
This issue doesn't affect DAST tools, since they only report what appears to them to be exploitable, so it has no effect on their scores.&lt;br /&gt;
&lt;br /&gt;
If you are a SAST tool vendor or user, and you believe the Benchmark scorecard generator is counting such findings against a tool, and there is a way to tell them apart, please let the project team know so the scorecard generator can be adjusted to not count those findings against the tool. The Benchmark project's goal is to generate the most fair and accurate results it can. If such an adjustment is made to how a scorecard is generated for a tool, we plan to document that this was done for that tool, and explain how others could perform the same filtering within that tool in order to get the same focused set of results.&lt;br /&gt;
&lt;br /&gt;
== Dead Code ==&lt;br /&gt;
&lt;br /&gt;
Some SAST tools point out weaknesses in dead code because dead code might eventually end up being used, or serve as a bad coding example (think cut/paste of code). We think this is fine/appropriate. However, there is no dead code in the OWASP Benchmark (at least not intentionally), so dead code should not be causing any tool to report unnecessary false positives.&lt;br /&gt;
&lt;br /&gt;
= Tool Support/Results =&lt;br /&gt;
&lt;br /&gt;
The results for 5 free tools, PMD, FindBugs, FindBugs with the FindSecBugs plugin, SonarQube, and OWASP ZAP, are available against version 1.2 of the Benchmark here: https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html. We've included multiple versions of FindSecBugs' and ZAP's results so you can see the improvements they are making in finding vulnerabilities in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
We have Benchmark results for all the following tools, but haven't publicly released the results for any commercial tools. However, we included a 'Commercial Average' page, which includes a summary of results for 6 commercial SAST tools along with anonymous versions of each SAST tool's scorecard.&lt;br /&gt;
&lt;br /&gt;
The Benchmark can generate results for the following tools: &lt;br /&gt;
&lt;br /&gt;
'''Free Static Application Security Testing (SAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://pmd.github.io/ PMD] (which really has no security rules) - .xml results file&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - .xml results file (Note: FindBugs hasn't been updated since 2015. Use SpotBugs instead (see below))&lt;br /&gt;
* [https://www.sonarqube.org/downloads/ SonarQube] - .xml results file&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - .xml results file. This is the successor to FindBugs.&lt;br /&gt;
* SpotBugs with the [http://find-sec-bugs.github.io/ FindSecurityBugs plugin] - .xml results file&lt;br /&gt;
&lt;br /&gt;
Note: We looked into supporting [http://checkstyle.sourceforge.net/ Checkstyle] but it has no security rules, just like PMD. The [http://fb-contrib.sourceforge.net/ fb-contrib] FindBugs plugin doesn't have any security rules either. We did test [http://errorprone.info/ Error Prone], and found that it does report some use of [http://errorprone.info/bugpattern/InsecureCipherMode insecure ciphers (CWE-327)], but that's it.&lt;br /&gt;
&lt;br /&gt;
'''Commercial SAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.castsoftware.com/products/application-intelligence-platform CAST Application Intelligence Platform (AIP)] - .xml results file&lt;br /&gt;
* [https://www.checkmarx.com/products/static-application-security-testing/ Checkmarx CxSAST] - .xml results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source IBM AppScan Source (Standalone and Cloud)] - .ozasmt or .xml results file&lt;br /&gt;
* [https://juliasoft.com/solutions/julia-for-security/ Julia Analyzer] - .xml results file&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/static-code-analysis-sast/overview Micro Focus (Formerly HPE) Fortify (On-Demand and stand-alone versions)] - .fpr results file&lt;br /&gt;
* [https://www.parasoft.com/products/jtest/ Parasoft Jtest] - .xml results file&lt;br /&gt;
* [https://semmle.com/lgtm Semmle LGTM] - .sarif results file&lt;br /&gt;
* [https://www.shiftleft.io/product/ ShiftLeft SAST] - .sl results file (Benchmark specific format. Ask vendor how to generate this)&lt;br /&gt;
* [https://snappycodeaudit.com/category/static-code-analysis Snappycode Audit's SnappyTick Source Edition (SAST)] - .xml results file&lt;br /&gt;
* [https://www.sourcemeter.com/features/ SourceMeter] - .txt results file of ALL results from VulnerabilityHunter&lt;br /&gt;
* [https://www.synopsys.com/content/dam/synopsys/sig-assets/datasheets/SAST-Coverity-datasheet.pdf Synopsys Static Analysis (Formerly Coverity Code Advisor) (On-Demand and stand-alone versions)] - .json results file (You can scan Benchmark w/Coverity for free. See: https://scan.coverity.com/)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] - .xml results file&lt;br /&gt;
* [https://www.veracode.com/products/binary-static-analysis-sast Veracode SAST] - .xml results file&lt;br /&gt;
* [https://www.rigs-it.com/xanitizer/ XANITIZER] - xml results file ([https://www.rigs-it.com/wp-content/uploads/2018/03/howtosetupxanitizerforowaspbenchmarkproject.pdf Their white paper on how to setup Xanitizer to scan Benchmark.]) (Free trial available)&lt;br /&gt;
&lt;br /&gt;
We are looking for results for other commercial static analysis tools like: [https://www.grammatech.com/products/codesonar Grammatech CodeSonar], [https://www.roguewave.com/products-services/klocwork RogueWave's Klocwork], etc. If you have a license for any static analysis tool not already listed above and can run it against the Benchmark, sending us the results file would be very helpful. &lt;br /&gt;
&lt;br /&gt;
The free SAST tools come bundled with the Benchmark so you can run them yourselves. If you have a license for any commercial SAST tool, you can also run them against the Benchmark. Just put your results files in the /results folder of the project, and then run the BenchmarkScore script for your platform (.sh / .bat) and it will generate a scorecard in the /scorecard directory for all the tools you have results for that are currently supported.&lt;br /&gt;
&lt;br /&gt;
'''Free Dynamic Application Security Testing (DAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
Note: While we support scorecard generators for these Free and Commercial DAST tools, we haven't been able to get a full/clean run against the Benchmark from most of these tools. As such, some of these scorecard generators might still need some work to properly reflect their results. If you notice any problems, let us know.&lt;br /&gt;
&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - .xml results file&lt;br /&gt;
** To generate .xml, run: ./bin/arachni_reporter &amp;quot;Your_AFR_Results_Filename.afr&amp;quot; --reporter=xml:outfile=Benchmark1.2-Arachni.xml&lt;br /&gt;
* [https://www.owasp.org/index.php/ZAP OWASP ZAP] - .xml results file. To generate a complete ZAP XML results file so you can generate a valid scorecard, make sure you:&lt;br /&gt;
** Tools &amp;gt; Options &amp;gt; Alerts - And set the Max alert instances to like 500.&lt;br /&gt;
** Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
'''Commercial DAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.acunetix.com/vulnerability-scanner/ Acunetix Web Vulnerability Scanner (WVS)] - .xml results file (Generated using [https://www.acunetix.com/resources/wvs7manual.pdf command line interface (see Chapter 10.)] /ExportXML switch)&lt;br /&gt;
* [https://portswigger.net/burp Burp Pro] - .xml results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/appscan-standard IBM AppScan] - .xml results file&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/webinspect-dynamic-analysis-dast/overview Micro Focus (Formerly HPE) WebInspect] - .xml results file&lt;br /&gt;
* [https://www.netsparker.com/web-vulnerability-scanner/ Netsparker] - .xml results file&lt;br /&gt;
* [https://www.qualys.com/apps/web-app-scanning/ Qualys Web App Scanner] - .xml results file&lt;br /&gt;
* [https://www.rapid7.com/products/appspider/ Rapid7 AppSpider] - .xml results file&lt;br /&gt;
&lt;br /&gt;
If you have access to other DAST Tools, PLEASE RUN THEM FOR US against the Benchmark, and send us the results file so we can build a scorecard generator for that tool.&lt;br /&gt;
&lt;br /&gt;
'''Commercial Interactive Application Security Testing (IAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] - .zip results file (You can scan Benchmark w/Contrast for free. See: https://www.contrastsecurity.com/contrast-community-edition)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection (IAST)] - .hlg results file&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/security-testing/interactive-application-security-testing.html Seeker IAST] - .csv results file&lt;br /&gt;
&lt;br /&gt;
'''Commercial Hybrid Analysis Application Security Testing Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.iappsecure.com/products.html Fusion Lite Insight] - .xml results file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It may be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.'''&lt;br /&gt;
&lt;br /&gt;
The project has automated test harnesses for these vulnerability detection tools, so we can repeatably run the tools against each version of the Benchmark and automatically produce scorecards in our desired format.&lt;br /&gt;
&lt;br /&gt;
We want to test as many tools as possible against the Benchmark. If you are:&lt;br /&gt;
&lt;br /&gt;
* A tool vendor and want to participate in the project&lt;br /&gt;
* Someone who wants to help score a free tool against the project&lt;br /&gt;
* Someone who has a license to a commercial tool and the terms of the license allow you to publish tool results, and you want to participate&lt;br /&gt;
&lt;br /&gt;
please let [mailto:dave.wichers@owasp.org me] know!&lt;br /&gt;
&lt;br /&gt;
= Quick Start =&lt;br /&gt;
&lt;br /&gt;
==What is in the Benchmark?==&lt;br /&gt;
The Benchmark is a Java Maven project. Its primary component is thousands of test cases (e.g., BenchmarkTest00001.java), each of which is a single Java servlet that contains a single vulnerability (either a true positive or a false positive). The vulnerabilities currently span about a dozen different types, and this is expected to expand significantly in the future.&lt;br /&gt;
&lt;br /&gt;
An expected results file is published with each version of the Benchmark (e.g., expectedresults-1.1.csv), and it specifically lists the expected result for each test case. Here's what the first two rows in this file look like for version 1.1 of the Benchmark:&lt;br /&gt;
&lt;br /&gt;
 # test name		category	real vulnerability	CWE	Benchmark version: 1.1	2015-05-22&lt;br /&gt;
 BenchmarkTest00001	crypto		TRUE			327&lt;br /&gt;
&lt;br /&gt;
This simply means that the first test case is a crypto test case (use of weak cryptographic algorithms), that it is a real vulnerability (as opposed to a false positive), and that this issue maps to CWE 327. It also indicates this expected results file is for Benchmark version 1.1 (produced May 22, 2015). There is a row in this file for each of the thousands of test cases in the Benchmark. Each time a new version of the Benchmark is published, a new corresponding results file is generated, and each test case can be completely different from one version to the next.&lt;br /&gt;
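&lt;br /&gt;
A minimal sketch of reading one such row, assuming the fields are comma-separated in the order shown above (test name, category, real vulnerability, CWE); this is illustrative only, not the project's actual parser:&lt;br /&gt;

```java
// Minimal sketch of parsing one expected-results row. Assumes the file
// is comma-separated with fields in the order: test name, category,
// real vulnerability, CWE. Not the Benchmark project's parser.
public class ExpectedResultRow {
    public final String testName;
    public final String category;
    public final boolean realVulnerability;
    public final int cwe;

    public ExpectedResultRow(String csvLine) {
        String[] fields = csvLine.split(",");
        this.testName = fields[0].trim();
        this.category = fields[1].trim();
        // Boolean.parseBoolean is case-insensitive, so "TRUE" works.
        this.realVulnerability = Boolean.parseBoolean(fields[2].trim());
        this.cwe = Integer.parseInt(fields[3].trim());
    }
}
```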
&lt;br /&gt;
The Benchmark also comes with a bunch of different utilities, commands, and prepackaged open source security analysis tools, all of which can be executed through Maven goals, including:&lt;br /&gt;
&lt;br /&gt;
* Open source vulnerability detection tools to be run against the Benchmark&lt;br /&gt;
* A scorecard generator, which computes a scorecard for each of the tools you have results files for.&lt;br /&gt;
&lt;br /&gt;
==What Can You Do With the Benchmark?==&lt;br /&gt;
* Compile all the software in the Benchmark project (e.g., mvn compile)&lt;br /&gt;
* Run a static vulnerability analysis tool (SAST) against the Benchmark test case code&lt;br /&gt;
&lt;br /&gt;
* Scan a running version of the Benchmark with a dynamic application security testing tool (DAST)&lt;br /&gt;
** Instructions on how to run it are provided below&lt;br /&gt;
&lt;br /&gt;
* Generate scorecards for each of the tools you have results files for&lt;br /&gt;
** See the Tool Support/Results page for the list of tools the Benchmark supports generating scorecards for&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
Before downloading or using the Benchmark make sure you have the following installed and configured properly:&lt;br /&gt;
&lt;br /&gt;
 GIT: http://git-scm.com/ or https://github.com/&lt;br /&gt;
 Maven: https://maven.apache.org/  (Version: 3.2.3 or newer works.)&lt;br /&gt;
 Java: http://www.oracle.com/technetwork/java/javase/downloads/index.html (Java 7 or 8) (64-bit)&lt;br /&gt;
&lt;br /&gt;
==Getting, Building, and Running the Benchmark==&lt;br /&gt;
&lt;br /&gt;
To download and build everything:&lt;br /&gt;
&lt;br /&gt;
 $ git clone https://github.com/OWASP/benchmark &lt;br /&gt;
 $ cd benchmark&lt;br /&gt;
 $ mvn compile   (This compiles it)&lt;br /&gt;
 $ runBenchmark.sh/.bat - This compiles and runs it.&lt;br /&gt;
&lt;br /&gt;
Then navigate to https://localhost:8443/benchmark/ to reach its home page. It uses a self-signed SSL certificate, so you'll get a security warning when you hit the home page.&lt;br /&gt;
&lt;br /&gt;
Note: We have set the Benchmark app to use up to 6 GB of RAM, which it may need when it is fully scanned by a DAST scanner. The DAST tool probably also requires 3+ GB of RAM. As such, we recommend having a 16 GB machine if you are going to try to run a full DAST scan, and at least 4 GB, or ideally 8 GB, if you are just going to play around with the running Benchmark app.&lt;br /&gt;
&lt;br /&gt;
== Using a VM instead ==&lt;br /&gt;
We provide several preconstructed VMs, and instructions on how to build one, that you can use instead:&lt;br /&gt;
&lt;br /&gt;
* Docker: A Dockerfile is checked into the project [https://github.com/OWASP/Benchmark/blob/master/VMs/Dockerfile here]. This Docker file should automatically produce a Docker VM with the latest Benchmark project files. After you have Docker installed, cd to /VMs then run: &lt;br /&gt;
 ./buildDockerImage.sh --&amp;gt; This builds the Docker Benchmark VM (This will take a WHILE)&lt;br /&gt;
 docker images  --&amp;gt; You should see the new benchmark:latest image in the list provided&lt;br /&gt;
 # The Benchmark Docker Image only has to be created once. &lt;br /&gt;
&lt;br /&gt;
 To run the Benchmark in your Docker VM, just run:&lt;br /&gt;
   ./runDockerImage.sh  --&amp;gt; This pulls in any updates to Benchmark since the Image was built, builds everything, and starts a remotely accessible Benchmark web app.&lt;br /&gt;
 If successful, you should see this at the end:&lt;br /&gt;
   [INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]&lt;br /&gt;
   [INFO] Press Ctrl-C to stop the container...&lt;br /&gt;
 Then simply navigate to: https://localhost:8443/benchmark from the machine you are running Docker&lt;br /&gt;
 &lt;br /&gt;
 Or if you want to access from a different machine:&lt;br /&gt;
  docker-machine ls (in a different terminal) --&amp;gt; To get IP Docker VM is exporting (e.g., tcp://192.168.99.100:2376)&lt;br /&gt;
  Navigate to: https://192.168.99.100:8443/benchmark in your browser (using the above IP as an example)&lt;br /&gt;
&lt;br /&gt;
* Amazon Web Services (AWS) - Here's how you set up the Benchmark on an AWS VM:&lt;br /&gt;
 sudo yum install git&lt;br /&gt;
 sudo yum install maven&lt;br /&gt;
 sudo yum install mvn&lt;br /&gt;
 sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo yum install -y apache-maven&lt;br /&gt;
 git clone https://github.com/OWASP/benchmark&lt;br /&gt;
 cd benchmark&lt;br /&gt;
 chmod 755 *.sh&lt;br /&gt;
 ./runBenchmark.sh -- to run it locally on the VM.&lt;br /&gt;
 ./runRemoteAccessibleBenchmark.sh -- to run it so it can be accessed outside the VM (on port 8443).&lt;br /&gt;
&lt;br /&gt;
==Running Free Static Analysis Tools Against the Benchmark==&lt;br /&gt;
There are scripts for running each of the free SAST vulnerability detection tools included with the Benchmark against the Benchmark test cases. On Linux, you might have to make them executable (e.g., chmod 755 *.sh) before you can run them.&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for PMD:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runPMD.sh (Linux) or .\scripts\runPMD.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindBugs.sh (Linux) or .\scripts\runFindBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs with the FindSecBugs plugin:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindSecBugs.sh (Linux) or .\scripts\runFindSecBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
In each case, the script will generate a results file and put it in the /results directory. For example: &lt;br /&gt;
&lt;br /&gt;
 Benchmark_1.2-findbugs-v3.0.1-1026.xml&lt;br /&gt;
&lt;br /&gt;
This results file name is carefully constructed to mean the following: It's a results file against the OWASP Benchmark version 1.2, FindBugs was the analysis tool, it was version 3.0.1 of FindBugs, and it took 1026 seconds to run the analysis.&lt;br /&gt;
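&lt;br /&gt;
For illustration, that naming convention can be unpacked mechanically. The following hypothetical parser (not part of the Benchmark code) assumes the pattern Benchmark_VERSION-TOOL-vTOOLVERSION-SECONDS.ext shown above:&lt;br /&gt;

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical parser for the results-file naming convention described
// above. Assumes: Benchmark_VERSION-TOOL-vTOOLVERSION-SECONDS.ext
public class ResultsFileName {
    private static final Pattern NAME = Pattern.compile(
            "Benchmark_([\\d.]+)-([A-Za-z]+)-v([\\d.]+)-(\\d+)\\.\\w+");

    public final String benchmarkVersion;
    public final String tool;
    public final String toolVersion;
    public final int scanSeconds;

    public ResultsFileName(String fileName) {
        Matcher m = NAME.matcher(fileName);
        if (!m.matches()) {
            throw new IllegalArgumentException("Unrecognized name: " + fileName);
        }
        benchmarkVersion = m.group(1);
        tool = m.group(2);
        toolVersion = m.group(3);
        scanSeconds = Integer.parseInt(m.group(4));
    }
}
```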
&lt;br /&gt;
NOTE: If you create a results file yourself, by running a commercial tool for example, you can add the version # and the compute time to the filename just like this and the Benchmark Scorecard generator will pick this information up and include it in the generated scorecard. If you don't, depending on what metadata is included in the tool results, the Scorecard generator might do this automatically anyway.&lt;br /&gt;
&lt;br /&gt;
==Generating Scorecards==&lt;br /&gt;
The scorecard generation application BenchmarkScore is included with the Benchmark. It parses the output files generated by any of the supported security tools run against the Benchmark and compares them against the expected results, and produces a set of web pages that detail the accuracy and speed of the tools involved. For the list of currently supported tools, check out the: Tools Support/Results tab. If you are using a tool that is not yet supported, simply send us a results file from that tool and we'll write a parser for that tool and add it to the supported tools list.&lt;br /&gt;
&lt;br /&gt;
The following command will compute a Benchmark scorecard for all the results files in the '''/results''' directory. The generated scorecard is put into the '''/scorecard''' directory.&lt;br /&gt;
&lt;br /&gt;
 createScorecard.sh (Linux) or createScorecard.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
An example of a real scorecard for some open source tools is provided at the top of the Tool Support/Results tab so you can see what one looks like.&lt;br /&gt;
&lt;br /&gt;
We recommend including the Benchmark version number in any results file name, in order to help prevent mismatches between the expected results and the actual results files.  A tool will not score well against the wrong expected results.&lt;br /&gt;
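&lt;br /&gt;
For context, the headline number on each generated scorecard comes from comparing a tool's true positive rate (TPR) against its false positive rate (FPR); the overall Benchmark score is reported as TPR minus FPR. A simplified sketch of that arithmetic follows (this is not the BenchmarkScore implementation itself):&lt;br /&gt;

```java
// Simplified sketch of the scorecard arithmetic. Assumes the headline
// Benchmark score is true positive rate minus false positive rate.
// This is not the project's BenchmarkScore implementation.
public class ScoreSketch {
    public static double score(int truePositives, int realVulns,
                               int falsePositives, int fakeVulns) {
        double tpr = (double) truePositives / realVulns;   // sensitivity
        double fpr = (double) falsePositives / fakeVulns;  // 1 - specificity
        // 1.0 = perfect; 0.0 = no better than flagging everything (or nothing)
        return tpr - fpr;
    }
}
```

A tool that flags every test case gets a TPR of 1.0 but also an FPR of 1.0, scoring 0.0, which is why simply reporting everything doesn't help a tool's score.&lt;br /&gt;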
&lt;br /&gt;
===Customizing Your Scorecard Generation===&lt;br /&gt;
&lt;br /&gt;
The createScorecard scripts are very simple. They only have one line. Here's what the 1.2 version looks like:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.2.csv results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This Maven command simply says to run the BenchmarkScore application, passing in two parameters. The first is the Benchmark expected results file to compare the tool results against. The second is the name of the directory that contains all the results from tools run against that version of the Benchmark. If you have tool results older than the current version of the Benchmark, like 1.1 results for example, then you would do something like this instead:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To keep things organized, we actually put the expected results file inside the same results folder for that version of the Benchmark, so our command looks like this:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;1.1_results/expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In all cases, the generated scorecard is put in the /scorecard folder.&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It is likely to be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.''' It is for just this reason that the Benchmark project isn't releasing such results itself.&lt;br /&gt;
&lt;br /&gt;
= Tool Scanning Tips =&lt;br /&gt;
&lt;br /&gt;
People frequently have difficulty scanning the Benchmark with various tools, for reasons including the size of the Benchmark app and its codebase, and the complexity of the tools used. Here is some guidance for some of the tools we have used to scan the Benchmark. If you've learned any tricks on how to get better or easier results for a particular tool against the Benchmark, let us know or update this page directly.&lt;br /&gt;
&lt;br /&gt;
== Generic Tips ==&lt;br /&gt;
&lt;br /&gt;
Because of the size of the Benchmark, you may need to give your tool more memory before it starts the scan. If it's a Java-based tool, you may want to pass more memory to it like this:&lt;br /&gt;
&lt;br /&gt;
 -Xmx4G (This gives the Java application 4 GB of memory)&lt;br /&gt;
&lt;br /&gt;
== SAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Checkmarx ===&lt;br /&gt;
&lt;br /&gt;
The Checkmarx SAST Tool (CxSAST) is ready to scan the OWASP Benchmark out-of-the-box. &lt;br /&gt;
Please note that the OWASP Benchmark “hides” some vulnerabilities in dead code areas, for example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
if (0&amp;gt;1)&lt;br /&gt;
{&lt;br /&gt;
  //vulnerable code&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
By default, CxSAST will find these vulnerabilities since Checkmarx believes that including dead code in the scan results is a SAST best practice. &lt;br /&gt;
&lt;br /&gt;
Checkmarx's experience shows that security experts expect to find these types of code vulnerabilities, and demand that their developers fix them. However, the OWASP Benchmark considers the flagging of these vulnerabilities to be false positives, which as a result lowers Checkmarx's overall score. &lt;br /&gt;
&lt;br /&gt;
Therefore, in order to receive an OWASP score untainted by dead code, re-configure CxSAST as follows:&lt;br /&gt;
# Open the CxAudit client for editing Java queries.&lt;br /&gt;
# Override the &amp;quot;Find_Dead_Code&amp;quot; query.&lt;br /&gt;
# Add the commented text of the original query to the new override query.&lt;br /&gt;
# Save the queries.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./scripts/runFindBugs.(sh or bat). If you want to run a different version of FindBugs, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs with FindSecBugs ===&lt;br /&gt;
&lt;br /&gt;
[http://h3xstream.github.io/find-sec-bugs/ FindSecurityBugs] is a great plugin for FindBugs that significantly increases FindBugs' ability to find security issues. We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./scripts/runFindSecBugs.(sh or bat). If you want to run a different version of FindSecBugs, just change the version number of the findsecbugs-plugin artifact in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Micro Focus (Formerly HP) Fortify ===&lt;br /&gt;
&lt;br /&gt;
If you are using the Audit Workbench, you can give it more memory and make sure you invoke it in 64-bit mode by doing this:&lt;br /&gt;
&lt;br /&gt;
  set AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  export AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  auditworkbench -64&lt;br /&gt;
&lt;br /&gt;
We found it was easier to use the Maven support in Fortify to scan the Benchmark, and to do it in two phases: translate, then scan. We did something like this:&lt;br /&gt;
&lt;br /&gt;
  Translate Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_17.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx2G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:clean&lt;br /&gt;
  mvn sca:translate&lt;br /&gt;
&lt;br /&gt;
  Scan Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_4.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx10G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:scan&lt;br /&gt;
&lt;br /&gt;
=== PMD ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./scripts/runPMD.(sh or bat). If you want to run a different version of PMD, just change its version number in the Benchmark pom.xml file. (NOTE: PMD doesn't find any security issues. We include it because it's interesting to know that it doesn't.)&lt;br /&gt;
&lt;br /&gt;
=== SonarQube ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's mostly dialed in, but it's a bit tricky because SonarQube has two parts: a stand-alone scanner for Java, and a web application that accepts the results and in turn can produce the results file required by the Benchmark scorecard generator for SonarQube. Running the script runSonarQube.(sh or bat) will generate the results, but if the SonarQube web application isn't running where the runSonarQube script expects it to be, the script will fail.&lt;br /&gt;
&lt;br /&gt;
If you want to run a different version of SonarQube, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Xanitizer ===&lt;br /&gt;
&lt;br /&gt;
The vendor has written their own guide to [http://www.rigs-it.net/opendownloads/whitepapers/HowToSetUpXanitizerForOWASPBenchmarkProject.pdf How to Set Up Xanitizer for OWASP Benchmark].&lt;br /&gt;
&lt;br /&gt;
== DAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Burp Pro ===&lt;br /&gt;
&lt;br /&gt;
You must use Burp Pro v1.6.29 or greater to scan the Benchmark, because earlier versions of Burp Pro did not honor the path attribute for cookies. This issue was fixed in the v1.6.29 release.&lt;br /&gt;
&lt;br /&gt;
To scan, first spider the entire Benchmark, and then select the /Benchmark URL and actively scan that branch. You can skip all the .html pages and any other pages that Burp says have no parameters.&lt;br /&gt;
&lt;br /&gt;
NOTE: We have been unable to simply run Burp Pro against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get Burp Pro to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
=== OWASP ZAP ===&lt;br /&gt;
&lt;br /&gt;
ZAP may require additional memory to be able to scan the Benchmark. To configure the amount of memory:&lt;br /&gt;
* Tools --&amp;gt; Options --&amp;gt; JVM: Recommend setting to: -Xmx2048m (or larger). (Then restart ZAP).&lt;br /&gt;
&lt;br /&gt;
To run ZAP against Benchmark:&lt;br /&gt;
# Because Benchmark uses Cookies and Headers as sources of attack for many test cases: Tools --&amp;gt; Options --&amp;gt; Active Scan Input Vectors: Then check the HTTP Headers, All Requests, and Cookie Data checkboxes and hit OK&lt;br /&gt;
# Click on Show All Tabs button (if spider tab isn't visible)&lt;br /&gt;
# Go to Spider tab (the black spider) and click on New Scan button&lt;br /&gt;
# Enter: https://localhost:8443/benchmark/  into the 'Starting Point' box and hit 'Start Scan'&lt;br /&gt;
#* Do this again. For some reason it takes 2 passes with the Spider before it stops finding more Benchmark endpoints.&lt;br /&gt;
# When Spider completes, click on 'benchmark' folder in Site Map, right click and select: 'Attack --&amp;gt; Active Scan'&lt;br /&gt;
#* It will take several hours, like 3+ to complete (it's actually likely to simply freeze before completing the scan - see NOTE: below)&lt;br /&gt;
&lt;br /&gt;
For a faster active scan you can:&lt;br /&gt;
* Disable the ZAP DB log (in ZAP 2.5.0+):&lt;br /&gt;
** Disable it via Options / Database / Recover Log&lt;br /&gt;
** Set it on the command line using &amp;quot;-config database.recoverylog=false&amp;quot;&lt;br /&gt;
* Disable unnecessary plugins / Technologies: When you launch the Active Scan&lt;br /&gt;
** On the Policy tab, disable all plugins except: XSS (Reflected), Path Traversal, SQLi, OS Command Injection&lt;br /&gt;
** On the Technology tab, disable everything and enable only: MySQL, YOUR_OS, Tomcat&lt;br /&gt;
** Note: This second performance improvement is a bit like cheating, as you wouldn't do this for a normal site scan; you'd leave everything enabled in case the other plugins/technologies help find more issues. So a fair performance comparison of ZAP to other tools would leave all of this on.&lt;br /&gt;
&lt;br /&gt;
To generate the ZAP XML results file so you can generate its scorecard:&lt;br /&gt;
* Tools &amp;gt; Options &amp;gt; Alerts - Set the Max alert instances to a high value, such as 500.&lt;br /&gt;
* Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
NOTE: Similar to Burp, we can't simply run ZAP against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get ZAP to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
Things we tried that didn't improve the score:&lt;br /&gt;
* AJAX Spider - the traditional spider appears to find all (or 99%) of the test cases so the AJAX Spider does not appear to be needed against Benchmark v1.2&lt;br /&gt;
* XSS (Persistent) - Three of these plugins run by default. There are no stored XSS test cases in the Benchmark, so you can disable these plugins for a faster scan.&lt;br /&gt;
* DOM XSS Plugin - This optional plugin didn't seem to find any additional XSS issues. There aren't any DOM-specific XSS issues in Benchmark v1.2, so this is not surprising.&lt;br /&gt;
&lt;br /&gt;
== IAST Tools ==&lt;br /&gt;
&lt;br /&gt;
Interactive Application Security Testing (IAST) tools work differently than scanners.  IAST tools monitor an application as it runs to identify application vulnerabilities using context from inside the running application. Typically these tools run continuously, immediately notifying users of vulnerabilities, but you can also get a full report of an entire application. To do this, we simply run the Benchmark application with an IAST agent and use a crawler to hit all the pages.&lt;br /&gt;
&lt;br /&gt;
=== Contrast Assess ===&lt;br /&gt;
&lt;br /&gt;
To use Contrast Assess, add the Java agent to the Benchmark environment and run the BenchmarkCrawler. The entire process should take only a few minutes. We provide a few scripts, which simply add the -javaagent:contrast.jar flag to the Benchmark launch configuration. We have tested on MacOS, Ubuntu, and Windows. Be sure your VM has at least 4GB of memory.&lt;br /&gt;
&lt;br /&gt;
* Ensure your environment has Java, Maven, and git installed, then build the Benchmark project&lt;br /&gt;
   '''$ git clone https://github.com/OWASP/Benchmark.git'''&lt;br /&gt;
   '''$ cd Benchmark'''&lt;br /&gt;
   '''$ mvn compile'''&lt;br /&gt;
&lt;br /&gt;
* Download a licensed copy of the Contrast Assess Java Agent (contrast.jar) from your Contrast TeamServer account and put it in the /Benchmark/tools/Contrast directory.&lt;br /&gt;
   '''$ cp ~/Downloads/contrast.jar tools/Contrast'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 1, launch the Benchmark application and wait until it starts&lt;br /&gt;
   '''$ cd tools/Contrast'''&lt;br /&gt;
   '''$ ./runBenchmark_wContrast.sh''' (.bat on Windows)&lt;br /&gt;
   '''[INFO] Scanning for projects...'''&lt;br /&gt;
   '''[INFO]'''&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
   '''[INFO] Building OWASP Benchmark Project 1.2'''&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
   '''[INFO]'''&lt;br /&gt;
   '''...'''&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]'''&lt;br /&gt;
   '''[INFO] Press Ctrl-C to stop the container...'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, launch the crawler and wait a minute or two for the crawl to complete.&lt;br /&gt;
   '''$ ./runCrawler.sh''' (.bat on Windows)&lt;br /&gt;
&lt;br /&gt;
* A Contrast report is generated in /Benchmark/tools/Contrast/working/contrast.log. This report is automatically copied (and renamed with the version number) to the /Benchmark/results directory.&lt;br /&gt;
   '''$ more tools/Contrast/working/contrast.log'''&lt;br /&gt;
   '''2016-04-22 12:29:29,716 [main b] INFO - Contrast Runtime Engine'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Copyright (C) 2012'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Pat. 8,458,789 B2'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Contrast Security, Inc.'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - All Rights Reserved'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - https://www.contrastsecurity.com/'''&lt;br /&gt;
   '''...'''&lt;br /&gt;
&lt;br /&gt;
* Press Ctrl-C to stop the Benchmark in Terminal 1. Note: on Windows, select &amp;quot;N&amp;quot; when asked &amp;quot;Terminate batch job (Y/N)?&amp;quot;&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x is stopped'''&lt;br /&gt;
   '''Copying Contrast report to results directory'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, generate scorecards in /Benchmark/scorecard&lt;br /&gt;
   '''$ ./createScorecards.sh''' (.bat on Windows)&lt;br /&gt;
   '''Analyzing results from Benchmark_1.2-Contrast.log'''&lt;br /&gt;
   '''Actual results file generated: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.csv'''&lt;br /&gt;
   '''Report written to: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
* Open the Benchmark Scorecard in your browser&lt;br /&gt;
   '''/Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
=== Hdiv Detection ===&lt;br /&gt;
&lt;br /&gt;
Hdiv has written their own instructions on how to run the detection component of their product on the Benchmark here: https://hdivsecurity.com/docs/features/benchmark/#how-to-run-hdiv-in-owasp-benchmark-project. You'll see that these instructions involve using the same crawler used to exercise all the test cases in the Benchmark, just like Contrast above.&lt;br /&gt;
&lt;br /&gt;
= RoadMap =&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.0 - Released April 15, 2015 - This initial release included over 20,000 test cases in 11 different vulnerability categories. As this initial version was not a runnable application, it was only suitable for assessing static analysis tools (SAST).&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 - Released May 23, 2015 - This update fixed some inaccurate test cases, and made sure that every vulnerability area included both True Positives and False Positives.&lt;br /&gt;
&lt;br /&gt;
Benchmark Scorecard Generator - Released July 10, 2015 - The ability to automatically and repeatably produce a scorecard of how well tools do against the Benchmark was released for most of the SAST tools supported by the Benchmark. Scorecards present graphical and statistical data on how well a tool does against the Benchmark, down to how it performed against each individual test. [https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html Here are the latest public scorecards]. Support for producing scorecards for additional tools is being added all the time, and the current full set is documented on the '''Tool Support/Results''' and '''Quick Start''' tabs of this wiki.&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.2beta - Released Aug 15, 2015 - The first release of a fully runnable version of the Benchmark, supporting assessment of all types of vulnerability detection and prevention technologies, including DAST, IAST, RASP, WAFs, etc. This involved creating a user interface for every test case, and enhancing each test case to make sure it's actually exploitable, rather than merely using something that is theoretically weak. This release contains just under 3,000 test cases, to make it practical to scan the entire Benchmark with a DAST tool in a reasonable amount of time on commodity hardware.&lt;br /&gt;
&lt;br /&gt;
Benchmark 1.2 - Released June 5, 2016 -  Based on feedback from a number of DAST tool developers, and other vendors as well, we made the Benchmark more realistic in a number of ways to facilitate external DAST scanning, and also made the Benchmark more resilient against attack so it could properly survive various DAST vulnerability detection and exploit verification techniques.&lt;br /&gt;
&lt;br /&gt;
Plans for Benchmark 1.3:&lt;br /&gt;
&lt;br /&gt;
While we don't have hard and fast rules of exactly what we are going to do next, enhancements in the following areas are planned for the next release:&lt;br /&gt;
&lt;br /&gt;
* Add new vulnerability categories (e.g., XXE, Hibernate Injection)&lt;br /&gt;
* Add support for popular server side Java frameworks (e.g., Spring)&lt;br /&gt;
* Add web services test cases&lt;br /&gt;
&lt;br /&gt;
We are also starting to work on the ability to score WAFs/RASPs and other defensive technology against Benchmark.&lt;br /&gt;
&lt;br /&gt;
= FAQ =&lt;br /&gt;
&lt;br /&gt;
==1. How are the scores computed for the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
Each test case has a single vulnerability of a specific type. It's either a real vulnerability (True Positive) or not (a False Positive). We document all the test cases for each version of the Benchmark in the expectedresults-VERSION#.csv file (e.g., expectedresults-1.1.csv). This file lists the test case name, the CWE type of the vulnerability, and whether it is a True Positive or not. The Benchmark supports scorecard generators for computing exactly how a tool did when analyzing a version of the Benchmark. The full list of supported tools is on the Tool Support/Results tab. For each tool there is a parser that can parse the native results format for that tool (usually XML). For each test case, this parser simply looks to see whether the tool reported a vulnerability of the expected type in the test case source code file (for SAST) or at the test case URL (for DAST/IAST). If it did, and the test case is a True Positive, the tool gets credit for finding it. If it is a False Positive test and the tool reports that type of finding, it's recorded as a False Positive. If the tool didn't report that type of vulnerability for a test case, it gets either a False Negative or a True Negative, as appropriate. After calculating all of the individual test case results, a scorecard is generated providing a chart and statistics for that tool across all the vulnerability categories, and pages are also created comparing different tools to each other in each vulnerability category (if multiple tools are being scored together).&lt;br /&gt;
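The per-test-case matching described above amounts to a simple four-way classification. A minimal sketch (a hypothetical helper for illustration, not the actual parser code):&lt;br /&gt;

```python
def classify(expected_vulnerable, tool_reported):
    """Classify one Benchmark test case result.

    expected_vulnerable: True if the expected-results CSV marks this test
    as a True Positive (i.e., a real vulnerability).
    tool_reported: True if the tool reported a vulnerability of the expected
    CWE type for this test case's source file (SAST) or URL (DAST/IAST).
    """
    if expected_vulnerable:
        # Real vulnerability: credit for finding it, else a miss.
        return "TP" if tool_reported else "FN"
    # Fake vulnerability: reporting it is a false alarm.
    return "FP" if tool_reported else "TN"

assert classify(True, True) == "TP"    # found a real vulnerability
assert classify(True, False) == "FN"   # missed a real vulnerability
assert classify(False, True) == "FP"   # flagged a fake vulnerability
assert classify(False, False) == "TN"  # correctly ignored a fake one
```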
&lt;br /&gt;
A detailed file explaining exactly how that tool did against each individual test case in that version of the Benchmark is produced as part of scorecard generation, and is available via the Actual Results link on each tool's scorecard page. (e.g., Benchmark_v1.1_Scorecard_for_FindBugs.csv).&lt;br /&gt;
&lt;br /&gt;
==2. What if the tool I'm using doesn't have a scorecard generator for it?==&lt;br /&gt;
&lt;br /&gt;
Send us the results file! We'll be happy to create a parser for that tool so it's supported.&lt;br /&gt;
&lt;br /&gt;
==3. What if a tool finds other unexpected vulnerabilities?==&lt;br /&gt;
&lt;br /&gt;
We are sure there are vulnerabilities we didn't intend to be there and we are eliminating them as we find them. If you find some, let us know and we'll fix them too. We are primarily focused on unintentional vulnerabilities in the categories of vulnerabilities the Benchmark currently supports, since that is what is actually measured.&lt;br /&gt;
&lt;br /&gt;
Right now, two types of vulnerabilities that get reported are ignored by the scorecard generator:&lt;br /&gt;
# Vulnerabilities in categories not yet supported&lt;br /&gt;
# Vulnerabilities of a type that is supported, but reported in test cases not of that type&lt;br /&gt;
&lt;br /&gt;
In the case of #2, false positives reported in unexpected areas are also ignored, which is primarily a DAST problem. Right now those false positives are completely ignored, but we are thinking about including them in the false positive score in some fashion. We just haven't decided how yet.&lt;br /&gt;
&lt;br /&gt;
==4. How should I configure my tool to scan the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
All tools support various levels of configuration in order to improve their results. The Benchmark project, in general, is trying to '''compare out of the box capabilities of tools'''. However, if a few simple tweaks to a tool can improve that tool's score, that's fine. We'd like to understand what those simple tweaks are, and document them here, so others can repeat those tests in exactly the same way. For example, just turning on the 'test cookies and headers' flag, which is off by default, or turning on the 'advanced' scan so it works harder and finds more vulnerabilities. It's simple things like this we are talking about, not an extensive effort to teach the tool about the app or perform 'expert' configuration of the tool.&lt;br /&gt;
&lt;br /&gt;
So, if you know of some simple tweaks to improve a tool's results, let us know what they are and we'll document them here so everyone can benefit and make it easier to do apples to apples comparisons. And we'll link to that guidance once we start documenting it, but we don't have any such guidance right now.&lt;br /&gt;
&lt;br /&gt;
==5. I'm having difficulty scanning the Benchmark with a DAST tool. How can I get it to work?==&lt;br /&gt;
&lt;br /&gt;
We've run into two primary issues that give DAST tools problems.&lt;br /&gt;
&lt;br /&gt;
a) The Benchmark Generates Lots of Cookies&lt;br /&gt;
&lt;br /&gt;
The Burp team pointed out a cookie bug in the 1.2beta Benchmark. Each Weak Randomness test case generates its own cookie, one per test case. This caused the creation of so many cookies that servers would eventually start returning 400 errors because there were simply too many cookies being submitted in a request. This was fixed in the Aug 27, 2015 update to the Benchmark by setting the path attribute for each of these cookies to be the path to that individual test case. Now, at most one of these cookies should be submitted with each request, eliminating this 'too many cookies' problem. However, if a DAST tool doesn't honor this path attribute, it may continue to send too many cookies, making the Benchmark unscannable for that tool. Burp Pro prior to 1.6.29 had this issue, but it was fixed in the 1.6.29 release.&lt;br /&gt;
&lt;br /&gt;
b) The Benchmark is a BIG Application&lt;br /&gt;
&lt;br /&gt;
Yes. It is, so you might have to give your scanner more memory than it normally uses by default in order to successfully scan the entire Benchmark. Please consult your tool vendor's documentation on how to give it more memory.&lt;br /&gt;
&lt;br /&gt;
Your machine itself might not have enough memory in the first place. For example, we were not able to successfully scan the 1.2beta with OWASP ZAP with only 8 GB of RAM. So, you might need a more powerful machine or a cloud-provided machine to successfully scan the Benchmark with certain DAST tools. You may have similar problems with SAST tools against large versions of the Benchmark, like the 1.1 release.&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
&lt;br /&gt;
The following people, organizations, and many others, have contributed to this project and their contributions are much appreciated!&lt;br /&gt;
&lt;br /&gt;
* Lots of Vendors - Many vendors have provided us with either trial licenses we can use, or they have run their tools themselves and either sent us results files, or written and contributed scorecard generators for their tool. Many have also provided valuable feedback so we can make the Benchmark more accurate and more realistic.&lt;br /&gt;
* Juan Gama - Development of initial release and continued support&lt;br /&gt;
* Ken Prole - Assistance with automated scorecard development using CodeDx&lt;br /&gt;
* Nick Sanidas - Development of initial release&lt;br /&gt;
* Denim Group - Contribution of scan results to facilitate scorecard development&lt;br /&gt;
* Tasos Laskos - Significant feedback on the DAST version of the Benchmark&lt;br /&gt;
* Ann Campbell - From SonarSource - for fixing our SonarQube results parser&lt;br /&gt;
* Dhiraj Mishra - OWASP Member - contributed SQLi/XSS fuzz vectors as initial contribution towards adding support for WAF/RASP scoring&lt;br /&gt;
&lt;br /&gt;
[[File:CWE_Logo.jpeg|link=https://cwe.mitre.org/]] - The CWE project for providing a mapping mechanism to easily map test cases to issues found by vulnerability detection tools.&lt;br /&gt;
&lt;br /&gt;
We are looking for volunteers. Please contact [mailto:dave.wichers@owasp.org Dave Wichers] if you are interested in contributing new test cases, tool results run against the benchmark, or anything else.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=252754</id>
		<title>Benchmark</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=252754"/>
				<updated>2019-07-01T19:20:45Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Main = &lt;br /&gt;
 &amp;lt;div style=&amp;quot;width:100%;height:100px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Lab_big.jpg|link=OWASP_Project_Stages#tab.3DLab_Projects]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== OWASP Benchmark Project  ==&lt;br /&gt;
The OWASP Benchmark for Security Automation (OWASP Benchmark) is a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services (henceforth simply referred to as 'tools'). Without the ability to measure these tools, it is difficult to understand their strengths and weaknesses, and compare them to each other. Each version of the OWASP Benchmark contains thousands of test cases that are fully runnable and exploitable, each of which maps to the appropriate CWE number for that vulnerability.&lt;br /&gt;
&lt;br /&gt;
You can use the OWASP Benchmark with [[Source_Code_Analysis_Tools | Static Application Security Testing (SAST)]] tools, [[:Category:Vulnerability_Scanning_Tools | Dynamic Application Security Testing (DAST)]] tools like OWASP [[ZAP]] and Interactive Application Security Testing (IAST) tools. Benchmark is implemented in Java.  Future versions may expand to include other languages.&lt;br /&gt;
&lt;br /&gt;
==Benchmark Project Scoring Philosophy==&lt;br /&gt;
&lt;br /&gt;
Security tools (SAST, DAST, and IAST) are amazing when they find a complex vulnerability in your code.  But with widespread misunderstanding of the specific vulnerabilities automated tools cover, end users are often left with a false sense of security.&lt;br /&gt;
&lt;br /&gt;
We are on a quest to measure just how good these tools are at discovering and properly diagnosing security problems in applications. We rely on the [http://en.wikipedia.org/wiki/Receiver_operating_characteristic long history] of military and medical evaluation of detection technology as a foundation for our research. Therefore, the test suite tests both real and fake vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
There are four possible test outcomes in the Benchmark:&lt;br /&gt;
&lt;br /&gt;
# Tool correctly identifies a real vulnerability (True Positive - TP)&lt;br /&gt;
# Tool fails to identify a real vulnerability (False Negative - FN)&lt;br /&gt;
# Tool correctly ignores a false alarm (True Negative - TN)&lt;br /&gt;
# Tool fails to ignore a false alarm (False Positive - FP)&lt;br /&gt;
&lt;br /&gt;
We can learn a lot about a tool from these four metrics. Consider a tool that simply flags every line of code as vulnerable. This tool will perfectly identify all vulnerabilities!  But it will also have 100% false positives and thus adds no value.  Similarly, consider a tool that reports absolutely nothing. This tool will have zero false positives, but will also identify zero real vulnerabilities and is also worthless. You can even imagine a tool that flips a coin to decide whether to report whether each test case contains a vulnerability. The result would be 50% true positives and 50% false positives.  We need a way to distinguish valuable security tools from these trivial ones.&lt;br /&gt;
&lt;br /&gt;
The line that connects all these points, from (0,0) to (100,100), roughly translates to &amp;quot;random guessing.&amp;quot; The ultimate measure of a security tool is how much better it can do than this line. The diagram below shows how we evaluate security tools against the Benchmark.&lt;br /&gt;
&lt;br /&gt;
[[File:Wbe guide.png]]&lt;br /&gt;
&lt;br /&gt;
A point plotted on this chart provides a visual indication of how well a tool did considering both the True Positives the tool reported, as well as the False Positives it reported. We also want to compute an individual score for that point in the range 0 - 100, which we call the Benchmark Accuracy Score.&lt;br /&gt;
&lt;br /&gt;
The Benchmark Accuracy Score is essentially a [https://en.wikipedia.org/wiki/Youden%27s_J_statistic Youden Index], which is a standard way of summarizing the accuracy of a set of tests. Youden's index is one of the oldest measures of diagnostic accuracy. It is a global measure of test performance, used to evaluate the overall discriminative power of a diagnostic procedure and to compare that test with other tests. Youden's index is calculated by deducting 1 from the sum of a test's sensitivity and specificity, expressed not as percentages but as fractions: (sensitivity + specificity) - 1. For a test with poor diagnostic accuracy, Youden's index equals 0, and for a perfect test it equals 1.&lt;br /&gt;
&lt;br /&gt;
  So for example, if a tool has a True Positive Rate (TPR) of .98 (i.e., 98%) &lt;br /&gt;
    and False Positive Rate (FPR) of .05 (i.e., 5%)&lt;br /&gt;
  Sensitivity = TPR (.98)&lt;br /&gt;
  Specificity = 1-FPR (.95)&lt;br /&gt;
  So the Youden Index is (.98+.95) - 1 = .93&lt;br /&gt;
  &lt;br /&gt;
  And this would equate to a Benchmark score of 93 (since we normalize this to the range 0 - 100)&lt;br /&gt;
&lt;br /&gt;
On the graph, the Benchmark Score is the length of the line from the point down to the diagonal “guessing” line. Note that a Benchmark score can actually be negative if the point is below the line. This happens when the False Positive Rate is higher than the True Positive Rate.&lt;br /&gt;
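The worked example above can be expressed as a tiny calculation. This is a hypothetical helper for illustration, not code from the Benchmark project:&lt;br /&gt;

```python
def benchmark_score(tpr, fpr):
    """Benchmark Accuracy Score: the Youden index (sensitivity + specificity - 1),
    normalized to the range 0-100. The result is negative when FPR > TPR,
    i.e., when the point falls below the diagonal guessing line."""
    sensitivity = tpr          # true positive rate
    specificity = 1.0 - fpr    # one minus the false positive rate
    youden = sensitivity + specificity - 1.0   # algebraically equal to tpr - fpr
    return round(youden * 100)

# Worked example from the text: TPR = 0.98, FPR = 0.05
print(benchmark_score(0.98, 0.05))  # -> 93
print(benchmark_score(0.50, 0.50))  # coin-flip tool -> 0
```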
&lt;br /&gt;
==Benchmark Validity==&lt;br /&gt;
&lt;br /&gt;
The Benchmark tests are not exactly like real applications. The tests are derived from coding patterns observed in real applications, but the majority of them are considerably '''simpler''' than real applications. That is, most real world applications will be considerably harder to successfully analyze than the OWASP Benchmark Test Suite. Although the tests are based on real code, it is possible that some tests may have coding patterns that don't occur frequently in real code.&lt;br /&gt;
&lt;br /&gt;
Remember, we are trying to test the capabilities of the tools and make them explicit, so that users can make informed decisions about what tools to use, how to use them, and what results to expect.  This is exactly aligned with the OWASP mission to make application security visible.&lt;br /&gt;
&lt;br /&gt;
==Generating Benchmark Scores==&lt;br /&gt;
&lt;br /&gt;
Anyone can use this Benchmark to evaluate vulnerability detection tools. The basic steps are:&lt;br /&gt;
# Download the Benchmark from GitHub&lt;br /&gt;
# Run your tools against the Benchmark&lt;br /&gt;
# Run the BenchmarkScore tool on the reports from your tools&lt;br /&gt;
&lt;br /&gt;
That's it!&lt;br /&gt;
&lt;br /&gt;
Full details on how to do this are at the bottom of the page on the Quick_Start tab.&lt;br /&gt;
&lt;br /&gt;
We encourage vendors, open source tool developers, and end users to verify their application security tools against the Benchmark. In order to ensure that the results are fair and useful, we ask that you follow a few simple rules when publishing results. We won't recognize any results that aren't easily reproducible:&lt;br /&gt;
&lt;br /&gt;
# A description of the default “out-of-the-box” installation, version numbers, etc…&lt;br /&gt;
# Any and all configuration, tailoring, onboarding, etc… performed to make the tool run&lt;br /&gt;
# Any and all changes to default security rules, tests, or checks used to achieve the results&lt;br /&gt;
# Easily reproducible steps to run the tool&lt;br /&gt;
&lt;br /&gt;
== Reporting Format==&lt;br /&gt;
&lt;br /&gt;
The Benchmark includes tools to interpret raw tool output, compare it to the expected results, and generate summary charts and graphs. We use the following table format in order to capture all the information generated during the evaluation.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Security Category&lt;br /&gt;
! TP&lt;br /&gt;
! FN&lt;br /&gt;
! TN&lt;br /&gt;
! FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total&lt;br /&gt;
! TPR&lt;br /&gt;
! FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Score&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| General security category for test cases.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positives''': Tests with real vulnerabilities that were correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Negatives''': Tests with real vulnerabilities that were not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Negatives''': Tests with fake vulnerabilities that were correctly not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positives''': Tests with fake vulnerabilities that were incorrectly reported as vulnerable by the tool.&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Total test cases for this category.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positive Rate''': TP / ( TP + FN ) - Also referred to as Recall (Sensitivity), as defined at [https://en.wikipedia.org/wiki/Precision_and_recall Wikipedia].&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive Rate''': FP / ( FP + TN ).&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Normalized distance from the “guessing line”: TPR - FPR.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Command Injection&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Etc...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | &lt;br /&gt;
! Total TP&lt;br /&gt;
! Total FN&lt;br /&gt;
! Total TN&lt;br /&gt;
! Total FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total TC&lt;br /&gt;
! Average TPR&lt;br /&gt;
! Average FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Average Score&lt;br /&gt;
|}&lt;br /&gt;
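The derived columns of the table above follow directly from the four counts. As a sketch (a hypothetical helper, not the actual scorecard generator):&lt;br /&gt;

```python
def category_row(tp, fn, tn, fp):
    """Compute the derived columns of the reporting table for one category."""
    total = tp + fn + tn + fp   # total test cases in this category
    tpr = tp / (tp + fn)        # True Positive Rate (recall/sensitivity)
    fpr = fp / (fp + tn)        # False Positive Rate
    score = tpr - fpr           # normalized distance from the guessing line
    return {"Total": total, "TPR": tpr, "FPR": fpr, "Score": score}

# Example: 100 real and 100 fake vulnerabilities, half the real ones found,
# a quarter of the fake ones incorrectly flagged.
row = category_row(tp=50, fn=50, tn=75, fp=25)
print(row)  # -> {'Total': 200, 'TPR': 0.5, 'FPR': 0.25, 'Score': 0.25}
```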
&lt;br /&gt;
==Code Repo and Build/Run Instructions ==&lt;br /&gt;
&lt;br /&gt;
See the '''Getting Started''' and '''Getting, Building, and Running the Benchmark''' sections on the Quick Start tab.&lt;br /&gt;
&lt;br /&gt;
==Licensing==&lt;br /&gt;
&lt;br /&gt;
The OWASP Benchmark is free to use under the [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0].&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
[https://lists.owasp.org/mailman/listinfo/owasp-benchmark-project OWASP Benchmark Mailing List]&lt;br /&gt;
&lt;br /&gt;
== Project Leaders ==&lt;br /&gt;
&lt;br /&gt;
[https://www.owasp.org/index.php/User:Wichers Dave Wichers] [mailto:dave.wichers@owasp.org @]&lt;br /&gt;
&lt;br /&gt;
== Project References ==&lt;br /&gt;
* [https://www.mir-swamp.org/#packages/public Software Assurance Marketplace (SWAMP) - set of curated packages to test tools against]&lt;br /&gt;
* [http://samate.nist.gov/Other_Test_Collections.html SAMATE List of Test Collections]&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
* [http://samate.nist.gov/SARD/testsuite.php NSA's Juliet for Java]&lt;br /&gt;
* [http://sectoolmarket.com/ The Web Application Vulnerability Scanner Evaluation Project (WAVSEP)]&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;padding-left:25px;width:200px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
All test code and project files can be downloaded from [https://github.com/OWASP/benchmark OWASP GitHub].&lt;br /&gt;
&lt;br /&gt;
== Project Intro Video ==&lt;br /&gt;
&lt;br /&gt;
[[File:BenchmarkPodcastTitlePage.jpg|200px|link=https://www.youtube.com/watch?v=HQP8dwc3jJA&amp;amp;index=5&amp;amp;list=PLGB2s-U5FSWOmEStMt3JqlMFJvRYqeVW5]]&lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
* LOOKING FOR VOLUNTEERS!! - We are looking for individuals and organizations to join and make this a much more community-driven project, including additional co-leaders to help take this project to the next level. Contributors could work on things like new test cases, additional tool scorecard generators, adding support for languages beyond Java, and a host of other improvements. Please contact [mailto:dave.wichers@owasp.org me] if you are interested in contributing at any level.&lt;br /&gt;
* June 5, 2016 - Benchmark Version 1.2 Released&lt;br /&gt;
* Sep 24, 2015 - Benchmark introduced to broader OWASP community at [https://appsecusa2015.sched.org/event/3r9k/using-the-owasp-benchmark-to-assess-automated-vulnerability-analysis-tools AppSec USA]&lt;br /&gt;
* Aug 27, 2015 - U.S. Dept. of Homeland Security (DHS) is financially supporting the Benchmark project.&lt;br /&gt;
* Aug 15, 2015 - Benchmark Version 1.2beta Released with full DAST Support. Checkmarx and ZAP scorecard generators also released.&lt;br /&gt;
* July 10, 2015 - Benchmark Scorecard generator and open source scorecards released&lt;br /&gt;
* May 23, 2015 - Benchmark Version 1.1 Released&lt;br /&gt;
* April 15, 2015 - Benchmark Version 1.0 Released&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; rowspan=&amp;quot;2&amp;quot;| [[File:Owasp-incubator-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Incubator_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Test Cases =&lt;br /&gt;
&lt;br /&gt;
Version 1.0 of the Benchmark was published on April 15, 2015 and had 20,983 test cases. On May 23, 2015, version 1.1 of the Benchmark was released. The 1.1 release improves on the previous version by making sure that there are both true positives and false positives in every vulnerability area. Version 1.2 was released on June 5, 2016 (and the 1.2beta August 15, 2015).&lt;br /&gt;
&lt;br /&gt;
From version 1.2 onward, the Benchmark is a fully executable web application, which means it can be scanned by any kind of vulnerability detection tool. The 1.2 release has been limited to slightly fewer than 3,000 test cases so that DAST tools can scan it more easily (scans don't take as long, and the tools don't run out of memory or blow up the size of their databases). The 1.2 release covers the same vulnerability areas as 1.1, apart from a few added Spring database SQL Injection tests. The bulk of the work was turning each test case into something that actually runs correctly AND is fully exploitable, and then generating a working UI on top, in order to turn the test cases into a real running application.&lt;br /&gt;
&lt;br /&gt;
You can still download Benchmark version 1.1 by cloning the release marked with the Git tag '1.1'.&lt;br /&gt;
&lt;br /&gt;
The test case areas and quantities for the Benchmark releases are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Vulnerability Area&lt;br /&gt;
! # of Tests in v1.1&lt;br /&gt;
! # of Tests in v1.2&lt;br /&gt;
! CWE Number&lt;br /&gt;
|-&lt;br /&gt;
| [[Command Injection]]&lt;br /&gt;
| 2708&lt;br /&gt;
| 251&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/78.html 78]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Cryptography&lt;br /&gt;
| 1440&lt;br /&gt;
| 246&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/327.html 327]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Hashing&lt;br /&gt;
| 1421&lt;br /&gt;
| 236&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/328.html 328]&lt;br /&gt;
|-&lt;br /&gt;
| [[LDAP injection | LDAP Injection]]&lt;br /&gt;
| 736&lt;br /&gt;
| 59&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/90.html 90]&lt;br /&gt;
|-&lt;br /&gt;
| [[Path Traversal]]&lt;br /&gt;
| 2630&lt;br /&gt;
| 268&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/22.html 22]&lt;br /&gt;
|-&lt;br /&gt;
| Secure Cookie Flag&lt;br /&gt;
| 416&lt;br /&gt;
| 67&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/614.html 614]&lt;br /&gt;
|-&lt;br /&gt;
| [[SQL Injection]]&lt;br /&gt;
| 3529&lt;br /&gt;
| 504&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/89.html 89]&lt;br /&gt;
|-&lt;br /&gt;
| [[Trust Boundary Violation]]&lt;br /&gt;
| 725&lt;br /&gt;
| 126&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/501.html 501]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Randomness&lt;br /&gt;
| 3640&lt;br /&gt;
| 493&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/330.html 330]&lt;br /&gt;
|-&lt;br /&gt;
| [[XPATH Injection]]&lt;br /&gt;
| 347&lt;br /&gt;
| 35&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/643.html 643]&lt;br /&gt;
|-&lt;br /&gt;
| [[XSS]] (Cross-Site Scripting)&lt;br /&gt;
| 3449&lt;br /&gt;
| 455&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/79.html 79]&lt;br /&gt;
|-&lt;br /&gt;
| Total Test Cases&lt;br /&gt;
| 21,041&lt;br /&gt;
| 2,740&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Each Benchmark version comes with a spreadsheet that lists every test case, the vulnerability category, the CWE number, and the expected result (true finding/false positive). Look for the file: expectedresults-VERSION#.csv in the project root directory.&lt;br /&gt;
&lt;br /&gt;
Every test case is:&lt;br /&gt;
* a servlet or JSP (currently they are all servlets, but we plan to add JSPs)&lt;br /&gt;
* either a true vulnerability or a false positive for a single issue&lt;br /&gt;
&lt;br /&gt;
The Benchmark is intended to help determine how well analysis tools correctly analyze a broad array of application and framework behavior, including:&lt;br /&gt;
&lt;br /&gt;
* HTTP request and response problems&lt;br /&gt;
* Simple and complex data flow&lt;br /&gt;
* Simple and complex control flow&lt;br /&gt;
* Popular frameworks&lt;br /&gt;
* Inversion of control&lt;br /&gt;
* Reflection&lt;br /&gt;
* Class loading&lt;br /&gt;
* Annotations&lt;br /&gt;
* Popular UI technologies (particularly JavaScript frameworks)&lt;br /&gt;
&lt;br /&gt;
Not all of these are tested by the Benchmark yet, but future enhancements are intended to provide more coverage of these issues.&lt;br /&gt;
&lt;br /&gt;
Additional future enhancements could cover:&lt;br /&gt;
* All vulnerability types in the [[Top10 | OWASP Top 10]]&lt;br /&gt;
* Does the tool find flaws in libraries?&lt;br /&gt;
* Does the tool find flaws spanning custom code and libraries?&lt;br /&gt;
* Does the tool handle web services? REST, XML, GWT, etc.&lt;br /&gt;
* Does the tool work with different app servers? Different Java platforms?&lt;br /&gt;
&lt;br /&gt;
== Example Test Case ==&lt;br /&gt;
&lt;br /&gt;
Each test case is a simple Java EE servlet. BenchmarkTest00001 in version 1.0 of the Benchmark was an LDAP Injection test with the following metadata in the accompanying BenchmarkTest00001.xml file:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;test-metadata&amp;gt;&lt;br /&gt;
    &amp;lt;category&amp;gt;ldapi&amp;lt;/category&amp;gt;&lt;br /&gt;
    &amp;lt;test-number&amp;gt;00001&amp;lt;/test-number&amp;gt;&lt;br /&gt;
    &amp;lt;vulnerability&amp;gt;true&amp;lt;/vulnerability&amp;gt;&lt;br /&gt;
    &amp;lt;cwe&amp;gt;90&amp;lt;/cwe&amp;gt;&lt;br /&gt;
  &amp;lt;/test-metadata&amp;gt;&lt;br /&gt;
&lt;br /&gt;
BenchmarkTest00001.java in the OWASP Benchmark 1.0 simply reads in all the cookie values, looks for a cookie named &amp;quot;foo&amp;quot;, and uses the value of this cookie when performing an LDAP query. Here's the code for BenchmarkTest00001.java:&lt;br /&gt;
&lt;br /&gt;
  package org.owasp.benchmark.testcode;&lt;br /&gt;
  &lt;br /&gt;
  import java.io.IOException;&lt;br /&gt;
  &lt;br /&gt;
  import javax.servlet.ServletException;&lt;br /&gt;
  import javax.servlet.annotation.WebServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
  import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
  &lt;br /&gt;
  @WebServlet(&amp;quot;/BenchmarkTest00001&amp;quot;)&lt;br /&gt;
  public class BenchmarkTest00001 extends HttpServlet {&lt;br /&gt;
  	&lt;br /&gt;
  	private static final long serialVersionUID = 1L;&lt;br /&gt;
  	&lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		doPost(request, response);&lt;br /&gt;
  	}&lt;br /&gt;
  &lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		// some code&lt;br /&gt;
  &lt;br /&gt;
  		javax.servlet.http.Cookie[] cookies = request.getCookies();&lt;br /&gt;
  		&lt;br /&gt;
  		String param = null;&lt;br /&gt;
  		boolean foundit = false;&lt;br /&gt;
  		if (cookies != null) {&lt;br /&gt;
  			for (javax.servlet.http.Cookie cookie : cookies) {&lt;br /&gt;
  				if (cookie.getName().equals(&amp;quot;foo&amp;quot;)) {&lt;br /&gt;
  					param = cookie.getValue();&lt;br /&gt;
  					foundit = true;&lt;br /&gt;
  				}&lt;br /&gt;
  			}&lt;br /&gt;
  			if (!foundit) {&lt;br /&gt;
  				// no cookie found in collection&lt;br /&gt;
  				param = &amp;quot;&amp;quot;;&lt;br /&gt;
  			}&lt;br /&gt;
  		} else {&lt;br /&gt;
  			// no cookies&lt;br /&gt;
  			param = &amp;quot;&amp;quot;;&lt;br /&gt;
  		}&lt;br /&gt;
  		&lt;br /&gt;
  		try {&lt;br /&gt;
  			javax.naming.directory.DirContext dc = org.owasp.benchmark.helpers.Utils.getDirContext();&lt;br /&gt;
  			Object[] filterArgs = {&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;};&lt;br /&gt;
  			dc.search(&amp;quot;name&amp;quot;, param, filterArgs, new javax.naming.directory.SearchControls());&lt;br /&gt;
  		} catch (javax.naming.NamingException e) {&lt;br /&gt;
  			throw new ServletException(e);&lt;br /&gt;
  		}&lt;br /&gt;
  	}&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
= Test Case Details =&lt;br /&gt;
&lt;br /&gt;
The following describes situations in the Benchmark where the validity/accuracy of the test cases has come up for debate.&lt;br /&gt;
&lt;br /&gt;
== Cookies as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 and early versions of the 1.2beta included test cases that used cookies as a source of data flowing into XSS vulnerabilities. The Benchmark treated these tests as false positives because the Benchmark team reasoned that you'd have to exploit an XSS vulnerability in the first place to set the cookie value, so it wasn't fair/reasonable to consider an XSS vulnerability whose data source was a cookie value as actually exploitable. However, some tool vendors, like Fortify, Burp, and Arachni, disagreed with this analysis and felt that cookies were in fact a valid source of attack for XSS vulnerabilities. Given that there are good arguments on both sides of this safe vs. unsafe question, we decided on Aug 25, 2015 to simply remove those test cases from the Benchmark. If, in the future, we decide who is right, we may add such test cases back in.&lt;br /&gt;
&lt;br /&gt;
== Headers as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Similarly, the Benchmark team believes that the names of headers aren't a valid source of XSS attack, for the same reason we thought cookie values aren't: it would require an XSS vulnerability to be exploited in the first place to set them. In fact, we feel this argument is much stronger for header names than for cookie values. Right now, the Benchmark doesn't include any header names as sources for XSS test cases, but we plan to add them and mark them as false positives in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
We do have header values as sources for some XSS test cases in the Benchmark, but only 'Referer' is treated as a valid XSS source (i.e., those tests are true positives), because other headers are not viable XSS sources. Other headers are, of course, valid sources for other attack vectors, like SQL Injection or Command Injection.&lt;br /&gt;
&lt;br /&gt;
== False Positive Scenario: Static Values Passed to Unsafe (Weak) Sinks ==&lt;br /&gt;
&lt;br /&gt;
The Benchmark has MANY test cases where unsafe data flows in from the browser, but that data is replaced with static content as it goes through the propagators in that specific test case. This static (safe) data then flows to the sink, which may be a weak/unsafe sink, for example, an unsafely constructed SQL statement. The Benchmark treats those test cases as false positives because there is absolutely no way for that weakness to be exploited. The NSA Juliet SAST benchmark treats such test cases exactly the same way, as false positives. We do recognize that there are weaknesses in those test cases, even though they aren't exploitable.&lt;br /&gt;
&lt;br /&gt;
Some SAST tool vendors feel it is appropriate to point out those weaknesses, and that's fine. However, if a tool points those weaknesses out and does not distinguish them from truly exploitable vulnerabilities, then the Benchmark treats those findings as false positives. If the tool allows a user to differentiate these non-exploitable weaknesses from exploitable vulnerabilities, then the Benchmark scorecard generator can use that information to filter out these extra findings (along with any other similarly marked findings) so they don't count against that tool when calculating its Benchmark score. In the real world, it's important for analysts to be able to filter out such findings if they only have time to deal with the most critical, actually exploitable, vulnerabilities. If a tool doesn't make it easy for an analyst to distinguish the two situations, then it is doing the analyst a disservice.&lt;br /&gt;
&lt;br /&gt;
This issue doesn't affect DAST tools, since they only report what appears to them to be exploitable.&lt;br /&gt;
&lt;br /&gt;
If you are a SAST tool vendor or user, and you believe the Benchmark scorecard generator is counting such findings against a tool even though there is a way to tell them apart, please let the project team know so the scorecard generator can be adjusted to not count those findings against the tool. The Benchmark project's goal is to generate the most fair and accurate results it can. If such an adjustment is made to how a scorecard is generated for a tool, we plan to document that this was done, and explain how others could perform the same filtering within that tool in order to get the same focused set of results.&lt;br /&gt;
&lt;br /&gt;
== Dead Code ==&lt;br /&gt;
&lt;br /&gt;
Some SAST tools point out weaknesses in dead code because such code might eventually end up being used, or might serve as a bad coding example (think cut/paste of code). We think this is fine/appropriate. However, there is no dead code in the OWASP Benchmark (at least not intentionally), so dead code should not be causing any tool to report unnecessary false positives.&lt;br /&gt;
&lt;br /&gt;
= Tool Support/Results =&lt;br /&gt;
&lt;br /&gt;
The results for 5 free tools, PMD, FindBugs, FindBugs with the FindSecBugs plugin, SonarQube, and OWASP ZAP, are available here against version 1.2 of the Benchmark: https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html. We've included multiple versions of FindSecBugs' and ZAP's results so you can see the improvements they are making in finding vulnerabilities in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
We have Benchmark results for all the following tools, but haven't publicly released the results for any commercial tools. However, we have included a 'Commercial Average' page, which provides a summary of results for 6 commercial SAST tools along with anonymized versions of each SAST tool's scorecard.&lt;br /&gt;
&lt;br /&gt;
The Benchmark can generate results for the following tools: &lt;br /&gt;
&lt;br /&gt;
'''Free Static Application Security Testing (SAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://pmd.github.io/ PMD] (which really has no security rules) - .xml results file&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - .xml results file (Note: FindBugs hasn't been updated since 2015. Use SpotBugs instead (see below))&lt;br /&gt;
* [https://www.sonarqube.org/downloads/ SonarQube] - .xml results file&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - .xml results file. This is the successor to FindBugs.&lt;br /&gt;
* SpotBugs with the [http://find-sec-bugs.github.io/ FindSecurityBugs plugin] - .xml results file&lt;br /&gt;
&lt;br /&gt;
Note: We looked into supporting [http://checkstyle.sourceforge.net/ Checkstyle] but it has no security rules, just like PMD. The [http://fb-contrib.sourceforge.net/ fb-contrib] FindBugs plugin doesn't have any security rules either. We did test [http://errorprone.info/ Error Prone], and found that it does report some use of [http://errorprone.info/bugpattern/InsecureCipherMode insecure ciphers (CWE-327)], but that's it.&lt;br /&gt;
&lt;br /&gt;
'''Commercial SAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.castsoftware.com/products/application-intelligence-platform CAST Application Intelligence Platform (AIP)] - .xml results file&lt;br /&gt;
* [https://www.checkmarx.com/products/static-application-security-testing/ Checkmarx CxSAST] - .xml results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source IBM AppScan Source (Standalone and Cloud)] - .ozasmt or .xml results file&lt;br /&gt;
* [https://juliasoft.com/solutions/julia-for-security/ Julia Analyzer] - .xml results file&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/static-code-analysis-sast/overview Micro Focus (Formerly HPE) Fortify (On-Demand and stand-alone versions)] - .fpr results file&lt;br /&gt;
* [https://www.parasoft.com/products/jtest/ Parasoft Jtest] - .xml results file&lt;br /&gt;
* [https://www.shiftleft.io/product/ ShiftLeft SAST] - .sl results file (Benchmark specific format. Ask vendor how to generate this)&lt;br /&gt;
* [https://snappycodeaudit.com/category/static-code-analysis Snappycode Audit's SnappyTick Source Edition (SAST)] - .xml results file&lt;br /&gt;
* [https://www.sourcemeter.com/features/ SourceMeter] - .txt results file of ALL results from VulnerabilityHunter&lt;br /&gt;
* [https://www.synopsys.com/content/dam/synopsys/sig-assets/datasheets/SAST-Coverity-datasheet.pdf Synopsys Static Analysis (Formerly Coverity Code Advisor) (On-Demand and stand-alone versions)] - .json results file (You can scan Benchmark w/Coverity for free. See: https://scan.coverity.com/)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] - .xml results file&lt;br /&gt;
* [https://www.veracode.com/products/binary-static-analysis-sast Veracode SAST] - .xml results file&lt;br /&gt;
* [https://www.rigs-it.com/xanitizer/ XANITIZER] - .xml results file ([https://www.rigs-it.com/wp-content/uploads/2018/03/howtosetupxanitizerforowaspbenchmarkproject.pdf Their white paper on how to set up Xanitizer to scan the Benchmark.]) (Free trial available)&lt;br /&gt;
&lt;br /&gt;
We are looking for results for other commercial static analysis tools like: [https://www.grammatech.com/products/codesonar Grammatech CodeSonar], [https://www.roguewave.com/products-services/klocwork RogueWave's Klocwork], etc. If you have a license for any static analysis tool not already listed above and can run it on the Benchmark and send us the results file that would be very helpful. &lt;br /&gt;
&lt;br /&gt;
The free SAST tools come bundled with the Benchmark so you can run them yourselves. If you have a license for any commercial SAST tool, you can also run them against the Benchmark. Just put your results files in the /results folder of the project, and then run the BenchmarkScore script for your platform (.sh / .bat) and it will generate a scorecard in the /scorecard directory for all the tools you have results for that are currently supported.&lt;br /&gt;
&lt;br /&gt;
'''Free Dynamic Application Security Testing (DAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
Note: While we support scorecard generators for these Free and Commercial DAST tools, we haven't been able to get a full/clean run against the Benchmark from most of these tools. As such, some of these scorecard generators might still need some work to properly reflect their results. If you notice any problems, let us know.&lt;br /&gt;
&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - .xml results file&lt;br /&gt;
** To generate .xml, run: ./bin/arachni_reporter &amp;quot;Your_AFR_Results_Filename.afr&amp;quot; --reporter=xml:outfile=Benchmark1.2-Arachni.xml&lt;br /&gt;
* [https://www.owasp.org/index.php/ZAP OWASP ZAP] - .xml results file. To generate a complete ZAP XML results file so you can generate a valid scorecard, make sure you:&lt;br /&gt;
** Tools &amp;gt; Options &amp;gt; Alerts - and set the Max alert instances to a large value, such as 500.&lt;br /&gt;
** Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
'''Commercial DAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.acunetix.com/vulnerability-scanner/ Acunetix Web Vulnerability Scanner (WVS)] - .xml results file (Generated using [https://www.acunetix.com/resources/wvs7manual.pdf command line interface (see Chapter 10.)] /ExportXML switch)&lt;br /&gt;
* [https://portswigger.net/burp Burp Pro] - .xml results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/appscan-standard IBM AppScan] - .xml results file&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/webinspect-dynamic-analysis-dast/overview Micro Focus (Formerly HPE) WebInspect] - .xml results file&lt;br /&gt;
* [https://www.netsparker.com/web-vulnerability-scanner/ Netsparker] - .xml results file&lt;br /&gt;
* [https://www.qualys.com/apps/web-app-scanning/ Qualys Web App Scanner] - .xml results file&lt;br /&gt;
* [https://www.rapid7.com/products/appspider/ Rapid7 AppSpider] - .xml results file&lt;br /&gt;
&lt;br /&gt;
If you have access to other DAST Tools, PLEASE RUN THEM FOR US against the Benchmark, and send us the results file so we can build a scorecard generator for that tool.&lt;br /&gt;
&lt;br /&gt;
'''Commercial Interactive Application Security Testing (IAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] - .zip results file (You can scan Benchmark w/Contrast for free. See: https://www.contrastsecurity.com/contrast-community-edition)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection (IAST)] - .hlg results file&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/security-testing/interactive-application-security-testing.html Seeker IAST] - .csv results file&lt;br /&gt;
&lt;br /&gt;
'''Commercial Hybrid Analysis Application Security Testing Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.iappsecure.com/products.html Fusion Lite Insight] - .xml results file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It may be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.'''&lt;br /&gt;
&lt;br /&gt;
The project has automated test harnesses for these vulnerability detection tools, so we can repeatably run the tools against each version of the Benchmark and automatically produce scorecards in our desired format.&lt;br /&gt;
&lt;br /&gt;
We want to test as many tools as possible against the Benchmark. If you are:&lt;br /&gt;
&lt;br /&gt;
* A tool vendor and want to participate in the project&lt;br /&gt;
* Someone who wants to help score a free tool against the project&lt;br /&gt;
* Someone who has a license to a commercial tool and the terms of the license allow you to publish tool results, and you want to participate&lt;br /&gt;
&lt;br /&gt;
please let [mailto:dave.wichers@owasp.org me] know!&lt;br /&gt;
&lt;br /&gt;
= Quick Start =&lt;br /&gt;
&lt;br /&gt;
==What is in the Benchmark?==&lt;br /&gt;
The Benchmark is a Java Maven project. Its primary component is thousands of test cases (e.g., BenchmarkTest00001.java), each of which is a single Java servlet that contains a single vulnerability (either a true positive or a false positive). The vulnerabilities currently span about a dozen different types, and coverage is expected to expand significantly in the future.&lt;br /&gt;
&lt;br /&gt;
An expectedresults.csv file is published with each version of the Benchmark (e.g., expectedresults-1.1.csv), and it lists the expected result for each test case. Here’s what the first two rows of this file look like for version 1.1 of the Benchmark:&lt;br /&gt;
&lt;br /&gt;
 # test name		category	real vulnerability	CWE	Benchmark version: 1.1	2015-05-22&lt;br /&gt;
 BenchmarkTest00001	crypto		TRUE			327&lt;br /&gt;
&lt;br /&gt;
This simply means that the first test case is a crypto test case (use of weak cryptographic algorithms), that it is a real vulnerability (as opposed to a false positive), and that this issue maps to CWE 327. It also indicates that this expected results file is for Benchmark version 1.1 (produced May 22, 2015). There is a row in this file for each of the tens of thousands of test cases in the Benchmark. Each time a new version of the Benchmark is published, a new corresponding results file is generated, and each test case can be completely different from one version to the next.&lt;br /&gt;
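The row layout described above can be sketched as a small parser. This is an illustrative sketch only, not code from the Benchmark project; it assumes comma-separated fields (test name, category, real vulnerability, CWE) with '#'-prefixed comment lines, as the .csv file name and the sample rows suggest.&lt;br /&gt;

```java
import java.util.ArrayList;
import java.util.List;

public class ExpectedResults {
    /** One row of an expectedresults-style file. */
    public static class Row {
        public final String testName;
        public final String category;
        public final boolean realVulnerability;
        public final int cwe;

        Row(String testName, String category, boolean realVulnerability, int cwe) {
            this.testName = testName;
            this.category = category;
            this.realVulnerability = realVulnerability;
            this.cwe = cwe;
        }
    }

    /** Parses rows, skipping blank lines and '#'-prefixed comment/header lines. */
    public static List<Row> parse(List<String> lines) {
        List<Row> rows = new ArrayList<>();
        for (String line : lines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("#")) continue;
            String[] f = trimmed.split(",");
            rows.add(new Row(f[0].trim(), f[1].trim(),
                    f[2].trim().equalsIgnoreCase("TRUE"),
                    Integer.parseInt(f[3].trim())));
        }
        return rows;
    }
}
```

A scorecard generator can compare a tool's reported findings for each test name against the parsed expected result to classify them as true positives, false positives, and so on.&lt;br /&gt;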
&lt;br /&gt;
The Benchmark also comes with a bunch of different utilities, commands, and prepackaged open source security analysis tools, all of which can be executed through Maven goals, including:&lt;br /&gt;
&lt;br /&gt;
* Open source vulnerability detection tools to be run against the Benchmark&lt;br /&gt;
* A scorecard generator, which computes a scorecard for each of the tools you have results files for.&lt;br /&gt;
&lt;br /&gt;
==What Can You Do With the Benchmark?==&lt;br /&gt;
* Compile all the software in the Benchmark project (e.g., mvn compile)&lt;br /&gt;
* Run a static vulnerability analysis tool (SAST) against the Benchmark test case code&lt;br /&gt;
&lt;br /&gt;
* Scan a running version of the Benchmark with a dynamic application security testing tool (DAST)&lt;br /&gt;
** Instructions on how to run it are provided below&lt;br /&gt;
&lt;br /&gt;
* Generate scorecards for each of the tools you have results files for&lt;br /&gt;
** See the Tool Support/Results page for the list of tools the Benchmark supports generating scorecards for&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
Before downloading or using the Benchmark make sure you have the following installed and configured properly:&lt;br /&gt;
&lt;br /&gt;
 GIT: http://git-scm.com/ or https://github.com/&lt;br /&gt;
 Maven: https://maven.apache.org/  (Version: 3.2.3 or newer works.)&lt;br /&gt;
 Java: http://www.oracle.com/technetwork/java/javase/downloads/index.html (Java 7 or 8) (64-bit)&lt;br /&gt;
&lt;br /&gt;
==Getting, Building, and Running the Benchmark==&lt;br /&gt;
&lt;br /&gt;
To download and build everything:&lt;br /&gt;
&lt;br /&gt;
 $ git clone https://github.com/OWASP/benchmark &lt;br /&gt;
 $ cd benchmark&lt;br /&gt;
 $ mvn compile   (compiles everything)&lt;br /&gt;
 $ ./runBenchmark.sh (Linux) or .\runBenchmark.bat (Windows) - compiles and runs it&lt;br /&gt;
&lt;br /&gt;
Then navigate to: https://localhost:8443/benchmark/ to go to its home page. It uses a self-signed SSL certificate, so you'll get a security warning when you hit the home page.&lt;br /&gt;
&lt;br /&gt;
Note: We have configured the Benchmark app to use up to 6 GB of RAM, which it may need when it is fully scanned by a DAST scanner. The DAST tool itself probably also requires 3+ GB of RAM, so we recommend a machine with 16 GB of RAM if you are going to attempt a full DAST scan, and at least 4 GB (ideally 8 GB) if you just want to play around with the running Benchmark app.&lt;br /&gt;
&lt;br /&gt;
== Using a VM instead ==&lt;br /&gt;
Instead, you can use one of several preconstructed VMs, or follow our instructions for building one:&lt;br /&gt;
&lt;br /&gt;
* Docker: A Dockerfile is checked into the project [https://github.com/OWASP/Benchmark/blob/master/VMs/Dockerfile here]. This Dockerfile should automatically produce a Docker image with the latest Benchmark project files. After you have Docker installed, cd to /VMs, then run: &lt;br /&gt;
 ./buildDockerImage.sh --&amp;gt; This builds the Benchmark Docker image (this will take a WHILE)&lt;br /&gt;
 docker images  --&amp;gt; You should see the new benchmark:latest image in the list provided&lt;br /&gt;
 # The Benchmark Docker Image only has to be created once. &lt;br /&gt;
&lt;br /&gt;
 To run the Benchmark in your Docker VM, just run:&lt;br /&gt;
   ./runDockerImage.sh  --&amp;gt; This pulls in any updates to Benchmark since the Image was built, builds everything, and starts a remotely accessible Benchmark web app.&lt;br /&gt;
 If successful, you should see this at the end:&lt;br /&gt;
   [INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]&lt;br /&gt;
   [INFO] Press Ctrl-C to stop the container...&lt;br /&gt;
 Then simply navigate to: https://localhost:8443/benchmark from the machine you are running Docker&lt;br /&gt;
 &lt;br /&gt;
 Or if you want to access from a different machine:&lt;br /&gt;
  docker-machine ls (in a different terminal) --&amp;gt; To get IP Docker VM is exporting (e.g., tcp://192.168.99.100:2376)&lt;br /&gt;
  Navigate to: https://192.168.99.100:8443/benchmark in your browser (using the above IP as an example)&lt;br /&gt;
&lt;br /&gt;
* Amazon Web Services (AWS) - Here's how you set up the Benchmark on an AWS VM:&lt;br /&gt;
 sudo yum install git&lt;br /&gt;
 # The default Amazon Linux repos don't provide a usable Maven package, so add the EPEL Maven repo:&lt;br /&gt;
 sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo yum install -y apache-maven&lt;br /&gt;
 git clone https://github.com/OWASP/benchmark&lt;br /&gt;
 cd benchmark&lt;br /&gt;
 chmod 755 *.sh&lt;br /&gt;
 ./runBenchmark.sh -- to run it locally on the VM.&lt;br /&gt;
 ./runRemoteAccessibleBenchmark.sh -- to run it so it can be accessed outside the VM (on port 8443).&lt;br /&gt;
&lt;br /&gt;
==Running Free Static Analysis Tools Against the Benchmark==&lt;br /&gt;
There are scripts for running each of the free SAST vulnerability detection tools included with the Benchmark against the Benchmark test cases. On Linux, you might have to make them executable (e.g., chmod 755 *.sh) before you can run them.&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for PMD:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runPMD.sh (Linux) or .\scripts\runPMD.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindBugs.sh (Linux) or .\scripts\runFindBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs with the FindSecBugs plugin:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindSecBugs.sh (Linux) or .\scripts\runFindSecBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
In each case, the script will generate a results file and put it in the /results directory. For example: &lt;br /&gt;
&lt;br /&gt;
 Benchmark_1.2-findbugs-v3.0.1-1026.xml&lt;br /&gt;
&lt;br /&gt;
This results file name is carefully constructed to convey the following: it is a results file for OWASP Benchmark version 1.2, FindBugs was the analysis tool, it was version 3.0.1 of FindBugs, and the analysis took 1026 seconds to run.&lt;br /&gt;
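The naming convention just described can be captured in a few lines of code. The following is an illustrative sketch, not the project's actual parser; it assumes names of the form Benchmark_&amp;lt;benchmark version&amp;gt;-&amp;lt;tool&amp;gt;-v&amp;lt;tool version&amp;gt;-&amp;lt;seconds&amp;gt;.&amp;lt;ext&amp;gt;, as in the example above.&lt;br /&gt;

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResultsFileName {
    // Benchmark_<benchmark version>-<tool>-v<tool version>-<seconds>.<ext>
    private static final Pattern NAME =
        Pattern.compile("Benchmark_([\\d.]+)-([^-]+)-v([\\d.]+)-(\\d+)\\.\\w+");

    /** Returns {benchmark version, tool name, tool version, scan seconds}. */
    public static String[] parse(String filename) {
        Matcher m = NAME.matcher(filename);
        if (!m.matches()) {
            throw new IllegalArgumentException("Unrecognized results file name: " + filename);
        }
        return new String[] { m.group(1), m.group(2), m.group(3), m.group(4) };
    }

    public static void main(String[] args) {
        String[] parts = parse("Benchmark_1.2-findbugs-v3.0.1-1026.xml");
        System.out.println(String.join(" ", parts)); // prints: 1.2 findbugs 3.0.1 1026
    }
}
```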
&lt;br /&gt;
NOTE: If you create a results file yourself, by running a commercial tool for example, you can add the version number and the scan time to the filename in the same way, and the Benchmark scorecard generator will pick this information up and include it in the generated scorecard. If you don't, the scorecard generator might still pick this information up automatically, depending on what metadata is included in the tool's results.&lt;br /&gt;
&lt;br /&gt;
==Generating Scorecards==&lt;br /&gt;
The scorecard generation application BenchmarkScore is included with the Benchmark. It parses the output files generated by any of the supported security tools run against the Benchmark, compares them against the expected results, and produces a set of web pages that detail the accuracy and speed of the tools involved. For the list of currently supported tools, check out the Tool Support/Results tab. If you are using a tool that is not yet supported, simply send us a results file from that tool, and we'll write a parser for it and add it to the supported tools list.&lt;br /&gt;
&lt;br /&gt;
The following command will compute a Benchmark scorecard for all the results files in the '''/results''' directory. The generated scorecard is put into the '''/scorecard''' directory.&lt;br /&gt;
&lt;br /&gt;
 createScorecard.sh (Linux) or createScorecard.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
An example of a real scorecard for some open source tools is provided at the top of the Tool Support/Results tab so you can see what one looks like.&lt;br /&gt;
&lt;br /&gt;
We recommend including the Benchmark version number in any results file name, to help prevent mismatches between the expected results and the actual results files. A tool will not score well against the wrong expected results.&lt;br /&gt;
&lt;br /&gt;
===Customizing Your Scorecard Generation===&lt;br /&gt;
&lt;br /&gt;
The createScorecard scripts are very simple; they contain only one line. Here's what the 1.2 version looks like:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.2.csv results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This Maven command runs the BenchmarkScore application, passing in two parameters. The first is the Benchmark expected results file to compare the tool results against. The second is the name of the directory that contains all the results from tools run against that version of the Benchmark. If you have tool results older than the current version of the Benchmark, 1.1 results for example, then you would do something like this instead:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To keep things organized, we actually put the expected results file inside the same results folder for that version of the Benchmark, so our command looks like this:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;1.1_results/expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In all cases, the generated scorecard is put in the /scorecard folder.&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It is likely to be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.''' It is for just this reason that the Benchmark project isn't releasing such results itself.&lt;br /&gt;
&lt;br /&gt;
= Tool Scanning Tips =&lt;br /&gt;
&lt;br /&gt;
People frequently have difficulty scanning the Benchmark with various tools, for reasons including the size of the Benchmark application and its codebase, and the complexity of the tools themselves. Here is some guidance for some of the tools we have used to scan the Benchmark. If you've learned any tricks for getting better or easier results from a particular tool against the Benchmark, let us know or update this page directly.&lt;br /&gt;
&lt;br /&gt;
== Generic Tips ==&lt;br /&gt;
&lt;br /&gt;
Because of the size of the Benchmark, you may need to give your tool more memory before it starts the scan. If it's a Java-based tool, you may want to pass more memory to it like this:&lt;br /&gt;
&lt;br /&gt;
 -Xmx4G (This gives the Java application 4 GB of memory)&lt;br /&gt;
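&lt;br /&gt;
To verify that an -Xmx flag actually took effect, a Java-based tool's heap ceiling can be checked from inside the JVM with the standard Runtime API (a generic JDK call, unrelated to any specific scanner):&lt;br /&gt;

```java
// Prints the maximum heap size this JVM was launched with, so you can
// confirm that a flag such as -Xmx4G was actually applied.
public class MaxHeap {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```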
&lt;br /&gt;
== SAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Checkmarx ===&lt;br /&gt;
&lt;br /&gt;
The Checkmarx SAST Tool (CxSAST) is ready to scan the OWASP Benchmark out-of-the-box. &lt;br /&gt;
Note that the OWASP Benchmark “hides” some vulnerabilities in dead code areas, for example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
if (0&amp;gt;1)&lt;br /&gt;
{&lt;br /&gt;
  //vulnerable code&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
By default, CxSAST will find these vulnerabilities since Checkmarx believes that including dead code in the scan results is a SAST best practice. &lt;br /&gt;
&lt;br /&gt;
Checkmarx's experience shows that security experts expect to find these types of code vulnerabilities and demand that their developers fix them. However, the OWASP Benchmark considers the flagging of these vulnerabilities to be False Positives, which lowers Checkmarx's overall score. &lt;br /&gt;
&lt;br /&gt;
Therefore, in order to receive an OWASP score untainted by dead code, re-configure CxSAST as follows:&lt;br /&gt;
# Open the CxAudit client for editing Java queries.&lt;br /&gt;
# Override the &amp;quot;Find_Dead_Code&amp;quot; query.&lt;br /&gt;
# Add the commented text of the original query to the new override query.&lt;br /&gt;
# Save the queries.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's fully configured. Simply run the script: ./scripts/runFindBugs.(sh or bat). If you want to run a different version of FindBugs, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs with FindSecBugs ===&lt;br /&gt;
&lt;br /&gt;
[http://h3xstream.github.io/find-sec-bugs/ FindSecurityBugs] is a great plugin for FindBugs that significantly improves FindBugs' ability to find security issues. We include this free tool in the Benchmark and it's fully configured. Simply run the script: ./scripts/runFindSecBugs.(sh or bat). If you want to run a different version of FindSecBugs, just change the version number of the findsecbugs-plugin artifact in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Micro Focus (Formerly HP) Fortify ===&lt;br /&gt;
&lt;br /&gt;
If you are using the Audit Workbench, you can give it more memory and make sure you invoke it in 64-bit mode by doing this:&lt;br /&gt;
&lt;br /&gt;
  set AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  export AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  auditworkbench -64&lt;br /&gt;
&lt;br /&gt;
We found it was easier to use the Maven support in Fortify to scan the Benchmark, and to do it in two phases: translate, then scan. We did something like this:&lt;br /&gt;
&lt;br /&gt;
  Translate Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_17.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx2G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:clean&lt;br /&gt;
  mvn sca:translate&lt;br /&gt;
&lt;br /&gt;
  Scan Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_17.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx10G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:scan&lt;br /&gt;
&lt;br /&gt;
=== PMD ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's fully configured. Simply run the script: ./scripts/runPMD.(sh or bat). If you want to run a different version of PMD, just change its version number in the Benchmark pom.xml file. (NOTE: PMD doesn't find any security issues. We include it because it's interesting to know that it doesn't.)&lt;br /&gt;
&lt;br /&gt;
=== SonarQube ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's mostly configured, but it's a bit tricky because SonarQube has two parts: a standalone scanner for Java, and a web application that accepts the scanner's results and in turn can produce the results file required by the Benchmark scorecard generator for SonarQube. Running the script runSonarQube.(sh or bat) will generate the results, but if the SonarQube web application isn't running where the runSonarQube script expects it to be, the script will fail.&lt;br /&gt;
&lt;br /&gt;
If you want to run a different version of SonarQube, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Xanitizer ===&lt;br /&gt;
&lt;br /&gt;
The vendor has written their own guide to [http://www.rigs-it.net/opendownloads/whitepapers/HowToSetUpXanitizerForOWASPBenchmarkProject.pdf How to Set Up Xanitizer for OWASP Benchmark].&lt;br /&gt;
&lt;br /&gt;
== DAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Burp Pro ===&lt;br /&gt;
&lt;br /&gt;
You must use Burp Pro v1.6.29 or greater to scan the Benchmark due to a previous limitation in Burp Pro related to ensuring the path attribute for cookies was honored. This issue was fixed in the v1.6.29 release.&lt;br /&gt;
&lt;br /&gt;
To scan, first spider the entire Benchmark, and then select the /Benchmark URL and actively scan that branch. You can skip all the .html pages and any other pages that Burp says have no parameters.&lt;br /&gt;
&lt;br /&gt;
NOTE: We have been unable to simply run Burp Pro against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get Burp Pro to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
=== OWASP ZAP ===&lt;br /&gt;
&lt;br /&gt;
ZAP may require additional memory to be able to scan the Benchmark. To configure the amount of memory:&lt;br /&gt;
* Tools --&amp;gt; Options --&amp;gt; JVM: Recommend setting to: -Xmx2048m (or larger). (Then restart ZAP).&lt;br /&gt;
&lt;br /&gt;
To run ZAP against Benchmark:&lt;br /&gt;
# Because Benchmark uses cookies and headers as attack sources for many test cases, go to Tools --&amp;gt; Options --&amp;gt; Active Scan Input Vectors, then check the HTTP Headers, All Requests, and Cookie Data checkboxes and hit OK&lt;br /&gt;
# Click on Show All Tabs button (if spider tab isn't visible)&lt;br /&gt;
# Go to Spider tab (the black spider) and click on New Scan button&lt;br /&gt;
# Enter: https://localhost:8443/benchmark/  into the 'Starting Point' box and hit 'Start Scan'&lt;br /&gt;
#* Do this again. For some reason it takes 2 passes with the Spider before it stops finding more Benchmark endpoints.&lt;br /&gt;
# When Spider completes, click on 'benchmark' folder in Site Map, right click and select: 'Attack --&amp;gt; Active Scan'&lt;br /&gt;
#* It will take several hours, like 3+ to complete (it's actually likely to simply freeze before completing the scan - see NOTE: below)&lt;br /&gt;
&lt;br /&gt;
For a faster active scan you can:&lt;br /&gt;
* Disable the ZAP DB log (in ZAP 2.5.0+):&lt;br /&gt;
** Disable it via Options / Database / Recover Log&lt;br /&gt;
** Set it on the command line using &amp;quot;-config database.recoverylog=false&amp;quot;&lt;br /&gt;
* Disable unnecessary plugins / Technologies: When you launch the Active Scan&lt;br /&gt;
** On the Policy tab, disable all plugins except: XSS (Reflected), Path Traversal, SQLi, OS Command Injection&lt;br /&gt;
** Go to the Technology tab, disable everything, and only enable: MySQL, YOUR_OS, Tomcat&lt;br /&gt;
** Note: This second performance improvement step is a bit like cheating, as you wouldn't do this for a normal site scan. You'd want to leave all this on in case these other plugins/technologies help find more issues. So a fair performance comparison of ZAP to other tools would leave all this on.&lt;br /&gt;
&lt;br /&gt;
To generate the ZAP XML results file so you can generate its scorecard:&lt;br /&gt;
* Tools &amp;gt; Options &amp;gt; Alerts - Set the Max alert instances to 500 or so.&lt;br /&gt;
* Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
NOTE: Similar to Burp, we can't simply run ZAP against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get ZAP to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
Things we tried that didn't improve the score:&lt;br /&gt;
* AJAX Spider - The traditional spider appears to find all (or 99%) of the test cases, so the AJAX Spider does not appear to be needed against Benchmark v1.2&lt;br /&gt;
* XSS (Persistent) - There are 3 of these plugins that run by default. There is no stored XSS in Benchmark, so you can disable these plugins for a faster scan.&lt;br /&gt;
* DOM XSS Plugin - This optional plugin didn't seem to find any additional XSS issues. There aren't any DOM-specific XSS issues in Benchmark v1.2, so that's not surprising.&lt;br /&gt;
&lt;br /&gt;
== IAST Tools ==&lt;br /&gt;
&lt;br /&gt;
Interactive Application Security Testing (IAST) tools work differently than scanners.  IAST tools monitor an application as it runs to identify application vulnerabilities using context from inside the running application. Typically these tools run continuously, immediately notifying users of vulnerabilities, but you can also get a full report of an entire application. To do this, we simply run the Benchmark application with an IAST agent and use a crawler to hit all the pages.&lt;br /&gt;
&lt;br /&gt;
=== Contrast Assess ===&lt;br /&gt;
&lt;br /&gt;
To use Contrast Assess, we simply add its Java agent to the Benchmark environment and run the BenchmarkCrawler. The entire process should only take a few minutes. We provide a few scripts, which simply add the -javaagent:contrast.jar flag to the Benchmark launch configuration. We have tested on MacOS, Ubuntu, and Windows. Be sure your VM has at least 4 GB of memory.&lt;br /&gt;
&lt;br /&gt;
* Ensure your environment has Java, Maven, and git installed, then build the Benchmark project&lt;br /&gt;
   '''$ git clone https://github.com/OWASP/Benchmark.git'''&lt;br /&gt;
   '''$ cd Benchmark'''&lt;br /&gt;
   '''$ mvn compile'''&lt;br /&gt;
&lt;br /&gt;
* Download a licensed copy of the Contrast Assess Java Agent (contrast.jar) from your Contrast TeamServer account and put it in the /Benchmark/tools/Contrast directory.&lt;br /&gt;
   '''$ cp ~/Downloads/contrast.jar tools/Contrast'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 1, launch the Benchmark application and wait until it starts&lt;br /&gt;
   '''$ cd tools/Contrast'''&lt;br /&gt;
   '''$ ./runBenchmark_wContrast.sh''' (.bat on Windows)&lt;br /&gt;
   '''[INFO] Scanning for projects...&lt;br /&gt;
   '''[INFO]                                                                         &lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------&lt;br /&gt;
   '''[INFO] Building OWASP Benchmark Project 1.2&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------&lt;br /&gt;
   '''[INFO] &lt;br /&gt;
   '''...&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]'''&lt;br /&gt;
   '''[INFO] Press Ctrl-C to stop the container...'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, launch the crawler and wait a minute or two for the crawl to complete.&lt;br /&gt;
   '''$ ./runCrawler.sh''' (.bat on Windows)&lt;br /&gt;
&lt;br /&gt;
* A Contrast report is generated in /Benchmark/tools/Contrast/working/contrast.log. This report will be automatically copied (and renamed with the version number) to the /Benchmark/results directory.&lt;br /&gt;
   '''$ more tools/Contrast/working/contrast.log'''&lt;br /&gt;
   '''2016-04-22 12:29:29,716 [main b] INFO - Contrast Runtime Engine&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Copyright (C) 2012&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Pat. 8,458,789 B2&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Contrast Security, Inc.&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - All Rights Reserved&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - https://www.contrastsecurity.com/&lt;br /&gt;
   '''...'''&lt;br /&gt;
&lt;br /&gt;
* Press Ctrl-C to stop the Benchmark in Terminal 1. Note: on Windows, select &amp;quot;N&amp;quot; when asked &amp;quot;Terminate batch job (Y/N)?&amp;quot;&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x is stopped'''&lt;br /&gt;
   '''Copying Contrast report to results directory'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, generate scorecards in /Benchmark/scorecard&lt;br /&gt;
   '''$ ./createScorecards.sh''' (.bat on Windows)&lt;br /&gt;
   '''Analyzing results from Benchmark_1.2-Contrast.log&lt;br /&gt;
   '''Actual results file generated: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.csv&lt;br /&gt;
   '''Report written to: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html&lt;br /&gt;
&lt;br /&gt;
* Open the Benchmark Scorecard in your browser&lt;br /&gt;
   '''/Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
=== Hdiv Detection ===&lt;br /&gt;
&lt;br /&gt;
Hdiv has written their own instructions on how to run the detection component of their product on the Benchmark here: https://hdivsecurity.com/docs/features/benchmark/#how-to-run-hdiv-in-owasp-benchmark-project. You'll see that these instructions involve using the same crawler used to exercise all the test cases in the Benchmark, just like Contrast above.&lt;br /&gt;
&lt;br /&gt;
= RoadMap =&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.0 - Released April 15, 2015 - This initial release included over 20,000 test cases in 11 different vulnerability categories. As this initial version was not a runnable application, it was only suitable for assessing static analysis tools (SAST).&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 - Released May 23, 2015 - This update fixed some inaccurate test cases, and made sure that every vulnerability area included both True Positives and False Positives.&lt;br /&gt;
&lt;br /&gt;
Benchmark Scorecard Generator - Released July 10, 2015 - The ability to automatically and repeatably produce a scorecard of how well tools do against the Benchmark was released for most of the SAST tools supported by the Benchmark. Scorecards present graphical as well as statistical data on how well a tool does against the Benchmark down to the level of detail of how exactly it did against each individual test in the Benchmark. [https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html Here are the latest public scorecards].  Support for producing scorecards for additional tools is being added all the time and the current full set is documented on the '''Tool Support/Results''' and '''Quick Start''' tabs of this wiki.&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.2beta - Released Aug 15, 2015 - The first release of a fully runnable version of the Benchmark, to support assessing all types of vulnerability detection and prevention technologies, including DAST, IAST, RASP, WAFs, etc. This involved creating a user interface for every test case and enhancing each test case to make sure it's actually exploitable, not just using something that is theoretically weak. This release has under 3,000 test cases, to make it practical to scan the entire Benchmark with a DAST tool in a reasonable amount of time on commodity hardware.&lt;br /&gt;
&lt;br /&gt;
Benchmark 1.2 - Released June 5, 2016 -  Based on feedback from a number of DAST tool developers, and other vendors as well, we made the Benchmark more realistic in a number of ways to facilitate external DAST scanning, and also made the Benchmark more resilient against attack so it could properly survive various DAST vulnerability detection and exploit verification techniques.&lt;br /&gt;
&lt;br /&gt;
Plans for Benchmark 1.3:&lt;br /&gt;
&lt;br /&gt;
While we don't have hard and fast rules of exactly what we are going to do next, enhancements in the following areas are planned for the next release:&lt;br /&gt;
&lt;br /&gt;
* Add new vulnerability categories (e.g., XXE, Hibernate Injection)&lt;br /&gt;
* Add support for popular server side Java frameworks (e.g., Spring)&lt;br /&gt;
* Add web services test cases&lt;br /&gt;
&lt;br /&gt;
We are also starting to work on the ability to score WAFs/RASPs and other defensive technology against Benchmark.&lt;br /&gt;
&lt;br /&gt;
= FAQ =&lt;br /&gt;
&lt;br /&gt;
==1. How are the scores computed for the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
Each test case has a single vulnerability of a specific type. It's either a real vulnerability (a True Positive) or not (a False Positive). We document all the test cases for each version of the Benchmark in the expectedresults-VERSION#.csv file (e.g., expectedresults-1.1.csv). This file lists the test case name, the CWE type of the vulnerability, and whether it is a True Positive or not. The Benchmark supports scorecard generators for computing exactly how a tool did when analyzing a version of the Benchmark. The full list of supported tools is on the Tools Support/Results tab. For each tool there is a parser that can parse the native results format for that tool (usually XML). For each test case, this parser simply looks to see if the tool reported a vulnerability of the expected type in the test case source code file (for SAST) or at the test case URL (for DAST/IAST). If it did, and the test case was a True Positive, the tool gets credit for finding it. If it is a False Positive test and the tool reports that type of finding, it's recorded as a False Positive. If the tool didn't report that type of vulnerability for a test case, it gets either a False Negative or a True Negative, as appropriate. After calculating all of the individual test case results, a scorecard is generated providing a chart and statistics for that tool across all the vulnerability categories, and pages are also created comparing the tools to each other in each vulnerability category (if multiple tools are being scored together).&lt;br /&gt;
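&lt;br /&gt;
The per-test-case logic described above is essentially standard confusion-matrix bookkeeping. A minimal sketch follows (the class and method names are hypothetical; the real BenchmarkScore application does much more, including parsing each tool's native results format):&lt;br /&gt;

```java
// Hypothetical sketch of the per-test-case scoring decision described above.
public class TestCaseScorer {
    public enum Outcome { TRUE_POSITIVE, FALSE_POSITIVE, TRUE_NEGATIVE, FALSE_NEGATIVE }

    // isRealVuln:   the expected-results CSV marks this test case as a True Positive.
    // toolReported: the tool reported a finding of the expected CWE type for this test case.
    public static Outcome score(boolean isRealVuln, boolean toolReported) {
        if (isRealVuln) {
            // Real vulnerability: credit if found, miss if not.
            return toolReported ? Outcome.TRUE_POSITIVE : Outcome.FALSE_NEGATIVE;
        }
        // Not a real vulnerability: flagging it is a False Positive.
        return toolReported ? Outcome.FALSE_POSITIVE : Outcome.TRUE_NEGATIVE;
    }
}
```

Aggregating these four outcomes across all test cases in a category is what produces the statistics shown on each tool's scorecard page.&lt;br /&gt;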
&lt;br /&gt;
A detailed file explaining exactly how that tool did against each individual test case in that version of the Benchmark is produced as part of scorecard generation, and is available via the Actual Results link on each tool's scorecard page. (e.g., Benchmark_v1.1_Scorecard_for_FindBugs.csv).&lt;br /&gt;
&lt;br /&gt;
==2. What if the tool I'm using doesn't have a scorecard generator for it?==&lt;br /&gt;
&lt;br /&gt;
Send us the results file! We'll be happy to create a parser for that tool so that it's supported.&lt;br /&gt;
&lt;br /&gt;
==3. What if a tool finds other unexpected vulnerabilities?==&lt;br /&gt;
&lt;br /&gt;
We are sure there are vulnerabilities we didn't intend to be there and we are eliminating them as we find them. If you find some, let us know and we'll fix them too. We are primarily focused on unintentional vulnerabilities in the categories of vulnerabilities the Benchmark currently supports, since that is what is actually measured.&lt;br /&gt;
&lt;br /&gt;
Right now, two types of vulnerabilities that get reported are ignored by the scorecard generator:&lt;br /&gt;
# Vulnerabilities in categories not yet supported&lt;br /&gt;
# Vulnerabilities of a type that is supported, but reported in test cases not of that type&lt;br /&gt;
&lt;br /&gt;
In the case of #2, false positives reported in unexpected areas are also ignored; this is primarily a DAST problem. Right now those false positives are completely ignored, but we are thinking about including them in the false positive score in some fashion. We just haven't decided how yet.&lt;br /&gt;
&lt;br /&gt;
==4. How should I configure my tool to scan the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
All tools support various levels of configuration to improve their results. The Benchmark project, in general, is trying to '''compare the out-of-the-box capabilities of tools'''. However, if a few simple tweaks to a tool can improve that tool's score, that's fine. We'd like to understand what those simple tweaks are and document them here, so others can repeat those tests in exactly the same way. For example, turning on a 'test cookies and headers' flag that is off by default, or turning on an 'advanced' scan so the tool works harder and finds more vulnerabilities. It's simple things like this we are talking about, not an extensive effort to teach the tool about the app or perform 'expert' configuration of the tool.&lt;br /&gt;
&lt;br /&gt;
So, if you know of some simple tweaks that improve a tool's results, let us know what they are and we'll document them here so everyone can benefit and apples-to-apples comparisons are easier. We'll link to that guidance once we start documenting it, but we don't have any such guidance right now.&lt;br /&gt;
&lt;br /&gt;
==5. I'm having difficulty scanning the Benchmark with a DAST tool. How can I get it to work?==&lt;br /&gt;
&lt;br /&gt;
We've run into two primary issues that give DAST tools problems.&lt;br /&gt;
&lt;br /&gt;
a) The Benchmark Generates Lots of Cookies&lt;br /&gt;
&lt;br /&gt;
The Burp team pointed out a cookie bug in the 1.2beta Benchmark. Each Weak Randomness test case generated its own cookie. This created so many cookies that servers would eventually start returning 400 errors because too many cookies were being submitted in a single request. This was fixed in the Aug 27, 2015 update to the Benchmark by setting the path attribute for each of these cookies to the path of that individual test case. Now at most one of these cookies should be submitted with each request, eliminating the 'too many cookies' problem. However, if a DAST tool doesn't honor this path attribute, it may continue to send too many cookies, making the Benchmark unscannable for that tool. Burp Pro prior to 1.6.29 had this issue, but it was fixed in the 1.6.29 release.&lt;br /&gt;
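&lt;br /&gt;
The path-scoping fix can be illustrated with the JDK's java.net.HttpCookie class (a sketch only; the Benchmark itself uses the servlet cookie API, and the cookie value here is made up):&lt;br /&gt;

```java
import java.net.HttpCookie;

// Sketch of the cookie-path fix: scope each Weak Randomness test case's cookie
// to that test case's own path, so a client that honors the path attribute only
// returns the cookie to that one test case URL instead of to every request.
public class CookieScoping {
    public static HttpCookie cookieFor(String testCaseName) {
        HttpCookie cookie = new HttpCookie(testCaseName, "some-value");
        cookie.setPath("/benchmark/" + testCaseName); // hypothetical per-test-case path
        return cookie;
    }

    public static void main(String[] args) {
        HttpCookie c = cookieFor("BenchmarkTest00001");
        System.out.println(c.getName() + " scoped to path " + c.getPath());
    }
}
```

A client that ignores the path attribute still sends every such cookie with every request, which is exactly the behavior that made the Benchmark unscannable for older Burp Pro versions.&lt;br /&gt;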
&lt;br /&gt;
b) The Benchmark is a BIG Application&lt;br /&gt;
&lt;br /&gt;
Yes. It is, so you might have to give your scanner more memory than it normally uses by default in order to successfully scan the entire Benchmark. Please consult your tool vendor's documentation on how to give it more memory.&lt;br /&gt;
&lt;br /&gt;
Your machine itself might not have enough memory in the first place. For example, we were not able to successfully scan the 1.2beta with OWASP ZAP with only 8 GB of RAM. So you might need a more powerful machine, or a cloud-provided machine, to successfully scan the Benchmark with certain DAST tools. You may have similar problems with SAST tools against large versions of the Benchmark, like the 1.1 release.&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
&lt;br /&gt;
The following people, organizations, and many others, have contributed to this project and their contributions are much appreciated!&lt;br /&gt;
&lt;br /&gt;
* Lots of Vendors - Many vendors have provided us with either trial licenses we can use, or they have run their tools themselves and either sent us results files, or written and contributed scorecard generators for their tool. Many have also provided valuable feedback so we can make the Benchmark more accurate and more realistic.&lt;br /&gt;
* Juan Gama - Development of initial release and continued support&lt;br /&gt;
* Ken Prole - Assistance with automated scorecard development using CodeDx&lt;br /&gt;
* Nick Sanidas - Development of initial release&lt;br /&gt;
* Denim Group - Contribution of scan results to facilitate scorecard development&lt;br /&gt;
* Tasos Laskos - Significant feedback on the DAST version of the Benchmark&lt;br /&gt;
* Ann Campbell - From SonarSource - for fixing our SonarQube results parser&lt;br /&gt;
* Dhiraj Mishra - OWASP Member - contributed SQLi/XSS fuzz vectors as initial contribution towards adding support for WAF/RASP scoring&lt;br /&gt;
&lt;br /&gt;
[[File:CWE_Logo.jpeg|link=https://cwe.mitre.org/]] - The CWE project for providing a mapping mechanism to easily map test cases to issues found by vulnerability detection tools.&lt;br /&gt;
&lt;br /&gt;
We are looking for volunteers. Please contact [mailto:dave.wichers@owasp.org Dave Wichers] if you are interested in contributing new test cases, tool results run against the benchmark, or anything else.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=252065</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=252065"/>
				<updated>2019-06-03T22:09:42Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Static_Code_Analysis | Source code analysis]] tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during the software development phase itself, this is a powerful phase within the development life cycle to employ such tools, as it provides immediate feedback to the developer on issues they might be introducing into the code during code development itself. This immediate feedback is very useful, especially when compared to finding vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL Injection Flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: Must support your programming language, but not usually a key factor once it does.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to setup/use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per org, per app, per line of code analyzed. Consulting licenses are frequently different than end user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
Disclaimer: &amp;lt;b&amp;gt;The tools listed in the tables below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them in the table below.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think that this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source code vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino, and Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security-specific plugin for SpotBugs that significantly improves SpotBugs's ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] - Scans C and C++&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more.  ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits on .NET applications. It will find SQL injections, LDAP injections, XXE, cryptography weaknesses, XSS, and more.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PreFast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last update 2006.&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://pyre-check.org/ Pyre] - A performant type-checker for Python 3, that also has [https://pyre-check.org/docs/static-analysis.html limited security/data flow analysis] capabilities.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS Open Source is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
&lt;br /&gt;
[https://docs.gitlab.com/ee/user/application_security/sast/index.html#supported-languages-and-frameworks GitLab has built free SAST tooling for a number of different languages natively into GitLab. So you might be able to use that, or at least identify a free SAST tool for the language you need from that list].&lt;br /&gt;
&lt;br /&gt;
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - Combines SAST, DAST, IAST, SCA, configuration analysis, and other technologies, including abstract interpretation; can generate test queries (exploits) to verify detected vulnerabilities during SAST analysis. Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST and mobile security testing as well as OpenSource library known vulnerability detection as a cloud service. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source AppScan Source] (IBM)&lt;br /&gt;
* [https://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://bugscout.io/en/ bugScout] (Nalbatech, Formerly Buguroo)&lt;br /&gt;
* [https://www.castsoftware.com/products/application-intelligence-platform CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages. [https://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards AIP's security specific coverage is here].&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [https://www.grammatech.com/products/codesonar CodeSonar] (GrammaTech) - Supports C, C++, Java, and C#, and maps findings against the OWASP Top 10 vulnerabilities.&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] (Contrast Security) - Contrast provides code-level security results without relying on static analysis. It performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [http://www8.hp.com/us/en/software-solutions/static-code-analysis-sast/ Fortify] (Micro Focus, Formerly HP)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection] (Hdiv Security) - Hdiv provides code-level security results without relying on static analysis. It performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis.&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis KlocWork] (KlocWork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ PT Application Inspector] - Combines SAST, DAST, IAST, SCA, configuration analysis, and other technologies, including abstract interpretation for a high accuracy rate with minimal false positives; can generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; integrates with CI/CD, VCS, etc. PT AI has a simple UI, is highly automated, and helps users understand, verify, and fix flaws. Supported languages are Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others.&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as a Visual Studio IDE extension, Azure DevOps extension, and Command Line (CLI) executable.&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://www.softwaresecured.com/reshift reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to give a prediction on false positives. Supports Java with future support for NodeJS and JavaScript planned for sometime in 2019.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for Java and PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code for insecure coding and configurations automatically, as an IDE plugin for Eclipse, IntelliJ, Visual Studio, etc. Supports Java, .NET, PHP, and JavaScript.&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) - Seeker provides code-level security results without relying on static analysis. It performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis with simulated attacks.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
* [[Free for Open Source Application Security Tools]] - This page lists the Commercial Source Code Analysis Tools (SAST) we know of that are free for Open Source&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Category:OWASP_AntiSamy_Project_.Java&amp;diff=251474</id>
		<title>Category:OWASP AntiSamy Project .Java</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Category:OWASP_AntiSamy_Project_.Java&amp;diff=251474"/>
				<updated>2019-05-13T22:08:12Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Delete this page.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Category:OWASP_AntiSamy_Project&amp;diff=251471</id>
		<title>Category:OWASP AntiSamy Project</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Category:OWASP_AntiSamy_Project&amp;diff=251471"/>
				<updated>2019-05-13T22:06:29Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{|&lt;br /&gt;
|-&lt;br /&gt;
! width=&amp;quot;700&amp;quot; align=&amp;quot;center&amp;quot; | &amp;lt;br&amp;gt; &lt;br /&gt;
! width=&amp;quot;500&amp;quot; align=&amp;quot;center&amp;quot; | &amp;lt;br&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;right&amp;quot; | &lt;br /&gt;
| align=&amp;quot;right&amp;quot; | &lt;br /&gt;
|}&lt;br /&gt;
=Main=&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:OWASP_Project_Header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
==OWASP AntiSamy Project==&lt;br /&gt;
&lt;br /&gt;
OWASP AntiSamy is a library for sanitizing user-supplied HTML and CSS.&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
AntiSamy was originally authored by Arshan Dabirsiaghi (arshan.dabirsiaghi [at the] gmail.com) of Contrast Security with help from Jason Li (jason.li [at the] owasp.org) of Aspect Security (http://www.aspectsecurity.com/).&lt;br /&gt;
&lt;br /&gt;
==Description==&lt;br /&gt;
&lt;br /&gt;
The OWASP AntiSamy project is a few things. Technically, it is an API for ensuring user-supplied HTML/CSS is in compliance with an application's rules. Another way of saying that: it's an API that helps you make sure that clients don't supply malicious code in the HTML they submit for their profile, comments, etc., that gets persisted on the server. The term &amp;quot;malicious code&amp;quot; with regard to web applications usually means &amp;quot;JavaScript.&amp;quot; Cascading Stylesheets are only considered malicious when they invoke the JavaScript engine. However, there are many situations where &amp;quot;normal&amp;quot; HTML and CSS can be used in a malicious manner. So we take care of that too.&lt;br /&gt;
&lt;br /&gt;
Philosophically, AntiSamy is a departure from contemporary security mechanisms. Generally, the security mechanism and user have a communication that is virtually one way, for good reason. Letting the potential attacker know details about the validation is considered unwise as it allows the attacker to &amp;quot;learn&amp;quot; and &amp;quot;recon&amp;quot; the mechanism for weaknesses. These types of information leaks can also hurt in ways you don't expect. A login mechanism that tells the user, &amp;quot;Username invalid&amp;quot; leaks the fact that a user by that name does not exist. A user could use a dictionary or phone book or both to remotely come up with a list of valid usernames. Using this information, an attacker could launch a brute force attack or massive account lock denial-of-service. We get that.&lt;br /&gt;
&lt;br /&gt;
Unfortunately, that's just not very usable in this situation. Typical Internet users are largely pretty bad when it comes to writing HTML/CSS, so where do they get their HTML from? Usually they copy it from somewhere out on the web. Simply rejecting their input without any clue as to why is jolting and annoying. Annoyed users go somewhere else to do their social networking.&lt;br /&gt;
&lt;br /&gt;
The [[OWASP_Licenses|OWASP licensing policy]] (further explained in the [[Membership|membership FAQ]]) allows OWASP projects to be released under any [http://www.opensource.org/licenses/alphabetical approved open source license]. Under these guidelines, AntiSamy is distributed under a [http://www.opensource.org/licenses/bsd-license.php BSD license].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;padding-left:25px;width:200px;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== What is AntiSamy ==&lt;br /&gt;
&lt;br /&gt;
OWASP AntiSamy is available in several versions and language ports.&lt;br /&gt;
&lt;br /&gt;
[[AntiSamy Version Differences|This page]] shows a big-picture comparison between the versions. Since it's an unfunded open source project, the ports can't be expected to mirror functionality exactly. If there's something a port is missing -- let us know, and we'll try to accommodate, or write a patch!  &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Presentations ==&lt;br /&gt;
&lt;br /&gt;
From OWASP &amp;amp; WASC AppSec U.S. 2007 Conference (San Jose, CA): [http://www.owasp.org/images/e/e9/OWASP-WASCAppSec2007SanJose_AntiSamy.ppt AntiSamy - Picking a Fight with XSS (ppt)] - by Arshan Dabirsiaghi - AntiSamy project lead&lt;br /&gt;
&lt;br /&gt;
From OWASP AppSec Europe 2008 (Ghent, Belgium): [http://www.owasp.org/images/4/47/AppSecEU08-AntiSamy.ppt The OWASP AntiSamy project (ppt)] - by Jason Li - AntiSamy project contributor&lt;br /&gt;
&lt;br /&gt;
From OWASP AppSec India 2008 (Delhi, India): [https://www.owasp.org/images/9/9d/AppSecIN08-ValidatingRichUserContent.ppt Validating Rich User Content (ppt)] - by Jason Li - AntiSamy project contributor&lt;br /&gt;
&lt;br /&gt;
From Shmoocon 2009 (Washington, DC): [http://www.shmoocon.org/2009/slides/OWASP%20Winter%202009%20Shmoocon%20-%20Anti%20Samy.pptx AntiSamy - Picking a Fight with XSS (pptx)] - by Arshan Dabirsiaghi - AntiSamy project lead&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Project Leader ==&lt;br /&gt;
&lt;br /&gt;
[mailto:arshan.dabirsiaghi@gmail.com Arshan Dabirsiaghi]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
== Ohloh ==&lt;br /&gt;
&lt;br /&gt;
* https://www.ohloh.net/p/owaspantisamy&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;padding-left:25px;width:200px;&amp;quot; | &lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
* [26 Sep 2017] Please update AntiSamy to 1.5.5 or later per [https://nvd.nist.gov/vuln/detail/CVE-2016-10006 CVE-2016-10006]&lt;br /&gt;
* [20 Nov 2013] News 2&lt;br /&gt;
* [30 Sep 2013] News 1&lt;br /&gt;
&lt;br /&gt;
== In Print ==&lt;br /&gt;
This project can be purchased as a print on demand book from Lulu.com&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; rowspan=&amp;quot;2&amp;quot;| [[File:Owasp-incubator-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Incubator_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [[File:Cc-button-y-sa-small.png|link=http://creativecommons.org/licenses/by-sa/3.0/]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= How do I get started? =&lt;br /&gt;
&lt;br /&gt;
There are four steps in the process of integrating AntiSamy. Each step is detailed in the next section, but the high-level overview follows:&lt;br /&gt;
# Download AntiSamy from Maven&lt;br /&gt;
# Choose the standard policy file that most closely matches the functionality you need:&lt;br /&gt;
#* antisamy-tinymce-X.X.X.xml&lt;br /&gt;
#* antisamy-slashdot-X.X.X.xml&lt;br /&gt;
#* antisamy-ebay-X.X.X.xml&lt;br /&gt;
#* antisamy-myspace-X.X.X.xml&lt;br /&gt;
#* antisamy-anythinggoes-X.X.X.xml&lt;br /&gt;
# Tailor the policy file according to your site's rules&lt;br /&gt;
# Call the API from the code&lt;br /&gt;
&lt;br /&gt;
=== Stage 1 - Downloading AntiSamy ===&lt;br /&gt;
&lt;br /&gt;
First, add the dependency from Maven:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;dependency&amp;gt;&lt;br /&gt;
   &amp;lt;groupId&amp;gt;org.owasp.antisamy&amp;lt;/groupId&amp;gt;&lt;br /&gt;
   &amp;lt;artifactId&amp;gt;antisamy&amp;lt;/artifactId&amp;gt;&lt;br /&gt;
 &amp;lt;/dependency&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Stage 2 - Choosing a base policy file ===&lt;br /&gt;
&lt;br /&gt;
Chances are that your site's use case for AntiSamy is at least roughly comparable to one of the predefined policy files. They each represent a &amp;quot;typical&amp;quot; scenario for allowing users to provide HTML (and possibly CSS) formatting information. Let's look into the different policy files:&lt;br /&gt;
&lt;br /&gt;
1) antisamy-slashdot.xml&lt;br /&gt;
&lt;br /&gt;
Slashdot (http://www.slashdot.org/) is a techie news site that allows users to respond anonymously to news posts with very limited HTML markup. Now Slashdot is not only one of the coolest sites around, it's also one that's been subject to many different successful attacks. Even more unfortunate is the fact that most of the attacks led users to the infamous goatse.cx picture (please don't go look it up). The rules for Slashdot are fairly strict: users can only submit the following HTML tags and no CSS: &amp;amp;lt;b&amp;amp;gt;, &amp;amp;lt;u&amp;amp;gt;, &amp;amp;lt;i&amp;amp;gt;, &amp;amp;lt;a&amp;amp;gt;, &amp;amp;lt;blockquote&amp;amp;gt;. &lt;br /&gt;
&lt;br /&gt;
Accordingly, we've built a policy file that allows fairly similar functionality. All text-formatting tags that operate directly on the font, color or emphasis have been allowed. &lt;br /&gt;
&lt;br /&gt;
2) antisamy-ebay.xml&lt;br /&gt;
&lt;br /&gt;
eBay (http://www.ebay.com/) is the most popular online auction site in the universe, as far as I can tell. It is a public site, so anyone is allowed to post listings with rich HTML content. Given the attractiveness of eBay as a target, it's not surprising that it has been subject to a few complex XSS attacks. Listings are allowed to contain much richer content than, say, Slashdot, so its attack surface is considerably larger. The following tags appear to be accepted by eBay (they don't publish rules): &amp;lt;a&amp;gt;,...&lt;br /&gt;
&lt;br /&gt;
3) antisamy-myspace.xml&lt;br /&gt;
&lt;br /&gt;
MySpace (http://www.myspace.com/) was, at the time this project was born, arguably the most popular social networking site. Users were allowed to submit pretty much any HTML and CSS they wanted, as long as it didn't contain JavaScript. MySpace was using a word blacklist to validate users' HTML, which is why they were subject to the infamous Samy worm (http://namb.la/). The Samy worm, which used fragmentation attacks combined with a word that should have been blacklisted (eval), was the inspiration for the project. &lt;br /&gt;
&lt;br /&gt;
4) antisamy-anythinggoes.xml&lt;br /&gt;
&lt;br /&gt;
I don't know of a possible use case for this policy file. If you want to allow every single valid HTML and CSS element (but without JavaScript or blatant CSS-related phishing attacks), you can use this policy file. Not even MySpace was _this_ crazy. However, it does serve as a good reference because it contains base rules for every element, so you can use it as a knowledge base when tailoring the other policy files.&lt;br /&gt;
&lt;br /&gt;
=== Stage 3 - Tailoring the policy file ===&lt;br /&gt;
&lt;br /&gt;
Smaller organizations may want to deploy AntiSamy in a default configuration, but it's equally likely that a site will want strict, business-driven rules about what users are allowed to supply. The discussion that decides the tailoring should also consider attack surface, which grows in rough proportion to how permissive the policy file is.&lt;br /&gt;
&lt;br /&gt;
You may also want to enable/modify some &amp;quot;directives&amp;quot;, which are basically advanced user options. [[AntiSamy Directives|This page]] tells you what the directives are and which versions support them.&lt;br /&gt;
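As a rough illustration only (the element and attribute names below are recalled from the standard policy files -- consult a bundled policy file such as antisamy-slashdot.xml for the authoritative syntax), a tag rule in a policy file looks something like this:&lt;br /&gt;
&lt;br /&gt;
 &amp;lt;tag-rules&amp;gt;&lt;br /&gt;
   &amp;lt;tag name=&amp;quot;b&amp;quot; action=&amp;quot;validate&amp;quot;/&amp;gt;&lt;br /&gt;
   &amp;lt;tag name=&amp;quot;script&amp;quot; action=&amp;quot;remove&amp;quot;/&amp;gt;&lt;br /&gt;
 &amp;lt;/tag-rules&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Tightening a policy is largely a matter of removing tags and attributes (or switching their action to remove), which shrinks the attack surface accordingly.&lt;br /&gt;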
&lt;br /&gt;
=== Stage 4 - Calling the AntiSamy API ===&lt;br /&gt;
&lt;br /&gt;
Using AntiSamy is easy. Here is an example of invoking AntiSamy with a policy file:&lt;br /&gt;
&lt;br /&gt;
 import org.owasp.validator.html.*;&lt;br /&gt;
 &lt;br /&gt;
 Policy policy = Policy.getInstance(POLICY_FILE_LOCATION);&lt;br /&gt;
 &lt;br /&gt;
 AntiSamy as = new AntiSamy();&lt;br /&gt;
 CleanResults cr = as.scan(dirtyInput, policy);&lt;br /&gt;
 &lt;br /&gt;
 MyUserDAO.storeUserProfile(cr.getCleanHTML()); // some custom function&lt;br /&gt;
&lt;br /&gt;
There are a few ways to create a Policy object. The &amp;lt;code&amp;gt;getInstance()&amp;lt;/code&amp;gt; method can take any of the following:&lt;br /&gt;
* a String filename&lt;br /&gt;
* a File object&lt;br /&gt;
* an InputStream &lt;br /&gt;
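For example, the InputStream overload can be used to load a policy file packaged on the classpath (a sketch only -- the class name and resource path below are illustrative):&lt;br /&gt;
&lt;br /&gt;
 InputStream in = MyApp.class.getResourceAsStream("/antisamy-slashdot.xml");&lt;br /&gt;
 Policy policy = Policy.getInstance(in);&lt;br /&gt;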
&lt;br /&gt;
Policy files can also be referenced by filename by passing the file path as the second argument to the &amp;lt;code&amp;gt;AntiSamy.scan()&amp;lt;/code&amp;gt; method, as the following examples show:&lt;br /&gt;
&lt;br /&gt;
 AntiSamy as = new AntiSamy();&lt;br /&gt;
 CleanResults cr = as.scan(dirtyInput, policyFilePath);&lt;br /&gt;
&lt;br /&gt;
Finally, policy files can also be referenced by File objects directly in the second parameter:&lt;br /&gt;
&lt;br /&gt;
 AntiSamy as = new AntiSamy();&lt;br /&gt;
 CleanResults cr = as.scan(dirtyInput, new File(policyFilePath));&lt;br /&gt;
&lt;br /&gt;
=== Stage 5 - Analyzing CleanResults ===&lt;br /&gt;
&lt;br /&gt;
The CleanResults object provides a number of useful methods. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;getErrorMessages()&amp;lt;/code&amp;gt; - a list of &amp;lt;code&amp;gt;String&amp;lt;/code&amp;gt; error messages&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;getCleanHTML()&amp;lt;/code&amp;gt; - the clean, safe HTML output&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;getCleanXMLDocumentFragment()&amp;lt;/code&amp;gt; - the clean, safe &amp;lt;code&amp;gt;XMLDocumentFragment&amp;lt;/code&amp;gt; which is reflected in &amp;lt;code&amp;gt;getCleanHTML()&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;getScanTime()&amp;lt;/code&amp;gt; - returns the scan time in seconds&lt;br /&gt;
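Putting these accessors together (a sketch only -- &amp;lt;code&amp;gt;POLICY_FILE_LOCATION&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;dirtyInput&amp;lt;/code&amp;gt; are placeholders you supply):&lt;br /&gt;
&lt;br /&gt;
 Policy policy = Policy.getInstance(POLICY_FILE_LOCATION);&lt;br /&gt;
 CleanResults cr = new AntiSamy().scan(dirtyInput, policy);&lt;br /&gt;
 &lt;br /&gt;
 System.out.println("Scan took " + cr.getScanTime() + " seconds");&lt;br /&gt;
 for (Object error : cr.getErrorMessages()) {&lt;br /&gt;
     System.out.println("Cleaned: " + error);&lt;br /&gt;
 }&lt;br /&gt;
 String safeHtml = cr.getCleanHTML(); // safe to persist or render&lt;br /&gt;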
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
== Contacting us ==&lt;br /&gt;
There are two ways of getting information on AntiSamy: the mailing list, and contacting the project lead directly.&lt;br /&gt;
&lt;br /&gt;
=== OWASP AntiSamy mailing list ===&lt;br /&gt;
The first is the mailing list which is located at https://lists.owasp.org/mailman/listinfo/owasp-antisamy. The list was previously private and the archives have been cleared with the release of version 1.0. We encourage all prospective and current users and bored attackers to join in the conversation. We're happy to brainstorm attack scenarios, discuss regular expressions and help with integration.&lt;br /&gt;
&lt;br /&gt;
=== Emailing the project lead ===&lt;br /&gt;
&lt;br /&gt;
For content which is not appropriate for the public mailing list, you can alternatively contact the project lead, Arshan Dabirsiaghi, at [arshan.dabirsiaghi] at [contrastsecurity.com] or Dave Wichers at [dave.wichers] at [owasp.org].&lt;br /&gt;
&lt;br /&gt;
=== Issue tracking ===&lt;br /&gt;
&lt;br /&gt;
Visit the [https://github.com/nahsra/antisamy/issues GitHub issue tracker].&lt;br /&gt;
&lt;br /&gt;
==Sponsors==&lt;br /&gt;
The AntiSamy project is sponsored by [https://www.contrastsecurity.com/ Contrast Security].&lt;br /&gt;
&lt;br /&gt;
The initial Java project was sponsored by the [[OWASP Spring Of Code 2007|OWASP Spring Of Code 2007]]. The .NET project was sponsored by the [[OWASP Summer of Code 2008]].&lt;br /&gt;
&lt;br /&gt;
= Road Map =&lt;br /&gt;
This section details the status of the various ports of AntiSamy.&lt;br /&gt;
&lt;br /&gt;
=== Grails ===&lt;br /&gt;
Daniel Bower created a [http://www.grails.org/plugin/sanitizer Grails plugin] for AntiSamy.&lt;br /&gt;
&lt;br /&gt;
=== .NET ===&lt;br /&gt;
A .NET port of AntiSamy is available now at the [[:Category:OWASP AntiSamy Project .NET|OWASP AntiSamy .NET]] page. The project was funded by a Summer of Code 2008 grant and was developed by Jerry Hoff. However, this version of AntiSamy has not been updated in a while.&lt;br /&gt;
&lt;br /&gt;
This port is no longer under active development, and is looking for a few good developers to help make it feature-synchronized with the Java version. If it doesn't suit your needs, consider Microsoft's [http://blogs.msdn.com/b/securitytools/archive/2009/09/01/html-sanitization-in-anti-xss-library.aspx AntiXSS] library.&lt;br /&gt;
&lt;br /&gt;
=== Python ===&lt;br /&gt;
A port of AntiSamy to Python was attempted, but has been abandoned since 2010. Michael Coates suggests you check out project Bleach instead: https://pypi.org/project/bleach/&lt;br /&gt;
&lt;br /&gt;
=== PHP ===&lt;br /&gt;
Although a PHP version was initially planned, we now suggest [http://htmlpurifier.org HTMLPurifier] for safe rich input validation for PHP applications.&lt;br /&gt;
&lt;br /&gt;
=Project About=&lt;br /&gt;
== Project's Assessment ==&lt;br /&gt;
&lt;br /&gt;
This project was assessed by [[:User:Jeff Williams|Jeff Williams]] and his evaluation can be seen [http://spreadsheets.google.com/ccc?key=pAX6n7m2zaTW-JtGBqixbTw '''here'''].&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs /&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Project|AntiSamy Project]]&lt;br /&gt;
[[Category:OWASP Tool]]&lt;br /&gt;
[[Category:OWASP Download]]&lt;br /&gt;
[[Category:OWASP Release Quality Tool]]&lt;br /&gt;
&lt;br /&gt;
{{OWASP Builders}}&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=250775</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=250775"/>
				<updated>2019-04-29T18:21:37Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Static_Code_Analysis | Source code analysis]] tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during the software development phase itself, the IDE is a powerful place in the development life cycle to employ such tools, as they provide immediate feedback to developers on issues they might be introducing as they write the code. This immediate feedback is very useful, especially when compared to finding vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL Injection Flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: it must support your programming language, though this is not usually a key differentiator once it does.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to set up and use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per org, per app, per line of code analyzed. Consulting licenses are frequently different than end user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
Disclaimer: &amp;lt;b&amp;gt;The tools listed below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them here.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool listed here and think this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct it.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino, and Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security-specific plugin for SpotBugs that significantly improves SpotBugs' ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] - Scans C and C++&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more. ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free-for-open-source static analysis service that automatically monitors commits to publicly accessible code in Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, and Python&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PreFast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last update 2006.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits on .NET applications. It finds SQL injections, LDAP injections, XXE, cryptography weaknesses, XSS, and more.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
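To make concrete what a scanner like Bandit (listed above) looks for, the snippet below contrasts a call pattern static analyzers commonly flag with its safer equivalent. This is a hypothetical illustration written for this page, not an example from Bandit's documentation, and the exact rules a given tool applies may differ.

```python
# Illustrative: the shell=True-with-concatenation pattern that SAST tools
# such as Bandit warn about, next to the safer argument-list form.
import subprocess

user_input = "hello"  # imagine this came from an untrusted source

# Risky pattern: string concatenation into a shell command. If user_input
# contained something like "; rm -rf /", the shell would execute it.
risky = subprocess.run("echo " + user_input, shell=True,
                       capture_output=True, text=True)

# Safer pattern: pass an argument list so no shell parsing ever happens;
# user_input is treated as a single literal argument.
safe = subprocess.run(["echo", user_input],
                      capture_output=True, text=True)
```

Both calls behave identically for benign input; the difference only matters when the input is attacker-controlled, which is exactly the case static analysis tries to surface.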
&lt;br /&gt;
[https://docs.gitlab.com/ee/user/application_security/sast/index.html#supported-languages-and-frameworks GitLab has built free SAST tooling for a number of languages natively into GitLab. You might be able to use that, or at least identify a free SAST tool for the language you need from its supported-languages list].&lt;br /&gt;
&lt;br /&gt;
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation; has capability to generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST and mobile security testing as well as OpenSource library known vulnerability detection as a cloud service. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source AppScan Source] (IBM)&lt;br /&gt;
* [https://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://bugscout.io/en/ bugScout] (Nalbatech, formerly Buguroo)&lt;br /&gt;
* [https://www.castsoftware.com/products/application-intelligence-platform CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages. [https://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards AIP's security specific coverage is here].&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [https://www.grammatech.com/products/codesonar CodeSonar] (GrammaTech) - Supports C, C++, Java, and C#, and maps findings against the OWASP Top 10 vulnerabilities.&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] (Contrast Security) - Provides code-level security results without relying on static analysis: Contrast performs Interactive Application Security Testing (IAST), correlating runtime code and data analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [http://www8.hp.com/us/en/software-solutions/static-code-analysis-sast/ Fortify] (Micro Focus, formerly HP)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection] (Hdiv Security) - Provides code-level security results without relying on static analysis: Hdiv performs Interactive Application Security Testing (IAST), correlating runtime code and data analysis.&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis KlocWork] (KlocWork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ PT Application Inspector] combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation for high accuracy rate with minimum false positives; has a unique capability to generate special test queries (exploits) to verify detected vulnerabilities during SAST analysis; integrates with CI/CD, VCS, etc. PT AI helps to easily understand, verify, and fix flaws; has a simple UI; is highly automated and easy to use. Supported languages are Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others.&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as a Visual Studio IDE extension, Azure DevOps extension, and Command Line (CLI) executable.&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://www.softwaresecured.com/reshift reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to give a prediction on false positives. Supports Java with future support for NodeJS and JavaScript planned for sometime in 2019.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code for insecure coding and configurations automatically, as an IDE plugin for Eclipse, IntelliJ, and Visual Studio. Supports Java, .NET, PHP, and JavaScript.&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) - Provides code-level security results without relying on static analysis: Seeker performs Interactive Application Security Testing (IAST), correlating runtime code and data analysis with simulated attacks.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
* [[Free for Open Source Application Security Tools]] - This page lists the Commercial Source Code Analysis Tools (SAST) we know of that are free for Open Source&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=250774</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=250774"/>
				<updated>2019-04-29T18:20:50Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Add gitlab link.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Static_Code_Analysis | Source code analysis]] tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during development itself, this is a powerful point in the development life cycle to employ such tools, as they give the developer immediate feedback on issues being introduced into the code. This immediate feedback is far more useful than finding vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL Injection Flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: it must support your programming language, though this is not usually a key differentiator once it does.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to set up and use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per org, per app, per line of code analyzed. Consulting licenses are frequently different than end user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
Disclaimer: &amp;lt;b&amp;gt;The tools listed below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them here.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool listed here and think this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct it.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino, and Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security-specific plugin for SpotBugs that significantly improves SpotBugs' ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] - Scans C and C++&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more. ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free-for-open-source static analysis service that automatically monitors commits to publicly accessible code in Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, and Python&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PreFast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last update 2006.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits on .NET applications. It finds SQL injections, LDAP injections, XXE, cryptography weaknesses, XSS, and more.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
&lt;br /&gt;
[https://docs.gitlab.com/ee/user/application_security/sast/index.html GitLab has built free SAST tooling for a number of languages natively into GitLab. You might be able to use that, or at least identify a free SAST tool for the language you need from its supported-languages list].&lt;br /&gt;
&lt;br /&gt;
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation; has capability to generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST and mobile security testing as well as OpenSource library known vulnerability detection as a cloud service. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source AppScan Source] (IBM)&lt;br /&gt;
* [https://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://bugscout.io/en/ bugScout] (Nalbatech, formerly Buguroo)&lt;br /&gt;
* [https://www.castsoftware.com/products/application-intelligence-platform CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages. [https://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards AIP's security specific coverage is here].&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [https://www.grammatech.com/products/codesonar CodeSonar] (GrammaTech) - Supports C, C++, Java, and C#, and maps findings against the OWASP Top 10 vulnerabilities.&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] (Contrast Security) - Provides code-level security results without relying on static analysis: Contrast performs Interactive Application Security Testing (IAST), correlating runtime code and data analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [http://www8.hp.com/us/en/software-solutions/static-code-analysis-sast/ Fortify] (Micro Focus, formerly HP)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection] (Hdiv Security) - Provides code-level security results without relying on static analysis: Hdiv performs Interactive Application Security Testing (IAST), correlating runtime code and data analysis.&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis KlocWork] (KlocWork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ PT Application Inspector] combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation for high accuracy rate with minimum false positives; has a unique capability to generate special test queries (exploits) to verify detected vulnerabilities during SAST analysis; integrates with CI/CD, VCS, etc. PT AI helps to easily understand, verify, and fix flaws; has a simple UI; is highly automated and easy to use. Supported languages are Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others.&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as a Visual Studio IDE extension, Azure DevOps extension, and Command Line (CLI) executable.&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://www.softwaresecured.com/reshift reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to give a prediction on false positives. Supports Java with future support for NodeJS and JavaScript planned for sometime in 2019.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code for insecure coding and configurations automatically, as an IDE plugin for Eclipse, IntelliJ, and Visual Studio. Supports Java, .NET, PHP, and JavaScript.&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) - Provides code-level security results without relying on static analysis: Seeker performs Interactive Application Security Testing (IAST), correlating runtime code and data analysis with simulated attacks.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
* [[Free for Open Source Application Security Tools]] - This page lists the Commercial Source Code Analysis Tools (SAST) we know of that are free for Open Source&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=User:Wichers&amp;diff=250412</id>
		<title>User:Wichers</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=User:Wichers&amp;diff=250412"/>
				<updated>2019-04-22T18:33:12Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=About=&lt;br /&gt;
&lt;br /&gt;
==BIO==&lt;br /&gt;
&lt;br /&gt;
Dave Wichers is a managing director for application security at Ernst &amp;amp; Young (www.ey.com). He was a cofounder of [https://www.aspectsecurity.com/ Aspect Security], a consulting company specializing in application security services that was acquired by EY in 2017. He is also a long-time contributor to OWASP: he helped establish the OWASP Foundation in 2004, served on the [[Board | OWASP Board]] from its formation in 2004 through 2013, served as [[Conferences | OWASP Conferences Chair]] from 2005 through 2008, and was a coauthor of the [[Top10 | OWASP Top 10]] from its inception through the 2017 release candidate 1, leading the project from 2007 through May 2017. Dave is also the lead of the OWASP [[Benchmark]] project and has contributed to numerous other important OWASP projects, including [[WebGoat]], [[ESAPI]], [[ASVS]], and the [[Cheat Sheets | OWASP Cheat Sheet Series]].&lt;br /&gt;
&lt;br /&gt;
Dave has over 30 years of experience in the information security field and has focused exclusively on application security since 1998. At EY, he provides a wide variety of application security consulting services to EY's clients. Prior to starting Aspect, he ran the Application Security Services Group at Exodus Communications. Dave has a Bachelor's and a Master's degree in Computer Science and is a CISSP.&lt;br /&gt;
&lt;br /&gt;
==OWASP Contributions==&lt;br /&gt;
&lt;br /&gt;
I have been contributing to OWASP since 2002. In 2004, along with Jeff Williams, I established the 501(c)(3) organization that is now the OWASP Foundation. Since establishing the OWASP Foundation, I served as the de facto Chief Financial Officer of OWASP until the OWASP Board established an Executive Director in mid-2013. In late 2004, I volunteered to become the OWASP Conferences Chair, in which role I launched the OWASP Conference Series, personally organized all the U.S. and European AppSec conferences from 2005 through 2008, and helped launch the Global Conferences Committee in 2009, which organized the conferences from 2009 through 2012. The OWASP conferences have since grown into a primary revenue-generating resource for OWASP.&lt;br /&gt;
&lt;br /&gt;
As a volunteer to OWASP, Dave is or has been:&lt;br /&gt;
&lt;br /&gt;
* A member of the [[About_OWASP#Global_Board_Members|OWASP Board]] since it was established in 2004 through the end of 2013, &lt;br /&gt;
* The [[:Category:OWASP_AppSec_Conference | OWASP Conferences]] Chair from 2005 through 2008,&lt;br /&gt;
* Project lead and coauthor of the [[OWASP_Top_Ten_Project | OWASP Top 10]] thru May 2017,&lt;br /&gt;
* Coauthor of the first version of the [[ASVS | OWASP Application Security Verification Standard]],&lt;br /&gt;
* Contributor to the [[ESAPI | OWASP Enterprise Security API (ESAPI)]] project,&lt;br /&gt;
* Past lead of the [[OWASP_Cheat_Sheet_Series | OWASP Prevention Cheat Sheet Series]] and primary author of the [[SQL_Injection_Prevention_Cheat_Sheet | SQL Injection Prevention Cheat Sheet]].&lt;br /&gt;
* Lead of the OWASP [[Benchmark]] project. Benchmark project intro video: [[File:BenchmarkPodcastTitlePage.jpg|200px|link=https://www.youtube.com/watch?v=HQP8dwc3jJA&amp;amp;index=5&amp;amp;list=PLGB2s-U5FSWOmEStMt3JqlMFJvRYqeVW5]]&lt;br /&gt;
&lt;br /&gt;
For more details than this short bio on what I've done at OWASP, listen to my [https://www.owasp.org/download/jmanico/owasp_podcast_82.mp3 OWASP podcast].&lt;br /&gt;
&lt;br /&gt;
[[:Special:Contributions/Wichers|Wiki Contributions]]&lt;br /&gt;
&lt;br /&gt;
I've also done lots of OWASP conference presentations. Here are some of them:&lt;br /&gt;
&lt;br /&gt;
* 2015 AppSec USA: [https://appsecusa2015.sched.org/event/3r9k/using-the-owasp-benchmark-to-assess-automated-vulnerability-analysis-tools Using the OWASP Benchmark to Assess Automated Vulnerability Analysis Tools]&lt;br /&gt;
* 2014 AppSec AsiaPac: [http://owaspappsecapac2014.sched.org/event/fec0f8c8cecafa44b1925641fbfee8fa#.U8hO02dOWJA AppSec at DevOps Speed and Portfolio Scale talk abstract]&lt;br /&gt;
* 2014 AppSec AsiaPac: [http://owaspappsecapac2014.sched.org/event/c7ba6e43fa6f4a7e242c40c44c7164c9#.U8hObGdOWJA OWASP Top 10 2013 talk abstract]&lt;br /&gt;
* 2013 AppSec USA: [http://appsecusa2013.sched.org/event/817cc39ce670549247d2d0ba05b02701#.Up99XsRDuO2 OWASP Top 10 2013 talk abstract] - [http://appsecusa.org/2013/wp-content/uploads/2013/12/OWASP-Top-10-2013-AppSec-USA.pptx Slides] - [https://www.youtube.com/watch?v=bWqb3Hemepc&amp;amp;list=PLpr-xdpM8wG8ODR2zWs06JkMmlRiLyBXU&amp;amp;index=17 Video]&lt;br /&gt;
* 2013 AppSec EU: [https://www.owasp.org/images/1/17/OWASP_Top-10_2013--AppSec_EU_2013_-_Dave_Wichers.pdf OWASP Top 10 2013 - Slides] - [https://www.its.fh-muenster.de/owasp-appseceu13/rooms/Aussichtsreich_+_Freiraum/high_quality/OWASP-AppsecEU13-DaveWichers-OWASPTop10-2013_720p.mp4 Video]&lt;br /&gt;
* 2012 AppSec USA: [https://www.owasp.org/images/c/c5/Unraveling_some_Mysteries_around_DOM-based_XSS.pdf Unraveling some of the Mysteries around DOM-based XSS]&lt;br /&gt;
* 2012 AppSec EU: [https://www.owasp.org/images/3/30/AppSecEU2012_DOM-based_XSS.pdf Unraveling some of the Mysteries around DOM-based XSS]&lt;br /&gt;
* 2012 AppSec DC: [[OWASP_AppSec_DC_2012/Unraveling_some_of_the_Mysteries_around_DOMbased_XSS | Unraveling some of the Mysteries around DOM-based XSS]]&lt;br /&gt;
* 2010 AppSec DC: [[The_Strengths_of_Combining_Code_Review_with_Application_Penetration_Testing | Strengths of Combining Code Review with Application Penetration Testing]] - [http://vimeo.com/groups/asdc10/videos/19104928 Video] | [[Media: 2010-DC_The_Power_of_Code_Review.pptx|Slides]]&lt;br /&gt;
* 2010 AppSec Europe: [[OWASP_AppSec_Research_2010_-_Stockholm,_Sweden#OWASP_Top_10_2010 | OWASP Top 10 for 2010 - Final]] - [http://owasp.blip.tv/file/3917942/ Video] |[[Media:OWASP_AppSec_Research_2010_OWASP_Top_10_by_Wichers.pdf | PDF]]&lt;br /&gt;
* 2009 AppSec DC: [[OWASP_Top_10_2010_AppSecDC | Debut of the OWASP Top 10 for 2010 Release Candidate]] - [http://www.vimeo.com/9006276 Video] | [[Media: AppSec DC 2009 - OWASP Top 10 - 2010 rc1.pptx | Slides]]&lt;br /&gt;
* 2009 Appsec Ireland: [[How_to_Avoid_Flaws_in_the_First_Place:_The_OWASP_Enterprise_Security_API_(ESAPI)_Project | How to Avoid Flaws in the First Place: The OWASP ESAPI Project]]&lt;br /&gt;
* 2009 AppSec Europe: [[ASVS | OWASP ASVS Project]] - [http://www.owasp.org/images/7/78/AppsecEU09_OWASP_ASVS_WebApp_Standard.ppt Slides]&lt;br /&gt;
* 2009 AppSec Europe: [[ESAPI | OWASP Enterprise Security API (ESAPI) Project]] - [http://blip.tv/file/2215191 Video] | [http://www.owasp.org/images/1/11/AppSecEU09Poland_ESAPI.pptx Slides]&lt;br /&gt;
* 2008 AppSec NY: Security in Agile Development - [http://video.google.com/videoplay?docid=-8287209466278543377&amp;amp;hl=en Video] | [http://www.owasp.org/images/a/a3/AppSecNYC08-Agile_and_Secure.ppt Slides]&lt;br /&gt;
* 2008 AppSec Europe: [[AppSecEU08_The_OWASP_ESAPI_project | Fundamental Application Security Building Blocks - The Benefits of Establishing an Enterprise Security API (ESAPI) for Your Organization]] - [http://www.owasp.org/images/c/cd/AppSecEU08-ESAPI.ppt Slides]&lt;br /&gt;
* 2008 AppSec Europe: [[AppSecEU08_Agile_Security_Breaking_the_Waterfall_Mindset | Agile Security - Breaking the Waterfall Mindset of the Security Industry]] - [http://www.owasp.org/images/b/b8/AppSecEU08-Agile_and_Secure.ppt Slides]&lt;br /&gt;
* 2007 AppSec Europe: OWASP WebGoat and WebScarab - [http://www.owasp.org/images/5/55/OWASPAppSec2007Milan_WebGoatv5.ppt WebGoat Slides] | [http://www.owasp.org/images/d/d7/OWASPAppSec2007Milan_WebScarabNG.ppt WebScarab Slides]&lt;br /&gt;
* 2006 AppSec Seattle: Why AJAX Applications are far more likely to be insecure, and What to do about it - [http://www.owasp.org/index.php/Image:OWASPAppSec2006Seattle_Why_AJAX_Applications_More_Likely_Insecure.ppt Slides]&lt;br /&gt;
&lt;br /&gt;
Dave can be reached at: dave.wichers (at) ey.com or dave.wichers (at) owasp.org&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=249336</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=249336"/>
				<updated>2019-03-27T14:16:59Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Commercial Tools Of This Type */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Static_Code_Analysis | Source code analysis]] tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during development itself, the IDE is a powerful place in the development life cycle to employ such tools, as they provide immediate feedback to developers on issues they may be introducing as they write the code. This immediate feedback is far more useful than finding the same vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL Injection Flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: It must support your programming language, but this is not usually a key factor once it does.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to setup/use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per org, per app, per line of code analyzed. Consulting licenses are frequently different than end user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
Disclaimer: &amp;lt;b&amp;gt;The tools listed in the tables below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them in these tables.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool listed below and think this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct it.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino, and Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security-specific plugin for SpotBugs that significantly improves SpotBugs's ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] - Scans C and C++&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more.  ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PreFast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last update 2006.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits of .NET applications. It will find SQL injection, LDAP injection, XXE, cryptographic weaknesses, XSS, and more.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
&lt;br /&gt;
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation; has capability to generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST and mobile security testing as well as OpenSource library known vulnerability detection as a cloud service. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source AppScan Source] (IBM)&lt;br /&gt;
* [https://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://bugscout.io/en/ bugScout] (Nalbatech, Formally Buguroo)&lt;br /&gt;
* [https://www.castsoftware.com/products/application-intelligence-platform CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages. [https://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards AIP's security specific coverage is here].&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [https://www.grammatech.com/products/codesonar CodeSonar] (GrammaTech) - Supports C, C++, Java, and C#, and maps findings against the OWASP Top 10 vulnerabilities.&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] (Contrast Security) - Contrast performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis. It provides code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [http://www8.hp.com/us/en/software-solutions/static-code-analysis-sast/ Fortify] (Micro Focus, Formally HP)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection] (Hdiv Security) - Hdiv performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis. It provides code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis KlocWork] (KlocWork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ PT Application Inspector] combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation for high accuracy rate with minimum false positives; has a unique capability to generate special test queries (exploits) to verify detected vulnerabilities during SAST analysis; integrates with CI/CD, VCS, etc. PT AI helps to easily understand, verify, and fix flaws; has a simple UI; is highly automated and easy to use. Supported languages are Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others.&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as a Visual Studio IDE extension, Azure DevOps extension, and Command Line (CLI) executable.&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://www.softwaresecured.com/reshift reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to give a prediction on false positives. Supports Java with future support for NodeJS and JavaScript planned for sometime in 2019.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code automatically for insecure coding and configurations, as an IDE plugin for Eclipse, IntelliJ, Visual Studio, etc. Supports Java, .NET, PHP, and JavaScript.&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) - Seeker performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis with simulated attacks. It provides code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
* [[Free for Open Source Application Security Tools]] - This page lists the Commercial Source Code Analysis Tools (SAST) we know of that are free for Open Source&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=247222</id>
		<title>Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=247222"/>
				<updated>2019-02-07T23:28:10Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Not a Simple HTTP Request Verification */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
[[Cross-Site Request Forgery (CSRF)]] is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user’s web browser to perform an unwanted action on a trusted site while the user is authenticated. A CSRF attack works because browser requests automatically include any credentials associated with the site, such as the user's session cookie, IP address, etc. Therefore, if the user is authenticated to the site, the site cannot distinguish between a forged and a legitimate request sent by the victim. What is needed is a token/identifier that is not accessible to the attacker and, unlike cookies, is not automatically sent along with forged requests that the attacker initiates. For more information on CSRF, see the OWASP [[Cross-Site Request Forgery (CSRF)|Cross-Site Request Forgery (CSRF) page]].&lt;br /&gt;
&lt;br /&gt;
The impact of a successful CSRF attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or making a purchase with the user’s credentials. In effect, CSRF attacks are used by an attacker to make a target system perform a function via the target's browser, without the user’s knowledge, at least until the unauthorized transaction has been committed.&lt;br /&gt;
&lt;br /&gt;
Impacts of successful CSRF exploits vary greatly based on the privileges of each victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user has an administrator account, a CSRF attack can compromise the entire web application. Using social engineering, an attacker can embed malicious HTML or JavaScript code into an email or website to request a specific 'task URL'. The task then executes with or without the user's knowledge, either directly or by using a Cross-Site Scripting flaw. For an example, see the [https://en.wikipedia.org/wiki/Samy_(computer_worm) Samy MySpace Worm].&lt;br /&gt;
&lt;br /&gt;
==Warning: No Cross-Site Scripting (XSS) Vulnerabilities ==&lt;br /&gt;
[[Cross-Site Scripting]] is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat all CSRF mitigation techniques available in the market today (except the mitigation techniques that involve user interaction, described later in this cheat sheet). This is because an XSS payload can simply read any page on the site using an XMLHttpRequest (or via direct DOM access, if the token is on the same page), obtain the generated token from the response, and include that token with a forged request.  This technique is exactly how the [https://en.wikipedia.org/wiki/Samy_(computer_worm) MySpace (Samy) worm] defeated MySpace's anti-CSRF defenses in 2005, which enabled the worm to propagate.&lt;br /&gt;
&lt;br /&gt;
It is imperative that no XSS vulnerabilities are present to ensure that CSRF defenses can't be circumvented. Please see the OWASP [[XSS (Cross Site Scripting) Prevention Cheat Sheet|XSS Prevention Cheat Sheet]] for detailed guidance on how to prevent XSS flaws.  &lt;br /&gt;
&lt;br /&gt;
== Resources that need to be protected from CSRF attacks ==&lt;br /&gt;
The following list assumes that you are not violating [http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1 RFC2616], section 9.1.1, by using GET requests for state changing operations. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' If for any reason you do violate it, you would also need to protect those resources, which are mostly reached via the default &amp;lt;code&amp;gt;form tag [GET method]&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;href&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;src&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
* Form tags with POST &lt;br /&gt;
* Ajax/XHR calls&lt;br /&gt;
&lt;br /&gt;
== CSRF Defense Recommendations Summary ==&lt;br /&gt;
We recommend a token-based CSRF defense (either stateful or stateless) as the primary defense to mitigate CSRF in your applications. For highly sensitive operations, we also recommend user-interaction-based protection (either re-authentication or a one-time token, detailed in section 6.5) along with token-based mitigation.&lt;br /&gt;
&lt;br /&gt;
As a defense-in-depth measure, consider implementing one mitigation from the Defense-in-Depth Techniques section (you can choose the mitigation that fits your ecosystem considering the issues mentioned under them). These defense-in-depth mitigation techniques are not recommended to be used by themselves (without token based mitigation) for mitigating CSRF in your applications.&lt;br /&gt;
&lt;br /&gt;
== Primary Defense Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Token Based Mitigation ===&lt;br /&gt;
This defense is one of the most popular and recommended methods to mitigate CSRF. It can be achieved either statefully (synchronizer token pattern) or statelessly (encrypted/hash-based token pattern). For all of these mitigations, it is implicit that general security principles should be adhered to.&lt;br /&gt;
* Strict key rotation and token lifetime policies should be maintained. Policies can be set according to your organizational needs. Generic key management guidance from OWASP can be found in the [[Key Management Cheat Sheet]].&lt;br /&gt;
&lt;br /&gt;
==== Synchronizer Token Pattern ====&lt;br /&gt;
Any state-changing operation requires a secure random token (e.g., a CSRF token) to prevent CSRF attacks. A CSRF token should be unique per user session, should be a large random value, and should be generated by a cryptographically secure random number generator. The CSRF token is added as a hidden field for forms, as a header or parameter for AJAX calls, and within the URL if the state-changing operation occurs via a GET (see the &amp;quot;Disclosure of Token in URL&amp;quot; section below). The server rejects the requested action if the CSRF token fails validation.&lt;br /&gt;
&lt;br /&gt;
In order to facilitate a &amp;quot;transparent but visible&amp;quot; CSRF solution, developers are encouraged to adopt a pattern similar to [http://www.corej2eepatterns.com/Design/PresoDesign.htm Synchronizer Token Pattern] (The original intention of this synchronizer token pattern was to detect duplicate submissions in forms). The synchronizer token pattern requires the generation of random &amp;quot;challenge&amp;quot; tokens that are associated with the user's current session. These challenge tokens are then inserted within the HTML forms and calls associated with sensitive server-side operations. It is the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks as successful exploitation assumes the attacker knows the randomly generated token for the target victim's session. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' Unlike cookies, these tokens are not automatically sent with forged requests made from your browser by the attacker's website. &lt;br /&gt;
&lt;br /&gt;
This is analogous to the attacker being able to guess the target victim's session identifier. &lt;br /&gt;
&lt;br /&gt;
The following describes a general approach to incorporate challenge tokens within the request.&lt;br /&gt;
&lt;br /&gt;
When a Web application formulates a request, the application should include a hidden input parameter with a common name such as &amp;quot;CSRFToken&amp;quot; for forms, or send the token as a header/parameter value for Ajax calls. The value of this token must be randomly generated such that it cannot be guessed by an attacker. For Java applications, consider leveraging the java.security.SecureRandom class to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers who choose this generation algorithm must make sure that randomness and uniqueness are utilized in the data that is hashed to generate the random token.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;html&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;form action=&amp;quot;/transfer.do&amp;quot; method=&amp;quot;post&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;hidden&amp;quot; name=&amp;quot;CSRFToken&amp;quot; &lt;br /&gt;
value=&amp;quot;OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWEwYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZjMGYwMGEwOA==&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is used for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the token in the request compared to the token found in the user session. If the token was not found within the request, or the value provided does not match the value within the user session, then the request should be aborted, and the event logged as a potential CSRF attack in progress.&lt;br /&gt;
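The generation and verification steps above can be sketched as follows (an illustrative sketch in Python; the session dictionary and function names are assumptions, not part of any specific framework):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of the synchronizer token pattern: generate once per session,
# then verify on every state-changing request.
import hmac
import secrets

def generate_csrf_token(session):
    # Generate the token once for the current session and cache it there.
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_urlsafe(32)  # 256 bits of randomness
    return session["csrf_token"]

def validate_csrf_token(session, submitted):
    expected = session.get("csrf_token")
    if expected is None or not submitted:
        # Missing token: abort the request and log a potential CSRF attack.
        return False
    # Constant-time comparison avoids leaking the token via timing differences.
    return hmac.compare_digest(expected, submitted)
```
Note that the comparison uses a constant-time function rather than ==, so the check does not leak the token through timing side channels.&lt;br /&gt;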
&lt;br /&gt;
To further enhance the security of this proposed design, consider randomizing the CSRF token parameter name and/or value for each request. Implementing this approach results in the generation of per-request tokens as opposed to per-session tokens. This is more secure than per-session tokens, as the time window for an attacker to exploit stolen tokens is minimal. However, this may result in usability concerns. For example, the &amp;quot;Back&amp;quot; button browser capability is often hindered, as the previous page may contain a token that is no longer valid; interaction with that page will result in a false-positive CSRF security event at the server. Typically, only applications that need very high security (such as banks) implement this approach; check what suits your needs. Regardless of the approach taken, developers are encouraged to protect the CSRF token the same way they protect authenticated session identifiers, such as through the use of TLS.&lt;br /&gt;
&lt;br /&gt;
'''Existing Synchronizer Implementations'''&lt;br /&gt;
&lt;br /&gt;
Synchronizer token defenses have been built into many frameworks, so we strongly recommend using them first when they are available. External components that add CSRF defenses to existing applications are also recommended. OWASP has the following: &lt;br /&gt;
&lt;br /&gt;
* For Java: OWASP [[CSRF Guard]]&lt;br /&gt;
* For PHP and Apache: [[CSRFProtector Project]]&lt;br /&gt;
&lt;br /&gt;
'''Disclosure of Token in URL'''&lt;br /&gt;
&lt;br /&gt;
Some implementations of synchronizer tokens include the challenge token in GET (URL) requests as well as POST requests. This is often implemented as a result of sensitive server-side operations being invoked by embedded links in the page, or other general design patterns implemented without knowledge of CSRF and CSRF prevention design strategies. While this control does help mitigate the risk of CSRF attacks, the unique per-session token is exposed for GET requests. CSRF tokens in GET requests are potentially leaked at several locations: browser history, log files, network appliances that log the first line of an HTTP request, and Referer headers if the protected site links to an external site. In the latter case (the CSRF token leaked because the Referer header is parsed by a linked site), it is trivially easy for the linked site to launch a CSRF attack on the protected site, and it can target the attack very effectively, since the Referer header tells it the site as well as the CSRF token. The attack could be run entirely from JavaScript, so that the simple addition of a script tag to the HTML of a site can launch an attack (whether on an originally malicious site or on a hacked site). Additionally, since the Referer header is not stripped for HTTPS-to-HTTPS requests (as opposed to HTTPS-to-HTTP requests), CSRF token leaks via Referer can still happen on HTTPS applications.&lt;br /&gt;
&lt;br /&gt;
The ideal solution is to only include the CSRF token in POST requests and to modify server-side actions that have a state-changing effect so that they only respond to POST requests. This is in fact what &amp;lt;nowiki&amp;gt;RFC 2616&amp;lt;/nowiki&amp;gt; requires of GET requests. If sensitive server-side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests.&lt;br /&gt;
&lt;br /&gt;
In most JavaEE web applications, however, HTTP method scoping is rarely ever utilized when retrieving HTTP parameters from a request. Calls to &amp;quot;HttpServletRequest.getParameter&amp;quot; will return a parameter value regardless of whether it came from a GET or a POST. This is not to say HTTP method scoping cannot be enforced: it can be achieved if a developer explicitly overrides doPost() in the HttpServlet class, or leverages framework-specific capabilities such as the AbstractFormController class in Spring.&lt;br /&gt;
&lt;br /&gt;
For these cases, attempting to retrofit this pattern in existing applications requires significant development time and cost, and as a temporary measure it may be better to pass CSRF tokens in the URL. Once the application has been fixed to respond to HTTP GET and POST verbs correctly, CSRF tokens for GET requests should be turned off.&lt;br /&gt;
&lt;br /&gt;
==== Encryption based Token Pattern ====&lt;br /&gt;
The Encrypted Token Pattern validates tokens via decryption rather than comparison. It is most suitable for applications that do not want to maintain any state on the server side. &lt;br /&gt;
&lt;br /&gt;
After successful authentication, the server generates a unique token composed of the user's ID, a timestamp value and a [http://en.wikipedia.org/wiki/Cryptographic_nonce nonce], using a unique key available only on the server. This token is returned to the client and embedded in a hidden field for forms, or in the request header/parameter for AJAX requests. On receipt of this request, the server reads and decrypts the token value with the same key used to create the token. The inability to decrypt the token correctly suggests an intrusion attempt. Once decrypted, the UserId and timestamp contained within the token are validated; the UserId is compared against the currently logged-in user, and the timestamp is compared against the current time.&lt;br /&gt;
&lt;br /&gt;
On successful token decryption, the server has access to the parsed values, ideally in the form of [http://en.wikipedia.org/wiki/Claims-based_identity claims]. These claims are processed by comparing the UserId claim to any stored UserId (in a Cookie or Session variable, if the site already contains a means of authentication). The Timestamp is validated against the current time, preventing replay attacks. Conversely, in the case of a CSRF attack, the server will be unable to decrypt the poisoned token, and can block and log the attack.&lt;br /&gt;
&lt;br /&gt;
This technique addresses some of the shortfalls in other stateless approaches, such as the need to store data in a Cookie, circumventing the Cookie-subdomain and [[HttpOnly]] issues. Your solution should use a strong encryption function. We recommend AES256-GCM or stronger.&lt;br /&gt;
&lt;br /&gt;
==== HMAC Based Token Pattern ====&lt;br /&gt;
[https://en.wikipedia.org/wiki/HMAC HMAC (hash-based message authentication code)] is a cryptographic function that helps to guarantee integrity and authentication of a message. HMAC Tokens can be used as a CSRF mitigation technique without requiring server side state. It is similar to the encryption token-based pattern with two main differences:&lt;br /&gt;
* Uses a strong HMAC function instead of an encryption function to generate the token&lt;br /&gt;
* Includes an additional field called ‘operation’ that indicates the purpose of the operation for which the CSRF token is being included (whether in a form tag or an ajax call) &lt;br /&gt;
(e.g., ‘oneclickpurchase’, or buy/asin=SDFH&amp;amp;category=2&amp;amp;quantity=3)&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The fields mentioned in the encryption token pattern (user's ID, a timestamp value and a nonce) are also included. &lt;br /&gt;
&lt;br /&gt;
The operation field mitigates the fact that an HMAC is deterministic: given the same inputs, it generates the same value every time (unlike strong encryption functions, which produce different ciphertexts on each encryption). It therefore helps avoid repeated token values across your application. The nonce field serves the same purpose as in the encrypted token pattern (i.e., to avoid rare collisions due to weak cryptographic functions, acting as a defense-in-depth measure). &lt;br /&gt;
&lt;br /&gt;
Generate the token using an HMAC over all four fields mentioned previously (user's ID, a timestamp value, nonce, and operation), and then include it in hidden fields for form tags, or headers/parameters for ajax calls. Once you receive the HMAC from the client in a request, re-generate the HMAC with the same fields you used to generate it, and then verify that the re-generated HMAC matches the one received from the client. If it does, it is a legitimate user request; if it does not, flag it as a CSRF intrusion and alert your incident response teams. Because an attacker has no visibility into the key (or the fields) used to generate the HMAC, there is no way for them to re-generate it for use in a forged request.&lt;br /&gt;
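As a concrete illustration, here is a minimal sketch of the HMAC-based token pattern in Python; the field layout, delimiter, freshness window, and key handling are assumptions for illustration, and production code should use proper key management:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of the HMAC-based token pattern: token covers user ID,
# timestamp, nonce, and operation; no server-side state is required.
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret-key"  # demo value; never exposed to clients

def make_token(user_id, operation, nonce, ts=None):
    ts = int(time.time()) if ts is None else ts
    msg = f"{user_id}|{ts}|{nonce}|{operation}".encode()
    mac = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    # Timestamp and nonce travel with the token so the server can recompute it.
    return f"{ts}:{nonce}:{mac}"

def verify_token(token, user_id, operation, max_age=600):
    try:
        ts_str, nonce, mac = token.split(":")
        ts = int(ts_str)
    except ValueError:
        return False
    if time.time() - ts > max_age:
        return False  # stale token: limits the replay window
    expected = hmac.new(SECRET_KEY, f"{user_id}|{ts}|{nonce}|{operation}".encode(),
                        hashlib.sha256).hexdigest()
    # Constant-time comparison of the recomputed and received HMACs.
    return hmac.compare_digest(expected, mac)
```
Because the operation is part of the signed message, a token issued for one operation cannot be replayed against another.&lt;br /&gt;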
&lt;br /&gt;
Your solution should use a strong HMAC function. We recommend HMAC-SHA256/SHA512 or stronger.&lt;br /&gt;
&lt;br /&gt;
=== Auto CSRF Mitigation Techniques ===&lt;br /&gt;
Though token-based mitigation is widely used (stateful with the synchronizer token, and stateless with the encrypted/HMAC token), the major problem associated with these techniques is the human tendency to forget things at times. If a developer forgets to add the token to any state-changing operation, they leave the application vulnerable to CSRF. To avoid this, you can try to automate the process of adding tokens to CSRF-vulnerable resources (mentioned earlier in this document). You can achieve this by doing the following:&lt;br /&gt;
* Write wrappers (that auto-add tokens when used) around default form tags/ajax calls and educate your developers to use those wrappers instead of the standard tags. Though this approach is better than depending purely on developers to add tokens, it is still vulnerable to the human tendency to forget things. [https://docs.spring.io/spring-security/site/docs/3.2.0.CI-SNAPSHOT/reference/html/csrf.html Spring Security] uses this technique to add CSRF tokens by default when its custom &amp;lt;form:form&amp;gt; tag is used; you can opt to use it after verifying that it is enabled and properly configured in the Spring Security version you are using.&lt;br /&gt;
* Write a hook (that would capture the traffic and add tokens to CSRF vulnerable resources before rendering to customers) in your organizational web rendering frameworks. Because it is hard to analyze when a particular response is doing any state change (and thus needing a token), you might want to include tokens in all CSRF vulnerable resources (ex: include tokens in all POST responses). This is one recommended approach, but you need to consider the performance costs it might incur.&lt;br /&gt;
* Get the tokens automatically added on the client side when the page is being rendered in the user’s browser, with the help of a client-side script (this approach is used by [[CSRF Guard]]). You need to consider any possible JavaScript hijacking attacks.&lt;br /&gt;
We recommend researching whether the framework you are using has an option to achieve CSRF protection by default before trying to build your own auto-tokening system. For example, .NET has an [https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery?view=aspnetcore-2.1 in-built protection] that adds tokens to CSRF-vulnerable resources. You are responsible for proper configuration (such as key management and token management) before using these in-built CSRF protections.&lt;br /&gt;
&lt;br /&gt;
=== Stateless/Tokenless Defense Techniques ===&lt;br /&gt;
&lt;br /&gt;
Given the popularity of REST services (which are stateless) and the desire to implement the minimum changes required to defend against CSRF attacks, there are a couple of techniques you can rely on to verify that a request is not cross origin with minimal changes to your application. They are:&lt;br /&gt;
&lt;br /&gt;
* Verify presence of &amp;quot;X-Requested-With: XMLHttpRequest&amp;quot; Header and value&lt;br /&gt;
* Verify it's NOT a &amp;quot;Simple HTTP Request&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If these techniques don't defend every state changing endpoint in your application, you might be able to fill the gap with another stateless/tokenless technique described later: &amp;quot;Verifying origin with standard headers&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
==== X-Requested-With: XMLHttpRequest Header Verification ====&lt;br /&gt;
&lt;br /&gt;
This is a specific instance of the Custom Request Header Defense, which is described more fully later. This HTTP header name/value pair is particularly attractive because most JavaScript libraries already add this header by default to the requests they generate. If you have built, or plan to build, a pure AJAX/REST web app, then all you have to do is verify server-side the presence of this header name/value pair on all POST requests, and you are done. If your AJAX calls don't include this (or a similar) header, you'll have to tweak your JavaScript framework to add this custom header.&lt;br /&gt;
&lt;br /&gt;
This defense works because only JavaScript can add custom headers to an HTTP request, and a cross-origin request carrying a custom header triggers a CORS preflight, which fails unless the target server explicitly permits it.&lt;br /&gt;
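The server-side check itself is simple; a minimal sketch in Python follows (the headers dictionary is an assumed request representation, and the function name is illustrative):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of verifying the X-Requested-With custom header on POST requests.
def is_ajax_request(headers):
    # Header names are case-insensitive per HTTP, so normalize before lookup.
    normalized = {k.lower(): v for k, v in headers.items()}
    return normalized.get("x-requested-with") == "XMLHttpRequest"
```
Requests lacking the header (such as a forged cross-site form POST) would fail this check and should be rejected.&lt;br /&gt;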
&lt;br /&gt;
==== Not a Simple HTTP Request Verification ====&lt;br /&gt;
&lt;br /&gt;
A Simple HTTP Request is described, but not given this actual name, in the [https://www.w3.org/TR/cors/#terminology W3C CORS spec]. We define a Simple HTTP Request to be a request that uses both a 'simple method' and has a 'simple header'. This W3C article defines a simple method as any of these three: GET, HEAD, POST, and also defines a simple header as one of these four: Accept, Accept-Language, Content-Language, Content-Type. However, the Content-Type header, if present, must also have a value of: application/x-www-form-urlencoded, multipart/form-data, or text/plain. It further defines, not very succinctly, at the beginning of [https://www.w3.org/TR/cors/#cross-origin-request-with-preflight-0 section 7.5.1 Cross-Origin Request with Preflight - Step 1] that a CORS preflight request is required for cross-origin requests UNLESS the request uses a simple method with simple headers. Meaning, that the ONLY cross-origin requests allowed by browsers (without requesting permission using CORS) are 'Simple HTTP Requests'.&lt;br /&gt;
&lt;br /&gt;
We can take advantage of this knowledge by simply verifying that all state changing requests to our application are NOT Simple HTTP Requests. And if they are not, then they can't be a CSRF attack. If you've built your application properly, then GET and HEAD requests cannot change state, so that leaves us with POSTs. If you then verify that all POST requests include a Content-Type header, whose value is either NOT any of the three allowed to go cross origin (application/x-www-form-urlencoded, multipart/form-data, or text/plain), or is an explicitly required Content-Type that is NOT one of those three, then you are all set.&lt;br /&gt;
&lt;br /&gt;
For example, if you have a pure AJAX app that submits all of its POST requests with Content-Types of application/xml or application/json, and you verify on the server side that all POSTs include a Content-Type header with one of those two values, your app is completely defended against CSRF attacks. If your application also happens to accept other HTTP methods that do state-changing things, like PUT, DELETE, etc., you don't have to do anything to defend those endpoints, because browsers can't generate anything but Simple HTTP Requests cross-origin (i.e., GET, HEAD, or POST only). If you need to accept the uploading of files (multipart/form-data), explicit CSRF protection is still needed for those endpoints.&lt;br /&gt;
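The check described above can be sketched as follows in Python (the method and content-type inputs are an assumed request representation; reject any state-changing request for which this returns True):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of detecting a "Simple HTTP Request" per the CORS terminology:
# simple method plus a simple (or absent) Content-Type header.
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def is_simple_request(method, content_type):
    if method not in ("GET", "HEAD", "POST"):
        return False  # e.g. PUT/DELETE always trigger a CORS preflight
    # Strip any parameters like "; charset=utf-8" before comparing.
    base = (content_type or "").split(";")[0].strip().lower()
    # An absent Content-Type is treated as simple (reject it, to be safe).
    return base == "" or base in SIMPLE_CONTENT_TYPES
```
A POST with Content-Type application/json is not simple, so a browser cannot send it cross-origin without a preflight.&lt;br /&gt;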
&lt;br /&gt;
Acknowledgement: This Simple HTTP Request verification technique was found in this [https://blog.jdriven.com/2014/10/stateless-spring-security-part-1-stateless-csrf-protection/ Stateless CSRF Protection blog post], where it explicitly stated &amp;quot;Stateless approach 1: SWITCH TO A FULL AND PROPERLY DESIGNED JSON BASED REST API. - Single-Origin Policy only allows cross-site HEAD/GET and POSTs. POSTs may only be one of the following mime-types: application/x-www-form-urlencoded, multipart/form-data, or text/plain. Indeed no JSON! Now considering GETs should never ever trigger side-effects in any properly designed HTTP based API, this leaves it up to you to simply disallow any non-JSON POST/PUT/DELETEs and all is well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Defense-In-Depth Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Verifying origin with standard headers ===&lt;br /&gt;
This defense technique is specifically proposed in section 5.0 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. This paper first proposed the creation of the Origin header and its use as a CSRF defense mechanism.&lt;br /&gt;
&lt;br /&gt;
There are two steps to this mitigation, both of which rely on examining an HTTP request header value.&lt;br /&gt;
&lt;br /&gt;
1. Determine the origin the request is coming from (source origin). This can be done via the Origin and/or Referer header.&lt;br /&gt;
&lt;br /&gt;
2. Determine the origin the request is going to (target origin).&lt;br /&gt;
&lt;br /&gt;
On the server side, verify that the two match. If they do, accept the request as legitimate (it is a same-origin request); if they don’t, discard it (it originated cross-domain). The reliability of these headers comes from the fact that they cannot be altered programmatically (e.g., using JavaScript in an XSS attack), as they fall under the [https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name forbidden headers] list (i.e., only browsers can set them).&lt;br /&gt;
&lt;br /&gt;
====Identifying Source Origin (via Origin/Referer header) ====&lt;br /&gt;
'''Checking the Origin Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is present, verify that its value matches the target origin. Unlike the Referer, the Origin header will be present in HTTP requests that originate from an HTTPS URL.&lt;br /&gt;
&lt;br /&gt;
'''Checking the Referer Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is not present, verify the hostname in the Referer header matches the target origin. This method of CSRF mitigation is also commonly used with unauthenticated requests, such as requests made prior to establishing a session state, which is required to keep track of a synchronization token.&lt;br /&gt;
&lt;br /&gt;
In both cases, make sure the target origin check is strong. For example, if your site is &amp;quot;site.com&amp;quot;, make sure &amp;quot;site.com.attacker.com&amp;quot; does not pass your origin check (i.e., match against the entire origin, not just a leading substring).&lt;br /&gt;
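A strict origin comparison can be sketched in Python using the standard library (the function name and inputs are illustrative); comparing scheme, hostname and port as a whole avoids the prefix-match bypass described above:&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of a strict source-vs-target origin match.
from urllib.parse import urlparse

def same_origin(source_url, target_origin):
    src = urlparse(source_url)
    tgt = urlparse(target_origin)
    # All three components must agree; hostname comparison is exact,
    # so "site.com.attacker.com" never matches "site.com".
    return (src.scheme, src.hostname, src.port) == (tgt.scheme, tgt.hostname, tgt.port)
```
The source URL comes from the Origin or Referer header; the target origin is determined as described in the next section.&lt;br /&gt;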
&lt;br /&gt;
If neither of these headers are present, you can either accept or block the request. We recommend '''blocking'''. Alternatively, you might want to log all such instances, monitor their use cases/behavior, and then start blocking requests only after you get enough confidence.&lt;br /&gt;
&lt;br /&gt;
==== Identifying the Target Origin ====&lt;br /&gt;
You might think it’s easy to determine the target origin, but it’s frequently not. The first thought is to simply grab the target origin (i.e., its hostname and port #) from the URL in the request. However, the application server is frequently sitting behind one or more proxies and the original URL is different from the URL the app server actually receives. If your application server is directly accessed by its users, then using the origin in the URL is fine and you're all set.&lt;br /&gt;
&lt;br /&gt;
If you are behind a proxy, there are a number of options to consider.&lt;br /&gt;
* '''Configure your application to simply know its target origin:''' It’s your application, so you can find its target origin and set that value in some server configuration entry. This is the most secure approach, as the value is defined server-side and is therefore trusted. However, it might be problematic to maintain if your application is deployed in many places, e.g., dev, test, QA, production, and possibly multiple production instances, since setting the correct value for each of these situations might be difficult. If you can do it via some central configuration that your instances grab the value from, that's great! ('''Note:''' Make sure the centralized configuration store is maintained securely, because a major part of your CSRF defense depends on it.)&lt;br /&gt;
&lt;br /&gt;
* '''Use the Host header value:''' If you prefer that the application find its own target so it doesn't have to be configured for each deployed instance, we recommend using the Host family of headers. The Host header's purpose is to contain the target origin of the request. But, if your app server is sitting behind a proxy, the Host header value is most likely changed by the proxy to the target origin of the URL behind the proxy, which is different than the original URL. This modified Host header origin won't match the source origin in the original Origin or Referer headers.&lt;br /&gt;
&lt;br /&gt;
* '''Use the X-Forwarded-Host header value:''' To avoid the issue of the proxy altering the Host header, there is another header called X-Forwarded-Host, whose purpose is to contain the original Host header value the proxy received. Most proxies will pass along the original Host header value in the X-Forwarded-Host header, so that header value is likely to be the target origin value you need to compare to the source origin in the Origin or Referer header.&lt;br /&gt;
&lt;br /&gt;
Earlier versions of this cheat sheet treated this mitigation as a primary defense. For the reasons mentioned below, it has now been moved to the Defense-in-Depth section.&lt;br /&gt;
&lt;br /&gt;
Implicitly, this mitigation only works when the Origin or Referer headers are present in requests. Though these headers are included the '''majority''' of the time, there are a few use cases where they are not (most of them legitimate, to safeguard user privacy or to accommodate browser ecosystems). The following lists some use cases:&lt;br /&gt;
* Internet Explorer 11 does not add the Origin header on a CORS request across sites of a trusted zone. The Referer header will remain the only indication of the UI origin. See the following references in stackoverflow [https://stackoverflow.com/questions/20784209/internet-explorer-11-does-not-add-the-origin-header-on-a-cors-request here] and [https://github.com/silverstripe/silverstripe-graphql/issues/118 here].&lt;br /&gt;
* In an instance following a [https://stackoverflow.com/questions/22397072/are-there-any-browsers-that-set-the-origin-header-to-null-for-privacy-sensitiv 302 redirect cross-origin], Origin is not included in the redirected request because that may be considered sensitive information that should not be sent to the other origin.&lt;br /&gt;
* There are some [https://wiki.mozilla.org/Security/Origin#Privacy-Sensitive_Contexts privacy contexts] where Origin is set to “null”. For example, see the search results [https://www.google.com/search?q=origin+header+sent+null+value+site%3Astackoverflow.com&amp;amp;oq=origin+header+sent+null+value+site%3Astackoverflow.com here].&lt;br /&gt;
* The Origin header is included in all cross-origin requests, but for same-origin requests most browsers only include it in POST/DELETE/PUT requests. '''Note:''' Although it is not ideal, many developers use GET requests to do state-changing operations.&lt;br /&gt;
* The Referer header is no exception: there are multiple use cases where it is omitted as well ([https://stackoverflow.com/questions/6880659/in-what-cases-will-http-referer-be-empty &amp;lt;nowiki&amp;gt;[1]&amp;lt;/nowiki&amp;gt;], [https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer &amp;lt;nowiki&amp;gt;[2]&amp;lt;/nowiki&amp;gt;], [https://en.wikipedia.org/wiki/HTTP_referer#Referer_hiding &amp;lt;nowiki&amp;gt;[3]&amp;lt;/nowiki&amp;gt;], [https://seclab.stanford.edu/websec/csrf/csrf.pdf &amp;lt;nowiki&amp;gt;[4]&amp;lt;/nowiki&amp;gt;] and [https://www.google.com/search?q=referer+header+sent+null+value+site:stackoverflow.com &amp;lt;nowiki&amp;gt;[5]&amp;lt;/nowiki&amp;gt;]). Load balancers, proxies and embedded network devices are also well known to strip the Referer header for privacy reasons before logging requests.&lt;br /&gt;
&lt;br /&gt;
Though exceptions can be written for the above cases in your source and target origin check logic, there is currently no central repository referencing all such use cases (and even if there were one, keeping it up to date would be a problem). Each browser might also handle these use cases differently, tuned to its own ecosystem; the IE example of not sending the Origin header within a trusted zone is one such case. Rejecting requests that do not contain Origin and/or Referer headers might sound like a good idea, but it can impact legitimate users. Keeping this system in monitoring mode, investigating use cases such as those stated above, and then adding them to your exception logic is a process you may consider to make this defense more stable in your environment.&lt;br /&gt;
&lt;br /&gt;
This CSRF defense also relies on browser behavior that can change over time, for example when new privacy contexts are introduced, in which case you have to keep your validation logic updated. With token-based mitigation, by contrast, you have full control over the CSRF mitigation: for browsers to alter CSRF tokens, they would literally have to change the HTML content of rendered pages (which no browser would ever want to do!).&lt;br /&gt;
&lt;br /&gt;
When there is an XSS vulnerability on a page of an application protected only with Origin and/or Referer header checks, the level of effort required to exploit state-changing operations (those typically vulnerable to CSRF) on other pages is very low: grab the parameters and forge a request, since the Origin and Referer headers are included by default by the browser. Compare this to token-based mitigation, where the attacker needs to download the target page, parse the DOM for the token, construct a forged request, and send it to the server.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Although the concept of an origin header stemmed from [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper that references robust CSRF defenses, the initial [https://tools.ietf.org/html/rfc6454 origin header RFC] does not reference mitigating CSRF in any way (another [https://tools.ietf.org/id/draft-abarth-origin-03.html draft version] does, however).&lt;br /&gt;
&lt;br /&gt;
=== Double Submit Cookie ===&lt;br /&gt;
If maintaining the state for a CSRF token on the server side is problematic, an alternative defense is the double submit cookie technique. This technique is easy to implement and is stateless. In it, we send a random value in both a cookie and as a request parameter, and the server verifies that the cookie value and request value match. When a user visits (even before authenticating, to prevent login CSRF), the site should generate a (cryptographically strong) pseudorandom value and set it as a cookie on the user's machine, separate from the session identifier. The site then requires that every transaction request include this pseudorandom value as a hidden form value (or other request parameter/header). If both match on the server side, the server accepts the request as legitimate; if they don’t, it rejects the request.&lt;br /&gt;
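The issue/verify steps can be sketched in Python as follows (names are illustrative; the framework-specific cookie and parameter plumbing is omitted):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of the double submit cookie technique: the value set in the
# cookie must match the value submitted as a request parameter.
import hmac
import secrets

def issue_double_submit_cookie():
    # Set this as a cookie distinct from the session identifier.
    return secrets.token_urlsafe(32)

def validate_double_submit(cookie_value, param_value):
    if not cookie_value or not param_value:
        return False
    # Constant-time comparison of cookie and request parameter values.
    return hmac.compare_digest(cookie_value, param_value)
```
No server-side storage is needed; the server only compares the two values it receives, which is what makes the technique stateless.&lt;br /&gt;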
&lt;br /&gt;
There’s a belief that this technique would work because a cross origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy. This means that while an attacker can force a victim to send any value with a malicious CSRF request, the attacker will be unable to modify or read the value stored in the cookie (with which the server compares the token value).&lt;br /&gt;
&lt;br /&gt;
There are a couple of drawbacks to the assumptions made here, namely the problems of trusting subdomains and of properly configuring the whole site to accept HTTPS connections only. The [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf Blackhat talk] by Rich Lundeen references these drawbacks:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;''With double submit, if an attacker can write a cookie they can obviously defeat the protection. And again, writing cookies is significantly easier then reading them. The fact that cookies can be written is difficult for many people to understand. After all, doesn't the same origin policy specify that one domain cannot access cookies from another domain? However, there are two common scenarios where writing cookies across domains is possible:''&lt;br /&gt;
&lt;br /&gt;
''a) While it's true that hellokitty.marketing.example.com cannot read cookies or access the DOM from secure.example.com because of the same origin policy, hellokitty.marketing.example.com can write cookies to the parent domain (example.com), and these cookies are then consumed by secure.example.com (secure.example.com has no good way to distinguish which site set the cookie). Additionally, there are methods of forcing secure.example.com to always accept your cookie first. What this means is that XSS in hellokitty.marketing.example.com is able to overwrite cookies in secure.example.com.''&lt;br /&gt;
&lt;br /&gt;
''b) If an attacker is in the middle, they can usually force a request to the same domain over HTTP. If an application is hosted at &amp;lt;nowiki&amp;gt;https://secure.example.com&amp;lt;/nowiki&amp;gt;, even if the cookies are set with the secure flag, a man in the middle can force connections to &amp;lt;nowiki&amp;gt;http://secure.example.com&amp;lt;/nowiki&amp;gt; and set (overwrite) any arbitrary cookies (even though the secure flag prevents the attacker from reading those cookies). Even if the HSTS header is set on the server and the browser visiting the site supports HSTS (this would prevent a man in the middle from forcing plain text HTTP requests) unless the HSTS header is set in a way that includes all subdomains, a man in the middle can simply force a request to a separate subdomain and overwrite cookies similar to 1. In other words, as long as &amp;lt;nowiki&amp;gt;http://hellokitty.marketing.example.com&amp;lt;/nowiki&amp;gt; doesn't force HTTPS, then an attacker can overwrite cookies on any example.com subdomain.''&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So, unless you are sure that your subdomains are fully secured and only accept HTTPS connections (we believe it’s difficult to guarantee at large enterprises), you should not rely on the Double Submit Cookie technique as a primary mitigation for CSRF.&lt;br /&gt;
&lt;br /&gt;
A variant of double submit cookie that can mitigate both of the risks mentioned above is to include the token in an encrypted cookie - often within the authentication cookie - and then, at the server side, match it (after decrypting the authentication cookie) against the token in the hidden form field or the parameter/header for ajax calls. This works because a subdomain has no way to overwrite a properly crafted encrypted cookie without the necessary information, such as the encryption key.&lt;br /&gt;
&lt;br /&gt;
=== SameSite Cookie Attribute ===&lt;br /&gt;
SameSite is a cookie attribute (similar to [[HttpOnly]], Secure etc.) introduced by Google to mitigate CSRF attacks. It is defined in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 this] Internet Draft. This attribute prevents the browser from sending the cookie along with cross-site requests. Possible values for this attribute are lax or strict.&lt;br /&gt;
&lt;br /&gt;
The strict value will prevent the cookie from being sent by the browser to the target site in all cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or in an email, GitHub will not receive the session cookie and the user will not be able to access the project. A bank website, however, most likely doesn't want to allow any transactional pages to be linked from external sites, so the strict flag would be most appropriate there.&lt;br /&gt;
&lt;br /&gt;
The default lax value provides a reasonable balance between security and usability for websites that want to maintain a user's logged-in session after the user arrives from an external link. In the above GitHub scenario, the session cookie would be allowed when following a regular link from an external website, while being blocked in CSRF-prone request methods such as POST. The only cross-site requests allowed in lax mode are top-level navigations that also use “safe” HTTP methods (more details [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7.1 here]).&lt;br /&gt;
&lt;br /&gt;
Example of cookies using this attribute:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Strict&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Lax&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Browser support for this attribute is increasing, but some browsers have yet to adopt it. As of August 2018, the SameSite attribute was supported by browsers used by 68.92% of Internet users (detailed statistics are [https://caniuse.com/#feat=same-site-cookie-attribute here]).&lt;br /&gt;
&lt;br /&gt;
Though this technique seems to be efficient in mitigating CSRF attacks, it is still in its early stages (in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft]) and does not yet have full browser support, as mentioned above. Google’s [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft] also mentions a couple of cases where forged requests can be simulated by attackers as same-site requests (thus causing SameSite cookies to be sent).&lt;br /&gt;
&lt;br /&gt;
Considering the factors above, SameSite is not recommended as a primary defense. Google agrees with this stance and strongly encourages developers to deploy server-side defenses such as tokens to mitigate CSRF more fully.&lt;br /&gt;
&lt;br /&gt;
=== Use of Custom Request Headers ===&lt;br /&gt;
&lt;br /&gt;
Adding CSRF tokens, a double submit cookie and value, an encrypted token, or another defense that involves changing the UI can frequently be complex or otherwise problematic. An alternative defense that is particularly well suited for AJAX/XHR endpoints is the use of a custom request header. This defense relies on the [https://en.wikipedia.org/wiki/Same-origin_policy same-origin policy (SOP)] restriction that a custom header can only be added by JavaScript, and only within its origin. By default, browsers do not allow JavaScript to make cross origin requests.&lt;br /&gt;
&lt;br /&gt;
A particularly attractive custom header and value to use is “X-Requested-With: XMLHttpRequest” because most JavaScript libraries already add this header to requests they generate by default. However, some do not. For example, AngularJS used to, but does not anymore. For more information, see [https://github.com/angular/angular.js/commit/3a75b1124d062f64093a90b26630938558909e8d their rationale] and how to add it back.&lt;br /&gt;
&lt;br /&gt;
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints in order to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive to REST services. You can always add your own custom header and value if that is preferred.&lt;br /&gt;
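The verification described above reduces to a simple predicate on the request method and headers. A minimal sketch follows; the class name and the list of methods treated as safe are illustrative assumptions that should be adjusted to your endpoints:&lt;br /&gt;

```java
import java.util.Map;

// Hypothetical check: safe methods pass through; state-changing methods must
// carry the custom header that a cross-origin page cannot set without
// triggering a CORS pre-flight.
public class CustomHeaderCheck {
    /** Return true when the request may proceed. */
    public static boolean allow(String method, Map<String, String> headers) {
        if ("GET".equals(method) || "HEAD".equals(method) || "OPTIONS".equals(method)) {
            return true;
        }
        return "XMLHttpRequest".equals(headers.get("X-Requested-With"));
    }
}
```

A servlet filter would apply this predicate before `chain.doFilter` and respond 403 when it returns false.&lt;br /&gt;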
&lt;br /&gt;
This defense technique is specifically discussed in section 4.3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. However, bypasses of this defense using Flash were documented as early as 2008, and again as recently as 2015, when Mathias Karlsson used one to [https://hackerone.com/reports/44146 exploit a CSRF flaw in Vimeo]. A Flash attack can't spoof the Origin or Referer headers, so we believe that checking both of them should prevent Flash-based bypasses of this defense (should any come up in the future). &lt;br /&gt;
&lt;br /&gt;
Besides possible future bypasses such as Flash, using a static header value makes other state-changing operations in the application easier to exploit (for the same reason, explained earlier, that origin/referer header checks are easier to bypass than token based mitigation). Including a random token instead of a static header value is more or less equivalent to the token based approach described in the Primary Defenses section. Developers should also consider that in an application with both AJAX calls and form tags, this technique protects only the AJAX calls from CSRF; &amp;lt;form&amp;gt; tags would still need to be protected with approaches described in this document, such as tokens, because setting custom headers on form submissions is not directly possible. Finally, the CORS configuration must also be robust for this solution to work effectively (custom headers on requests coming from other domains trigger a pre-flight CORS check).&lt;br /&gt;
&lt;br /&gt;
=== User Interaction Based CSRF Defense ===&lt;br /&gt;
&lt;br /&gt;
While none of the techniques referenced above requires user interaction, sometimes it’s easier or more appropriate to involve the user in the transaction to prevent unauthorized operations (forged via CSRF or otherwise). The following are some examples of techniques that can act as strong CSRF defenses when implemented correctly.&lt;br /&gt;
* Re-Authentication (password or stronger)&lt;br /&gt;
* One-time Token&lt;br /&gt;
* CAPTCHA&lt;br /&gt;
While these are very strong CSRF defenses, they have a significant impact on the user experience. For applications that need high security for some operations (password change, money transfer etc.), these techniques should be used along with token based mitigation. Please note that tokens by themselves can mitigate CSRF; developers should use these techniques only to achieve additional security for their highly sensitive operations.&lt;br /&gt;
&lt;br /&gt;
=== Login CSRF ===&lt;br /&gt;
Most developers tend to ignore CSRF vulnerabilities on login forms, assuming that CSRF is not applicable there because the user is not authenticated at that stage. That assumption is false. A CSRF vulnerability can still occur on login forms where the user is not authenticated, but its impact/risk profile is quite different from that of a general CSRF vulnerability (where a user is authenticated).&lt;br /&gt;
&lt;br /&gt;
With a CSRF vulnerability on the login form, an attacker can log a victim into an attacker-controlled account and then learn about the victim's behavior from their searches. For more information about login CSRF and other risks, see section 3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper.&lt;br /&gt;
&lt;br /&gt;
Login CSRF can be mitigated by creating pre-sessions (sessions before a user is authenticated) and including tokens in the login form. You can use any of the techniques mentioned above to generate tokens. Pre-sessions can be transitioned to real sessions once the user is authenticated. This technique is described in [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery, section 4.1].&lt;br /&gt;
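The pre-session idea can be sketched as follows, with hypothetical class and method names: a token is bound to an anonymous pre-session that serves the login form, validated when the login is submitted, and consumed so the authenticated session starts with a fresh token:&lt;br /&gt;

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical pre-session token store for protecting login forms from CSRF.
public class PreSessionTokens {
    private static final SecureRandom RANDOM = new SecureRandom();
    private final Map<String, String> tokensByPreSessionId = new ConcurrentHashMap<>();

    /** Issue a CSRF token bound to an anonymous pre-session serving the login form. */
    public String issue(String preSessionId) {
        byte[] raw = new byte[32];
        RANDOM.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        tokensByPreSessionId.put(preSessionId, token);
        return token;
    }

    /**
     * Validate the token submitted with the login form; on success the entry is
     * removed so the authenticated (real) session can be issued a fresh token.
     */
    public boolean validateAndConsume(String preSessionId, String submitted) {
        String expected = tokensByPreSessionId.get(preSessionId);
        if (expected == null || !expected.equals(submitted)) return false;
        tokensByPreSessionId.remove(preSessionId);
        return true;
    }
}
```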
&lt;br /&gt;
If sub-domains under your master domain are treated as untrusted in your threat model, login CSRF is difficult to mitigate. In such cases, strict subdomain and path level validation of the referer header (detailed in section 6.1) can mitigate CSRF on login forms to an extent; this is workable because most login pages are served over HTTPS (so the referer is not stripped) and are also linked from home pages.&lt;br /&gt;
&lt;br /&gt;
== Not So Popular CSRF Mitigations ==&lt;br /&gt;
&lt;br /&gt;
=== Triple Submit Cookie ===&lt;br /&gt;
This mitigation was proposed by [https://www.owasp.org/images/e/e6/AppSecEU2012_Wilander.pdf John Wilander in 2012 at OWASP Appsec Research]. The technique adds an additional step to the double submit cookie approach by verifying whether the request contains two cookies with the same name (note that an attacker needs to write an additional cookie to bypass the double submit cookie mitigation). Though it mitigates the double submit cookie bypasses discussed above, it introduces new problems such as cookie jar overflow (more details [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf here] and [https://webstersprodigy.net/2012/08/03/analysis-of-john-wilanders-triple-submit-cookies/ here]). We have not been able to find any real-world implementations of this mitigation so far.&lt;br /&gt;
&lt;br /&gt;
=== Content-Type Header Validation ===&lt;br /&gt;
This technique is better known than the triple submit cookie mitigation. First of all, this header is not designed for security (initial RFC [https://tools.ietf.org/html/rfc1049 here], later well-defined in [https://www.ietf.org/rfc/rfc2045.txt this] RFC) but only to let receiving agents know the type of data they are handling, so that they can invoke the corresponding parsers. What gets treated as a CSRF mitigation is the pre-flighting behavior attached to this header (browsers pre-flight a cross-origin request if the header has a value other than application/x-www-form-urlencoded, multipart/form-data, or text/plain): all requests are forced to use a header value that triggers a pre-flight (such as application/json), and the server side can then reject cross-origin requests with CORS/SOP during this pre-flight.&lt;br /&gt;
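For illustration only, the server-side part of this check amounts to rejecting any request whose media type would not force a pre-flight. The class name and the required value (application/json) are illustrative assumptions:&lt;br /&gt;

```java
// Sketch: accept only requests whose Content-Type forces a CORS pre-flight.
public class ContentTypeCsrfCheck {
    /** Return true when the Content-Type is the pre-flight-forcing value we require. */
    public static boolean allow(String contentType) {
        if (contentType == null) return false;
        // Strip any parameters such as "; charset=UTF-8" before comparing
        String mediaType = contentType.split(";", 2)[0].trim().toLowerCase();
        return mediaType.equals("application/json");
    }
}
```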
&lt;br /&gt;
This approach has two main problems. One that it would mandate all requests to have a header value that would force pre-flight despite the real use case and the other that this technique is relying on a feature that is not designed for security, to mitigate a security vulnerability. When a bug was discovered in the Chrome API, browser architects even considered to removing this pre-flighting behavior. Because this header was not designed as a security control, architects can re-design it to better cater its primary purpose. In the future, there’s a possibility that new content-type header types can be included (to better support various use-cases), which can put systems relying on this header for CSRF mitigation in trouble. For more information, see [https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2017/september/common-csrf-prevention-misconceptions/ Common CSRF Prevention Misconceptions].&lt;br /&gt;
&lt;br /&gt;
== CSRF Mitigation Myths ==&lt;br /&gt;
The following techniques are commonly presumed to be CSRF mitigations, but none of them actually or fully mitigates a CSRF vulnerability.&lt;br /&gt;
* '''CORS''': CORS is a set of headers designed to relax the Same-Origin-Policy when cross-origin communication between sites is required. It is not designed to prevent CSRF attacks, and it does not prevent them.&lt;br /&gt;
* '''Using HTTPS''': Using HTTPS has nothing to do with the protection from CSRF attacks. Resources that are under HTTPS are still vulnerable to CSRF if proper CSRF mitigations described above are not included.&lt;br /&gt;
* More myths can be found [[Cross-Site Request Forgery (CSRF)|here]]&lt;br /&gt;
&lt;br /&gt;
== Implementation reference example  ==&lt;br /&gt;
The following JEE web filter provides a reference example for some of the concepts described in this cheat sheet. It implements the following stateless mitigations ([https://github.com/aramrami/OWASP-CSRFGuard OWASP CSRFGuard] covers a stateful approach).&lt;br /&gt;
* Verifying same origin with standard headers&lt;br /&gt;
* Double submit cookie&lt;br /&gt;
* SameSite cookie attribute&lt;br /&gt;
'''Please note''' that it only acts as a reference sample and is not complete (for example: it doesn’t have a block to direct the control flow when the origin and referer header check succeeds, nor does it have port/host/protocol level validation for the referer header). Developers are recommended to build their complete mitigation on top of this reference sample. Developers should also implement standard authentication or authorization checks before checking for CSRF.&lt;br /&gt;
&lt;br /&gt;
Source is also located [https://github.com/righettod/poc-csrf here] and provides a runnable POC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.slf4j.Logger;&lt;br /&gt;
import org.slf4j.LoggerFactory;&lt;br /&gt;
&lt;br /&gt;
import javax.servlet.Filter;&lt;br /&gt;
import javax.servlet.FilterChain;&lt;br /&gt;
import javax.servlet.FilterConfig;&lt;br /&gt;
import javax.servlet.ServletException;&lt;br /&gt;
import javax.servlet.ServletRequest;&lt;br /&gt;
import javax.servlet.ServletResponse;&lt;br /&gt;
import javax.servlet.annotation.WebFilter;&lt;br /&gt;
import javax.servlet.http.Cookie;&lt;br /&gt;
import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
import javax.servlet.http.HttpServletResponseWrapper;&lt;br /&gt;
import javax.xml.bind.DatatypeConverter;&lt;br /&gt;
import java.io.IOException;&lt;br /&gt;
import java.net.MalformedURLException;&lt;br /&gt;
import java.net.URL;&lt;br /&gt;
import java.security.SecureRandom;&lt;br /&gt;
import java.util.Arrays;&lt;br /&gt;
&lt;br /&gt;
/**&lt;br /&gt;
 * Filter in charge of validating each incoming HTTP request about Headers and CSRF token.&lt;br /&gt;
 * It is called for all requests to backend destination.&lt;br /&gt;
 *&lt;br /&gt;
 * We use the approach in which:&lt;br /&gt;
 * - The CSRF token is changed after each valid HTTP exchange&lt;br /&gt;
 * - The custom Header name for the CSRF token transmission is fixed&lt;br /&gt;
 * - A CSRF token is associated with a backend service URI in order to support multiple parallel Ajax requests from the same application&lt;br /&gt;
 * - The CSRF cookie name is the backend service name prefixed with a fixed prefix&lt;br /&gt;
 *&lt;br /&gt;
 * Here for the POC we show the &amp;quot;access denied&amp;quot; reason in the response, but in production code return only a generic message!&lt;br /&gt;
 *&lt;br /&gt;
 * @see &amp;quot;https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://wiki.mozilla.org/Security/Origin&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://chloe.re/2016/04/13/goodbye-csrf-samesite-to-the-rescue/&amp;quot;&lt;br /&gt;
 */&lt;br /&gt;
@WebFilter(&amp;quot;/backend/*&amp;quot;)&lt;br /&gt;
public class CSRFValidationFilter implements Filter {&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * JVM param name used to define the target origin&lt;br /&gt;
     */&lt;br /&gt;
    public static final String TARGET_ORIGIN_JVM_PARAM_NAME = &amp;quot;target.origin&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Name of the custom HTTP header used to transmit the CSRF token and also to prefix &lt;br /&gt;
     * the CSRF cookie for the expected backend service&lt;br /&gt;
     */&lt;br /&gt;
    private static final String CSRF_TOKEN_NAME = &amp;quot;X-TOKEN&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Logger&lt;br /&gt;
     */&lt;br /&gt;
    private static final Logger LOG = LoggerFactory.getLogger(CSRFValidationFilter.class);&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Application expected deployment domain: named &amp;quot;Target Origin&amp;quot; in OWASP CSRF article&lt;br /&gt;
     */&lt;br /&gt;
    private URL targetOrigin;&lt;br /&gt;
&lt;br /&gt;
    /***&lt;br /&gt;
     * Secure generator&lt;br /&gt;
     */&lt;br /&gt;
    private final SecureRandom secureRandom = new SecureRandom();&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {&lt;br /&gt;
        HttpServletRequest httpReq = (HttpServletRequest) request;&lt;br /&gt;
        HttpServletResponse httpResp = (HttpServletResponse) response;&lt;br /&gt;
        String accessDeniedReason;&lt;br /&gt;
&lt;br /&gt;
        /* STEP 1: Verifying Same Origin with Standard Headers */&lt;br /&gt;
        //Try to get the source from the &amp;quot;Origin&amp;quot; header&lt;br /&gt;
        String source = httpReq.getHeader(&amp;quot;Origin&amp;quot;);&lt;br /&gt;
        if (this.isBlank(source)) {&lt;br /&gt;
            //If empty then fallback on &amp;quot;Referer&amp;quot; header&lt;br /&gt;
            source = httpReq.getHeader(&amp;quot;Referer&amp;quot;);&lt;br /&gt;
            //If this one is empty too then we trace the event and we block the request (recommendation of the article)...&lt;br /&gt;
            if (this.isBlank(source)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: ORIGIN and REFERER request headers are both absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
                return;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //Compare the source against the expected target origin&lt;br /&gt;
        URL sourceURL = new URL(source);&lt;br /&gt;
        if (!this.targetOrigin.getProtocol().equals(sourceURL.getProtocol()) || !this.targetOrigin.getHost().equals(sourceURL.getHost()) &lt;br /&gt;
		|| this.targetOrigin.getPort() != sourceURL.getPort()) {&lt;br /&gt;
            //One of the parts does not match, so we trace the event and we block the request&lt;br /&gt;
            accessDeniedReason = String.format(&amp;quot;CSRFValidationFilter: Protocol/Host/Port do not fully match so we block the request! (%s != %s) &amp;quot;, &lt;br /&gt;
				this.targetOrigin, sourceURL);&lt;br /&gt;
            LOG.warn(accessDeniedReason);&lt;br /&gt;
            httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            return;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        /* STEP 2: Verifying CSRF token using &amp;quot;Double Submit Cookie&amp;quot; approach */&lt;br /&gt;
        //If the CSRF token cookie is absent from the request then we provide one in the response and we stop the process at this stage.&lt;br /&gt;
        //This is how the initial token is provided&lt;br /&gt;
        Cookie tokenCookie = null;&lt;br /&gt;
        if (httpReq.getCookies() != null) {&lt;br /&gt;
            String csrfCookieExpectedName = this.determineCookieName(httpReq);&lt;br /&gt;
            tokenCookie = Arrays.stream(httpReq.getCookies()).filter(c -&amp;gt; c.getName().equals(csrfCookieExpectedName)).findFirst().orElse(null);&lt;br /&gt;
        }&lt;br /&gt;
        if (tokenCookie == null || this.isBlank(tokenCookie.getValue())) {&lt;br /&gt;
            LOG.info(&amp;quot;CSRFValidationFilter: CSRF cookie absent or value is null/empty so we provide one and return an HTTP NO_CONTENT response !&amp;quot;);&lt;br /&gt;
            //Add the CSRF token cookie and header&lt;br /&gt;
            this.addTokenCookieAndHeader(httpReq, httpResp);&lt;br /&gt;
            //Set response state to &amp;quot;204 No Content&amp;quot; in order to allow the requester to clearly identify an initial response providing the initial CSRF token&lt;br /&gt;
            httpResp.setStatus(HttpServletResponse.SC_NO_CONTENT);&lt;br /&gt;
        } else {&lt;br /&gt;
            //If the cookie is present then we pass to validation phase&lt;br /&gt;
            //Get token from the custom HTTP header (part under control of the requester)&lt;br /&gt;
            String tokenFromHeader = httpReq.getHeader(CSRF_TOKEN_NAME);&lt;br /&gt;
            //If empty then we trace the event and we block the request&lt;br /&gt;
            if (this.isBlank(tokenFromHeader)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header is absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else if (!tokenFromHeader.equals(tokenCookie.getValue())) {&lt;br /&gt;
                //Verify that token from header and one from cookie are the same&lt;br /&gt;
                //Here is not the case so we trace the event and we block the request&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header and via Cookie are not equal so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else {&lt;br /&gt;
                //Token from header and token from cookie match&lt;br /&gt;
                //So we let the request reach the target component (ServiceServlet, jsp...) and add a new token when the response comes back&lt;br /&gt;
                HttpServletResponseWrapper httpRespWrapper = new HttpServletResponseWrapper(httpResp);&lt;br /&gt;
                chain.doFilter(request, httpRespWrapper);&lt;br /&gt;
                //Add the CSRF token cookie and header&lt;br /&gt;
                this.addTokenCookieAndHeader(httpReq, httpRespWrapper);&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void init(FilterConfig filterConfig) throws ServletException {&lt;br /&gt;
        //To ease the configuration, we load the expected target origin from a JVM property&lt;br /&gt;
        //Reconfiguration only requires an application restart, which is generally acceptable&lt;br /&gt;
        try {&lt;br /&gt;
            this.targetOrigin = new URL(System.getProperty(TARGET_ORIGIN_JVM_PARAM_NAME));&lt;br /&gt;
        } catch (MalformedURLException e) {&lt;br /&gt;
            LOG.error(&amp;quot;Cannot init the filter !&amp;quot;, e);&lt;br /&gt;
            throw new ServletException(e);&lt;br /&gt;
        }&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter init, set expected target origin to '{}'.&amp;quot;, this.targetOrigin);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void destroy() {&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter shutdown&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Check if a string is null or empty (including containing only spaces)&lt;br /&gt;
     *&lt;br /&gt;
     * @param s Source string&lt;br /&gt;
     * @return TRUE if source string is null or empty (including containing only spaces)&lt;br /&gt;
     */&lt;br /&gt;
    private boolean isBlank(String s) {&lt;br /&gt;
        return s == null || s.trim().isEmpty();&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Generate a new CSRF token&lt;br /&gt;
     *&lt;br /&gt;
     * @return The token as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String generateToken() {&lt;br /&gt;
        byte[] buffer = new byte[50];&lt;br /&gt;
        this.secureRandom.nextBytes(buffer);&lt;br /&gt;
        return DatatypeConverter.printHexBinary(buffer);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Determine the name of the CSRF cookie for the targeted backend service&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest Source HTTP request&lt;br /&gt;
     * @return The name of the cookie as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String determineCookieName(HttpServletRequest httpRequest) {&lt;br /&gt;
        String backendServiceName = httpRequest.getRequestURI().replaceAll(&amp;quot;/&amp;quot;, &amp;quot;-&amp;quot;);&lt;br /&gt;
        return CSRF_TOKEN_NAME + &amp;quot;-&amp;quot; + backendServiceName;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Add the CSRF token cookie and header to the provided HTTP response object&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest  Source HTTP request&lt;br /&gt;
     * @param httpResponse HTTP response object to update&lt;br /&gt;
     */&lt;br /&gt;
    private void addTokenCookieAndHeader(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {&lt;br /&gt;
        //Get new token&lt;br /&gt;
        String token = this.generateToken();&lt;br /&gt;
        //Add the cookie manually because the current Cookie class implementation does not support the &amp;quot;SameSite&amp;quot; attribute&lt;br /&gt;
        //We leave the adding of the &amp;quot;Secure&amp;quot; cookie attribute to the reverse proxy rewriting...&lt;br /&gt;
        //Here we lock the cookie against JS access and we use the new SameSite attribute protection&lt;br /&gt;
        String cookieSpec = String.format(&amp;quot;%s=%s; Path=%s; HttpOnly; SameSite=Strict&amp;quot;, this.determineCookieName(httpRequest), token, httpRequest.getRequestURI());&lt;br /&gt;
        httpResponse.addHeader(&amp;quot;Set-Cookie&amp;quot;, cookieSpec);&lt;br /&gt;
        //Add a response header to give the JS code access to the token&lt;br /&gt;
        httpResponse.setHeader(CSRF_TOKEN_NAME, token);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Authors and Primary Editors  ==&lt;br /&gt;
Manideep Konakandla (Amazon Application Security Team) - http://www.manideepk.com&lt;br /&gt;
&lt;br /&gt;
Dave Wichers - dave.wichers[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Paul Petefish - https://www.linkedin.com/in/paulpetefish&lt;br /&gt;
&lt;br /&gt;
Eric Sheridan - eric.sheridan[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Dominique Righetto - dominique.righetto[at]owasp.org&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other Cheat Sheets ==&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;br /&gt;
[[Category:Popular]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=247194</id>
		<title>Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=247194"/>
				<updated>2019-02-06T22:52:16Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
[[Cross-Site Request Forgery (CSRF)]] is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user’s web browser to perform an unwanted action on a trusted site when the user is authenticated. A CSRF attack works because browser requests automatically include any credentials associated with the site, such as the user's session cookie, IP address, etc. Therefore, if the user is authenticated to the site, the site cannot distinguish between a forged request and a legitimate request sent by the victim. Mitigation requires a token/identifier that is not accessible to the attacker and is not sent automatically (the way cookies are) with the forged requests that the attacker initiates. For more information on CSRF, see the OWASP [[Cross-Site Request Forgery (CSRF)|Cross-Site Request Forgery (CSRF) page]].&lt;br /&gt;
&lt;br /&gt;
The impact of a successful CSRF attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or making a purchase with the user’s credentials. In effect, CSRF attacks are used by an attacker to make a target system perform a function via the target's browser, without the user’s knowledge, at least until the unauthorized transaction has been committed.&lt;br /&gt;
&lt;br /&gt;
Impacts of successful CSRF exploits vary greatly based on the privileges of each victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator account, a CSRF attack can compromise the entire web application. Using social engineering, an attacker can embed malicious HTML or JavaScript code into an email or website to request a specific 'task URL'. The task then executes with or without the user's knowledge, either directly or by using a Cross-Site Scripting flaw. For example, see [https://en.wikipedia.org/wiki/Samy_(computer_worm) Samy MySpace Worm].&lt;br /&gt;
&lt;br /&gt;
==Warning: No Cross-Site Scripting (XSS) Vulnerabilities ==&lt;br /&gt;
[[Cross-Site Scripting]] is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat all CSRF mitigation techniques available in the market today (except mitigation techniques that involve user interaction, described later in this cheat sheet). This is because an XSS payload can simply read any page on the site using an XMLHttpRequest (or direct DOM access, if on the same page), obtain the generated token from the response, and include that token with a forged request.  This technique is exactly how the [https://en.wikipedia.org/wiki/Samy_(computer_worm) MySpace (Samy) worm] defeated MySpace's anti-CSRF defenses in 2005, which enabled the worm to propagate.&lt;br /&gt;
&lt;br /&gt;
It is imperative that no XSS vulnerabilities are present to ensure that CSRF defenses can't be circumvented. Please see the OWASP [[XSS (Cross Site Scripting) Prevention Cheat Sheet|XSS Prevention Cheat Sheet]] for detailed guidance on how to prevent XSS flaws.  &lt;br /&gt;
&lt;br /&gt;
== Resources that need to be protected from CSRF attacks ==&lt;br /&gt;
The following list assumes that you are not violating [http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1 RFC2616], section 9.1.1, by using GET requests for state changing operations. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' If for any reason you do violate it, you will also need to protect those resources, which are mostly exposed through the default &amp;lt;code&amp;gt;form tag [GET method]&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;href&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;src&amp;lt;/code&amp;gt; attributes.  &lt;br /&gt;
&lt;br /&gt;
* Form tags with POST &lt;br /&gt;
* Ajax/XHR calls&lt;br /&gt;
&lt;br /&gt;
== CSRF Defense Recommendations Summary ==&lt;br /&gt;
We recommend token based CSRF defense (either stateful or stateless) as a primary defense to mitigate CSRF in your applications. For highly sensitive operations, we also recommend user interaction based protection (either re-authentication or a one-time token, detailed in section 6.5) along with token based mitigation.&lt;br /&gt;
&lt;br /&gt;
As a defense-in-depth measure, consider implementing one mitigation from the Defense-in-Depth Techniques section (choose the mitigation that fits your ecosystem, considering the issues mentioned under each). These defense-in-depth techniques should not be used by themselves (without token based mitigation) to mitigate CSRF in your applications.&lt;br /&gt;
&lt;br /&gt;
== Primary Defense Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Token Based Mitigation ===&lt;br /&gt;
This defense is one of the most popular and recommended methods to mitigate CSRF. It can be achieved either statefully (synchronizer token pattern) or statelessly (encrypted or hash based token pattern). For all of these mitigations, it is implicit that general security principles should be adhered to.&lt;br /&gt;
* Strict key rotation and token lifetime policies should be maintained. Policies can be set according to your organizational needs. Generic key management guidance from OWASP can be found in the [[Key Management Cheat Sheet]].&lt;br /&gt;
&lt;br /&gt;
==== Synchronizer Token Pattern ====&lt;br /&gt;
Any state changing operation requires a secure random token (e.g., a CSRF token) to prevent CSRF attacks. A CSRF token should be unique per user session, be a large random value, and be generated by a cryptographically secure random number generator. The CSRF token is added as a hidden field for forms, as a header or parameter for AJAX calls, and within the URL if the state changing operation occurs via a GET (see the &amp;quot;Disclosure of Token in URL&amp;quot; section below). The server rejects the requested action if the CSRF token fails validation.&lt;br /&gt;
&lt;br /&gt;
In order to facilitate a &amp;quot;transparent but visible&amp;quot; CSRF solution, developers are encouraged to adopt a pattern similar to the [http://www.corej2eepatterns.com/Design/PresoDesign.htm Synchronizer Token Pattern] (the original intention of this pattern was to detect duplicate submissions in forms). The synchronizer token pattern requires the generation of random &amp;quot;challenge&amp;quot; tokens that are associated with the user's current session. These challenge tokens are then inserted within the HTML forms and calls associated with sensitive server-side operations. It is the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired request. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks, since successful exploitation requires the attacker to know the randomly generated token for the target victim's session. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' Unlike cookies, these tokens are not automatically attached by the browser to forged requests made from an attacker's website. &lt;br /&gt;
&lt;br /&gt;
This is analogous to the attacker being able to guess the target victim's session identifier. &lt;br /&gt;
&lt;br /&gt;
The following describes a general approach to incorporate challenge tokens within the request.&lt;br /&gt;
&lt;br /&gt;
When a Web application formulates a request, the application should include a hidden input parameter with a common name such as &amp;quot;CSRFToken&amp;quot; (for forms), or send the token as a header/parameter value (for Ajax calls). The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness in the data that is hashed to generate the random token.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;html&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;form action=&amp;quot;/transfer.do&amp;quot; method=&amp;quot;post&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;hidden&amp;quot; name=&amp;quot;CSRFToken&amp;quot; &lt;br /&gt;
value=&amp;quot;OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWEwYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZMGYwMGEwOA==&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
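As a sketch of the generation step described above (the class and method names here are illustrative, not part of any official OWASP library), a SecureRandom-based token generator might look like this:&lt;br /&gt;

```java
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch: generate a cryptographically strong, URL-safe CSRF token.
public class CsrfTokenGenerator {
    private static final SecureRandom RANDOM = new SecureRandom();

    // 32 random bytes (256 bits) encoded as URL-safe Base64 without padding,
    // suitable for embedding in a hidden form field or request header.
    public static String generateToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```

The URL-safe alphabet avoids characters that would need escaping in form values or headers.&lt;br /&gt;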
&lt;br /&gt;
In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is used for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the token in the request compared to the token found in the user session. If the token was not found within the request, or the value provided does not match the value within the user session, then the request should be aborted, and the event logged as a potential CSRF attack in progress.&lt;br /&gt;
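The server-side verification step can be sketched as follows; this is a minimal illustration with hypothetical names, assuming the session token was stored as described above:&lt;br /&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Minimal sketch of the server-side check: compare the token submitted with the
// request against the token stored in the user's session.
public class CsrfTokenValidator {

    // Returns true only when both tokens are present and equal.
    // MessageDigest.isEqual performs a constant-time comparison, which avoids
    // leaking token bytes through timing differences.
    public static boolean isValid(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) {
            return false; // missing token: treat as a potential CSRF attempt
        }
        return MessageDigest.isEqual(
                sessionToken.getBytes(StandardCharsets.UTF_8),
                requestToken.getBytes(StandardCharsets.UTF_8));
    }
}
```

A failed check should abort the request and log the event, as described above.&lt;br /&gt;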
&lt;br /&gt;
To further enhance the security of this proposed design, consider randomizing the CSRF token parameter name and/or value for each request. Implementing this approach results in the generation of per-request tokens as opposed to per-session tokens. This is more secure than per-session tokens as the time range for an attacker to exploit the stolen tokens is minimal. However, this may result in usability concerns. For example, the &amp;quot;Back&amp;quot; button browser capability is often hindered as the previous page may contain a token that is no longer valid. Interaction with this previous page will result in a CSRF false positive security event at the server. A few applications that need high security (such as banks) typically implement this approach; choose what suits your needs. Regardless of the approach taken, developers are encouraged to protect the CSRF token the same way they protect authenticated session identifiers, such as the use of TLS.&lt;br /&gt;
&lt;br /&gt;
'''Existing Synchronizer Implementations'''&lt;br /&gt;
&lt;br /&gt;
Synchronizer token defenses have been built into many frameworks, so we strongly recommend using them first when they are available. External components that add CSRF defenses to existing applications are also recommended. OWASP has the following: &lt;br /&gt;
&lt;br /&gt;
* For Java: OWASP [[CSRF Guard]]&lt;br /&gt;
* For PHP and Apache: [[CSRFProtector Project]]&lt;br /&gt;
&lt;br /&gt;
'''Disclosure of Token in URL'''&lt;br /&gt;
&lt;br /&gt;
Some implementations of synchronizer tokens include the challenge token in GET (URL) requests as well as POST requests. This is often implemented as a result of sensitive server-side operations being invoked as a result of embedded links in the page or other general design patterns. These patterns are often implemented without knowledge of CSRF and an understanding of CSRF prevention design strategies. While this control does help mitigate the risk of CSRF attacks, the unique per-session token is being exposed for GET requests. CSRF tokens in GET requests are potentially leaked at several locations: browser history, log files, network appliances that make a point to log the first line of an HTTP request, and Referer headers if the protected site links to an external site. In the latter case (leaked CSRF token due to the Referer header being parsed by a linked site), it is trivially easy for the linked site to launch a CSRF attack on the protected site, and they will be able to target this attack very effectively, since the Referer header tells them the site as well as the CSRF token. The attack could be run entirely from JavaScript, so that a simple addition of a script tag to the HTML of a site can launch an attack (whether on an originally malicious site or on a hacked site). Additionally, since HTTPS requests from HTTPS contexts will not strip the Referer header (as opposed to HTTPS to HTTP requests) CSRF token leaks via Referer can still happen on HTTPS Applications.&lt;br /&gt;
&lt;br /&gt;
The ideal solution is to only include the CSRF token in POST requests and modify server-side actions that have a state changing effect to only respond to POST requests. This is in fact what &amp;lt;nowiki&amp;gt;RFC 2616&amp;lt;/nowiki&amp;gt; requires for GET requests. If sensitive server-side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests.&lt;br /&gt;
&lt;br /&gt;
In most JavaEE web applications, however, HTTP method scoping is rarely ever utilized when retrieving HTTP parameters from a request. Calls to &amp;quot;HttpServletRequest.getParameter&amp;quot; will return a parameter value regardless of whether it was a GET or POST. This is not to say HTTP method scoping cannot be enforced. It can be achieved if a developer explicitly overrides doPost() in the HttpServlet class or leverages framework specific capabilities such as the AbstractFormController class in Spring.&lt;br /&gt;
&lt;br /&gt;
For these cases, attempting to retrofit this pattern in existing applications requires significant development time and cost, and as a temporary measure it may be better to pass CSRF tokens in the URL. Once the application has been fixed to respond to HTTP GET and POST verbs correctly, CSRF tokens for GET requests should be turned off.&lt;br /&gt;
&lt;br /&gt;
==== Encryption based Token Pattern ====&lt;br /&gt;
The Encrypted Token Pattern leverages encryption, rather than comparison, as the method of token validation. It is most suitable for applications that do not want to maintain any state on the server side. &lt;br /&gt;
&lt;br /&gt;
After successful authentication, the server generates a unique token composed of the user's ID, a timestamp value and a [http://en.wikipedia.org/wiki/Cryptographic_nonce nonce], using a unique key available only on the server. This token is returned to the client and embedded in a hidden field for forms, or in a request header/parameter for AJAX requests. On receipt of this request, the server reads and decrypts the token value with the same key used to create the token. The inability to correctly decrypt suggests an intrusion attempt. Once decrypted, the UserId and timestamp contained within the token are validated; the UserId is compared against the currently logged in user, and the timestamp is compared against the current time.&lt;br /&gt;
&lt;br /&gt;
On successful token-decryption, the server has access to parsed values, ideally in the form of [http://en.wikipedia.org/wiki/Claims-based_identity claims]. These claims are processed by comparing the UserId claim to any potentially stored UserId (in a Cookie or Session variable, if the site already contains a means of authentication). The Timestamp is validated against the current time, preventing replay attacks. Alternatively, in the case of a CSRF attack, the server will be unable to decrypt the poisoned token, and can block and log the attack.&lt;br /&gt;
&lt;br /&gt;
This technique addresses some of the shortfalls in other stateless approaches, such as the need to store data in a Cookie, circumventing the Cookie-subdomain and [[HttpOnly]] issues. Your solution should use a strong encryption function. We recommend AES256-GCM or stronger.&lt;br /&gt;
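A minimal sketch of this pattern using the JDK's built-in AES-GCM support is shown below. The class and method names are illustrative, and the key is generated in-process purely for demonstration; in practice it would come from server-side key management:&lt;br /&gt;

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch of the encrypted token pattern with AES-GCM (JDK built-ins only).
public class EncryptedTokenPattern {
    private static final SecureRandom RANDOM = new SecureRandom();

    // The key would live in server-side key management; generated here only for
    // illustration.
    public static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(256);
            return kg.generateKey();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Token plaintext is userId|timestamp|nonce; the output is IV || ciphertext.
    public static String createToken(SecretKey key, String userId) {
        try {
            String nonce = Long.toHexString(RANDOM.nextLong());
            String plaintext = userId + "|" + System.currentTimeMillis() + "|" + nonce;
            byte[] iv = new byte[12];              // 96-bit IV, fresh for every token
            RANDOM.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = cipher.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getUrlEncoder().withoutPadding().encodeToString(out);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Failure to decrypt (wrong key, tampering) is treated as an intrusion attempt.
    public static boolean validateToken(SecretKey key, String token,
                                        String expectedUserId, long maxAgeMillis) {
        try {
            byte[] in = Base64.getUrlDecoder().decode(token);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, in, 0, 12));
            byte[] pt = cipher.doFinal(in, 12, in.length - 12);
            String[] parts = new String(pt, StandardCharsets.UTF_8).split("\\|");
            long age = System.currentTimeMillis() - Long.parseLong(parts[1]);
            return parts[0].equals(expectedUserId) && age >= 0 && age <= maxAgeMillis;
        } catch (Exception e) {
            return false; // cannot decrypt/parse: treat as a CSRF intrusion attempt
        }
    }
}
```

GCM authenticates as well as encrypts, so any tampering with the token causes decryption to fail rather than yield garbage claims.&lt;br /&gt;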
&lt;br /&gt;
==== HMAC Based Token Pattern ====&lt;br /&gt;
[https://en.wikipedia.org/wiki/HMAC HMAC (hash-based message authentication code)] is a cryptographic function that helps to guarantee integrity and authentication of a message. HMAC Tokens can be used as a CSRF mitigation technique without requiring server side state. It is similar to the encryption token-based pattern with two main differences:&lt;br /&gt;
* Uses a strong HMAC function instead of an encryption function to generate the token&lt;br /&gt;
* Includes an additional field called ‘operation’ that indicates the purpose of the operation for which you are including the CSRF token (whether a form tag or an ajax call) &lt;br /&gt;
(Ex: ‘oneclickpurchase’ (or) buy/asin=SDFH&amp;amp;category=2&amp;amp;quantity=3)&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Fields mentioned in encryption token pattern (user's ID, a timestamp value and a nonce) are included. &lt;br /&gt;
&lt;br /&gt;
The operation field helps mitigate the fact that an HMAC function generates the same output for the same input on every invocation (unlike strong encryption functions, which produce a different ciphertext each time). Including it helps avoid repeated token values across your application. The nonce field serves the same purpose as in the encrypted token pattern (i.e., to avoid rare collisions due to weak cryptographic functions, and as a defense-in-depth measure). &lt;br /&gt;
&lt;br /&gt;
Generate the token using HMAC over all four fields mentioned previously (user's ID, a timestamp value, nonce, and operation) and then include it in hidden fields for form tags, or in headers/parameters for ajax calls. Once you receive the HMAC from the client in a request, re-generate the HMAC with the same fields you used to generate it, and then verify that the re-generated HMAC matches the HMAC received from the client. If it does, it is a legitimate user request; if it does not, flag it as a CSRF intrusion and alert your incident response teams. Because an attacker has no visibility into the key used to generate the HMAC, there is no way for them to re-generate the token for use in a forged request.&lt;br /&gt;
&lt;br /&gt;
Your solution should use a strong HMAC function. We recommend SHA256/512 or stronger.&lt;br /&gt;
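A minimal HMAC-SHA256 sketch of this pattern, using only JDK built-ins (class and field layout are illustrative):&lt;br /&gt;

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

// Minimal sketch of the HMAC based token pattern. The token covers the four
// fields described above: userId, timestamp, nonce, and operation.
public class HmacTokenPattern {

    // HMAC-SHA256 over the four fields, joined with a delimiter.
    public static String hmac(byte[] key, String userId, long timestamp,
                              String nonce, String operation) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            String msg = userId + "|" + timestamp + "|" + nonce + "|" + operation;
            byte[] tag = mac.doFinal(msg.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(tag);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Re-generate the HMAC from the fields received with the request and compare
    // it with the client-supplied value in constant time.
    public static boolean verify(byte[] key, String userId, long timestamp,
                                 String nonce, String operation, String received) {
        String expected = hmac(key, userId, timestamp, nonce, operation);
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                received.getBytes(StandardCharsets.UTF_8));
    }
}
```

Note how changing any one field, including the operation, yields a different token, which is exactly the per-operation binding described above.&lt;br /&gt;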
&lt;br /&gt;
=== Auto CSRF Mitigation Techniques ===&lt;br /&gt;
Though token based mitigation is widely used (stateful with the synchronizer token and stateless with the encrypted/HMAC token), the major problem associated with these techniques is the human tendency to forget things. If a developer forgets to add the token to any state changing operation, the application is left vulnerable to CSRF. To avoid this, you can automate the process of adding tokens to CSRF vulnerable resources (mentioned earlier in this document). You can achieve this by doing the following:&lt;br /&gt;
* Write wrappers (that would auto add tokens when used) around default form tags/ajax calls and educate your developers to use those wrappers instead of standard tags. Though this approach is better than depending purely on developers to add tokens, it is still vulnerable to the human tendency to forget things. [https://docs.spring.io/spring-security/site/docs/3.2.0.CI-SNAPSHOT/reference/html/csrf.html Spring Security] uses this technique to add CSRF tokens by default when a custom &amp;lt;form:form&amp;gt; tag is used; you can opt to use it after verifying that it is enabled and properly configured in the Spring Security version you are using.&lt;br /&gt;
* Write a hook (that would capture the traffic and add tokens to CSRF vulnerable resources before rendering to customers) in your organizational web rendering frameworks. Because it is hard to analyze when a particular response is doing any state change (and thus needing a token), you might want to include tokens in all CSRF vulnerable resources (ex: include tokens in all POST responses). This is one recommended approach, but you need to consider the performance costs it might incur.&lt;br /&gt;
* Get the tokens automatically added on the client side when the page is being rendered in the user’s browser, with the help of a client side script (this approach is used by [[CSRF Guard]]). You need to consider possible JavaScript hijacking attacks.&lt;br /&gt;
We recommend researching whether the framework you are using has an option to achieve CSRF protection by default before trying to build a custom auto tokening system. For example, .NET has [https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery?view=aspnetcore-2.1 in-built protection] that adds tokens to CSRF vulnerable resources. You are responsible for proper configuration (such as key management and token management) before using these in-built CSRF protections that auto-add tokens to CSRF vulnerable resources.&lt;br /&gt;
&lt;br /&gt;
=== Stateless/Tokenless Defense Techniques ===&lt;br /&gt;
&lt;br /&gt;
Given the popularity of REST services (which are stateless) and the desire to implement the minimum changes required to defend against CSRF attacks, there are a couple of techniques you can rely on to verify that a request is not cross origin with minimal changes to your application. They are:&lt;br /&gt;
&lt;br /&gt;
* Verify presence of &amp;quot;X-Requested-With: XMLHttpRequest&amp;quot; Header and value&lt;br /&gt;
* Verify it's NOT a &amp;quot;Simple HTTP Request&amp;quot;&lt;br /&gt;
&lt;br /&gt;
If these techniques don't defend every state changing endpoint in your application, you might be able to fill the gap with another stateless/tokenless technique described later: &amp;quot;Verifying origin with standard headers&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
==== X-Requested-With: XMLHttpRequest Header Verification ====&lt;br /&gt;
&lt;br /&gt;
This is a specific instance of the Custom Request Header Defense which is described more fully later. This HTTP header name/value pair is particularly attractive because most JavaScript libraries already add this header by default to the requests they generate. If you have built, or plan to build, a pure AJAX/REST web app, then all you have to do is verify server-side the presence of this header name/value pair on all POST requests, and you are done. If your AJAX calls don't include this (or a similar) header, you'll have to tweak your JavaScript framework to add this custom header.&lt;br /&gt;
&lt;br /&gt;
This defense works because only JavaScript can add custom headers to an HTTP request, and browsers only allow JavaScript to add custom headers to same-origin requests (a cross-origin request carrying a custom header triggers a CORS preflight, which fails unless the server explicitly allows it).&lt;br /&gt;
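A minimal server-side sketch of this check is shown below; headers are modeled as a plain Map for illustration, whereas in a servlet you would call request.getHeader(&amp;quot;X-Requested-With&amp;quot;):&lt;br /&gt;

```java
import java.util.Map;

// Minimal sketch: reject state-changing POST requests that lack the custom
// header most JavaScript libraries add by default.
public class XhrHeaderCheck {
    public static boolean isAllowed(String method, Map<String, String> headers) {
        if (!"POST".equals(method)) {
            return true; // GET/HEAD must be side-effect free by design
        }
        return "XMLHttpRequest".equals(headers.get("X-Requested-With"));
    }
}
```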
&lt;br /&gt;
==== Not a Simple HTTP Request Verification ====&lt;br /&gt;
&lt;br /&gt;
A Simple HTTP Request is described, but not given this actual name, in the W3C CORS spec: [https://www.w3.org/TR/cors/#terminology]. We define a Simple HTTP Request to be a request that uses both a 'simple method' and has a 'simple header'. This W3C article defines a simple method as any of these three: GET, HEAD, POST, and also defines a simple header as one of these four: Accept, Accept-Language, Content-Language, Content-Type. However, the Content-Type header, if present, must also have a value of: application/x-www-form-urlencoded, multipart/form-data, or text/plain. It further defines, not very succinctly, at the beginning of section 7.5.1 Cross-Origin Request with Preflight - Step 1. [https://www.w3.org/TR/cors/#cross-origin-request-with-preflight-0] that a CORS preflight request is required for cross-origin requests UNLESS the request uses a simple method with simple headers. Meaning, that the ONLY cross-origin requests allowed by browsers (without requesting permission using CORS) are 'Simple HTTP Requests'.&lt;br /&gt;
&lt;br /&gt;
We can take advantage of this knowledge by simply verifying that all state changing requests to our application are NOT Simple HTTP Requests. And if they are not, then they can't be a CSRF attack. If you've built your application properly, then GET and HEAD requests cannot change state, so that leaves us with POSTs. If you then verify that all POST requests include a Content-Type header, whose value is either NOT any of the three allowed to go cross origin (application/x-www-form-urlencoded, multipart/form-data, or text/plain), or is an explicitly required Content-Type that is NOT one of those three, then you are all set.&lt;br /&gt;
&lt;br /&gt;
For example, if you have a pure AJAX app that submits all of its POST requests with Content-Types of application/xml or application/json, and you verify on the server side that all POSTs include a Content-Type with one of those two values, your app is completely defended against CSRF attacks. If your application also happens to accept other HTTP methods that do state changing things, like PUT, DELETE, etc. you don't have to do anything to defend those endpoints because browsers can't generate anything but Simple HTTP Requests cross-origin (i.e., GET, HEAD, or POST only). If you need to accept the uploading of files (multipart/form-data), explicit CSRF protection is still needed for those endpoints.&lt;br /&gt;
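The Content-Type verification described above can be sketched as follows (illustrative names, JDK built-ins only):&lt;br /&gt;

```java
import java.util.Set;

// Minimal sketch: a POST needs explicit CSRF protection only when its
// Content-Type is one of the three values a browser can send cross-origin
// without a CORS preflight.
public class SimpleRequestCheck {
    private static final Set<String> SIMPLE_CONTENT_TYPES = Set.of(
            "application/x-www-form-urlencoded", "multipart/form-data", "text/plain");

    // Accepts e.g. "application/json; charset=utf-8"; only the media type matters.
    public static boolean needsCsrfToken(String method, String contentType) {
        if (!"POST".equals(method)) {
            // Browsers cannot send PUT/DELETE/etc. cross-origin without a
            // preflight, and GET/HEAD must not change state.
            return false;
        }
        if (contentType == null) {
            return true; // be conservative when the header is absent
        }
        String mediaType = contentType.split(";")[0].trim().toLowerCase();
        return SIMPLE_CONTENT_TYPES.contains(mediaType);
    }
}
```

Endpoints for which this returns true (such as multipart/form-data file uploads) still need one of the token based defenses above.&lt;br /&gt;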
&lt;br /&gt;
Acknowledgement: This Simple HTTP Request verification technique was found in: [https://blog.jdriven.com/2014/10/stateless-spring-security-part-1-stateless-csrf-protection/], where it explicitly stated &amp;quot;Stateless approach 1: SWITCH TO A FULL AND PROPERLY DESIGNED JSON BASED REST API. - Single-Origin Policy only allows cross-site HEAD/GET and POSTs. POSTs may only be one of the following mime-types: application/x-www-form-urlencoded, multipart/form-data, or text/plain. Indeed no JSON! Now considering GETs should never ever trigger side-effects in any properly designed HTTP based API, this leaves it up to you to simply disallow any non-JSON POST/PUT/DELETEs and all is well.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== Defense-In-Depth Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Verifying origin with standard headers ===&lt;br /&gt;
This defense technique is specifically proposed in section 5.0 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. This paper first proposed the creation of the Origin header and its use as a CSRF defense mechanism.&lt;br /&gt;
&lt;br /&gt;
There are two steps to this mitigation, both of which rely on examining an HTTP request header value.&lt;br /&gt;
&lt;br /&gt;
1. Determining the origin the request is coming from (source origin). This can be done via the Origin and/or Referer header.&lt;br /&gt;
&lt;br /&gt;
2. Determining the origin the request is going to (target origin).&lt;br /&gt;
&lt;br /&gt;
On the server side, we verify whether the two match. If they do, we accept the request as legitimate (meaning it's a same-origin request); if they don't, we discard the request (meaning it originated cross-domain). The reliability of these headers comes from the fact that they cannot be altered programmatically (e.g., using JavaScript in an XSS), as they fall under the [https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name forbidden headers] list (i.e., only browsers can set them).&lt;br /&gt;
&lt;br /&gt;
====Identifying Source Origin (via Origin/Referer header) ====&lt;br /&gt;
'''Checking the Origin Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is present, verify that its value matches the target origin. Unlike the Referer, the Origin header will be present in HTTP requests that originate from an HTTPS URL.&lt;br /&gt;
&lt;br /&gt;
'''Checking the Referer Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is not present, verify the hostname in the Referer header matches the target origin. This method of CSRF mitigation is also commonly used with unauthenticated requests, such as requests made prior to establishing a session state, which is required to keep track of a synchronization token.&lt;br /&gt;
&lt;br /&gt;
In both cases, make sure the target origin check is strong. For example, if your site is &amp;quot;site.com&amp;quot;, make sure &amp;quot;site.com.attacker.com&amp;quot; does not pass your origin check (i.e., compare against the entire origin, including what follows the hostname, rather than doing a prefix match).&lt;br /&gt;
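A sketch of a strict origin comparison using java.net.URI (illustrative names); parsing into scheme/host/port keeps &amp;quot;site.com.attacker.com&amp;quot; from slipping past a naive prefix match:&lt;br /&gt;

```java
import java.net.URI;

// Minimal sketch of a strict source-vs-target origin comparison.
public class OriginCheck {
    // headerValue is the Origin header (or a full Referer URL); targetOrigin is
    // the expected origin, e.g. "https://site.com". Compares scheme, host, and
    // port rather than doing a string prefix match.
    public static boolean originMatches(String headerValue, String targetOrigin) {
        try {
            URI source = URI.create(headerValue);
            URI target = URI.create(targetOrigin);
            return target.getScheme().equalsIgnoreCase(source.getScheme())
                    && target.getHost().equalsIgnoreCase(source.getHost())
                    && target.getPort() == source.getPort();
        } catch (Exception e) {
            return false; // unparsable header: block the request
        }
    }
}
```

Because a Referer value includes a path, parsing it as a URI and comparing only the origin components also works for the Referer fallback described above.&lt;br /&gt;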
&lt;br /&gt;
If neither of these headers are present, you can either accept or block the request. We recommend '''blocking'''. Alternatively, you might want to log all such instances, monitor their use cases/behavior, and then start blocking requests only after you get enough confidence.&lt;br /&gt;
&lt;br /&gt;
==== Identifying the Target Origin ====&lt;br /&gt;
You might think it’s easy to determine the target origin, but it’s frequently not. The first thought is to simply grab the target origin (i.e., its hostname and port #) from the URL in the request. However, the application server is frequently sitting behind one or more proxies and the original URL is different from the URL the app server actually receives. If your application server is directly accessed by its users, then using the origin in the URL is fine and you're all set.&lt;br /&gt;
&lt;br /&gt;
If you are behind a proxy, there are a number of options to consider.&lt;br /&gt;
* '''Configure your application to simply know its target origin:''' It’s your application, so you can find its target origin and set that value in some server configuration entry. This would be the most secure approach, as it is defined server side and so is a trusted value. However, this might be problematic to maintain if your application is deployed in many places, e.g., dev, test, QA, production, and possibly multiple production instances. Setting the correct value for each of these situations might be difficult, but if you can do it via some central configuration and have your instances grab the value from it, that's great! ('''Note:''' Make sure the centralized configuration store is maintained securely, because a major part of your CSRF defense depends on it.)&lt;br /&gt;
&lt;br /&gt;
* '''Use the Host header value:''' If you prefer that the application find its own target so it doesn't have to be configured for each deployed instance, we recommend using the Host family of headers. The Host header's purpose is to contain the target origin of the request. But, if your app server is sitting behind a proxy, the Host header value is most likely changed by the proxy to the target origin of the URL behind the proxy, which is different than the original URL. This modified Host header origin won't match the source origin in the original Origin or Referer headers.&lt;br /&gt;
&lt;br /&gt;
* '''Use the X-Forwarded-Host header value:''' To avoid the issue of the proxy altering the Host header, there is another header called X-Forwarded-Host, whose purpose is to contain the original Host header value the proxy received. Most proxies will pass along the original Host header value in the X-Forwarded-Host header, so that header value is likely to be the target origin value you need to compare to the source origin in the Origin or Referer header.&lt;br /&gt;
&lt;br /&gt;
In earlier versions of this cheat sheet, this mitigation was treated as a primary defense. For the reasons mentioned below, it has been moved to the Defense-in-Depth section.&lt;br /&gt;
&lt;br /&gt;
As is implicit, this mitigation only works when the Origin or Referer header is present in the request. Though these headers are included the '''majority''' of the time, there are a few use cases where they are not (most of them for legitimate reasons, such as safeguarding user privacy or tuning to the browser ecosystem). The following lists some use cases:&lt;br /&gt;
* Internet Explorer 11 does not add the Origin header on a CORS request across sites of a trusted zone. The Referer header will remain the only indication of the UI origin. See the following references in stackoverflow [https://stackoverflow.com/questions/20784209/internet-explorer-11-does-not-add-the-origin-header-on-a-cors-request here] and [https://github.com/silverstripe/silverstripe-graphql/issues/118 here].&lt;br /&gt;
* In an instance following a [https://stackoverflow.com/questions/22397072/are-there-any-browsers-that-set-the-origin-header-to-null-for-privacy-sensitiv 302 redirect cross-origin], Origin is not included in the redirected request because that may be considered sensitive information that should not be sent to the other origin.&lt;br /&gt;
* There are some [https://wiki.mozilla.org/Security/Origin#Privacy-Sensitive_Contexts privacy contexts] where Origin is set to “null”. For example, see the searches [https://www.google.com/search?q=origin+header+sent+null+value+site%3Astackoverflow.com&amp;amp;oq=origin+header+sent+null+value+site%3Astackoverflow.com here].&lt;br /&gt;
* The Origin header is included in all cross origin requests, but for same origin requests, most browsers only include it in POST/DELETE/PUT requests. '''Note:''' Although it is not ideal, many developers use GET requests to do state changing operations.&lt;br /&gt;
* The Referer header is no exception: there are multiple use cases where it is omitted as well ([https://stackoverflow.com/questions/6880659/in-what-cases-will-http-referer-be-empty &amp;lt;nowiki&amp;gt;[1]&amp;lt;/nowiki&amp;gt;], [https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer &amp;lt;nowiki&amp;gt;[2]&amp;lt;/nowiki&amp;gt;], [https://en.wikipedia.org/wiki/HTTP_referer#Referer_hiding &amp;lt;nowiki&amp;gt;[3]&amp;lt;/nowiki&amp;gt;], [https://seclab.stanford.edu/websec/csrf/csrf.pdf &amp;lt;nowiki&amp;gt;[4]&amp;lt;/nowiki&amp;gt;] and [https://www.google.com/search?q=referer+header+sent+null+value+site:stackoverflow.com &amp;lt;nowiki&amp;gt;[5]&amp;lt;/nowiki&amp;gt;]). Load balancers, proxies, and embedded network devices are also well known to strip the Referer header before logging it, for privacy reasons.&lt;br /&gt;
&lt;br /&gt;
Though exceptions can be written for the above cases in your source and target origin check logic, there is currently no central repository that references all such use cases (and even if there were one, keeping it up to date would be a problem). Each browser might also handle these use cases differently (browsers are known to handle things differently considering their ecosystem; the IE example above of not sending the Origin header within a trusted zone is one such case). Rejecting requests that do not contain Origin and/or Referer headers might sound like a good idea, but it can impact legitimate users. Keeping this system in monitoring mode, investigating use cases such as those stated above, and then adding them into your exception logic is a process you may consider to make this defense more stable in your environment.&lt;br /&gt;
&lt;br /&gt;
This CSRF defense also relies on browser behavior that can change at times (for example, when new privacy contexts are discovered), in which case you have to keep your validation logic updated, whereas with token based mitigation you have full control over the CSRF mitigation. For browsers to alter CSRF tokens, they would literally have to change the HTML content of rendered pages (which no browser would ever want to do!).&lt;br /&gt;
&lt;br /&gt;
When there is an XSS vulnerability on a page of an application protected with Origin and/or Referer headers, the level of effort required to exploit state changing operations (which are typically vulnerable to CSRF) on other pages is much lower (grab the parameters and forge a request, as the Origin and Referer headers are included by default by browsers) compared to token based mitigation (where the attacker needs to download the target page, parse the DOM for the token, construct a forged request, and send it to the server).&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Although the concept of an origin header stemmed from [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper that references robust CSRF defenses, the initial [https://tools.ietf.org/html/rfc6454 origin header RFC] does not reference mitigating CSRF in any way (another [https://tools.ietf.org/id/draft-abarth-origin-03.html draft version] does, however).&lt;br /&gt;
&lt;br /&gt;
=== Double Submit Cookie ===&lt;br /&gt;
If maintaining the state for a CSRF token at the server side is problematic, an alternative defense is the double submit cookie technique. This technique is easy to implement and is stateless: we send a random value both in a cookie and as a request parameter, and the server verifies that the cookie value and the request value match. When a user visits (even before authenticating, to prevent login CSRF), the site should generate a (cryptographically strong) pseudorandom value and set it as a cookie on the user's machine, separate from the session identifier. The site then requires every transaction request to include this pseudorandom value as a hidden form value (or other request parameter/header). If the two values match at the server side, the server accepts the request as legitimate; if they don’t, it rejects the request.&lt;br /&gt;
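The generate-and-compare logic above can be sketched as follows (a minimal illustration, not a complete filter; the class and method names here are hypothetical):

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch of the double submit cookie technique: the server issues a
// cryptographically strong pseudorandom value and later only checks that the
// cookie copy and the submitted copy match (no server-side state is kept).
public class DoubleSubmitSketch {
    private static final SecureRandom RNG = new SecureRandom();

    // Generate the token; the caller sets it both as a cookie (separate from
    // the session identifier) and as a hidden form field or request header.
    public static String issueToken() {
        byte[] buffer = new byte[32];
        RNG.nextBytes(buffer);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(buffer);
    }

    // On each state-changing request, accept only if both copies are present
    // and identical; use a constant-time comparison to avoid timing leaks.
    public static boolean isValidRequest(String cookieValue, String submittedValue) {
        if (cookieValue == null || submittedValue == null) {
            return false;
        }
        return MessageDigest.isEqual(cookieValue.getBytes(), submittedValue.getBytes());
    }
}
```

In a real application, something like issueToken would run on the user's first visit (before authentication, to cover login CSRF) and isValidRequest would run in a filter for every transaction request.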
&lt;br /&gt;
There’s a belief that this technique would work because a cross origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy. This means that while an attacker can force a victim to send any value with a malicious CSRF request, the attacker will be unable to modify or read the value stored in the cookie (with which the server compares the token value).&lt;br /&gt;
&lt;br /&gt;
There are a couple of drawbacks to the assumptions made here, chiefly the need to trust all subdomains and to configure the whole site to accept HTTPS connections only. The [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf Blackhat talk] by Rich Lundeen describes these drawbacks.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;''With double submit, if an attacker can write a cookie they can obviously defeat the protection. And again, writing cookies is significantly easier then reading them. The fact that cookies can be written is difficult for many people to understand. After all, doesn't the same origin policy specify that one domain cannot access cookies from another domain? However, there are two common scenarios where writing cookies across domains is possible:''&lt;br /&gt;
&lt;br /&gt;
''a) While it's true that hellokitty.marketing.example.com cannot read cookies or access the DOM from secure.example.com because of the same origin policy, hellokitty.marketing.example.com can write cookies to the parent domain (example.com), and these cookies are then consumed by secure.example.com (secure.example.com has no good way to distinguish which site set the cookie). Additionally, there are methods of forcing secure.example.com to always accept your cookie first. What this means is that XSS in hellokitty.marketing.example.com is able to overwrite cookies in secure.example.com.''&lt;br /&gt;
&lt;br /&gt;
''b) If an attacker is in the middle, they can usually force a request to the same domain over HTTP. If an application is hosted at &amp;lt;nowiki&amp;gt;https://secure.example.com&amp;lt;/nowiki&amp;gt;, even if the cookies are set with the secure flag, a man in the middle can force connections to &amp;lt;nowiki&amp;gt;http://secure.example.com&amp;lt;/nowiki&amp;gt; and set (overwrite) any arbitrary cookies (even though the secure flag prevents the attacker from reading those cookies). Even if the HSTS header is set on the server and the browser visiting the site supports HSTS (this would prevent a man in the middle from forcing plain text HTTP requests) unless the HSTS header is set in a way that includes all subdomains, a man in the middle can simply force a request to a separate subdomain and overwrite cookies similar to 1. In other words, as long as &amp;lt;nowiki&amp;gt;http://hellokitty.marketing.example.com&amp;lt;/nowiki&amp;gt; doesn't force HTTPS, then an attacker can overwrite cookies on any example.com subdomain.''&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So, unless you are sure that your subdomains are fully secured and only accept HTTPS connections (which we believe is difficult to guarantee at large enterprises), you should not rely on the double submit cookie technique as a primary mitigation for CSRF.&lt;br /&gt;
&lt;br /&gt;
A variant of the double submit cookie that mitigates both of the risks mentioned above is to include the token in an encrypted cookie - often within the authentication cookie - and then, at the server side, match it (after decrypting the authentication cookie) against the token in the hidden form field or in the parameter/header for AJAX calls. This works because a subdomain has no way to overwrite a properly crafted encrypted cookie without the necessary information, such as the encryption key.&lt;br /&gt;
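One way to realize this variant without full cookie encryption is to sign the token: if the cookie value is an HMAC over the session identifier computed with a key only the server knows, a subdomain (or man in the middle) that can only write cookies still cannot mint a value that verifies. The sketch below illustrates this under that assumption; the class name and key handling are illustrative, not from any framework:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;

// Sketch of a keyed-token variant of double submit: the cookie carries an
// HMAC-SHA256 of the session id under a server-side secret, so an attacker
// who can write cookies cannot produce a value that the server will accept.
public class SignedTokenSketch {
    private final byte[] key;

    public SignedTokenSketch(byte[] serverSecretKey) {
        this.key = serverSecretKey.clone();
    }

    // Derive the token bound to this session; set it as the CSRF cookie.
    public String tokenFor(String sessionId) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] sig = mac.doFinal(sessionId.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Recompute and compare in constant time; a forged cookie written from a
    // subdomain fails because the attacker lacks the server key.
    public boolean verify(String sessionId, String presentedToken) {
        return MessageDigest.isEqual(
                tokenFor(sessionId).getBytes(StandardCharsets.UTF_8),
                presentedToken.getBytes(StandardCharsets.UTF_8));
    }
}
```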
&lt;br /&gt;
=== SameSite Cookie Attribute ===&lt;br /&gt;
SameSite is a cookie attribute (similar to [[HttpOnly]], Secure, etc.) introduced by Google to mitigate CSRF attacks. It is defined in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 this] Internet Draft. This attribute prevents the browser from sending the cookie along with cross-site requests. Possible values for this attribute are lax and strict.&lt;br /&gt;
&lt;br /&gt;
The strict value will prevent the cookie from being sent by the browser to the target site in all cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website, this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or in an email, GitHub will not receive the session cookie and the user will not be able to access the project. A bank website, however, most likely doesn't want to allow any transactional pages to be linked from external sites, so the strict flag would be most appropriate there.&lt;br /&gt;
&lt;br /&gt;
The default lax value provides a reasonable balance between security and usability for websites that want to maintain a user's logged-in session after the user arrives from an external link. In the GitHub scenario above, the session cookie would be allowed when following a regular link from an external website, while being blocked in CSRF-prone request methods such as POST. The only cross-site requests allowed in lax mode are top-level navigations that also use “safe” HTTP methods (more details [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7.1 here]).&lt;br /&gt;
&lt;br /&gt;
Example of cookies using this attribute:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Strict&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Lax&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Support for this attribute in different browsers is increasing, but there are still browsers that have yet to adopt it. As of August 2018, the SameSite attribute is supported by the browsers of 68.92% of Internet users (detailed statistics are [https://caniuse.com/#feat=same-site-cookie-attribute here]).&lt;br /&gt;
&lt;br /&gt;
Though this technique seems to be effective in mitigating CSRF attacks, it is still in its early stages (in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft]) and does not have full browser support, as mentioned above. Google’s [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft] also mentions a couple of cases where attackers can simulate forged requests as same-site requests (thus causing SameSite cookies to be sent).&lt;br /&gt;
&lt;br /&gt;
Considering the factors above, this attribute is not recommended as a primary defense. Google agrees with this stance and strongly encourages developers to deploy server-side defenses, such as tokens, to mitigate CSRF more fully.&lt;br /&gt;
&lt;br /&gt;
=== Use of Custom Request Headers ===&lt;br /&gt;
&lt;br /&gt;
Adding CSRF tokens, a double submit cookie and value, an encrypted token, or another defense that involves changing the UI can frequently be complex or otherwise problematic. An alternate defense that is particularly well suited for AJAX/XHR endpoints is the use of a custom request header. This defense relies on the [https://en.wikipedia.org/wiki/Same-origin_policy same-origin policy (SOP)] restriction that only JavaScript running within its own origin can add a custom header. By default, browsers do not allow JavaScript to make cross origin requests.&lt;br /&gt;
&lt;br /&gt;
A particularly attractive custom header and value to use is “X-Requested-With: XMLHttpRequest” because most JavaScript libraries already add this header to requests they generate by default. However, some do not. For example, AngularJS used to, but does not anymore. For more information, see [https://github.com/angular/angular.js/commit/3a75b1124d062f64093a90b26630938558909e8d their rationale] and how to add it back.&lt;br /&gt;
&lt;br /&gt;
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive for REST services. You can always add your own custom header and value if that is preferred.&lt;br /&gt;
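The check itself is small. The sketch below shows it independent of any servlet API (the class, method, and constant names are illustrative): safe methods pass through, while state-changing methods must carry the expected header, which a cross-origin form post or simple cross-origin request cannot add.

```java
// Minimal sketch of the custom request header defense for AJAX endpoints:
// requests using state-changing methods are rejected unless they carry the
// expected custom header, which only same-origin JavaScript can set.
public class CustomHeaderCheck {
    static final String HEADER_NAME = "X-Requested-With";
    static final String EXPECTED_VALUE = "XMLHttpRequest";

    // method: the HTTP method of the request; headerValue: the value of the
    // custom header as received, or null when the header is absent.
    public static boolean allow(String method, String headerValue) {
        // Safe methods should not change state, so they are let through.
        if (method.equals("GET") || method.equals("HEAD") || method.equals("OPTIONS")) {
            return true;
        }
        // For POST/PUT/DELETE etc., require the expected custom header.
        return EXPECTED_VALUE.equals(headerValue);
    }
}
```

In a filter you would call allow with the request method and the value of request.getHeader(HEADER_NAME), and respond with 403 Forbidden when it returns false.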
&lt;br /&gt;
This defense technique is specifically discussed in section 4.3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. However, bypasses of this defense using Flash were documented as early as 2008, and again as recently as 2015, when Mathias Karlsson used one to [https://hackerone.com/reports/44146 exploit a CSRF flaw in Vimeo]. A Flash attack cannot spoof the Origin or Referer headers, so we believe that checking both of them should prevent Flash-based bypasses of this defense (should any come up in the future).&lt;br /&gt;
&lt;br /&gt;
Besides possible future bypasses such as Flash, using a static header value makes it easy to exploit other state changing operations in the application (similar to the earlier explanation of why exploitation is easier with origin/referer header checks than with token based mitigation). Including a random token instead of a static header value is more or less equivalent to the token based approach described in the Primary Defenses section. Developers should also consider that, in an application with both AJAX calls and form tags, this technique only protects the AJAX calls from CSRF; &amp;lt;form&amp;gt; tags still need to be protected with approaches described in this document, such as tokens, because custom headers cannot be set on form submissions directly. The CORS configuration must also be robust for this solution to work effectively, as custom headers on requests coming from other domains trigger a pre-flight CORS check.&lt;br /&gt;
&lt;br /&gt;
=== User Interaction Based CSRF Defense ===&lt;br /&gt;
&lt;br /&gt;
While the techniques referenced elsewhere in this document require no user interaction, it is sometimes easier or more appropriate to involve the user in the transaction to prevent unauthorized operations (forged via CSRF or otherwise). The following are some examples of techniques that can act as a strong CSRF defense when implemented correctly.&lt;br /&gt;
* Re-Authentication (password or stronger)&lt;br /&gt;
* One-time Token&lt;br /&gt;
* CAPTCHA&lt;br /&gt;
While these are very strong CSRF defenses, they have a significant impact on the user experience. For applications that need high security for certain operations (password change, money transfer, etc.), these techniques should be used along with token based mitigation. Note that tokens by themselves can mitigate CSRF; developers should use these techniques only to achieve additional security for their highly sensitive operations.&lt;br /&gt;
&lt;br /&gt;
=== Login CSRF ===&lt;br /&gt;
Most developers tend to ignore CSRF vulnerabilities on login forms, assuming that CSRF is not applicable there because the user is not yet authenticated. That assumption is false. A CSRF vulnerability can still occur on a login form where the user is not authenticated, but its impact and risk are quite different from those of a general CSRF vulnerability (where a user is authenticated).&lt;br /&gt;
&lt;br /&gt;
With a CSRF vulnerability on a login form, an attacker can make a victim log in to the attacker's account and then learn about the victim's behavior from their searches. For more information about login CSRF and other risks, see section 3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper.&lt;br /&gt;
&lt;br /&gt;
Login CSRF can be mitigated by creating pre-sessions (sessions before a user is authenticated) and including tokens in login form. You can use any of the techniques mentioned above to generate tokens. Pre-sessions can be transitioned to real sessions once the user is authenticated. This technique is described in [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery section 4.1].&lt;br /&gt;
&lt;br /&gt;
If sub-domains under your master domain are not trusted in your threat model, login CSRF is difficult to mitigate. In these cases, a strict referer header validation at the subdomain and path level (detailed in section 6.1) can mitigate CSRF on login forms to an extent; this works because most login pages are served over HTTPS (so the referer is not stripped) and are also linked from home pages.&lt;br /&gt;
&lt;br /&gt;
== Not So Popular CSRF Mitigations ==&lt;br /&gt;
&lt;br /&gt;
=== Triple Submit Cookie ===&lt;br /&gt;
This mitigation was proposed by [https://www.owasp.org/images/e/e6/AppSecEU2012_Wilander.pdf John Wilander in 2012 at OWASP AppSec Research]. The technique adds a step to the double submit cookie approach by verifying whether the request contains two cookies with the same name (note that an attacker needs to write an additional cookie to bypass the double submit cookie mitigation). Though it mitigates the double submit cookie bypass discussed above, it introduces new problems, such as cookie jar overflow (more details [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf here] and [https://webstersprodigy.net/2012/08/03/analysis-of-john-wilanders-triple-submit-cookies/ here]). We have not been able to find any real-world implementations of this mitigation so far.&lt;br /&gt;
&lt;br /&gt;
=== Content-Type Header Validation ===&lt;br /&gt;
This technique is better known than the triple submit cookie mitigation. First of all, this header was not designed for security (see the initial RFC [https://tools.ietf.org/html/rfc1049 here], later well defined in [https://www.ietf.org/rfc/rfc2045.txt this] RFC) but only to let receiving agents know the type of data they will be handling, so that they can invoke the corresponding parsers. What is treated as a CSRF mitigation is the pre-flighting behavior around this header (a pre-flight occurs if the header has a value other than application/x-www-form-urlencoded, multipart/form-data, or text/plain): all requests are forced to carry a header value that triggers a pre-flight (such as application/json), and the server side can then reject cross-origin requests with CORS/SOP during this pre-flight.&lt;br /&gt;
&lt;br /&gt;
This approach has two main problems. First, it mandates that all requests carry a header value that forces a pre-flight, regardless of the real use case. Second, it relies on a feature that was not designed for security to mitigate a security vulnerability. When a bug was discovered in the Chrome API, browser architects even considered removing this pre-flighting behavior. Because this header was not designed as a security control, architects can redesign it to better serve its primary purpose. In the future, new content-type header values may be introduced (to better support various use cases), which could put systems relying on this header for CSRF mitigation in trouble. For more information, see [https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2017/september/common-csrf-prevention-misconceptions/ Common CSRF Prevention Misconceptions].&lt;br /&gt;
&lt;br /&gt;
== CSRF Mitigation Myths ==&lt;br /&gt;
The following techniques are often presumed to be CSRF mitigations, but none of them actually or fully mitigates a CSRF vulnerability.&lt;br /&gt;
* '''CORS''': CORS is a mechanism designed to relax the Same-Origin Policy when cross-origin communication between sites is required. It is not designed to prevent CSRF attacks, nor does it do so.&lt;br /&gt;
* '''Using HTTPS''': Using HTTPS by itself provides no protection from CSRF attacks. Resources served over HTTPS are still vulnerable to CSRF if the proper mitigations described above are not in place.&lt;br /&gt;
* More myths can be found [[Cross-Site Request Forgery (CSRF)|here]]&lt;br /&gt;
&lt;br /&gt;
== Implementation reference example  ==&lt;br /&gt;
The following JEE web filter provides a reference example for some of the concepts described in this cheat sheet ([https://github.com/aramrami/OWASP-CSRFGuard OWASP CSRFGuard] covers a stateful approach). It implements the following stateless mitigations:&lt;br /&gt;
* Verifying same origin with standard headers&lt;br /&gt;
* Double submit cookie&lt;br /&gt;
* SameSite cookie attribute&lt;br /&gt;
'''Please note''' that it only acts as a reference sample and is not complete (for example, it doesn’t have a branch to direct the control flow when the origin and referer header check succeeds, nor does it have port/host/protocol level validation for the referer header). Developers are recommended to build their complete mitigation on top of this reference sample. Developers should also implement standard authentication or authorization checks before checking for CSRF.&lt;br /&gt;
&lt;br /&gt;
Source is also located [https://github.com/righettod/poc-csrf here] and provides a runnable POC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.slf4j.Logger;&lt;br /&gt;
import org.slf4j.LoggerFactory;&lt;br /&gt;
&lt;br /&gt;
import javax.servlet.Filter;&lt;br /&gt;
import javax.servlet.FilterChain;&lt;br /&gt;
import javax.servlet.FilterConfig;&lt;br /&gt;
import javax.servlet.ServletException;&lt;br /&gt;
import javax.servlet.ServletRequest;&lt;br /&gt;
import javax.servlet.ServletResponse;&lt;br /&gt;
import javax.servlet.annotation.WebFilter;&lt;br /&gt;
import javax.servlet.http.Cookie;&lt;br /&gt;
import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
import javax.servlet.http.HttpServletResponseWrapper;&lt;br /&gt;
import javax.xml.bind.DatatypeConverter;&lt;br /&gt;
import java.io.IOException;&lt;br /&gt;
import java.net.MalformedURLException;&lt;br /&gt;
import java.net.URL;&lt;br /&gt;
import java.security.SecureRandom;&lt;br /&gt;
import java.util.Arrays;&lt;br /&gt;
&lt;br /&gt;
/**&lt;br /&gt;
 * Filter in charge of validating each incoming HTTP request about Headers and CSRF token.&lt;br /&gt;
 * It is called for all requests to backend destination.&lt;br /&gt;
 *&lt;br /&gt;
 * We use the approach in which:&lt;br /&gt;
 * - The CSRF token is changed after each valid HTTP exchange&lt;br /&gt;
 * - The custom Header name for the CSRF token transmission is fixed&lt;br /&gt;
 * - A CSRF token is associated to a backend service URI in order to enable the support for multiple parallel Ajax request from the same application&lt;br /&gt;
 * - The CSRF cookie name is the backend service name prefixed with a fixed prefix&lt;br /&gt;
 *&lt;br /&gt;
 * Here for the POC we show the &amp;quot;access denied&amp;quot; reason in the response but in production code only return a generic message !&lt;br /&gt;
 *&lt;br /&gt;
 * @see &amp;quot;https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://wiki.mozilla.org/Security/Origin&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://chloe.re/2016/04/13/goodbye-csrf-samesite-to-the-rescue/&amp;quot;&lt;br /&gt;
 */&lt;br /&gt;
@WebFilter(&amp;quot;/backend/*&amp;quot;)&lt;br /&gt;
public class CSRFValidationFilter implements Filter {&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * JVM param name used to define the target origin&lt;br /&gt;
     */&lt;br /&gt;
    public static final String TARGET_ORIGIN_JVM_PARAM_NAME = &amp;quot;target.origin&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Name of the custom HTTP header used to transmit the CSRF token and also to prefix &lt;br /&gt;
     * the CSRF cookie for the expected backend service&lt;br /&gt;
     */&lt;br /&gt;
    private static final String CSRF_TOKEN_NAME = &amp;quot;X-TOKEN&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Logger&lt;br /&gt;
     */&lt;br /&gt;
    private static final Logger LOG = LoggerFactory.getLogger(CSRFValidationFilter.class);&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Application expected deployment domain: named &amp;quot;Target Origin&amp;quot; in OWASP CSRF article&lt;br /&gt;
     */&lt;br /&gt;
    private URL targetOrigin;&lt;br /&gt;
&lt;br /&gt;
    /***&lt;br /&gt;
     * Secure generator&lt;br /&gt;
     */&lt;br /&gt;
    private final SecureRandom secureRandom = new SecureRandom();&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {&lt;br /&gt;
        HttpServletRequest httpReq = (HttpServletRequest) request;&lt;br /&gt;
        HttpServletResponse httpResp = (HttpServletResponse) response;&lt;br /&gt;
        String accessDeniedReason;&lt;br /&gt;
&lt;br /&gt;
        /* STEP 1: Verifying Same Origin with Standard Headers */&lt;br /&gt;
        //Try to get the source from the &amp;quot;Origin&amp;quot; header&lt;br /&gt;
        String source = httpReq.getHeader(&amp;quot;Origin&amp;quot;);&lt;br /&gt;
        if (this.isBlank(source)) {&lt;br /&gt;
            //If empty then fallback on &amp;quot;Referer&amp;quot; header&lt;br /&gt;
            source = httpReq.getHeader(&amp;quot;Referer&amp;quot;);&lt;br /&gt;
            //If this one is empty too then we trace the event and we block the request (recommendation of the article)...&lt;br /&gt;
            if (this.isBlank(source)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: ORIGIN and REFERER request headers are both absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
                return;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //Compare the source against the expected target origin&lt;br /&gt;
        URL sourceURL = new URL(source);&lt;br /&gt;
        if (!this.targetOrigin.getProtocol().equals(sourceURL.getProtocol()) || !this.targetOrigin.getHost().equals(sourceURL.getHost()) &lt;br /&gt;
		|| this.targetOrigin.getPort() != sourceURL.getPort()) {&lt;br /&gt;
            //One of the parts does not match, so we trace the event and block the request&lt;br /&gt;
            accessDeniedReason = String.format(&amp;quot;CSRFValidationFilter: Protocol/Host/Port do not fully matches so we block the request! (%s != %s) &amp;quot;, &lt;br /&gt;
				this.targetOrigin, sourceURL);&lt;br /&gt;
            LOG.warn(accessDeniedReason);&lt;br /&gt;
            httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            return;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        /* STEP 2: Verifying CSRF token using &amp;quot;Double Submit Cookie&amp;quot; approach */&lt;br /&gt;
        //If CSRF token cookie is absent from the request then we provide one in response but we stop the process at this stage.&lt;br /&gt;
        //This way we implement the initial provisioning of the token&lt;br /&gt;
        Cookie tokenCookie = null;&lt;br /&gt;
        if (httpReq.getCookies() != null) {&lt;br /&gt;
            String csrfCookieExpectedName = this.determineCookieName(httpReq);&lt;br /&gt;
            tokenCookie = Arrays.stream(httpReq.getCookies()).filter(c -&amp;gt; c.getName().equals(csrfCookieExpectedName)).findFirst().orElse(null);&lt;br /&gt;
        }&lt;br /&gt;
        if (tokenCookie == null || this.isBlank(tokenCookie.getValue())) {&lt;br /&gt;
            LOG.info(&amp;quot;CSRFValidationFilter: CSRF cookie absent or value is null/empty so we provide one and return an HTTP NO_CONTENT response !&amp;quot;);&lt;br /&gt;
            //Add the CSRF token cookie and header&lt;br /&gt;
            this.addTokenCookieAndHeader(httpReq, httpResp);&lt;br /&gt;
            //Set response state to &amp;quot;204 No Content&amp;quot; in order to allow the requester to clearly identify an initial response providing the initial CSRF token&lt;br /&gt;
            httpResp.setStatus(HttpServletResponse.SC_NO_CONTENT);&lt;br /&gt;
        } else {&lt;br /&gt;
            //If the cookie is present then we pass to validation phase&lt;br /&gt;
            //Get token from the custom HTTP header (part under control of the requester)&lt;br /&gt;
            String tokenFromHeader = httpReq.getHeader(CSRF_TOKEN_NAME);&lt;br /&gt;
            //If empty then we trace the event and we block the request&lt;br /&gt;
            if (this.isBlank(tokenFromHeader)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header is absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else if (!tokenFromHeader.equals(tokenCookie.getValue())) {&lt;br /&gt;
                //Verify that token from header and one from cookie are the same&lt;br /&gt;
                //Here is not the case so we trace the event and we block the request&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header and via Cookie are not equals so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else {&lt;br /&gt;
                //Token from the header and token from the cookie match&lt;br /&gt;
                //So we let the request reach the target component (ServiceServlet, jsp...) and add a new token when the response comes back&lt;br /&gt;
                HttpServletResponseWrapper httpRespWrapper = new HttpServletResponseWrapper(httpResp);&lt;br /&gt;
                chain.doFilter(request, httpRespWrapper);&lt;br /&gt;
                //Add the CSRF token cookie and header&lt;br /&gt;
                this.addTokenCookieAndHeader(httpReq, httpRespWrapper);&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void init(FilterConfig filterConfig) throws ServletException {&lt;br /&gt;
        //To ease configuration, we load the expected target origin from a JVM property&lt;br /&gt;
        //Reconfiguration only requires an application restart, which is generally acceptable&lt;br /&gt;
        try {&lt;br /&gt;
            this.targetOrigin = new URL(System.getProperty(TARGET_ORIGIN_JVM_PARAM_NAME));&lt;br /&gt;
        } catch (MalformedURLException e) {&lt;br /&gt;
            LOG.error(&amp;quot;Cannot init the filter !&amp;quot;, e);&lt;br /&gt;
            throw new ServletException(e);&lt;br /&gt;
        }&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter init, set expected target origin to '{}'.&amp;quot;, this.targetOrigin);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void destroy() {&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter shutdown&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Check if a string is null or empty (including containing only spaces)&lt;br /&gt;
     *&lt;br /&gt;
     * @param s Source string&lt;br /&gt;
     * @return TRUE if source string is null or empty (including containing only spaces)&lt;br /&gt;
     */&lt;br /&gt;
    private boolean isBlank(String s) {&lt;br /&gt;
        return s == null || s.trim().isEmpty();&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Generate a new CSRF token&lt;br /&gt;
     *&lt;br /&gt;
 * @return The token as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String generateToken() {&lt;br /&gt;
        byte[] buffer = new byte[50];&lt;br /&gt;
        this.secureRandom.nextBytes(buffer);&lt;br /&gt;
        return DatatypeConverter.printHexBinary(buffer);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Determine the name of the CSRF cookie for the targeted backend service&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest Source HTTP request&lt;br /&gt;
     * @return The name of the cookie as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String determineCookieName(HttpServletRequest httpRequest) {&lt;br /&gt;
        String backendServiceName = httpRequest.getRequestURI().replaceAll(&amp;quot;/&amp;quot;, &amp;quot;-&amp;quot;);&lt;br /&gt;
        return CSRF_TOKEN_NAME + &amp;quot;-&amp;quot; + backendServiceName;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Add the CSRF token cookie and header to the provided HTTP response object&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest  Source HTTP request&lt;br /&gt;
     * @param httpResponse HTTP response object to update&lt;br /&gt;
     */&lt;br /&gt;
    private void addTokenCookieAndHeader(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {&lt;br /&gt;
        //Get new token&lt;br /&gt;
        String token = this.generateToken();&lt;br /&gt;
        //Add the cookie manually because the current Cookie class implementation does not support the &amp;quot;SameSite&amp;quot; attribute&lt;br /&gt;
        //We leave adding the &amp;quot;Secure&amp;quot; cookie attribute to the reverse proxy rewriting...&lt;br /&gt;
        //Here we lock the cookie from JS access and we use the SameSite new attribute protection&lt;br /&gt;
        String cookieSpec = String.format(&amp;quot;%s=%s; Path=%s; HttpOnly; SameSite=Strict&amp;quot;, this.determineCookieName(httpRequest), token, httpRequest.getRequestURI());&lt;br /&gt;
        httpResponse.addHeader(&amp;quot;Set-Cookie&amp;quot;, cookieSpec);&lt;br /&gt;
        //Add a response header to give the JS code access to the token&lt;br /&gt;
        httpResponse.setHeader(CSRF_TOKEN_NAME, token);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Authors and Primary Editors  ==&lt;br /&gt;
Manideep Konakandla (Amazon Application Security Team) - http://www.manideepk.com&lt;br /&gt;
&lt;br /&gt;
Dave Wichers - dave.wichers[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Paul Petefish - https://www.linkedin.com/in/paulpetefish&lt;br /&gt;
&lt;br /&gt;
Eric Sheridan - eric.sheridan[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Dominique Righetto - dominique.righetto[at]owasp.org&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other Cheat Sheets ==&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;br /&gt;
[[Category:Popular]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=247193</id>
		<title>Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=247193"/>
				<updated>2019-02-06T21:36:24Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Eliminate some unnecessary content, move some later in the article.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
[[Cross-Site Request Forgery (CSRF)]] is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user’s web browser to perform an unwanted action on a trusted site when the user is authenticated. A CSRF attack works because browser requests automatically include any credentials associated with the site, such as the user's session cookie, IP address, etc. Therefore, if the user is authenticated to the site, the site cannot distinguish between a forged request and a legitimate request sent by the victim. Defending against CSRF therefore requires a token/identifier that is not accessible to the attacker and, unlike cookies, is not automatically sent along with the forged requests the attacker initiates. For more information on CSRF, see the OWASP [[Cross-Site Request Forgery (CSRF)|Cross-Site Request Forgery (CSRF) page]].&lt;br /&gt;
&lt;br /&gt;
The impact of a successful CSRF attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or making a purchase with the user’s credentials. In effect, CSRF attacks are used by an attacker to make a target system perform a function via the target's browser, without the user’s knowledge, at least until the unauthorized transaction has been committed.&lt;br /&gt;
&lt;br /&gt;
Impacts of successful CSRF exploits vary greatly based on the privileges of each victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator account, a CSRF attack can compromise the entire web application. Using social engineering, an attacker can embed malicious HTML or JavaScript code into an email or website to request a specific 'task URL'. The task then executes with or without the user's knowledge, either directly or by using a Cross-Site Scripting flaw. For example, see [https://en.wikipedia.org/wiki/Samy_(computer_worm) Samy MySpace Worm].&lt;br /&gt;
&lt;br /&gt;
==Warning: No Cross-Site Scripting (XSS) Vulnerabilities ==&lt;br /&gt;
[[Cross-Site Scripting]] is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat all CSRF mitigation techniques available in the market today (except mitigation techniques that involve user interaction, described later in this cheat sheet). This is because an XSS payload can simply read any page on the site using an XMLHttpRequest (direct DOM access can be used, if on the same page), obtain the generated token from the response, and include that token with a forged request. This technique is exactly how the [https://en.wikipedia.org/wiki/Samy_(computer_worm) MySpace (Samy) worm] defeated MySpace's anti-CSRF defenses in 2005, which enabled the worm to propagate.&lt;br /&gt;
&lt;br /&gt;
It is imperative that no XSS vulnerabilities are present to ensure that CSRF defenses can't be circumvented. Please see the OWASP [[XSS (Cross Site Scripting) Prevention Cheat Sheet|XSS Prevention Cheat Sheet]] for detailed guidance on how to prevent XSS flaws.  &lt;br /&gt;
&lt;br /&gt;
== Resources that need to be protected from CSRF attacks ==&lt;br /&gt;
The following list assumes that you are not violating [http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1 RFC2616], section 9.1.1, by using GET requests for state changing operations. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' If for any reason you do violate it, you will also need to protect those resources, which are mostly invoked via the default &amp;lt;code&amp;gt;form&amp;lt;/code&amp;gt; tag [GET method] and the &amp;lt;code&amp;gt;href&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;src&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
* Form tags with POST &lt;br /&gt;
* Ajax/XHR calls&lt;br /&gt;
&lt;br /&gt;
== CSRF Defense Recommendations Summary ==&lt;br /&gt;
We recommend a token based CSRF defense (either stateful or stateless) as the primary defense to mitigate CSRF in your applications. For highly sensitive operations, we also recommend user interaction based protection (either re-authentication or a one-time token, detailed in section 6.5) along with token based mitigation.&lt;br /&gt;
&lt;br /&gt;
As a defense-in-depth measure, consider implementing one mitigation from the Defense-in-Depth Techniques section (you can choose the mitigation that fits your ecosystem considering the issues mentioned under them). These defense-in-depth mitigation techniques are not recommended to be used by themselves (without token based mitigation) for mitigating CSRF in your applications.&lt;br /&gt;
&lt;br /&gt;
== Primary Defense Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Token Based Mitigation ===&lt;br /&gt;
This defense is one of the most popular and recommended methods to mitigate CSRF. It can be achieved either with state (synchronizer token pattern) or statelessly (encrypted/HMAC based token pattern). For all of these mitigations, it is implicit that general security principles should be adhered to.&lt;br /&gt;
* Strict key rotation and token lifetime policies should be maintained. Policies can be set according to your organizational needs. Generic key management guidance from OWASP can be found in the [[Key Management Cheat Sheet]].&lt;br /&gt;
&lt;br /&gt;
==== Synchronizer Token Pattern ====&lt;br /&gt;
Any state changing operation requires a secure random token (e.g., a CSRF token) to prevent CSRF attacks. A CSRF token should be unique per user session, should be a large random value, and should be generated by a cryptographically secure random number generator. The CSRF token is added as a hidden field for forms, as a header or parameter for AJAX calls, and within the URL if the state changing operation occurs via a GET (see the &amp;quot;Disclosure of Token in URL&amp;quot; section below). The server rejects the requested action if the CSRF token fails validation.&lt;br /&gt;
&lt;br /&gt;
In order to facilitate a &amp;quot;transparent but visible&amp;quot; CSRF solution, developers are encouraged to adopt a pattern similar to [http://www.corej2eepatterns.com/Design/PresoDesign.htm Synchronizer Token Pattern] (The original intention of this synchronizer token pattern was to detect duplicate submissions in forms). The synchronizer token pattern requires the generation of random &amp;quot;challenge&amp;quot; tokens that are associated with the user's current session. These challenge tokens are then inserted within the HTML forms and calls associated with sensitive server-side operations. It is the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks as successful exploitation assumes the attacker knows the randomly generated token for the target victim's session. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' Unlike cookies, these tokens are not automatically sent with forged requests made from your browser by the attacker's website. &lt;br /&gt;
&lt;br /&gt;
This is analogous to the attacker being able to guess the target victim's session identifier. &lt;br /&gt;
&lt;br /&gt;
The following describes a general approach to incorporate challenge tokens within the request.&lt;br /&gt;
&lt;br /&gt;
When a Web application formulates a request, the application should include the token as a hidden input parameter with a common name such as &amp;quot;CSRFToken&amp;quot; for forms, or as a header/parameter value for Ajax calls. The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness utilized in the data that is hashed to generate the random token.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;html&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;form action=&amp;quot;/transfer.do&amp;quot; method=&amp;quot;post&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;hidden&amp;quot; name=&amp;quot;CSRFToken&amp;quot; &lt;br /&gt;
value=&amp;quot;OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWEwYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZMGYwMGEwOA==&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
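&lt;br /&gt;
The SecureRandom-based generation mentioned above can be sketched as follows (the class and method names are illustrative, not part of any framework):&lt;br /&gt;
```java
import java.security.SecureRandom;
import java.util.Base64;

public class CsrfTokenGenerator {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generates a 256-bit random token, URL-safe Base64 encoded,
    // suitable for use as the value of a hidden "CSRFToken" field.
    public static String generateToken() {
        byte[] bytes = new byte[32]; // 256 bits of entropy
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```
Generate the token once per session, store it in the session, and emit it into each form as shown in the HTML example above.&lt;br /&gt;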
&lt;br /&gt;
In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is used for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the token in the request compared to the token found in the user session. If the token was not found within the request, or the value provided does not match the value within the user session, then the request should be aborted, and the event logged as a potential CSRF attack in progress.&lt;br /&gt;
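&lt;br /&gt;
A minimal sketch of that server-side check, assuming the session token has already been loaded from the user's session (the helper class is illustrative); the constant-time comparison via MessageDigest.isEqual avoids leaking token contents through timing differences:&lt;br /&gt;
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class CsrfTokenValidator {
    // Compares the token submitted with the request against the token
    // stored in the user's session, in constant time. A missing token
    // on either side means the request must be rejected.
    public static boolean validateToken(String sessionToken, String requestToken) {
        if (sessionToken == null) return false;
        if (requestToken == null) return false;
        return MessageDigest.isEqual(
                sessionToken.getBytes(StandardCharsets.UTF_8),
                requestToken.getBytes(StandardCharsets.UTF_8));
    }
}
```
On a false result, abort the request and log a potential CSRF attack, as described above.&lt;br /&gt;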
&lt;br /&gt;
To further enhance the security of this proposed design, consider randomizing the CSRF token parameter name and/or value for each request. Implementing this approach results in the generation of per-request tokens as opposed to per-session tokens. This is more secure than per-session tokens as the time range for an attacker to exploit the stolen tokens is minimal. However, this may result in usability concerns. For example, the &amp;quot;Back&amp;quot; button browser capability is often hindered as the previous page may contain a token that is no longer valid. Interaction with this previous page will result in a CSRF false positive security event at the server. A few applications that need high security (such as banks) implement this approach. Evaluate which approach suits your needs. Regardless of the approach taken, developers are encouraged to protect the CSRF token the same way they protect authenticated session identifiers, such as with the use of TLS.&lt;br /&gt;
&lt;br /&gt;
'''Existing Synchronizer Implementations'''&lt;br /&gt;
&lt;br /&gt;
Synchronizer token defenses have been built into many frameworks, so we strongly recommend using them first when they are available. External components that add CSRF defenses to existing applications are also recommended. OWASP has the following: &lt;br /&gt;
&lt;br /&gt;
* For Java: OWASP [[CSRF Guard]]&lt;br /&gt;
* For PHP and Apache: [[CSRFProtector Project]]&lt;br /&gt;
&lt;br /&gt;
'''Disclosure of Token in URL'''&lt;br /&gt;
&lt;br /&gt;
Some implementations of synchronizer tokens include the challenge token in GET (URL) requests as well as POST requests. This is often implemented as a result of sensitive server-side operations being invoked as a result of embedded links in the page or other general design patterns. These patterns are often implemented without knowledge of CSRF and an understanding of CSRF prevention design strategies. While this control does help mitigate the risk of CSRF attacks, the unique per-session token is being exposed for GET requests. CSRF tokens in GET requests are potentially leaked at several locations: browser history, log files, network appliances that make a point to log the first line of an HTTP request, and Referer headers if the protected site links to an external site. In the latter case (leaked CSRF token due to the Referer header being parsed by a linked site), it is trivially easy for the linked site to launch a CSRF attack on the protected site, and they will be able to target this attack very effectively, since the Referer header tells them the site as well as the CSRF token. The attack could be run entirely from JavaScript, so that a simple addition of a script tag to the HTML of a site can launch an attack (whether on an originally malicious site or on a hacked site). Additionally, since HTTPS requests from HTTPS contexts will not strip the Referer header (as opposed to HTTPS to HTTP requests) CSRF token leaks via Referer can still happen on HTTPS Applications.&lt;br /&gt;
&lt;br /&gt;
The ideal solution is to only include the CSRF token in POST requests and modify server-side actions that have a state changing effect to only respond to POST requests. This is in fact what &amp;lt;nowiki&amp;gt;RFC 2616&amp;lt;/nowiki&amp;gt; requires for GET requests. If sensitive server-side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests.&lt;br /&gt;
&lt;br /&gt;
In most JavaEE web applications, however, HTTP method scoping is rarely ever utilized when retrieving HTTP parameters from a request. Calls to &amp;quot;HttpServletRequest.getParameter&amp;quot; will return a parameter value regardless of whether it was a GET or POST. This is not to say HTTP method scoping cannot be enforced. It can be achieved if a developer explicitly overrides doPost() in the HttpServlet class or leverages framework specific capabilities such as the AbstractFormController class in Spring.&lt;br /&gt;
&lt;br /&gt;
For these cases, attempting to retrofit this pattern in existing applications requires significant development time and cost, and as a temporary measure it may be better to pass CSRF tokens in the URL. Once the application has been fixed to respond to HTTP GET and POST verbs correctly, CSRF tokens for GET requests should be turned off.&lt;br /&gt;
&lt;br /&gt;
==== Encryption based Token Pattern ====&lt;br /&gt;
The Encrypted Token Pattern leverages encryption, rather than comparison, as its method of token validation. It is most suitable for applications that do not want to maintain any state on the server side. &lt;br /&gt;
&lt;br /&gt;
After successful authentication, the server generates a unique token consisting of the user's ID, a timestamp value and a [http://en.wikipedia.org/wiki/Cryptographic_nonce nonce], using a unique key available only on the server. This token is returned to the client and embedded in a hidden field for forms, or in a request header/parameter for AJAX requests. On receipt of this request, the server reads and decrypts the token value with the same key used to create the token. The inability to correctly decrypt suggests an intrusion attempt. Once decrypted, the UserId and timestamp contained within the token are validated; the UserId is compared against the currently logged in user, and the timestamp is compared against the current time.&lt;br /&gt;
&lt;br /&gt;
On successful token-decryption, the server has access to parsed values, ideally in the form of [http://en.wikipedia.org/wiki/Claims-based_identity claims]. These claims are processed by comparing the UserId claim to any potentially stored UserId (in a Cookie or Session variable, if the site already contains a means of authentication). The Timestamp is validated against the current time, preventing replay attacks. Alternatively, in the case of a CSRF attack, the server will be unable to decrypt the poisoned token, and can block and log the attack.&lt;br /&gt;
&lt;br /&gt;
This technique addresses some of the shortfalls in other stateless approaches, such as the need to store data in a Cookie, circumventing the Cookie-subdomain and [[HttpOnly]] issues. Your solution should use a strong encryption function. We recommend AES256-GCM or stronger.&lt;br /&gt;
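&lt;br /&gt;
The steps above can be sketched as follows, using AES-GCM as the strong encryption function recommended above; the class name and the userId|timestamp|nonce layout are illustrative assumptions:&lt;br /&gt;
```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class EncryptedTokenPattern {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Builds a token of the form userId|timestamp|nonce, encrypts it with
    // AES-GCM under a key available only on the server, and Base64-encodes
    // the random IV followed by the ciphertext.
    public static String createToken(SecretKey key, String userId, long timestamp) throws Exception {
        byte[] nonce = new byte[8];
        RANDOM.nextBytes(nonce);
        String plain = userId + "|" + timestamp + "|" + Base64.getEncoder().encodeToString(nonce);
        byte[] iv = new byte[12];
        RANDOM.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(plain.getBytes(StandardCharsets.UTF_8));
        ByteBuffer buf = ByteBuffer.allocate(iv.length + ct.length);
        buf.put(iv).put(ct);
        return Base64.getUrlEncoder().encodeToString(buf.array());
    }

    // Decrypts and returns "userId|timestamp|nonce"; any tampering with the
    // token makes GCM authentication fail, and an exception is thrown, which
    // the caller should treat as an intrusion attempt.
    public static String decryptToken(SecretKey key, String token) throws Exception {
        byte[] raw = Base64.getUrlDecoder().decode(token);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, raw, 0, 12));
        byte[] plain = cipher.doFinal(raw, 12, raw.length - 12);
        return new String(plain, StandardCharsets.UTF_8);
    }
}
```
After decryption, split on the separator and validate the UserId and timestamp claims as described above.&lt;br /&gt;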
&lt;br /&gt;
==== HMAC Based Token Pattern ====&lt;br /&gt;
[https://en.wikipedia.org/wiki/HMAC HMAC (hash-based message authentication code)] is a cryptographic function that helps to guarantee integrity and authentication of a message. HMAC Tokens can be used as a CSRF mitigation technique without requiring server side state. It is similar to the encryption token-based pattern with two main differences:&lt;br /&gt;
* Uses a strong HMAC function instead of an encryption function to generate the token&lt;br /&gt;
* Includes an additional field called ‘operation’ that indicates the purpose of the operation for which you are including the CSRF token (whether in a form tag or an ajax call) &lt;br /&gt;
(Ex: ‘oneclickpurchase’ (or) buy/asin=SDFH&amp;amp;category=2&amp;amp;quantity=3)&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Fields mentioned in encryption token pattern (user's ID, a timestamp value and a nonce) are included. &lt;br /&gt;
&lt;br /&gt;
The operation field mitigates the fact that a hash function generates the same value across multiple iterations over the same input (unlike strong encryption functions, which generate different values each time they encrypt). It therefore helps avoid repeated token values across your application. The nonce field serves the same purpose as in the encrypted token pattern (i.e., to avoid rare collisions due to weak cryptographic functions, acting as a defense-in-depth measure). &lt;br /&gt;
&lt;br /&gt;
Generate the token using an HMAC over all four fields mentioned previously (user's ID, a timestamp value, nonce, and operation) and then include it in hidden fields for form tags, or in headers/parameters for ajax calls. Once you receive the HMAC from the client in a request, re-generate the HMAC from the same fields that you used to generate it, and then verify that the HMAC you re-generated matches the HMAC received from the client. If it does, it is a legitimate user request; if it does not, flag it as a CSRF intrusion and alert your incident response teams. Because the attacker has no visibility into the key used to generate the HMAC or into the fields used in generating it, there is no way for them to re-generate the token for use in a forged request.&lt;br /&gt;
&lt;br /&gt;
Your solution should use a strong HMAC function. We recommend SHA256/512 or stronger.&lt;br /&gt;
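&lt;br /&gt;
The generate-and-verify cycle described above can be sketched as follows, using HMAC-SHA256; the field layout, separator, and method names are illustrative assumptions:&lt;br /&gt;
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacTokenPattern {
    // HMAC-SHA256 over userId|timestamp|nonce|operation, keyed with a
    // secret available only on the server.
    public static String createToken(byte[] key, String userId, long timestamp,
                                     String nonce, String operation) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        String data = userId + "|" + timestamp + "|" + nonce + "|" + operation;
        return Base64.getUrlEncoder().encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
    }

    // Re-generates the HMAC from the submitted fields and compares it, in
    // constant time, with the HMAC received from the client.
    public static boolean verifyToken(byte[] key, String userId, long timestamp,
                                      String nonce, String operation, String received) throws Exception {
        String expected = createToken(key, userId, timestamp, nonce, operation);
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                received.getBytes(StandardCharsets.UTF_8));
    }
}
```
A failed verification should be flagged as a potential CSRF intrusion, as described above.&lt;br /&gt;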
&lt;br /&gt;
=== Auto CSRF Mitigation Techniques ===&lt;br /&gt;
Though token-based mitigation techniques are widely used (stateful with the synchronizer token and stateless with the encrypted/HMAC token), the major problem associated with them is the human tendency to forget things at times. If a developer forgets to add the token to any state changing operation, the application is left vulnerable to CSRF. To avoid this, you can try to automate the process of adding tokens to CSRF vulnerable resources (mentioned earlier in this document). You can achieve this by doing the following:&lt;br /&gt;
* Write wrappers (that would auto add tokens when used) around default form tags/ajax calls and educate your developers to use those wrappers instead of standard tags. Though this approach is better than depending purely on developers to add tokens, it is still vulnerable to the human tendency to forget things. [https://docs.spring.io/spring-security/site/docs/3.2.0.CI-SNAPSHOT/reference/html/csrf.html Spring Security] uses this technique to add CSRF tokens by default when a custom &amp;lt;form:form&amp;gt; tag is used, which you can opt to use after verifying that it is enabled and properly configured in the Spring Security version you are using.&lt;br /&gt;
* Write a hook (that would capture the traffic and add tokens to CSRF vulnerable resources before rendering to customers) in your organizational web rendering frameworks. Because it is hard to analyze when a particular response is doing any state change (and thus needing a token), you might want to include tokens in all CSRF vulnerable resources (ex: include tokens in all POST responses). This is one recommended approach, but you need to consider the performance costs it might incur.&lt;br /&gt;
* Get the tokens automatically added on the client side when the page is being rendered in user’s browser, with help of a client side script (this approach is used by [[CSRF Guard]]). You need to consider any possible JavaScript hijacking attacks.&lt;br /&gt;
We recommend researching if the framework you are using has an option to achieve CSRF protection by default before trying to build your custom auto tokening system. For example, .NET has an [https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery?view=aspnetcore-2.1 in-built protection] that adds token to CSRF vulnerable resources. You are responsible for proper configuration (such as key management and token management) before using these in-built CSRF protections that do auto tokening to CSRF vulnerable resources.&lt;br /&gt;
&lt;br /&gt;
== Defense-In-Depth Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Verifying origin with standard headers ===&lt;br /&gt;
This defense technique is specifically proposed in section 5.0 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. This paper proposes the first creation of the Origin header and its use as a CSRF defense mechanism.&lt;br /&gt;
&lt;br /&gt;
There are two steps to this mitigation, both of which rely on examining an HTTP request header value.&lt;br /&gt;
&lt;br /&gt;
1. Determining the origin the request is coming from (source origin). This can be done via the Origin and/or Referer header.&lt;br /&gt;
&lt;br /&gt;
2. Determining the origin the request is going to (target origin).&lt;br /&gt;
&lt;br /&gt;
At the server side we verify whether the two match. If they do, we accept the request as legitimate (meaning it’s a same origin request) and if they don’t, we discard it (meaning the request originated cross-domain). The reliability of these headers comes from the fact that they cannot be altered programmatically (e.g., using JavaScript via an XSS) as they fall under the [https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name forbidden headers] list (i.e., only browsers can set them).&lt;br /&gt;
&lt;br /&gt;
====Identifying Source Origin (via Origin/Referer header) ====&lt;br /&gt;
'''Checking the Origin Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is present, verify that its value matches the target origin. Unlike the Referer, the Origin header will be present in HTTP requests that originate from an HTTPS URL.&lt;br /&gt;
&lt;br /&gt;
'''Checking the Referer Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is not present, verify the hostname in the Referer header matches the target origin. This method of CSRF mitigation is also commonly used with unauthenticated requests, such as requests made prior to establishing a session state, which is required to keep track of a synchronization token.&lt;br /&gt;
&lt;br /&gt;
In both cases, make sure the target origin check is strong. For example, if your site is &amp;quot;site.com&amp;quot;, make sure &amp;quot;site.com.attacker.com&amp;quot; does not pass your origin check (i.e., match against the entire origin value, not just a prefix of the hostname).&lt;br /&gt;
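&lt;br /&gt;
A strict origin comparison of this kind can be sketched as follows (an illustrative helper, not an official API), comparing scheme, host, and port exactly so that a hostname prefix match cannot be abused:&lt;br /&gt;
```java
import java.net.URI;

public class OriginCheck {
    // Returns true only when the source origin (taken from the Origin
    // header, or the Referer header as a fallback) exactly matches the
    // target origin's scheme, host, and port. A value such as
    // "https://site.com.attacker.com" will not pass for "https://site.com".
    public static boolean sourceMatchesTarget(String sourceOrigin, String targetOrigin) {
        try {
            URI source = URI.create(sourceOrigin);
            URI target = URI.create(targetOrigin);
            if (!target.getScheme().equals(source.getScheme())) return false;
            if (!target.getHost().equals(source.getHost())) return false;
            return effectivePort(target) == effectivePort(source);
        } catch (RuntimeException e) {
            return false; // malformed header value: reject the request
        }
    }

    // Substitutes the scheme's default port when none is given explicitly.
    private static int effectivePort(URI uri) {
        if (uri.getPort() != -1) return uri.getPort();
        return uri.getScheme().equals("https") ? 443 : 80;
    }
}
```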
&lt;br /&gt;
If neither of these headers are present, you can either accept or block the request. We recommend '''blocking'''. Alternatively, you might want to log all such instances, monitor their use cases/behavior, and then start blocking requests only after you get enough confidence.&lt;br /&gt;
&lt;br /&gt;
==== Identifying the Target Origin ====&lt;br /&gt;
You might think it’s easy to determine the target origin, but it’s frequently not. The first thought is to simply grab the target origin (i.e., its hostname and port #) from the URL in the request. However, the application server is frequently sitting behind one or more proxies and the original URL is different from the URL the app server actually receives. If your application server is directly accessed by its users, then using the origin in the URL is fine and you're all set.&lt;br /&gt;
&lt;br /&gt;
If you are behind a proxy, there are a number of options to consider.&lt;br /&gt;
* '''Configure your application to simply know its target origin:''' It’s your application, so you can find its target origin and set that value in some server configuration entry. This is the most secure approach, as the value is defined server side and is therefore trusted. However, this might be problematic to maintain if your application is deployed in many places, e.g., dev, test, QA, production, and possibly multiple production instances. Setting the correct value for each of these situations might be difficult, but if you can do it via some central configuration and have your instances grab the value from it, that's great! ('''Note:''' Make sure the centralized configuration store is maintained securely, because a major part of your CSRF defense depends on it.)&lt;br /&gt;
&lt;br /&gt;
* '''Use the Host header value:''' If you prefer that the application find its own target so it doesn't have to be configured for each deployed instance, we recommend using the Host family of headers. The Host header's purpose is to contain the target origin of the request. But, if your app server is sitting behind a proxy, the Host header value is most likely changed by the proxy to the target origin of the URL behind the proxy, which is different than the original URL. This modified Host header origin won't match the source origin in the original Origin or Referer headers.&lt;br /&gt;
&lt;br /&gt;
* '''Use the X-Forwarded-Host header value:''' To avoid the issue of proxy altering the host header, there is another header called X-Forwarded-Host, whose purpose is to contain the original Host header value the proxy received. Most proxies will pass along the original Host header value in the X-Forwarded-Host header. So that header value is likely to be the target origin value you need to compare to the source origin in the Origin or Referer header.&lt;br /&gt;
&lt;br /&gt;
Earlier versions of this cheat sheet treated this mitigation as a primary defense. For the reasons mentioned below, it has been moved to the Defense-in-Depth section.&lt;br /&gt;
&lt;br /&gt;
Implicit in this approach is that the mitigation only works when the origin or referer headers are present in requests. Though these headers are included the '''majority''' of the time, there are a few use cases where they are not (most of them for legitimate reasons, such as safeguarding user privacy or tuning to the browser ecosystem). The following lists some use cases:&lt;br /&gt;
* Internet Explorer 11 does not add the Origin header on a CORS request across sites of a trusted zone. The Referer header will remain the only indication of the UI origin. See the following references in stackoverflow [https://stackoverflow.com/questions/20784209/internet-explorer-11-does-not-add-the-origin-header-on-a-cors-request here] and [https://github.com/silverstripe/silverstripe-graphql/issues/118 here].&lt;br /&gt;
* In an instance following a [https://stackoverflow.com/questions/22397072/are-there-any-browsers-that-set-the-origin-header-to-null-for-privacy-sensitiv 302 redirect cross-origin], Origin is not included in the redirected request because that may be considered sensitive information that should not be sent to the other origin.&lt;br /&gt;
* There are some [https://wiki.mozilla.org/Security/Origin#Privacy-Sensitive_Contexts privacy contexts] where Origin is set to “null”. For example, see the search results [https://www.google.com/search?q=origin+header+sent+null+value+site%3Astackoverflow.com&amp;amp;oq=origin+header+sent+null+value+site%3Astackoverflow.com here].&lt;br /&gt;
* The Origin header is included in all cross origin requests, but for same origin requests, most browsers only include it in POST/DELETE/PUT requests. '''Note:''' Although it is not ideal, many developers use GET requests to do state changing operations.&lt;br /&gt;
* Referer header is no exception. There are multiple use cases where referer header is omitted as well ([https://stackoverflow.com/questions/6880659/in-what-cases-will-http-referer-be-empty &amp;lt;nowiki&amp;gt;[1]&amp;lt;/nowiki&amp;gt;], [https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer &amp;lt;nowiki&amp;gt;[2]&amp;lt;/nowiki&amp;gt;], [https://en.wikipedia.org/wiki/HTTP_referer#Referer_hiding &amp;lt;nowiki&amp;gt;[3]&amp;lt;/nowiki&amp;gt;], [https://seclab.stanford.edu/websec/csrf/csrf.pdf &amp;lt;nowiki&amp;gt;[4]&amp;lt;/nowiki&amp;gt;] and [https://www.google.com/search?q=referer+header+sent+null+value+site:stackoverflow.com &amp;lt;nowiki&amp;gt;[5]&amp;lt;/nowiki&amp;gt;]). Load balancers, proxies and embedded network devices are also well known to strip the referer header due to privacy reasons in logging them.&lt;br /&gt;
&lt;br /&gt;
Though exceptions can be written for the above cases in your source and target origin check logic, there is currently no central repository that references all such use cases (and even if there were one, keeping it up-to-date would be a problem). Each browser might also handle these use cases differently (browsers are known to handle things differently to suit their ecosystem; IE not sending the Origin header within a trusted zone is one such example). Rejecting requests that do not contain origin and/or referer headers might sound like a good idea, but it can impact legitimate users. Keeping this system in monitoring mode, investigating use cases such as those stated above, and then adding them into your exception logic is a process you may consider to make this defense more stable in your environment.&lt;br /&gt;
&lt;br /&gt;
This CSRF defense relies on browser behavior that can change at times, for example when new privacy contexts are discovered, in which case you have to keep your validation logic updated. With token based mitigation, by contrast, you have full control over the CSRF mitigation: for browsers to alter CSRF tokens, they would literally have to change the HTML content of rendered pages (which no browser would ever want to do!).&lt;br /&gt;
&lt;br /&gt;
When there is an XSS vulnerability on a page of an application protected with the Origin and/or Referer header, the level of effort required to exploit state changing operations (that are typically vulnerable to CSRF) on other pages is much lower (grab the parameters and forge a request, as the Origin and Referer headers are included by default by browsers) than with token based mitigation (where the attacker needs to download the target page, parse the DOM for the token, construct a forged request, and send it to the server).&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Although the concept of an origin header stemmed from [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper that references robust CSRF defenses, the initial [https://tools.ietf.org/html/rfc6454 origin header RFC] does not reference mitigating CSRF in any way (another [https://tools.ietf.org/id/draft-abarth-origin-03.html draft version] does, however).&lt;br /&gt;
&lt;br /&gt;
=== Double Submit Cookie ===&lt;br /&gt;
If maintaining server-side state for the CSRF token is problematic, an alternative defense is the double submit cookie technique, which is easy to implement and stateless. In this technique, we send a random value both in a cookie and as a request parameter, and the server verifies that the cookie value and the request value match. When a user visits (even before authenticating, to prevent login CSRF), the site should generate a cryptographically strong pseudorandom value and set it as a cookie on the user's machine, separate from the session identifier. The site then requires every transaction request to include this pseudorandom value as a hidden form value (or other request parameter/header). If the two values match at the server side, the server accepts the request as legitimate; if they don't, it rejects the request.&lt;br /&gt;
&lt;br /&gt;
There’s a belief that this technique would work because a cross origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy. This means that while an attacker can force a victim to send any value with a malicious CSRF request, the attacker will be unable to modify or read the value stored in the cookie (with which the server compares the token value).&lt;br /&gt;
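&lt;br /&gt;
In code, the double submit check reduces to generating a strong random value once and comparing the two copies of it on every state changing request. The sketch below is only an illustration of that comparison; the class and method names are ours, not from any framework, and a real deployment would wire the token into a cookie and a hidden field as described above.&lt;br /&gt;

```java
import java.security.SecureRandom;

// Illustrative sketch of the double submit comparison (names are ours, not
// from any framework). The server stores no state: it only compares the
// token copy from the cookie with the copy from the request parameter/header.
class DoubleSubmitCheck {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate a cryptographically strong pseudorandom token, hex-encoded.
    static String newToken() {
        byte[] buf = new byte[32];
        RANDOM.nextBytes(buf);
        StringBuilder sb = new StringBuilder();
        for (byte b : buf) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }

    // Accept the request only when both copies are present and identical.
    static boolean isValid(String cookieValue, String requestValue) {
        if (cookieValue == null) {
            return false;
        }
        if (cookieValue.trim().isEmpty()) {
            return false;
        }
        return cookieValue.equals(requestValue);
    }
}
```

The token would be set as a cookie on the user's first visit and echoed back by the page in a hidden form field or custom header on each transaction.&lt;br /&gt;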
&lt;br /&gt;
There are a couple of drawbacks with the assumptions made here, namely the problem of trusting subdomains and the need to properly configure the whole site to accept HTTPS connections only. The [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf Blackhat talk] by Rich Lundeen describes these drawbacks:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;''With double submit, if an attacker can write a cookie they can obviously defeat the protection. And again, writing cookies is significantly easier then reading them. The fact that cookies can be written is difficult for many people to understand. After all, doesn't the same origin policy specify that one domain cannot access cookies from another domain? However, there are two common scenarios where writing cookies across domains is possible:''&lt;br /&gt;
&lt;br /&gt;
''a) While it's true that hellokitty.marketing.example.com cannot read cookies or access the DOM from secure.example.com because of the same origin policy, hellokitty.marketing.example.com can write cookies to the parent domain (example.com), and these cookies are then consumed by secure.example.com (secure.example.com has no good way to distinguish which site set the cookie). Additionally, there are methods of forcing secure.example.com to always accept your cookie first. What this means is that XSS in hellokitty.marketing.example.com is able to overwrite cookies in secure.example.com.''&lt;br /&gt;
&lt;br /&gt;
''b) If an attacker is in the middle, they can usually force a request to the same domain over HTTP. If an application is hosted at &amp;lt;nowiki&amp;gt;https://secure.example.com&amp;lt;/nowiki&amp;gt;, even if the cookies are set with the secure flag, a man in the middle can force connections to &amp;lt;nowiki&amp;gt;http://secure.example.com&amp;lt;/nowiki&amp;gt; and set (overwrite) any arbitrary cookies (even though the secure flag prevents the attacker from reading those cookies). Even if the HSTS header is set on the server and the browser visiting the site supports HSTS (this would prevent a man in the middle from forcing plain text HTTP requests) unless the HSTS header is set in a way that includes all subdomains, a man in the middle can simply force a request to a separate subdomain and overwrite cookies similar to 1. In other words, as long as &amp;lt;nowiki&amp;gt;http://hellokitty.marketing.example.com&amp;lt;/nowiki&amp;gt; doesn't force HTTPS, then an attacker can overwrite cookies on any example.com subdomain.''&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So, unless you are sure that your subdomains are fully secured and only accept HTTPS connections (we believe it’s difficult to guarantee at large enterprises), you should not rely on the Double Submit Cookie technique as a primary mitigation for CSRF.&lt;br /&gt;
&lt;br /&gt;
A variant of the double submit cookie that mitigates both of the risks mentioned above is to include the token in an encrypted cookie - often the authentication cookie itself - and then, at the server side, match it (after decrypting the authentication cookie) against the token in the hidden form field or in a parameter/header for Ajax calls. This works because a subdomain has no way to overwrite a properly crafted encrypted cookie without the necessary information, such as the encryption key.&lt;br /&gt;
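&lt;br /&gt;
One way to realize this variant without full encryption is to derive the cookie value from the session identifier with a keyed HMAC, so that only the server, which holds the key, can mint a value that will match. The sketch below is our own illustration under those assumptions (the class name and hex encoding are ours; HmacSHA256 is a standard JCA algorithm name), not the cheat sheet's prescribed implementation.&lt;br /&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Illustrative HMAC-based variant of the double submit cookie: the token is
// bound to the session id with a server-side key, so a subdomain cannot
// forge a matching cookie value without that key.
class SignedTokenUtil {

    static String sign(String sessionId, byte[] key) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] out = mac.doFinal(sessionId.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : out) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Recompute and compare in constant time to avoid timing side channels.
    static boolean verify(String sessionId, byte[] key, String token) {
        byte[] expected = sign(sessionId, key).getBytes(StandardCharsets.UTF_8);
        byte[] presented = token.getBytes(StandardCharsets.UTF_8);
        return MessageDigest.isEqual(expected, presented);
    }
}
```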
&lt;br /&gt;
=== SameSite Cookie Attribute ===&lt;br /&gt;
SameSite is a cookie attribute (similar to [[HttpOnly]], Secure, etc.) introduced by Google to mitigate CSRF attacks. It is defined in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 this] Internet Draft. This attribute helps prevent the browser from sending cookies along with cross-site requests. Its possible values are lax and strict.&lt;br /&gt;
&lt;br /&gt;
The strict value prevents the cookie from being sent by the browser to the target site in all cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user will not be able to access the project. A bank website, however, most likely doesn't want to allow any transactional pages to be linked from external sites, so the strict flag would be most appropriate.&lt;br /&gt;
&lt;br /&gt;
The lax value provides a reasonable balance between security and usability for websites that want to maintain a user's logged-in session after the user arrives from an external link. In the above GitHub scenario, the session cookie would be allowed when following a regular link from an external website, while being blocked for CSRF-prone request methods such as POST. The only cross-site requests allowed in lax mode are top-level navigations that also use “safe” HTTP methods (more details [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7.1 here]).&lt;br /&gt;
&lt;br /&gt;
Example of cookies using this attribute:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Strict&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Lax&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Support for this attribute is increasing across browsers, but some have yet to adopt it. As of August 2018, the SameSite attribute was supported by the browsers of 68.92% of Internet users (detailed statistics are [https://caniuse.com/#feat=same-site-cookie-attribute here]).&lt;br /&gt;
&lt;br /&gt;
Though this technique seems efficient in mitigating CSRF attacks, it is still at an early stage (in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft]) and does not yet have full browser support, as mentioned above. Google’s [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft] also mentions a couple of cases where attackers can make forged requests appear as same-site requests (and thus cause SameSite cookies to be sent).&lt;br /&gt;
&lt;br /&gt;
Considering the factors above, SameSite is not recommended as a primary defense. Google agrees with this stance and strongly encourages developers to deploy server-side defenses, such as tokens, to mitigate CSRF more fully.&lt;br /&gt;
&lt;br /&gt;
=== Use of Custom Request Headers ===&lt;br /&gt;
&lt;br /&gt;
Adding CSRF tokens, a double submit cookie and value, an encrypted token, or another defense that involves changing the UI can frequently be complex or otherwise problematic. An alternate defense that is particularly well suited for AJAX/XHR endpoints is the use of a custom request header. This defense relies on the [https://en.wikipedia.org/wiki/Same-origin_policy same-origin policy (SOP)] restriction that only JavaScript can be used to add a custom header, and only within its origin. By default, browsers do not allow JavaScript to make cross origin requests.&lt;br /&gt;
&lt;br /&gt;
A particularly attractive custom header and value to use is “X-Requested-With: XMLHttpRequest”, because most JavaScript libraries already add this header to the requests they generate by default. However, some do not. For example, AngularJS used to, but no longer does. For more information, see [https://github.com/angular/angular.js/commit/3a75b1124d062f64093a90b26630938558909e8d their rationale] and how to add it back.&lt;br /&gt;
&lt;br /&gt;
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints in order to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive to REST services. You can always add your own custom header and value if that is preferred.&lt;br /&gt;
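&lt;br /&gt;
The server-side part of this defense is essentially a presence check per endpoint. A minimal sketch follows; the class and method names, and the exact list of safe methods exempted from the check, are our own assumptions for illustration.&lt;br /&gt;

```java
// Illustrative check for the custom header defense on AJAX endpoints.
// Safe methods are exempt; state changing methods must carry the header
// that JavaScript can only add from within the same origin.
class CustomHeaderCheck {

    static boolean isAllowed(String method, String xRequestedWith) {
        // Safe methods should not change state, so no CSRF check is needed.
        if (method.equals("GET")) {
            return true;
        }
        if (method.equals("HEAD")) {
            return true;
        }
        if (method.equals("OPTIONS")) {
            return true;
        }
        // For state changing methods, require the expected header value.
        return "XMLHttpRequest".equals(xRequestedWith);
    }
}
```

In a servlet filter, the second argument would come from something like httpReq.getHeader("X-Requested-With").&lt;br /&gt;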
&lt;br /&gt;
This defense technique is specifically discussed in section 4.3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. However, bypasses of this defense using Flash were documented as early as 2008, and again as recently as 2015, when Mathias Karlsson used one to [https://hackerone.com/reports/44146 exploit a CSRF flaw in Vimeo]. A Flash attack can't spoof the Origin or Referer headers, so we believe that checking both of them should prevent Flash-based CSRF bypasses (should any come up in the future).&lt;br /&gt;
&lt;br /&gt;
Besides possible future bypasses such as Flash, using a static header makes it easier to exploit other state changing operations in the application (similar to the earlier explanation of why exploitation is easier with origin/referer header checks than with token based mitigation). Including a random token instead of a static header value is more or less equivalent to the token based approach described in the Primary Defenses section. Developers should also consider that in an application with both Ajax calls and form tags, this technique only protects the Ajax calls from CSRF; the &amp;lt;form&amp;gt; tags must still be protected with approaches described in this document, such as tokens, because custom headers cannot be set directly on form submissions. Finally, the CORS configuration must also be robust for this solution to work effectively, as custom headers on requests coming from other domains trigger a pre-flight CORS check.&lt;br /&gt;
&lt;br /&gt;
=== User Interaction Based CSRF Defense ===&lt;br /&gt;
&lt;br /&gt;
None of the techniques referenced here requires user interaction, but sometimes it’s easier or more appropriate to involve the user in the transaction to prevent unauthorized operations (forged via CSRF or otherwise). The following are some examples of techniques that can act as a strong CSRF defense when implemented correctly.&lt;br /&gt;
* Re-Authentication (password or stronger)&lt;br /&gt;
* One-time Token&lt;br /&gt;
* CAPTCHA&lt;br /&gt;
While these are very strong CSRF defenses, they have a significant impact on the user experience. For applications that need high security for some operations (password change, money transfer, etc.), these techniques should be used along with token based mitigation. Please note that tokens by themselves can mitigate CSRF; developers should use these techniques only to achieve additional security for their highly sensitive operations.&lt;br /&gt;
&lt;br /&gt;
=== Login CSRF ===&lt;br /&gt;
Most developers tend to ignore CSRF vulnerabilities on login forms, assuming that CSRF is not applicable there because the user is not yet authenticated. That assumption is false. A CSRF vulnerability can still occur on a login form where the user is not authenticated, but its impact and risk are quite different from those of a general CSRF vulnerability (where a user is authenticated).&lt;br /&gt;
&lt;br /&gt;
With a CSRF vulnerability on a login form, an attacker can make a victim log in to the attacker's account and then learn behavior from the victim's searches. For more information about login CSRF and other risks, see section 3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper.&lt;br /&gt;
&lt;br /&gt;
Login CSRF can be mitigated by creating pre-sessions (sessions before a user is authenticated) and including tokens in the login form. You can use any of the techniques mentioned above to generate the tokens. Pre-sessions can be transitioned to real sessions once the user is authenticated. This technique is described in [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery, section 4.1].&lt;br /&gt;
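&lt;br /&gt;
The pre-session token cycle can be sketched as below. The class is our own illustration, with an in-memory map standing in for whatever session machinery the framework actually provides; it shows only the issue/validate steps that happen before authentication.&lt;br /&gt;

```java
import java.security.SecureRandom;
import java.util.HashMap;

// Illustrative pre-session token store for login CSRF: a token is issued
// along with the login page and must accompany the login POST; on successful
// authentication the pre-session is promoted to a real session.
class PreSessionStore {

    private static final SecureRandom RANDOM = new SecureRandom();
    private final HashMap tokens = new HashMap();

    // Issue a token for an unauthenticated pre-session id.
    String issue(String preSessionId) {
        byte[] buf = new byte[16];
        RANDOM.nextBytes(buf);
        StringBuilder sb = new StringBuilder();
        for (byte b : buf) {
            sb.append(String.format("%02x", b));
        }
        String token = sb.toString();
        tokens.put(preSessionId, token);
        return token;
    }

    // Validate the token sent with the login form against the stored one.
    boolean validate(String preSessionId, String token) {
        Object expected = tokens.get(preSessionId);
        if (expected == null) {
            return false;
        }
        return expected.equals(token);
    }
}
```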
&lt;br /&gt;
If your threat model treats sub-domains under your master domain as untrusted, login CSRF is difficult to mitigate. In these cases, a strict subdomain- and path-level referer header validation (detailed in section 6.1) can mitigate CSRF on login forms to an extent; this works because most login pages are served over HTTPS (so the referer is not stripped) and are also linked from home pages.&lt;br /&gt;
&lt;br /&gt;
== Not So Popular CSRF Mitigations ==&lt;br /&gt;
&lt;br /&gt;
=== Triple Submit Cookie ===&lt;br /&gt;
This mitigation was proposed by [https://www.owasp.org/images/e/e6/AppSecEU2012_Wilander.pdf John Wilander in 2012 at OWASP AppSec Research]. The technique adds an additional step to the double submit cookie approach by verifying whether the request contains two cookies with the same name (note that an attacker needs to write an additional cookie to bypass the double submit cookie mitigation). Though it mitigates the double submit cookie bypasses discussed above, it introduces new problems such as cookie jar overflow (details of these and other issues [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf here] and [https://webstersprodigy.net/2012/08/03/analysis-of-john-wilanders-triple-submit-cookies/ here]). We have not been able to find any real-world implementations of this mitigation so far.&lt;br /&gt;
&lt;br /&gt;
=== Content-Type Header Validation ===&lt;br /&gt;
This technique is better known than the triple submit cookie mitigation. First of all, the Content-Type header is not designed for security (initial RFC [https://tools.ietf.org/html/rfc1049 here], later well defined in [https://www.ietf.org/rfc/rfc2045.txt this] RFC) but only to let receiving agents know what type of data they are handling, so that they can invoke the corresponding parsers. What is treated as a CSRF mitigation is the pre-flighting behavior attached to this header: browsers send a pre-flight request if the header has a value other than application/x-www-form-urlencoded, multipart/form-data, or text/plain. The idea is to force all requests to carry a header value that triggers a pre-flight (such as application/json), so that the server side can reject cross-origin requests via CORS/SOP during that pre-flight.&lt;br /&gt;
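&lt;br /&gt;
For illustration only (recall that this is not a recommended defense), the classification behind this idea can be sketched as below. The class name is ours; the three exempt types are exactly the ones listed above that never trigger a pre-flight.&lt;br /&gt;

```java
// Illustrative classifier for the content-type validation idea: only
// requests whose Content-Type would force a CORS pre-flight are accepted.
class ContentTypeCheck {

    static boolean forcesPreflight(String contentType) {
        if (contentType == null) {
            return false;
        }
        String ct = contentType.toLowerCase();
        // These three types can be sent cross-origin without a pre-flight.
        if (ct.startsWith("application/x-www-form-urlencoded")) {
            return false;
        }
        if (ct.startsWith("multipart/form-data")) {
            return false;
        }
        if (ct.startsWith("text/plain")) {
            return false;
        }
        return true;
    }
}
```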
&lt;br /&gt;
This approach has two main problems. First, it mandates that all requests carry a header value that forces a pre-flight, regardless of the real use case. Second, it relies on a feature that was not designed for security to mitigate a security vulnerability. When a bug was discovered in the Chrome API, browser architects even considered removing this pre-flighting behavior. Because this header was not designed as a security control, architects may redesign it to better serve its primary purpose. In the future, new content-type header values may be introduced (to better support various use cases), which could put systems relying on this header for CSRF mitigation in trouble. For more information, see [https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2017/september/common-csrf-prevention-misconceptions/ Common CSRF Prevention Misconceptions].&lt;br /&gt;
&lt;br /&gt;
== CSRF Mitigation Myths ==&lt;br /&gt;
The following techniques are often presumed to be CSRF mitigations, but none of them actually mitigates a CSRF vulnerability.&lt;br /&gt;
* '''CORS''': CORS is a mechanism designed to relax the Same-Origin-Policy when cross-origin communication between sites is required. It is neither designed to prevent CSRF attacks, nor does it do so.&lt;br /&gt;
* '''Using HTTPS''': Using HTTPS by itself provides no protection from CSRF attacks. Resources served over HTTPS are still vulnerable to CSRF if the mitigations described above are not in place.&lt;br /&gt;
* More myths can be found [[Cross-Site Request Forgery (CSRF)|here]]&lt;br /&gt;
&lt;br /&gt;
== Implementation reference example  ==&lt;br /&gt;
The following JEE web filter provides a reference example for some of the concepts described in this cheat sheet. It implements the following stateless mitigations ([https://github.com/aramrami/OWASP-CSRFGuard OWASP CSRFGuard] covers a stateful approach):&lt;br /&gt;
* Verifying same origin with standard headers&lt;br /&gt;
* Double submit cookie&lt;br /&gt;
* SameSite cookie attribute&lt;br /&gt;
'''Please note''' that this is only a reference sample and is not complete (for example, it has no branch to direct the control flow when the origin/referer header check succeeds, nor does it perform port/host/protocol level validation of the referer header). Developers are recommended to build their complete mitigation on top of this reference sample. Developers should also implement standard authentication and authorization checks before checking for CSRF.&lt;br /&gt;
&lt;br /&gt;
Source is also located [https://github.com/righettod/poc-csrf here] and provides a runnable POC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.slf4j.Logger;&lt;br /&gt;
import org.slf4j.LoggerFactory;&lt;br /&gt;
&lt;br /&gt;
import javax.servlet.Filter;&lt;br /&gt;
import javax.servlet.FilterChain;&lt;br /&gt;
import javax.servlet.FilterConfig;&lt;br /&gt;
import javax.servlet.ServletException;&lt;br /&gt;
import javax.servlet.ServletRequest;&lt;br /&gt;
import javax.servlet.ServletResponse;&lt;br /&gt;
import javax.servlet.annotation.WebFilter;&lt;br /&gt;
import javax.servlet.http.Cookie;&lt;br /&gt;
import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
import javax.servlet.http.HttpServletResponseWrapper;&lt;br /&gt;
import javax.xml.bind.DatatypeConverter;&lt;br /&gt;
import java.io.IOException;&lt;br /&gt;
import java.net.MalformedURLException;&lt;br /&gt;
import java.net.URL;&lt;br /&gt;
import java.security.SecureRandom;&lt;br /&gt;
import java.util.Arrays;&lt;br /&gt;
&lt;br /&gt;
/**&lt;br /&gt;
 * Filter in charge of validating each incoming HTTP request for headers and CSRF token.&lt;br /&gt;
 * It is called for all requests to the backend destination.&lt;br /&gt;
 *&lt;br /&gt;
 * We use the approach in which:&lt;br /&gt;
 * - The CSRF token is changed after each valid HTTP exchange&lt;br /&gt;
 * - The custom Header name for the CSRF token transmission is fixed&lt;br /&gt;
 * - A CSRF token is associated with a backend service URI in order to support multiple parallel Ajax requests from the same application&lt;br /&gt;
 * - The CSRF cookie name is the backend service name prefixed with a fixed prefix&lt;br /&gt;
 *&lt;br /&gt;
 * Here for the POC we show the &amp;quot;access denied&amp;quot; reason in the response, but production code should only return a generic message!&lt;br /&gt;
 *&lt;br /&gt;
 * @see &amp;quot;https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://wiki.mozilla.org/Security/Origin&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://chloe.re/2016/04/13/goodbye-csrf-samesite-to-the-rescue/&amp;quot;&lt;br /&gt;
 */&lt;br /&gt;
@WebFilter(&amp;quot;/backend/*&amp;quot;)&lt;br /&gt;
public class CSRFValidationFilter implements Filter {&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * JVM param name used to define the target origin&lt;br /&gt;
     */&lt;br /&gt;
    public static final String TARGET_ORIGIN_JVM_PARAM_NAME = &amp;quot;target.origin&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Name of the custom HTTP header used to transmit the CSRF token and also to prefix &lt;br /&gt;
     * the CSRF cookie for the expected backend service&lt;br /&gt;
     */&lt;br /&gt;
    private static final String CSRF_TOKEN_NAME = &amp;quot;X-TOKEN&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Logger&lt;br /&gt;
     */&lt;br /&gt;
    private static final Logger LOG = LoggerFactory.getLogger(CSRFValidationFilter.class);&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Application expected deployment domain: named &amp;quot;Target Origin&amp;quot; in OWASP CSRF article&lt;br /&gt;
     */&lt;br /&gt;
    private URL targetOrigin;&lt;br /&gt;
&lt;br /&gt;
    /***&lt;br /&gt;
     * Secure generator&lt;br /&gt;
     */&lt;br /&gt;
    private final SecureRandom secureRandom = new SecureRandom();&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {&lt;br /&gt;
        HttpServletRequest httpReq = (HttpServletRequest) request;&lt;br /&gt;
        HttpServletResponse httpResp = (HttpServletResponse) response;&lt;br /&gt;
        String accessDeniedReason;&lt;br /&gt;
&lt;br /&gt;
        /* STEP 1: Verifying Same Origin with Standard Headers */&lt;br /&gt;
        //Try to get the source from the &amp;quot;Origin&amp;quot; header&lt;br /&gt;
        String source = httpReq.getHeader(&amp;quot;Origin&amp;quot;);&lt;br /&gt;
        if (this.isBlank(source)) {&lt;br /&gt;
            //If empty then fallback on &amp;quot;Referer&amp;quot; header&lt;br /&gt;
            source = httpReq.getHeader(&amp;quot;Referer&amp;quot;);&lt;br /&gt;
            //If this one is empty too then we trace the event and block the request (as the article recommends)...&lt;br /&gt;
            if (this.isBlank(source)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: ORIGIN and REFERER request headers are both absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
                return;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //Compare the source against the expected target origin&lt;br /&gt;
        URL sourceURL = new URL(source);&lt;br /&gt;
        if (!this.targetOrigin.getProtocol().equals(sourceURL.getProtocol()) || !this.targetOrigin.getHost().equals(sourceURL.getHost()) &lt;br /&gt;
		|| this.targetOrigin.getPort() != sourceURL.getPort()) {&lt;br /&gt;
            //One of the parts does not match so we trace the event and block the request&lt;br /&gt;
            accessDeniedReason = String.format(&amp;quot;CSRFValidationFilter: Protocol/Host/Port do not fully match so we block the request! (%s != %s) &amp;quot;, &lt;br /&gt;
				this.targetOrigin, sourceURL);&lt;br /&gt;
            LOG.warn(accessDeniedReason);&lt;br /&gt;
            httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            return;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        /* STEP 2: Verifying CSRF token using &amp;quot;Double Submit Cookie&amp;quot; approach */&lt;br /&gt;
        //If the CSRF token cookie is absent from the request then we provide one in the response but we stop the process at this stage.&lt;br /&gt;
        //This is how the initial token is provided&lt;br /&gt;
        Cookie tokenCookie = null;&lt;br /&gt;
        if (httpReq.getCookies() != null) {&lt;br /&gt;
            String csrfCookieExpectedName = this.determineCookieName(httpReq);&lt;br /&gt;
            tokenCookie = Arrays.stream(httpReq.getCookies()).filter(c -&amp;gt; c.getName().equals(csrfCookieExpectedName)).findFirst().orElse(null);&lt;br /&gt;
        }&lt;br /&gt;
        if (tokenCookie == null || this.isBlank(tokenCookie.getValue())) {&lt;br /&gt;
            LOG.info(&amp;quot;CSRFValidationFilter: CSRF cookie absent or value is null/empty so we provide one and return an HTTP NO_CONTENT response !&amp;quot;);&lt;br /&gt;
            //Add the CSRF token cookie and header&lt;br /&gt;
            this.addTokenCookieAndHeader(httpReq, httpResp);&lt;br /&gt;
            //Set response state to &amp;quot;204 No Content&amp;quot; in order to allow the requester to clearly identify an initial response providing the initial CSRF token&lt;br /&gt;
            httpResp.setStatus(HttpServletResponse.SC_NO_CONTENT);&lt;br /&gt;
        } else {&lt;br /&gt;
            //If the cookie is present then we pass to validation phase&lt;br /&gt;
            //Get token from the custom HTTP header (part under control of the requester)&lt;br /&gt;
            String tokenFromHeader = httpReq.getHeader(CSRF_TOKEN_NAME);&lt;br /&gt;
            //If empty then we trace the event and we block the request&lt;br /&gt;
            if (this.isBlank(tokenFromHeader)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header is absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else if (!tokenFromHeader.equals(tokenCookie.getValue())) {&lt;br /&gt;
                //Verify that token from header and one from cookie are the same&lt;br /&gt;
                //Here is not the case so we trace the event and we block the request&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header and via Cookie are not equals so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else {&lt;br /&gt;
                //Token from the header and token from the cookie match&lt;br /&gt;
                //So we let the request reach the target component (ServiceServlet, jsp...) and add a new token when the response comes back&lt;br /&gt;
                HttpServletResponseWrapper httpRespWrapper = new HttpServletResponseWrapper(httpResp);&lt;br /&gt;
                chain.doFilter(request, httpRespWrapper);&lt;br /&gt;
                //Add the CSRF token cookie and header&lt;br /&gt;
                this.addTokenCookieAndHeader(httpReq, httpRespWrapper);&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void init(FilterConfig filterConfig) throws ServletException {&lt;br /&gt;
        //To ease configuration, we load the expected target origin from a JVM property&lt;br /&gt;
        //Reconfiguration only requires an application restart, which is generally acceptable&lt;br /&gt;
        try {&lt;br /&gt;
            this.targetOrigin = new URL(System.getProperty(TARGET_ORIGIN_JVM_PARAM_NAME));&lt;br /&gt;
        } catch (MalformedURLException e) {&lt;br /&gt;
            LOG.error(&amp;quot;Cannot init the filter !&amp;quot;, e);&lt;br /&gt;
            throw new ServletException(e);&lt;br /&gt;
        }&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter init, set expected target origin to '{}'.&amp;quot;, this.targetOrigin);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void destroy() {&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter shutdown&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Check if a string is null or empty (including containing only spaces)&lt;br /&gt;
     *&lt;br /&gt;
     * @param s Source string&lt;br /&gt;
     * @return TRUE if source string is null or empty (including containing only spaces)&lt;br /&gt;
     */&lt;br /&gt;
    private boolean isBlank(String s) {&lt;br /&gt;
        return s == null || s.trim().isEmpty();&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Generate a new CSRF token&lt;br /&gt;
     *&lt;br /&gt;
     * @return The token as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String generateToken() {&lt;br /&gt;
        byte[] buffer = new byte[50];&lt;br /&gt;
        this.secureRandom.nextBytes(buffer);&lt;br /&gt;
        return DatatypeConverter.printHexBinary(buffer);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Determine the name of the CSRF cookie for the targeted backend service&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest Source HTTP request&lt;br /&gt;
     * @return The name of the cookie as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String determineCookieName(HttpServletRequest httpRequest) {&lt;br /&gt;
        String backendServiceName = httpRequest.getRequestURI().replaceAll(&amp;quot;/&amp;quot;, &amp;quot;-&amp;quot;);&lt;br /&gt;
        return CSRF_TOKEN_NAME + &amp;quot;-&amp;quot; + backendServiceName;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Add the CSRF token cookie and header to the provided HTTP response object&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest  Source HTTP request&lt;br /&gt;
     * @param httpResponse HTTP response object to update&lt;br /&gt;
     */&lt;br /&gt;
    private void addTokenCookieAndHeader(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {&lt;br /&gt;
        //Get new token&lt;br /&gt;
        String token = this.generateToken();&lt;br /&gt;
        //Add the cookie manually because the current Cookie class implementation does not support the &amp;quot;SameSite&amp;quot; attribute&lt;br /&gt;
        //We leave the adding of the &amp;quot;Secure&amp;quot; cookie attribute to the reverse proxy rewriting...&lt;br /&gt;
        //Here we lock the cookie against JS access and use the new SameSite attribute protection&lt;br /&gt;
        String cookieSpec = String.format(&amp;quot;%s=%s; Path=%s; HttpOnly; SameSite=Strict&amp;quot;, this.determineCookieName(httpRequest), token, httpRequest.getRequestURI());&lt;br /&gt;
        httpResponse.addHeader(&amp;quot;Set-Cookie&amp;quot;, cookieSpec);&lt;br /&gt;
        //Return the token in a custom header to give the JS code access to it&lt;br /&gt;
        httpResponse.setHeader(CSRF_TOKEN_NAME, token);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Authors and Primary Editors  ==&lt;br /&gt;
Manideep Konakandla (Amazon Application Security Team) - http://www.manideepk.com&lt;br /&gt;
&lt;br /&gt;
Dave Wichers - dave.wichers[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Paul Petefish - https://www.linkedin.com/in/paulpetefish&lt;br /&gt;
&lt;br /&gt;
Eric Sheridan - eric.sheridan[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Dominique Righetto - dominique.righetto[at]owasp.org&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other Cheat Sheets ==&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;br /&gt;
[[Category:Popular]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_SAMM_Project&amp;diff=246908</id>
		<title>OWASP SAMM Project</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_SAMM_Project&amp;diff=246908"/>
				<updated>2019-01-29T16:04:51Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Quick Download v1.1.1 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Main = &lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:90px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File: flagship_big.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''OWASP SAMM v1.5 available in the downloads section!'''&lt;br /&gt;
&lt;br /&gt;
We are now working on the Beta release of OWASP SAMMv2; our work in progress is available [https://owaspsamm.org online on our new web site]. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Join our monthly calls'''&lt;br /&gt;
* The monthly call is on the 2nd Wednesday of each month at 21h30 CEST / 3:30pm EST. &amp;lt;br&amp;gt;&lt;br /&gt;
* Please join our GoToMeeting: https://global.gotomeeting.com/join/262891661 &amp;lt;br&amp;gt;&lt;br /&gt;
* The call is open for everybody interested in SAMM or who wants to work on SAMM. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Join us on the OWASP SAMM project Slack channel'''&lt;br /&gt;
* Join our project slack channel on https://owasp.slack.com/messages/C0VF1EJGH &lt;br /&gt;
* If you do not have an OWASP Slack workspace account yet, contact one of our project leaders to get an invite link.&lt;br /&gt;
&lt;br /&gt;
'''2018 OWASP SAMM Summit (4-8 JUNE 2018, London)'''&lt;br /&gt;
* Join our 2018 OWASP SAMM Summit near London as part of the [https://open-security-summit.org/ Open Security Summit].&amp;lt;br&amp;gt;&lt;br /&gt;
* We will organize working sessions in a 5-day sprint to draft SAMM v2.0. &lt;br /&gt;
* Register online [https://open-security-summit.org/tickets/ here]&lt;br /&gt;
* Sponsor SAMM2; more details [https://www.owasp.org/index.php/OWASP_SAMM_Project#tab=Project_Sponsors here]&lt;br /&gt;
&lt;br /&gt;
The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization. SAMM helps you:&lt;br /&gt;
* '''Evaluate an organization’s existing software security practices'''&lt;br /&gt;
* '''Build a balanced software security assurance program in well-defined iterations'''&lt;br /&gt;
* '''Demonstrate concrete improvements to a security assurance program'''&lt;br /&gt;
* '''Define and measure security-related activities throughout an organization'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Dell uses OWASP’s Software Assurance Maturity Model (OWASP SAMM) to help focus our resources and determine which components of our secure application development program to prioritize.'' ('''Michael J. Craigue, Information Security &amp;amp; Compliance, Dell, Inc.''')&lt;br /&gt;
&lt;br /&gt;
Follow OWASP SAMM on twitter: [https://twitter.com/owaspsamm @owaspsamm]&lt;br /&gt;
&lt;br /&gt;
{{Social Media Links}}&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;padding-left:25px;width:200px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download v1.5 ==&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/OWASP_SAMM_v1.5.zip All SAMM v1.5 files (.zip)] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_Core_V1-5_FINAL.pdf SAMM Core Model] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_How_To_V1-5_FINAL.pdf How-To Guide] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_Quick_Start_V1-5_FINAL.pdf Quick Start Guide] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_Assessment_Toolbox_v1.5_FINAL.xlsx SAMM Toolbox] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_Assessment_Toolbox_v1.5-Example_FINAL.xlsx SAMM Toolbox Example] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/ OWASP SAMM on GitHub]&lt;br /&gt;
&lt;br /&gt;
== Quick Download v1.1.1 ==&lt;br /&gt;
&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.1/Final/SAMM_Core_V1-1-Final-1page.pdf SAMM Core Model]&amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.1/Final/SAMM_How_To_V1-1-Final-1page.pdf How-To Guide] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.1/Final/SAMM_Quick_Start_V1-1-Final-1page.pdf Quick-Start Guide] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.1/Final/SAMM_Assessment_Toolbox_v1-1-Final.xlsx Updated SAMM Tool Box]&amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm OWASP SAMM on GitHub]&lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
Please see the [https://www.owasp.org/index.php/OWASP_SAMM_Project#News News] and [https://www.owasp.org/index.php/OWASP_SAMM_Project#Talks Talks] tabs&lt;br /&gt;
&lt;br /&gt;
== Change Log ==&lt;br /&gt;
* OWASP SAMM v1.5 Released! ([http://www.prnewswire.com/news-releases/owasp-samm-v15-helps-organizations-improve-their-security-posture-300439237.html Press Release])&lt;br /&gt;
* OWASP SAMM v1.1 Released! ([http://www.prnewswire.com/news-releases/owasp-releases-software-assurance-maturity-model-samm-version-11-for-improving-software-security-300236836.html Press Release])&lt;br /&gt;
* OpenSAMM v1.1 RC - [http://lists.owasp.org/pipermail/samm/2015-December/000758.html available for review]&lt;br /&gt;
&lt;br /&gt;
== Email List ==&lt;br /&gt;
&lt;br /&gt;
Questions? Please ask on the [https://lists.owasp.org/mailman/listinfo/samm SAMM Mailing List]&lt;br /&gt;
&lt;br /&gt;
== Project Leaders ==&lt;br /&gt;
&lt;br /&gt;
[https://www.owasp.org/index.php/User:Sdeleersnyder Seba Deleersnyder] &amp;lt;br /&amp;gt;  [https://www.owasp.org/index.php/User:Bart_De_Win Bart De Win]&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | rowspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-flagship-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Flagship_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;center&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;center&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Cc-button-y-sa-small.png|link=http://creativecommons.org/licenses/by-sa/3.0/]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Project_Type_Files_DOC.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Browse Online =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&lt;br /&gt;
The foundation of the model is built upon the core business functions of software development, with security practices tied to each (see diagram below). The building blocks of the model are the three maturity levels defined for each of the twelve security practices. These define a wide variety of activities in which an organization could engage to reduce security risks and increase software assurance. Additional details are included to measure successful activity performance, understand the associated assurance benefits, and estimate personnel and other costs.&lt;br /&gt;
&lt;br /&gt;
[[Image:SAMM-Overview.png|720px]]&lt;br /&gt;
&lt;br /&gt;
===== Click on any badge to learn more =====&lt;br /&gt;
&lt;br /&gt;
{| cellpadding=&amp;quot;1&amp;quot;&lt;br /&gt;
|[https://www.owasp.org/index.php/SAMM_-_Governance https://www.owasp.org/images/f/f7/G.png]&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Strategy &amp;amp; Metrics'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Strategy_&amp;amp;_Metrics|abbr=SM|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Policy &amp;amp; Compliance'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Policy_&amp;amp;_Compliance|abbr=PC|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Education &amp;amp; Guidance'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Education_&amp;amp;_Guidance|abbr=EG|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.owasp.org/index.php/SAMM_-_Construction https://www.owasp.org/images/e/ee/C.png]&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Threat Assessment'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Threat_Assessment|abbr=TA|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Security Requirements'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Security_Requirements|abbr=SR|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Secure Architecture'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Secure_Architecture|abbr=SA|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.owasp.org/index.php/SAMM_-_Verification https://www.owasp.org/images/8/83/V.png]&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Design Review'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Design_Review|abbr=DR|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Code Review'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Code_Review|abbr=CR|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Security Testing'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Security_Testing|abbr=ST|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.owasp.org/index.php/SAMM_-_Deployment https://www.owasp.org/images/5/54/D.png]&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Vulnerability Management'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Vulnerability_Management|abbr=VM|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Environment Hardening'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Environment_Hardening|abbr=EH|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Operational Enablement'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Operational_Enablement|abbr=OE|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Downloads =&lt;br /&gt;
&lt;br /&gt;
The latest work in progress can be found on GitHub: https://github.com/OWASP/samm&lt;br /&gt;
&lt;br /&gt;
Download SAMM v1.5&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/OWASP_SAMM_v1.5.zip All SAMM v1.5 files (.zip)] Zip file containing all the v1.5 files below;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Core_V1-5_FINAL.pdf SAMM Core Model] document, explaining the maturity model;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_How_To_V1-5_FINAL.pdf How-To Guide] with implementation guidance;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Quick_Start_V1-5_FINAL.pdf Quick-Start Guide] with different steps to improve your secure software practice;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Assessment_Toolbox_v1.5_FINAL.xlsx SAMM Toolbox] to perform SAMM assessments and create SAMM roadmaps;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Assessment_Toolbox_v1.5-Example_FINAL.xlsx SAMM Tool Box Example] to provide an example SAMM assessment;&lt;br /&gt;
&lt;br /&gt;
Download SAMM v1.1&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Core_V1-1-Final.pdf SAMM Core Model] document, explaining the maturity model;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_How_To_V1-1-Final.pdf How-To Guide] with implementation guidance;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Quick_Start_V1-1-Final.pdf Quick-Start Guide] with different steps to improve your secure software practice;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Assessment_Toolbox_v1-1-Final.xlsx Updated SAMM Tool Box] to perform SAMM assessments and create SAMM roadmaps;&lt;br /&gt;
&lt;br /&gt;
Download OpenSAMM v1.0:&lt;br /&gt;
* in [https://www.owasp.org/images/c/c0/SAMM-1.0.pdf English - PDF], [https://www.owasp.org/images/2/25/SAMM-1.0-en_US-0.3.xml.zip English - XML]&lt;br /&gt;
* in [https://www.owasp.org/images/a/a9/SAMM-1.0-es_MX.pdf Spanish - PDF], [https://www.owasp.org/images/a/a1/SAMM-1.0-es_MX-0.3.xml.zip Spanish - XML]&lt;br /&gt;
* in [https://www.owasp.org/images/a/a9/SAMM-1.0-ja_JP.pdf Japanese - PDF], not available as XML&lt;br /&gt;
* in [https://www.owasp.org/images/f/fd/SAMM-1.0-cn.pdf Chinese - PDF], not available as XML&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Available resources to apply SAMM:&lt;br /&gt;
* Browse OWASP and other resources for SAMM Security practices: [[:Category:SAMM-Resources]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Trainings:&lt;br /&gt;
* Recent OWASP SAMM 1-Day training slide deck delivered by Bart De Win and Sebastien Deleersnyder at AppSec Europe 2014 in Cambridge&lt;br /&gt;
** Slide deck download [https://www.owasp.org/images/d/df/OpenSAMM_Training_vFINAL.pptx here]&lt;br /&gt;
** Training description download [https://www.owasp.org/images/7/7c/Training_-_Bootstrap_and_improve_your_SDLC_with_OpenSAMM.docx here]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assessments:&lt;br /&gt;
* SAMM v1.5 Toolbox&lt;br /&gt;
** Download the new v1.5 Toolbox with the updated scoring model [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Assessment_Toolbox_v1.5_FINAL.xlsx SAMM v1.5 Toolbox]&lt;br /&gt;
* SAMM v1.1 Toolbox&lt;br /&gt;
** Download the v1.1 Toolbox, including the updated questions, [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Assessment_Toolbox_v1-1-Final.xlsx here]&lt;br /&gt;
* Assessment Interview Template by Nick Coblentz for SAMM V1.0&lt;br /&gt;
** This [https://www.owasp.org/images/c/cf/20090607-SAMMAssessmentInterviewTemplate-1.0.xls spreadsheet] breaks down the assessment questionnaire from the SAMM framework into assertion statements that can be used to drive assessment interviews.&lt;br /&gt;
* Roadmap Chart Template by Colin Watson for SAMM V1.0&lt;br /&gt;
** This [https://www.owasp.org/images/e/e2/20090610-Samm-roadmap-chart-template.xls spreadsheet] provides a simple way to capture the data for a SAMM roadmap and automatically generate graphics similar to those that appear in the framework.&lt;br /&gt;
* Assessment Worksheet by Christian Frichot for SAMM V1.0&lt;br /&gt;
** This is an easy-to-use [https://www.owasp.org/images/e/e2/20090610-Samm-roadmap-chart-template.xls spreadsheet] containing the assessment questionnaire from the SAMM framework. It features some auto-scoring and a polished appearance.&lt;br /&gt;
* Project Plan Template by Jim Weiler for SAMM V1.0&lt;br /&gt;
** This is a [https://www.owasp.org/images/3/33/SAMMProject.zip project plan template] (MS Project) that captures the activities from the SAMM levels. Useful for copying pieces into existing development project schedules.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Mappings:&lt;br /&gt;
* BSIMM-6 mapping to SAMM activities:&lt;br /&gt;
** Spreadsheet download [https://github.com/OWASP/opensamm/tree/master/v1.1/mapping here]&lt;br /&gt;
** Presentation with start of analysis download [https://www.owasp.org/images/6/66/OpenSAMM_-_BSIMM-V_mapping.pptx here]&lt;br /&gt;
* BSIMM mapping to SAMM during the 2011 Summit:&lt;br /&gt;
** This [https://www.owasp.org/images/2/2e/20110301-OpenSAMM-BSIMM-Mapping.xlsx spreadsheet] contains an activity-level mapping between OpenSAMM and BSIMM. Note that in some cases, multiple BSIMM activities map to a single SAMM activity (109 in BSIMM map to 72 in SAMM).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Tools:&lt;br /&gt;
*JavaScript visualization framework for SAMM on [https://github.com/qudosoft-labs/SAMMCharts GitHub]&lt;br /&gt;
&lt;br /&gt;
= Community =&lt;br /&gt;
&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
{{:Projects/OWASP SAMM Project/Pages/Community | Community}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Summit =&lt;br /&gt;
&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&lt;br /&gt;
In 2016 we organized our second OWASP SAMM Summit in New York on 20-21 April, details [https://www.owasp.org/index.php/OWASP_SAMM_Summit_2016 &amp;gt;here&amp;lt;] !!&lt;br /&gt;
&lt;br /&gt;
Read the wrap-up of the Summit here: https://docs.google.com/document/d/19_LC1euR7ZuazRYgeblhPE1Fv6E8N56Bu8zANq2JB30/edit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In 2015 we organized the first OWASP SAMM Summit in Dublin on 27-28 March, details [https://www.owasp.org/index.php/OWASP_SAMM_Summit_2015 &amp;gt;here&amp;lt;] !!&lt;br /&gt;
&lt;br /&gt;
Summit Notes:&lt;br /&gt;
* 28 Mar 2015 - https://docs.google.com/document/d/1pC4har75olF1WPZaqRfXFG9T3SS_qoEUvHkEynE0iTI/edit&lt;br /&gt;
* Summit outcome is described [http://www.opensamm.org/2015/04/opensamm-summit-dublin-outcome/ here]&lt;br /&gt;
''&amp;quot;The SAMM summit provided an opportunity to breathe new life into a framework that I use to facilitate my day-to-day work and support my customers.&amp;quot;'' Bruce C Jenkins, Fortify Security Lead, Hewlett-Packard Company&lt;br /&gt;
&lt;br /&gt;
Previous workshop Notes:&lt;br /&gt;
&lt;br /&gt;
During the AppSec conferences, the SAMM project team organises workshops where you can influence the direction in which SAMM evolves.&lt;br /&gt;
&lt;br /&gt;
This is also an excellent opportunity to exchange experiences with your peers.&lt;br /&gt;
&lt;br /&gt;
If you plan on attending http://appsec.eu, be sure to get involved in the SAMM workshop (scheduled on Jun-23).&lt;br /&gt;
* The agenda for the SAMM Workshop in Cambridge on 23-Jun-2014 is available [https://docs.google.com/document/d/1tXqIovpSuFqycVYetdGSC2PiPygySymiLUhHT5yHR2M/edit here].&lt;br /&gt;
&lt;br /&gt;
Previous workshop notes:&lt;br /&gt;
* The notes for the SAMM Workshop in New York on 21-Nov-2013 are available [https://docs.google.com/document/d/1PwoDVsWyhoWksBiLIRh8UOh-QCs8H7QMqrSUsS13WzU/edit here].&lt;br /&gt;
* The notes for the SAMM Workshop in Hamburg on 21-Aug-2013 are available [https://docs.google.com/document/d/12mB7FkmhcI04YDZle_VD90n1xcENgNhAGqZkCAb6EkM/edit here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Talks =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
Upcoming talks featuring SAMM are listed here:&lt;br /&gt;
&lt;br /&gt;
* OWASP DC - Software Assurance Maturity Model (SAMM) with Brian Glas! (2017-03-15)&lt;br /&gt;
* OWASP NoVA - SAMM 1.5, what's changed and how it impacts you (2017-03-16)&lt;br /&gt;
* InfoSec World - Software Assurance Maturity Model Evolutions (2017-04-03)&lt;br /&gt;
&lt;br /&gt;
Past talks:&lt;br /&gt;
&lt;br /&gt;
* OWASP SAMM v1.5 Webinar - Brian Glas discussing the SAMM model and changes in v1.5 (watch - [https://www.youtube.com/watch?v=4pKdwRb8fTI youtube]) - 2017&lt;br /&gt;
* OWASP 24/7 - Seba Deleersnyder discussing the upcoming SAMM Summit (listen - [https://soundcloud.com/owasp-podcast/seba-deleersnyder-discusses-samm-software-assurance-maturity-model-summit-in-dublin-ireland here]) - 2015&lt;br /&gt;
* OWASP Germany Day 2014: Seba Deleersnyder: OpenSAMM Best Practices: Lessons from the Trenches (download [https://www.owasp.org/images/f/fa/OpenSAMM_Best_Practices_Lessons_from_the_Trenches_-_Seba_Deleersnyder.pdf presentation]) - 2014&lt;br /&gt;
* AppSecEU14: Seba Deleersnyder &amp;amp; Bart De Win: OpenSAMM Best Practices: Lessons from the Trenches OpenSAMM Best Practices: Lessons from the Trenches (download [https://www.owasp.org/images/6/6f/OpenSAMM_-_AppSecEU_2014_-_Seba-Bart_v20140528.pptx presentation], see [https://www.youtube.com/watch?v=qcCgeBeBLUg video]) - 2014&lt;br /&gt;
* AppSecEU13 - Hamburg: Seba Deleersnyder presenting a project update (download [https://www.owasp.org/images/3/32/OpenSAMM_-_Project_Status_-_Hamburg_2013.pdf presentation]) - 2013&lt;br /&gt;
* OWASP Europe Tour 2013 - Geneva: Seba Deleersnyder presenting OpenSAMM and the renewed project  (download [https://www.owasp.org/images/c/cd/OpenSAMM_-_OWASP_Tour_13_Talk_-_Seba.pptx presentation]) - 2013&lt;br /&gt;
* AppSecEU11 - Athens: Colin Watson presenting SAMM Training (download [https://www.owasp.org/images/1/18/Owasp-training-samm-greece.pdf presentation]) - 2011&lt;br /&gt;
* AppSecEU09: Pravir Chandra presenting OpenSAMM v1.0 (download [https://www.owasp.org/images/4/49/AppSecEU09_OpenSAMM-1.0.ppt presentation]) - 2009&lt;br /&gt;
* Matt Bartoldus presentation on new SAMM project during OWASP London chapter (download [https://www.owasp.org/images/d/df/OpenSAMM.pdf presentation]) - 2009&lt;br /&gt;
* Pravir Chandra - first presentation discussing the next generation of the CLASP Project: a complete reworking of its details into a Software Assurance Maturity Model (SAMM). (download [https://www.owasp.org/images/2/2e/OWASP_CLASP_SAMM.ppt presentation]) - 2009&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= News =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Latest News on SAMM&lt;br /&gt;
* OWASP SAMM v2.0 workshop at the OWASP Project Summit June 2017&lt;br /&gt;
* OWASP SAMM v1.5 Released!&lt;br /&gt;
* SAMM Summit 2016 - read the [https://docs.google.com/document/d/19_LC1euR7ZuazRYgeblhPE1Fv6E8N56Bu8zANq2JB30/edit wrap-up here] &lt;br /&gt;
* OWASP SAMM v1.1 Released! See the [http://www.prnewswire.com/news-releases/owasp-releases-software-assurance-maturity-model-samm-version-11-for-improving-software-security-300236836.html Press Release].&lt;br /&gt;
* OpenSAMM v1.1 RC - [http://lists.owasp.org/pipermail/samm/2015-December/000758.html available for review]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Languages =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''SAMM v1.0 is available in the following languages:'''&lt;br /&gt;
&lt;br /&gt;
* English&lt;br /&gt;
* Spanish&lt;br /&gt;
* Japanese&lt;br /&gt;
* Chinese&lt;br /&gt;
&lt;br /&gt;
Carlos Allendes created a presentation in Spanish on SAMM during the 2011 LatAm tour, download the [https://www.owasp.org/images/c/cf/05_OWASP_LatamTur2011_OpenSAMM.pdf presentation].&lt;br /&gt;
Hubert Grégoire and Sebastien Gioria created a French translation of the OpenSAMM 1.0 Overview presentation available for download [https://www.owasp.org/images/f/fd/OpenSAMM-1.0-fr_FR.ppt here].&lt;br /&gt;
&lt;br /&gt;
You can use [http://crowdin.net/project/owasp-samm Crowdin] to help improve these translations or add new ones right now!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Roadmap =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Updated roadmap:&lt;br /&gt;
Next 1.5 release, updated scoring:&lt;br /&gt;
* Clarification of maturity levels (syntactic changes to keep the text consistent)&lt;br /&gt;
* Do not change activities, but try to impose the current scoring system on existing activities, i.e. move from a binary yes/no to the multi-tiered questions/answers of the current proposal. &lt;br /&gt;
* Show improvements with every activity introduced&lt;br /&gt;
* Adapt for the new scoring method&lt;br /&gt;
* Update questions for 4-tiers&lt;br /&gt;
* Review and where necessary clarify current questions&lt;br /&gt;
* Consider v1.1 remarks that were not withheld for the previous release&lt;br /&gt;
Targeted completion date: February 28, 2017&lt;br /&gt;
&lt;br /&gt;
SAMM version 2.0&lt;br /&gt;
* Core model changed&lt;br /&gt;
* Visualisations + flavours for a few development methodologies&lt;br /&gt;
* Update the Quick-Start Guide, Toolbox, and How-To Guide. &lt;br /&gt;
* Success metrics: How well does the model work: Linked to the benchmarking project.&lt;br /&gt;
Timing: Workshops as part of OWASP Project Summit June 2017&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
= Get Involved =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Involvement in the development of SAMM is actively encouraged!&lt;br /&gt;
&lt;br /&gt;
You do not have to be a security expert in order to contribute.&lt;br /&gt;
&lt;br /&gt;
Some of the ways you can help:&lt;br /&gt;
&lt;br /&gt;
==Feedback==&lt;br /&gt;
&lt;br /&gt;
Please use the [https://lists.owasp.org/mailman/listinfo/samm Mailing List] for feedback:&lt;br /&gt;
* What do you like?&lt;br /&gt;
* What don't you like?&lt;br /&gt;
* How can we make SAMM easier to use?&lt;br /&gt;
* How could SAMM be improved? &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Localization==&lt;br /&gt;
&lt;br /&gt;
Are you fluent in another language? Can you help translate SAMM into that language?&lt;br /&gt;
&lt;br /&gt;
You can use [http://crowdin.net/project/owasp-samm Crowdin] to do that!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Project Sponsors =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SAMM is developed and maintained by a worldwide team of volunteers. We have also been helped by many organizations, either financially or by encouraging their employees to work on SAMM.&lt;br /&gt;
&lt;br /&gt;
==SAMM Adopters==&lt;br /&gt;
SAMM is the premier open source software assurance framework. You can find a list of [https://www.owasp.org/index.php/OpenSAMM_Adopters SAMM adopters] online.&lt;br /&gt;
&lt;br /&gt;
==Call for SAMM2 Sponsors==&lt;br /&gt;
OWASP SAMM, with the upcoming SAMM 2.0 release, is the open source software security maturity model used by IT, application, and software security technologists to develop secure software. &lt;br /&gt;
&lt;br /&gt;
We are seeking sponsors to support OWASP SAMM. All proceeds from the sponsorship support the mission of the OWASP Foundation and the further development of SAMM. Supporting the project drives the funding for research grants, SAMM hosting, tools, templates, documents, promotion, and more.&lt;br /&gt;
&lt;br /&gt;
By sponsoring SAMM, you not only support an important and flagship OWASP project, you will also get visibility during the next SAMM Summit (part of the OWASP Summit 2018) and recognition on the OWASP SAMM project web site and the next release of SAMM (version 2.0).&lt;br /&gt;
&lt;br /&gt;
For more information: Contact [mailto:seba@owasp.org seba@owasp.org]&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgements ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We would like to thank the following sponsors who donated funds to our project:&lt;br /&gt;
&lt;br /&gt;
[[File:OWASP-NoVA-Chapter-Logo.PNG|250px|link=https://www.owasp.org/index.php/Virginia]]&lt;br /&gt;
[[File:Belgium_Chapter.PNG|250px|link=https://www.owasp.org/index.php/Belgium]]&lt;br /&gt;
[[File:London_Chapter.PNG|250px|link=https://www.owasp.org/index.php/London]]&lt;br /&gt;
&lt;br /&gt;
[[File:Aspectsecurity.png|250px|link=http://www.aspectsecurity.com]]&lt;br /&gt;
[[File:Astech_Consulting_logo.png|250px|link=http://www.astechconsulting.com/]] &lt;br /&gt;
[[File:Denim_Group_logo.jpg|250px|link=http://www.denimgroup.com/]] &lt;br /&gt;
[[File:Gotham_Digital_Science_logo.jpg|250px|link=http://www.gdssecurity.com/]] &lt;br /&gt;
&lt;br /&gt;
{{MemberLinksv2|link=http://www.hpenterprisesecurity.com|logo=HP_Blue_RGB_150_SM.png|size=300px90px}} &lt;br /&gt;
[[File:NetSPI_logo.png|250px|link=http://www.netspi.com/]] &lt;br /&gt;
[[Image:PwC_logo_4colourprint_(2)_Resized_good_one.jpg|150px|link=http://www.pwc.com]]&lt;br /&gt;
[[File:SI_Logo_Stacked_Application_Security.jpg|250px|link=http://www.securityinnovation.com/]] &lt;br /&gt;
[[File:LogoToreon.jpg|250px|link=http://www.toreon.com]] &lt;br /&gt;
[[File:Veracode-samm.png|250px|link=http://www.veracode.com]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs&amp;gt;&amp;lt;/headertabs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
{{OWASP Book|6888083}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Project|Zed Attack Proxy Project]]&lt;br /&gt;
[[Category:OWASP_Tool]]&lt;br /&gt;
[[Category:OWASP Release Quality Tool|OWASP Release Quality Tool]]&lt;br /&gt;
[[Category:OWASP_Download]]&lt;br /&gt;
[[Category:Popular]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_SAMM_Project&amp;diff=246907</id>
		<title>OWASP SAMM Project</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_SAMM_Project&amp;diff=246907"/>
				<updated>2019-01-29T16:03:05Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Quick Download v1.5 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Main = &lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:90px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File: flagship_big.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''OWASP SAMM v1.5 available in the downloads section!'''&lt;br /&gt;
&lt;br /&gt;
We are now working on the Beta release of OWASP SAMMv2; our work in progress is available [https://owaspsamm.org online on our new web site]. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Join our monthly calls'''&lt;br /&gt;
* The monthly call is on the 2nd Wednesday of each month at 21h30 CEST / 3:30pm EST. &amp;lt;br&amp;gt;&lt;br /&gt;
* Please join our GoToMeeting: https://global.gotomeeting.com/join/262891661 &amp;lt;br&amp;gt;&lt;br /&gt;
* The call is open for everybody interested in SAMM or who wants to work on SAMM. &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Join us on the OWASP SAMM project Slack channel'''&lt;br /&gt;
* Join our project slack channel on https://owasp.slack.com/messages/C0VF1EJGH &lt;br /&gt;
* If you do not have an OWASP Slack workspace account yet, contact one of our project leaders to get an invite link.&lt;br /&gt;
&lt;br /&gt;
'''2018 OWASP SAMM Summit (4-8 JUNE 2018, London)'''&lt;br /&gt;
* Join our 2018 OWASP SAMM Summit near London as part of the [https://open-security-summit.org/ Open Security Summit].&amp;lt;br&amp;gt;&lt;br /&gt;
* We will organize working sessions in a 5-day sprint to draft SAMM v2.0. &lt;br /&gt;
* Register online [https://open-security-summit.org/tickets/ here]&lt;br /&gt;
* Sponsor the SAMM2, more details [https://www.owasp.org/index.php/OWASP_SAMM_Project#tab=Project_Sponsors here]&lt;br /&gt;
&lt;br /&gt;
The Software Assurance Maturity Model (SAMM) is an open framework to help organizations formulate and implement a strategy for software security that is tailored to the specific risks facing the organization. SAMM helps you:&lt;br /&gt;
* '''Evaluate an organization’s existing software security practices'''&lt;br /&gt;
* '''Build a balanced software security assurance program in well-defined iterations'''&lt;br /&gt;
* '''Demonstrate concrete improvements to a security assurance program'''&lt;br /&gt;
* '''Define and measure security-related activities throughout an organization'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Dell uses OWASP’s Software Assurance Maturity Model (OWASP SAMM) to help focus our resources and determine which components of our secure application development program to prioritize.'',  ('''Michael J. Craigue, Information Security &amp;amp; Compliance, Dell, Inc.''')&lt;br /&gt;
&lt;br /&gt;
Follow OWASP SAMM on twitter: [https://twitter.com/owaspsamm @owaspsamm]&lt;br /&gt;
&lt;br /&gt;
{{Social Media Links}}&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;padding-left:25px;width:200px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download v1.5 ==&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/OWASP_SAMM_v1.5.zip All SAMM v1.5 files (.zip)] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_Core_V1-5_FINAL.pdf SAMM Core Model] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_How_To_V1-5_FINAL.pdf How-To Guide] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_Quick_Start_V1-5_FINAL.pdf Quick Start Guide] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_Assessment_Toolbox_v1.5_FINAL.xlsx SAMM Toolbox] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/raw/master/Supporting%20Resources/v1.5/Final/SAMM_Assessment_Toolbox_v1.5-Example_FINAL.xlsx SAMM Toolbox Example] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/ OWASP SAMM on GitHub]&lt;br /&gt;
&lt;br /&gt;
== Quick Download v1.1.1 ==&lt;br /&gt;
&lt;br /&gt;
[https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Core_V1-1-Final-1page.pdf SAMM Core Model]&amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_How_To_V1-1-Final-1page.pdf How-To Guide] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Quick_Start_V1-1-Final-1page.pdf Quick-Start Guide] &amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Assessment_Toolbox_v1-1-Final.xlsx Updated SAMM Tool Box]&amp;lt;br&amp;gt;&lt;br /&gt;
[https://github.com/OWASP/samm OWASP SAMM on GitHub]&lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
Please see the [https://www.owasp.org/index.php/OWASP_SAMM_Project#News News] and [https://www.owasp.org/index.php/OWASP_SAMM_Project#Talks Talks] tabs&lt;br /&gt;
&lt;br /&gt;
== Change Log ==&lt;br /&gt;
* OWASP SAMM v1.5 Released! ([http://www.prnewswire.com/news-releases/owasp-samm-v15-helps-organizations-improve-their-security-posture-300439237.html Press Release])&lt;br /&gt;
* OWASP SAMM v1.1 Released! ([http://www.prnewswire.com/news-releases/owasp-releases-software-assurance-maturity-model-samm-version-11-for-improving-software-security-300236836.html Press Release])&lt;br /&gt;
* OpenSAMM v1.1 RC - [http://lists.owasp.org/pipermail/samm/2015-December/000758.html available for review]&lt;br /&gt;
&lt;br /&gt;
== Email List ==&lt;br /&gt;
&lt;br /&gt;
Questions? Please ask on the [https://lists.owasp.org/mailman/listinfo/samm SAMM Mailing List]&lt;br /&gt;
&lt;br /&gt;
== Project Leaders ==&lt;br /&gt;
&lt;br /&gt;
[https://www.owasp.org/index.php/User:Sdeleersnyder Seba Deleersnyder] &amp;lt;br /&amp;gt;  [https://www.owasp.org/index.php/User:Bart_De_Win Bart De Win]&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
*&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | rowspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-flagship-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Flagship_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;center&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;center&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Cc-button-y-sa-small.png|link=http://creativecommons.org/licenses/by-sa/3.0/]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Project_Type_Files_DOC.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Browse Online =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&lt;br /&gt;
The foundation of the model is built upon the core business functions of software development with security practices tied to each (see diagram below). The building blocks of the model are the three maturity levels defined for each of the twelve security practices. These define a wide variety of activities in which an organization could engage to reduce security risks and increase software assurance. Additional details are included to measure successful activity performance, understand the associated assurance benefits, estimate personnel and other costs.&lt;br /&gt;
&lt;br /&gt;
[[Image:SAMM-Overview.png|720px]]&lt;br /&gt;
&lt;br /&gt;
===== Click on any badge to learn more =====&lt;br /&gt;
&lt;br /&gt;
{| cellpadding=&amp;quot;1&amp;quot;&lt;br /&gt;
|[https://www.owasp.org/index.php/SAMM_-_Governance https://www.owasp.org/images/f/f7/G.png]&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Strategy &amp;amp; Metrics'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Strategy_&amp;amp;_Metrics|abbr=SM|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Policy &amp;amp; Compliance'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Policy_&amp;amp;_Compliance|abbr=PC|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Education &amp;amp; Guidance'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Education_&amp;amp;_Guidance|abbr=EG|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.owasp.org/index.php/SAMM_-_Construction https://www.owasp.org/images/e/ee/C.png]&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Threat Assessment'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Threat_Assessment|abbr=TA|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Security Requirements'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Security_Requirements|abbr=SR|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Secure Architecture'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Secure_Architecture|abbr=SA|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.owasp.org/index.php/SAMM_-_Verification https://www.owasp.org/images/8/83/V.png]&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Design Review'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Design_Review|abbr=DR|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Code Review'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Code_Review|abbr=CR|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Security Testing'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Security_Testing|abbr=ST|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
|[https://www.owasp.org/index.php/SAMM_-_Deployment https://www.owasp.org/images/5/54/D.png]&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Vulnerability Management'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Vulnerability_Management|abbr=VM|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Environment Hardening'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Environment_Hardening|abbr=EH|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
| align=&amp;quot;center&amp;quot; |'''Operational Enablement'''&lt;br /&gt;
|{{SAMM-BadgeList|name=Operational_Enablement|abbr=OE|padding=0}}&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Downloads =&lt;br /&gt;
&lt;br /&gt;
The latest work in progress can be found on GitHub: https://github.com/OWASP/samm&lt;br /&gt;
&lt;br /&gt;
Download SAMM v1.5&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/OWASP_SAMM_v1.5.zip All SAMM v1.5 files (.zip)] Zip file containing all the v1.5 files below;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Core_V1-5_FINAL.pdf SAMM Core Model] document, explaining the maturity model;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_How_To_V1-5_FINAL.pdf How-To Guide] with implementation guidance;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Quick_Start_V1-5_FINAL.pdf Quick-Start Guide] with different steps to improve your secure software practice;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Assessment_Toolbox_v1.5_FINAL.xlsx SAMM Toolbox] to perform SAMM assessments and create SAMM roadmaps;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Assessment_Toolbox_v1.5-Example_FINAL.xlsx SAMM Tool Box Example] to provide an example SAMM assessment;&lt;br /&gt;
&lt;br /&gt;
Download SAMM v1.1&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Core_V1-1-Final.pdf SAMM Core Model] document, explaining the maturity model;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_How_To_V1-1-Final.pdf How-To Guide] with implementation guidance;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Quick_Start_V1-1-Final.pdf Quick-Start Guide] with different steps to improve your secure software practice;&lt;br /&gt;
* [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Assessment_Toolbox_v1-1-Final.xlsx Updated SAMM Tool Box] to perform SAMM assessments and create SAMM roadmaps;&lt;br /&gt;
&lt;br /&gt;
Download OpenSAMM v1.0:&lt;br /&gt;
* in [https://www.owasp.org/images/c/c0/SAMM-1.0.pdf English - PDF], [https://www.owasp.org/images/2/25/SAMM-1.0-en_US-0.3.xml.zip English - XML]&lt;br /&gt;
* in [https://www.owasp.org/images/a/a9/SAMM-1.0-es_MX.pdf Spanish - PDF], [https://www.owasp.org/images/a/a1/SAMM-1.0-es_MX-0.3.xml.zip Spanish - XML]&lt;br /&gt;
* in [https://www.owasp.org/images/a/a9/SAMM-1.0-ja_JP.pdf Japanese - PDF], not available as XML&lt;br /&gt;
* in [https://www.owasp.org/images/f/fd/SAMM-1.0-cn.pdf Chinese - PDF], not available as XML&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Available resources to apply SAMM:&lt;br /&gt;
* Browse OWASP and other resources for SAMM Security practices: [[:Category:SAMM-Resources]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Training materials:&lt;br /&gt;
* Recent OWASP SAMM 1-Day training slide deck delivered by Bart De Win and Sebastien Deleersnyder at AppSec Europe 2014 in Cambridge&lt;br /&gt;
** Slide deck download [https://www.owasp.org/images/d/df/OpenSAMM_Training_vFINAL.pptx here]&lt;br /&gt;
** Training description download [https://www.owasp.org/images/7/7c/Training_-_Bootstrap_and_improve_your_SDLC_with_OpenSAMM.docx here]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Assessments:&lt;br /&gt;
* SAMM v1.5 Toolbox&lt;br /&gt;
** Download the new v1.5 Toolbox with the updated scoring model [https://github.com/OWASP/samm/blob/master/v1.5/Final/SAMM_Assessment_Toolbox_v1.5_FINAL.xlsx SAMM v1.5 Toolbox]&lt;br /&gt;
* SAMM v1.1 Toolbox&lt;br /&gt;
** Download the v1.1 toolbox, including the updated questions, [https://github.com/OWASP/samm/blob/master/v1.1/Final/SAMM_Assessment_Toolbox_v1-1-Final.xlsx here]&lt;br /&gt;
* Assessment Interview Template by Nick Coblentz for SAMM V1.0&lt;br /&gt;
** This [https://www.owasp.org/images/c/cf/20090607-SAMMAssessmentInterviewTemplate-1.0.xls spreadsheet] breaks down the assessment questionnaire from the SAMM framework into assertion statements that can be used to drive assessment interviews.&lt;br /&gt;
* Roadmap Chart Template by Colin Watson for SAMM V1.0&lt;br /&gt;
** This [https://www.owasp.org/images/e/e2/20090610-Samm-roadmap-chart-template.xls spreadsheet] provides a simple way to capture the data for a SAMM roadmap and automatically generate graphics similar to those that appear in the framework.&lt;br /&gt;
* Assessment Worksheet by Christian Frichot for SAMM V1.0&lt;br /&gt;
** This is an easy-to-use  [https://www.owasp.org/images/e/e2/20090610-Samm-roadmap-chart-template.xls spreadsheet] containing the assessment questionnaire from the SAMM framework. Features some auto-scoring to make the appearance very polished.&lt;br /&gt;
* Project Plan Template by Jim Weiler for SAMM V1.0&lt;br /&gt;
** This is a [https://www.owasp.org/images/3/33/SAMMProject.zip project plan template] (MS Project) that captures the activities from the SAMM levels. Useful for copying pieces into existing development project schedules.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Mappings:&lt;br /&gt;
* BSIMM-6 mapping to SAMM activities:&lt;br /&gt;
** Spreadsheet download [https://github.com/OWASP/opensamm/tree/master/v1.1/mapping here]&lt;br /&gt;
** Presentation with start of analysis download [https://www.owasp.org/images/6/66/OpenSAMM_-_BSIMM-V_mapping.pptx here]&lt;br /&gt;
* BSIMM mapping to SAMM during the 2011 Summit:&lt;br /&gt;
** This [https://www.owasp.org/images/2/2e/20110301-OpenSAMM-BSIMM-Mapping.xlsx spreadsheet] contains an activity-level mapping between OpenSAMM and BSIMM. Note that in some cases, multiple BSIMM activities map to a single SAMM activity (109 in BSIMM map to 72 in SAMM).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Tools:&lt;br /&gt;
*JavaScript visualization framework for SAMM on [https://github.com/qudosoft-labs/SAMMCharts GitHub]&lt;br /&gt;
&lt;br /&gt;
= Community =&lt;br /&gt;
&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
{{:Projects/OWASP SAMM Project/Pages/Community | Community}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Summit =&lt;br /&gt;
&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&lt;br /&gt;
In 2016 we organized our second OWASP SAMM Summit in New York on 20-21 April; details are available [https://www.owasp.org/index.php/OWASP_SAMM_Summit_2016 here].&lt;br /&gt;
&lt;br /&gt;
Read the wrap-up of the Summit here: https://docs.google.com/document/d/19_LC1euR7ZuazRYgeblhPE1Fv6E8N56Bu8zANq2JB30/edit&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In 2015 we organized the first OWASP SAMM Summit in Dublin on 27-28 March; details are available [https://www.owasp.org/index.php/OWASP_SAMM_Summit_2015 here].&lt;br /&gt;
&lt;br /&gt;
Summit Notes:&lt;br /&gt;
* 28 Mar 2015 - https://docs.google.com/document/d/1pC4har75olF1WPZaqRfXFG9T3SS_qoEUvHkEynE0iTI/edit&lt;br /&gt;
* Summit outcome is described [http://www.opensamm.org/2015/04/opensamm-summit-dublin-outcome/ here]&lt;br /&gt;
''&amp;quot;The SAMM summit provided an opportunity to breathe new life into a framework that I use to facilitate my day-to-day work and support my customers.&amp;quot;'' Bruce C Jenkins, Fortify Security Lead, Hewlett-Packard Company&lt;br /&gt;
&lt;br /&gt;
Previous workshop Notes:&lt;br /&gt;
&lt;br /&gt;
During the AppSec conferences, the SAMM project team organises workshops where you can influence the direction in which SAMM evolves.&lt;br /&gt;
&lt;br /&gt;
This is also an excellent opportunity to exchange experiences with your peers.&lt;br /&gt;
&lt;br /&gt;
If you plan on attending http://appsec.eu, be sure to get involved in the SAMM workshop (scheduled on 23-Jun).&lt;br /&gt;
* The agenda for the SAMM Workshop in Cambridge on 23-Jun-2014 is available [https://docs.google.com/document/d/1tXqIovpSuFqycVYetdGSC2PiPygySymiLUhHT5yHR2M/edit here].&lt;br /&gt;
&lt;br /&gt;
Previous workshop notes:&lt;br /&gt;
* The notes for the SAMM Workshop in New York on 21-Nov-2013 are available [https://docs.google.com/document/d/1PwoDVsWyhoWksBiLIRh8UOh-QCs8H7QMqrSUsS13WzU/edit here].&lt;br /&gt;
* The notes for the SAMM Workshop in Hamburg on 21-Aug-2013 are available [https://docs.google.com/document/d/12mB7FkmhcI04YDZle_VD90n1xcENgNhAGqZkCAb6EkM/edit here].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Talks =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
Upcoming talks featuring SAMM are listed here:&lt;br /&gt;
&lt;br /&gt;
* OWASP DC - Software Assurance Maturity Model (SAMM) with Brian Glas! (2017-03-15)&lt;br /&gt;
* OWASP NoVA - SAMM 1.5, what's changed and how it impacts you (2017-03-16)&lt;br /&gt;
* InfoSec World - Software Assurance Maturity Model Evolutions (2017-04-03)&lt;br /&gt;
&lt;br /&gt;
Past talks:&lt;br /&gt;
&lt;br /&gt;
* OWASP SAMM v1.5 Webinar - Brian Glas discussing the SAMM model and changes in v1.5 (watch - [https://www.youtube.com/watch?v=4pKdwRb8fTI youtube]) - 2017&lt;br /&gt;
* OWASP 24/7 - Seba Deleersnyder discussing the upcoming SAMM Summit (listen - [https://soundcloud.com/owasp-podcast/seba-deleersnyder-discusses-samm-software-assurance-maturity-model-summit-in-dublin-ireland here]) - 2015&lt;br /&gt;
* OWASP Germany Day 2014: Seba Deleersnyder: OpenSAMM Best Practices: Lessons from the Trenches (download [https://www.owasp.org/images/f/fa/OpenSAMM_Best_Practices_Lessons_from_the_Trenches_-_Seba_Deleersnyder.pdf presentation]) - 2014&lt;br /&gt;
* AppSecEU14: Seba Deleersnyder &amp;amp; Bart De Win: OpenSAMM Best Practices: Lessons from the Trenches OpenSAMM Best Practices: Lessons from the Trenches (download [https://www.owasp.org/images/6/6f/OpenSAMM_-_AppSecEU_2014_-_Seba-Bart_v20140528.pptx presentation], see [https://www.youtube.com/watch?v=qcCgeBeBLUg video]) - 2014&lt;br /&gt;
* AppSecEU13 - Hamburg: Seba Deleersnyder presenting a project update (download [https://www.owasp.org/images/3/32/OpenSAMM_-_Project_Status_-_Hamburg_2013.pdf presentation]) - 2013&lt;br /&gt;
* OWASP Europe Tour 2013 - Geneva: Seba Deleersnyder presenting OpenSAMM and the renewed project  (download [https://www.owasp.org/images/c/cd/OpenSAMM_-_OWASP_Tour_13_Talk_-_Seba.pptx presentation]) - 2013&lt;br /&gt;
* AppSecEU11 - Athens: Colin Watson presenting SAMM Training (download [https://www.owasp.org/images/1/18/Owasp-training-samm-greece.pdf presentation]) - 2011&lt;br /&gt;
* AppSecEU09: Pravir Chandra presenting OpenSAMM v1.0 (download [https://www.owasp.org/images/4/49/AppSecEU09_OpenSAMM-1.0.ppt presentation]) - 2009&lt;br /&gt;
* Matt Bartoldus presentation on new SAMM project during OWASP London chapter (download [https://www.owasp.org/images/d/df/OpenSAMM.pdf presentation]) - 2009&lt;br /&gt;
* Pravir Chandra's first presentation discussing the next generation of the CLASP Project: a complete reworking of the details into a Software Assurance Maturity Model (SAMM). (download [https://www.owasp.org/images/2/2e/OWASP_CLASP_SAMM.ppt presentation]) - 2009&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= News =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Latest News on SAMM&lt;br /&gt;
* OWASP SAMM v2.0 workshop at the OWASP Project Summit June 2017&lt;br /&gt;
* OWASP SAMM v1.5 Released!&lt;br /&gt;
* SAMM Summit 2016: read the [https://docs.google.com/document/d/19_LC1euR7ZuazRYgeblhPE1Fv6E8N56Bu8zANq2JB30/edit wrap-up here]&lt;br /&gt;
* OWASP SAMM v1.1 Released! See the [http://www.prnewswire.com/news-releases/owasp-releases-software-assurance-maturity-model-samm-version-11-for-improving-software-security-300236836.html Press Release].&lt;br /&gt;
* OpenSAMM v1.1 RC - [http://lists.owasp.org/pipermail/samm/2015-December/000758.html available for review]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Languages =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''SAMM v1.0 is available in the following languages:'''&lt;br /&gt;
&lt;br /&gt;
* English&lt;br /&gt;
* Spanish&lt;br /&gt;
* Japanese&lt;br /&gt;
* Chinese&lt;br /&gt;
&lt;br /&gt;
Carlos Allendes created a presentation in Spanish on SAMM during the 2011 LatAm tour; download the [https://www.owasp.org/images/c/cf/05_OWASP_LatamTur2011_OpenSAMM.pdf presentation].&lt;br /&gt;
Hubert Grégoire and Sebastien Gioria created a French translation of the OpenSAMM 1.0 Overview presentation available for download [https://www.owasp.org/images/f/fd/OpenSAMM-1.0-fr_FR.ppt here].&lt;br /&gt;
&lt;br /&gt;
You can use [http://crowdin.net/project/owasp-samm Crowdin] to help improve these translations or add new ones right now!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Roadmap =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Updated roadmap:&lt;br /&gt;
Next 1.5 release, updated scoring:&lt;br /&gt;
* Clarification of maturity levels (syntactic changes to keep the text consistent)&lt;br /&gt;
* Keep the activities unchanged, but apply the current scoring system to them, i.e. move from binary yes/no answers to the multi-tiered questions/answers of the current proposal. &lt;br /&gt;
* Show improvements with every activity introduced&lt;br /&gt;
* Adapt for the new scoring method&lt;br /&gt;
* Update questions for 4-tiers&lt;br /&gt;
* Review and where necessary clarify current questions&lt;br /&gt;
* Consider v1.1 remarks that were not retained for the previous release&lt;br /&gt;
Targeted completion date: February 28, 2017&lt;br /&gt;
&lt;br /&gt;
SAMM version 2.0&lt;br /&gt;
* Core model changed&lt;br /&gt;
* Visualisations + flavours for a few development methodologies&lt;br /&gt;
* Update the Quick-Start Guide, Toolbox, and How-To Guide. &lt;br /&gt;
* Success metrics: How well does the model work: Linked to the benchmarking project.&lt;br /&gt;
Timing: Workshops as part of OWASP Project Summit June 2017&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
= Get Involved =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Involvement in the development of SAMM is actively encouraged!&lt;br /&gt;
&lt;br /&gt;
You do not have to be a security expert in order to contribute.&lt;br /&gt;
&lt;br /&gt;
Some of the ways you can help:&lt;br /&gt;
&lt;br /&gt;
==Feedback==&lt;br /&gt;
&lt;br /&gt;
Please use the [https://lists.owasp.org/mailman/listinfo/samm Mailing List] for feedback:&lt;br /&gt;
* What do you like?&lt;br /&gt;
* What don't you like?&lt;br /&gt;
* How can we make SAMM easier to use?&lt;br /&gt;
* How could SAMM be improved? &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Localization==&lt;br /&gt;
&lt;br /&gt;
Are you fluent in another language? Can you help translate SAMM into that language?&lt;br /&gt;
&lt;br /&gt;
You can use [http://crowdin.net/project/owasp-samm Crowdin] to do that!&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Project Sponsors =&lt;br /&gt;
[[Image:OwaspSAMM.png|right]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;font-size:120%;border:none;margin: 0;color:#000&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
SAMM is developed and maintained by a worldwide team of volunteers. We have also been helped by many organizations, either financially or by encouraging their employees to work on SAMM.&lt;br /&gt;
&lt;br /&gt;
==SAMM Adopters==&lt;br /&gt;
SAMM is the premier open source software assurance framework. You can find a list of [https://www.owasp.org/index.php/OpenSAMM_Adopters SAMM adopters] online.&lt;br /&gt;
&lt;br /&gt;
==Call for SAMM2 Sponsors==&lt;br /&gt;
OWASP SAMM, together with the upcoming SAMM 2.0 release, is the open source software security maturity model that IT, application, and software security technologists use to develop secure software. &lt;br /&gt;
&lt;br /&gt;
We are seeking sponsors to support OWASP SAMM. All proceeds from the sponsorship support the mission of the OWASP Foundation and the further development of SAMM. Supporting the project drives the funding for research grants, SAMM hosting, tools, templates, documents, promotion, and more.&lt;br /&gt;
&lt;br /&gt;
By sponsoring SAMM, you not only support a flagship OWASP project; you also gain visibility during the next SAMM Summit (part of the OWASP Summit 2018) and recognition on the OWASP SAMM project web site and in the next release of SAMM (version 2.0).&lt;br /&gt;
&lt;br /&gt;
For more information: Contact [mailto:seba@owasp.org seba@owasp.org]&lt;br /&gt;
&lt;br /&gt;
==== Acknowledgements ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We would like to thank the following sponsors who donated funds to our project:&lt;br /&gt;
&lt;br /&gt;
[[File:OWASP-NoVA-Chapter-Logo.PNG|250px|link=https://www.owasp.org/index.php/Virginia]]&lt;br /&gt;
[[File:Belgium_Chapter.PNG|250px|link=https://www.owasp.org/index.php/Belgium]]&lt;br /&gt;
[[File:London_Chapter.PNG|250px|link=https://www.owasp.org/index.php/London]]&lt;br /&gt;
&lt;br /&gt;
[[File:Aspectsecurity.png|250px|link=http://www.aspectsecurity.com]]&lt;br /&gt;
[[File:Astech_Consulting_logo.png|250px|link=http://www.astechconsulting.com/]] &lt;br /&gt;
[[File:Denim_Group_logo.jpg|250px|link=http://www.denimgroup.com/]] &lt;br /&gt;
[[File:Gotham_Digital_Science_logo.jpg|250px|link=http://www.gdssecurity.com/]] &lt;br /&gt;
&lt;br /&gt;
{{MemberLinksv2|link=http://www.hpenterprisesecurity.com|logo=HP_Blue_RGB_150_SM.png|size=300px90px}} &lt;br /&gt;
[[File:NetSPI_logo.png|250px|link=http://www.netspi.com/]] &lt;br /&gt;
[[Image:PwC_logo_4colourprint_(2)_Resized_good_one.jpg|150px|link=http://www.pwc.com]]&lt;br /&gt;
[[File:SI_Logo_Stacked_Application_Security.jpg|250px|link=http://www.securityinnovation.com/]] &lt;br /&gt;
[[File:LogoToreon.jpg|250px|link=http://www.toreon.com]] &lt;br /&gt;
[[File:Veracode-samm.png|250px|link=http://www.veracode.com]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs&amp;gt;&amp;lt;/headertabs&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
{{OWASP Book|6888083}}&lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Project|OWASP SAMM Project]]&lt;br /&gt;
[[Category:OWASP_Tool]]&lt;br /&gt;
[[Category:OWASP Release Quality Tool|OWASP Release Quality Tool]]&lt;br /&gt;
[[Category:OWASP_Download]]&lt;br /&gt;
[[Category:Popular]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Category:Vulnerability_Scanning_Tools&amp;diff=246881</id>
		<title>Category:Vulnerability Scanning Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Category:Vulnerability_Scanning_Tools&amp;diff=246881"/>
				<updated>2019-01-28T16:49:56Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description  ==&lt;br /&gt;
&lt;br /&gt;
Web Application Vulnerability Scanners are automated tools that scan web applications, normally from the outside, to look for security vulnerabilities such as [[Cross-site scripting]], [[SQL Injection]], [[Command Injection]], [[Path Traversal]] and insecure server configuration. This category of tools is frequently referred to as [https://www.techopedia.com/definition/30958/dynamic-application-security-testing-dast Dynamic Application Security Testing] (DAST) Tools. A large number of both commercial and open source tools of this type are available and all of these tools have their own strengths and weaknesses.  If you are interested in the effectiveness of DAST tools, check out the OWASP [[Benchmark]] project, which is scientifically measuring the effectiveness of all types of vulnerability detection tools, including DAST.&lt;br /&gt;
&lt;br /&gt;
Here we provide a list of vulnerability scanning tools currently available on the market.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' The tools listed in the table below are presented in alphabetical order. &amp;lt;b&amp;gt;OWASP does not endorse any of the Vendors or Scanning Tools by listing them in the table below. We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think this information is incomplete or incorrect, please send an e-mail to our [mailto:owasp_ha_vulnerability_scanner_project@lists.owasp.org mailing list] and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OWASP is aware of the [http://sectooladdict.blogspot.com/ '''Web Application Vulnerability Scanner Evaluation Project (WAVSEP)''']. WAVSEP is completely unrelated to OWASP, and we do not endorse its results, nor any of the DAST tools it evaluates. However, the results provided by WAVSEP may be helpful to someone interested in researching or selecting free and/or commercial DAST tools for their projects. This project has far more detail on DAST tools and their features than this OWASP DAST page.&lt;br /&gt;
&lt;br /&gt;
== Tools Listing  ==&lt;br /&gt;
&lt;br /&gt;
{{:Template:OWASP Tool Headings}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.acunetix.com/ Acunetix WVS] || tool_owner = Acunetix || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] || tool_owner = IBM || tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www-03.ibm.com/software/products/en/appscan-standard AppScan] || tool_owner = IBM || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.trustwave.com/Products/Application-Security/App-Scanner-Family/App-Scanner-Enterprise/ App Scanner] || tool_owner = Trustwave || tool_licence = Commercial || tool_platforms = Windows }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.rapid7.com/products/appspider/ AppSpider] || tool_owner = Rapid7 || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://apptrana.indusface.com/basic/ AppTrana Website Security Scan] || tool_owner = AppTrana || tool_licence = Free || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.arachni-scanner.com/ Arachni] || tool_owner = Arachni|| tool_licence = Free for most use cases || tool_platforms = Most platforms supported}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.scanmyserver.com/ AVDS] || tool_owner = Beyond Security || tool_licence = Commercial / Free (Limited Capability)|| tool_platforms = SaaS }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.blueclosure.com BlueClosure BC Detect] || tool_owner = BlueClosure || tool_licence = Commercial, 2 weeks trial || tool_platforms = Most platforms supported}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.portswigger.net/ Burp Suite] || tool_owner = PortSwigger || tool_licence = Commercial / Free (Limited Capability)|| tool_platforms = Most platforms supported }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://contrastsecurity.com Contrast] || tool_owner = Contrast Security || tool_licence = Commercial / Free (Full featured for 1 App) || tool_platforms = SaaS or On-Premises }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://detectify.com/ Detectify] || tool_owner = Detectify || tool_licence = Commercial || tool_platforms = SaaS }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.digifort.se/en/scanner Digifort- Inspect] || tool_owner = Digifort|| tool_licence = Commercial || tool_platforms = SaaS }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.edgescan.com/ edgescan] || tool_owner = edgescan|| tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.gamasec.com/Gamascan.aspx GamaScan] || tool_owner = GamaSec || tool_licence = Commercial || tool_platforms = Windows }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://rgaucher.info/beta/grabber/ Grabber] || tool_owner = Romain Gaucher || tool_licence = Open Source || tool_platforms = Python 2.4, BeautifulSoup and PyXML}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://gravityscan.com/ Gravityscan] || tool_owner = Defiant, Inc. || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://sourceforge.net/p/grendel/code/ci/c59780bfd41bdf34cc13b27bc3ce694fd3cb7456/tree/ Grendel-Scan] || tool_owner = David Byrne || tool_licence = Open Source || tool_platforms = Windows, Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.golismero.com GoLismero] || tool_owner = GoLismero Team || tool_licence = GPLv2.0 || tool_platforms = Windows, Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.ikare-monitoring.com/ IKare] || tool_owner = ITrust || tool_licence = Commercial || tool_platforms = N/A }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.htbridge.com/immuniweb/ ImmuniWeb] || tool_owner = High-Tech Bridge || tool_licence = Commercial  / Free (Limited Capability)|| tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.indusface.com/index.php/products/web-application-scanning Indusface Web Application Scanning] || tool_owner = Indusface || tool_licence = Commercial / Free Trial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.nstalker.com/ N-Stealth] || tool_owner = N-Stalker || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.tenable.com/products/tenable-io/web-application-scanning/ Nessus] || tool_owner = Tenable || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.mavitunasecurity.com/ Netsparker] || tool_owner = MavitunaSecurity || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.rapid7.com/products/nexpose-community-edition.jsp Nexpose] || tool_owner = Rapid7 || tool_licence = Commercial / Free (Limited Capability)|| tool_platforms = Windows/Linux}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.cirt.net/nikto2 Nikto] || tool_owner = CIRT || tool_licence = Open Source|| tool_platforms = Unix/Linux}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.milescan.com/ ParosPro] || tool_owner = MileSCAN || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://probely.com Probe.ly] || tool_owner = Probe.ly || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.websecurify.com/desktop/proxy.html Proxy.app] || tool_owner = Websecurify || tool_licence = Commercial || tool_platforms = Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.qualys.com/products/qg_suite/was/ QualysGuard] || tool_owner = Qualys || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.beyondtrust.com/Products/RetinaNetworkSecurityScanner/ Retina] || tool_owner = BeyondTrust || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.orvant.com Securus] || tool_owner = Orvant, Inc || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.whitehatsec.com/home/services/services.html Sentinel] || tool_owner = WhiteHat Security || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.parasoft.com/products/article.jsp?articleId=3169&amp;amp;redname=webtesting&amp;amp;referred=webtesting SOATest] || tool_owner = Parasoft || tool_licence = Commercial || tool_platforms = Windows / Linux / Solaris}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.tinfoilsecurity.com Tinfoil Security] || tool_owner = Tinfoil Security, Inc. || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = SaaS or On-Premises}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.trustwave.com/external-vulnerability-scanning.php Trustkeeper Scanner] || tool_owner = Trustwave SpiderLabs || tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://subgraph.com/vega/ Vega] || tool_owner = Subgraph || tool_licence = Open Source || tool_platforms = Windows, Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://wapiti.sourceforge.net/ Wapiti] || tool_owner = Informática Gesfor || tool_licence = Open Source || tool_platforms = Windows, Unix/Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.defensecode.com/webscanner.php Web Security Scanner] || tool_owner = DefenseCode || tool_licence = Commercial || tool_platforms = On-Premises}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.tripwire.com/it-security-software/enterprise-vulnerability-management/web-application-vulnerability-scanning/ WebApp360] || tool_owner = TripWire || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://webcookies.org WebCookies] || tool_owner = WebCookies || tool_licence = Free|| tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www8.hp.com/us/en/software-solutions/software.html?compURI=1341991#.Uuf0KBAo4iw WebInspect] || tool_owner = HP || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.websecurify.com/desktop/webreaver.html WebReaver] || tool_owner = Websecurify || tool_licence = Commercial || tool_platforms = Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.german-websecurity.com/en/products/webscanservice/product-details/overview/ WebScanService] || tool_owner = German Web Security || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://suite.websecurify.com/ Websecurify Suite] || tool_owner = Websecurify || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = Windows, Linux, Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.sensepost.com/research/wikto/ Wikto] || tool_owner = Sensepost || tool_licence = Open Source || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.w3af.org/ w3af] || tool_owner = w3af.org || tool_licence = GPLv2.0 || tool_platforms = Linux and Mac}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.owasp.org/index.php/OWASP_Xenotix_XSS_Exploit_Framework Xenotix XSS Exploit Framework] || tool_owner = OWASP || tool_licence = Open Source || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project Zed Attack Proxy] || tool_owner = OWASP || tool_licence = Open Source || tool_platforms = Windows, Unix/Linux and Macintosh}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References  ==&lt;br /&gt;
&lt;br /&gt;
*[[Source_Code_Analysis_Tools | SAST Tools]] - OWASP page with similar information on Static Application Security Testing (SAST) Tools&lt;br /&gt;
*[[Free for Open Source Application Security Tools]] - OWASP page that lists the Commercial Dynamic Application Security Testing (DAST) tools we know of that are free for Open Source&lt;br /&gt;
*http://sectooladdict.blogspot.com/ - Web Application Vulnerability Scanner Evaluation Project (WAVSEP)&lt;br /&gt;
*http://projects.webappsec.org/Web-Application-Security-Scanner-Evaluation-Criteria - v1.0 (2009)&lt;br /&gt;
*http://www.slideshare.net/lbsuto/accuracy-and-timecostsofwebappscanners - White Paper: Analyzing the Accuracy and Time Costs of Web Application Security Scanners - By Larry Suto (2010)&lt;br /&gt;
*http://samate.nist.gov/index.php/Web_Application_Vulnerability_Scanners.html - NIST home page which links to: NIST Special Publication 500-269: Software Assurance Tools: Web Application Security Scanner Functional Specification Version 1.0 (21 August, 2007)&lt;br /&gt;
*http://www.softwareqatest.com/qatweb1.html#SECURITY - A list of Web Site Security Test Tools. (Has both DAST and SAST tools)&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Tools_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=246880</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=246880"/>
				<updated>2019-01-28T16:47:48Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* More info */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Static_Code_Analysis | Source code analysis]] tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during development itself, the IDE is a powerful place in the life cycle to employ such tools, as they provide immediate feedback to developers on issues they may be introducing into the code as they write it. This immediate feedback is far more useful than finding vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL Injection Flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: Must support your programming language, but not usually a key factor once it does.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to setup/use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per org, per app, per line of code analyzed. Consulting licenses are frequently different than end user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
Disclaimer: &amp;lt;b&amp;gt;The tools listed in the tables below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them in the table below.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think that this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source code vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino for Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security-specific plugin for SpotBugs that significantly improves SpotBugs's ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] Flawfinder - Scans C and C++&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, Github, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more. ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PreFast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last update 2006.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits on .NET applications. It will find SQL injections, LDAP injections, XXE, cryptography weakness, XSS and more.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
&lt;br /&gt;
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation; has capability to generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST and mobile security testing as well as OpenSource library known vulnerability detection as a cloud service. &lt;br /&gt;
* [http://www-01.ibm.com/software/rational/products/appscan/source/ AppScan Source] (IBM)&lt;br /&gt;
* [http://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://buguroo.com/products/bugblast-next-gen-appsec-platform/bugscout-sca bugScout] (Buguroo Offensive Security)&lt;br /&gt;
* [http://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages.&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [https://www.grammatech.com/products/codesonar CodeSonar] tool that supports C, C++, Java and C# and maps against the OWASP top 10 vulnerabilities.&lt;br /&gt;
* [http://www.contrastsecurity.com/ Contrast] (Contrast Security) - Contrast provides code-level security results without doing static analysis. Instead, it performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [http://www8.hp.com/us/en/software-solutions/static-code-analysis-sast/ Fortify] (Micro Focus, Formally HP)&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis KlocWork] (KlocWork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ PT Application Inspector] combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation for high accuracy rate with minimum false positives; has a unique capability to generate special test queries (exploits) to verify detected vulnerabilities during SAST analysis; integrates with CI/CD, VCS, etc. PT AI helps to easily understand, verify, and fix flaws; has a simple UI; is highly automated and easy to use. Supported languages are Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others.&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as a Visual Studio IDE extension, Azure DevOps extension, and Command Line (CLI) executable.&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://www.softwaresecured.com/reshift reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to give a prediction on false positives. Supports Java with future support for NodeJS and JavaScript planned for sometime in 2019.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code for insecure coding and configurations automatically as an IDE plugin for Eclipse, IntelliJ, and Visual Studio etc. Supports (Java, .NET, PHP, and JavaScript)&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) Seeker performs code security without actually doing static analysis. Seeker does Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis with simulated attacks. It provides code level results without actually relying on static analysis.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
* [[Free for Open Source Application Security Tools]] - This page lists the Commercial Source Code Analysis Tools (SAST) we know of that are free for Open Source&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=246839</id>
		<title>C-Based Toolchain Hardening</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=C-Based_Toolchain_Hardening&amp;diff=246839"/>
				<updated>2019-01-25T23:27:53Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Fix a broken link and one misspelling.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[C-Based Toolchain Hardening]] is a treatment of project settings that will help you deliver reliable and secure code when using C, C++ and Objective C languages in a number of development environments. This article will examine Microsoft and GCC toolchains for the C, C++ and Objective C languages. It will guide you through the steps you should take to create executables with firmer defensive postures and increased integration with the available platform security. Effectively configuring the toolchain also means your project will enjoy a number of benefits during development, including enhanced warnings and static analysis, and self-debugging code.&lt;br /&gt;
&lt;br /&gt;
There are four areas to be examined when hardening the toolchain: configuration, preprocessor, compiler, and linker. Nearly all areas are overlooked or neglected when setting up a project. The neglect appears to be pandemic, and it applies to nearly all projects, including Auto-configured, Makefile-based, Eclipse-based, Visual Studio-based, and Xcode-based projects. It's important to address the gaps at configuration and build time because it's difficult or impossible to [http://sourceware.org/ml/binutils/2012-03/msg00309.html add hardening on a distributed executable after the fact] on some platforms.&lt;br /&gt;
&lt;br /&gt;
This is a prescriptive article, and it will not debate semantics or speculate on behavior. Some information, such as the C/C++ committee's motivation and pedigree for [https://groups.google.com/a/isocpp.org/forum/?fromgroups=#!topic/std-discussion/ak8e1mzBhGs &amp;quot;program diagnostics&amp;quot;, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;, and &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;], appears to be lost like a tale in the Lord of the Rings. As such, the article will specify semantics (for example, the philosophy of 'debug' and 'release' build configurations), assign behaviors (for example, what an assert should do in 'debug' and 'release' build configurations), and present a position. If you find the posture too aggressive, then back off as required to suit your taste.&lt;br /&gt;
&lt;br /&gt;
A secure toolchain is not a silver bullet. It is one piece of an overall strategy in the engineering process to help ensure success. It will complement existing processes such as static analysis, dynamic analysis, secure coding, negative test suites, and the like. Tools such as Valgrind and Helgrind will still be needed. And a project will still require solid designs and architectures.&lt;br /&gt;
&lt;br /&gt;
The OWASP [http://code.google.com/p/owasp-esapi-cplusplus/source ESAPI C++] project eats its own dog food. Many of the examples you will see in this article come directly from the ESAPI C++ project.&lt;br /&gt;
&lt;br /&gt;
Finally, a [[:Category:Cheatsheets|Cheat Sheet]] is available for those who desire a terse treatment of the material. Please visit [[C-Based_Toolchain_Hardening_Cheat_Sheet|C-Based Toolchain Hardening Cheat Sheet]] for the abbreviated version.&lt;br /&gt;
&lt;br /&gt;
== Wisdom ==&lt;br /&gt;
&lt;br /&gt;
Code '''must''' be correct. It '''should''' be secure. It '''can''' be efficient.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Jon_Bentley Dr. Jon Bentley]: ''&amp;quot;If it doesn't have to be correct, I can make it as fast as you'd like it to be&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/Gary_McGraw Dr. Gary McGraw]: ''&amp;quot;Thou shalt not rely solely on security features and functions to build secure software as security is an emergent property of the entire system and thus relies on building and integrating all parts properly&amp;quot;''.&lt;br /&gt;
&lt;br /&gt;
== Configuration ==&lt;br /&gt;
&lt;br /&gt;
Configuration is the first opportunity to set your project up for success. Not only do you have to configure your project to meet reliability and security goals, you must also configure integrated libraries properly. You typically have three choices. First, you can use auto-configuration utilities if on Linux or Unix. Second, you can write a makefile by hand. This is predominant on Linux, Mac OS X, and Unix, but it applies to Windows as well. Finally, you can use an integrated development environment, or IDE.&lt;br /&gt;
&lt;br /&gt;
=== Build Configurations ===&lt;br /&gt;
&lt;br /&gt;
At this stage in the process, you should concentrate on configuring two builds: Debug and Release. Debug will be used for development and include full instrumentation. Release will be configured for production. The difference between the two settings is usually ''optimization level'' and ''debug level''. A third build configuration is Test, and it's usually a special case of Release.&lt;br /&gt;
&lt;br /&gt;
For debug and release builds, the settings are typically diametrically opposed. Debug configurations have no optimizations and full debug information, while Release builds have optimizations and minimal to moderate debug information. In addition, debug code has full assertions and additional library integration, such as mudflaps and malloc guards such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt;.&lt;br /&gt;
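The diametrically opposed settings described above can be sketched as two flag sets. This is a hypothetical Makefile fragment for illustration only, using the optimization and debug levels discussed in this article; the &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; define follows the usual convention of disabling asserts in release code:&lt;br /&gt;
&lt;br /&gt;

```make
# Sketch only: debug vs. release flag sets as described in the text.
# Debug: no optimization, maximum debug information, asserts enabled.
CFLAGS_DEBUG   = -O0 -g3 -ggdb
# Release: optimized, moderate debug information, asserts disabled.
CFLAGS_RELEASE = -O2 -g2 -DNDEBUG=1
```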
&lt;br /&gt;
The Test configuration is often a Release configuration that makes everything public for testing and builds a test harness. For example, all member functions should be made public (for a C++ class), and all interfaces (of a library or shared object) should be made available for testing. Many object-oriented purists oppose testing private interfaces, but this is not about object orientation. This (''q.v.'') is about building reliable and secure software.&lt;br /&gt;
&lt;br /&gt;
[http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8] introduced an optimization of &amp;lt;tt&amp;gt;-Og&amp;lt;/tt&amp;gt;. Note that it is only an optimization, and still requires a customary debug level via &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==== Debug Builds ====&lt;br /&gt;
&lt;br /&gt;
Debug builds are where developers spend most of their time when vetting problems, so this build should concentrate forces and tools, or be a 'force multiplier'. Though many do not realize it, debug code is more highly valued than release code because it's adorned with additional instrumentation. The debug instrumentation will cause a program to become nearly &amp;quot;self-debugging&amp;quot;, and help you catch mistakes such as bad parameters, failed API calls, and memory problems.&lt;br /&gt;
&lt;br /&gt;
Self-debugging code reduces your time spent troubleshooting and debugging. Reducing time under the debugger means you have more time for development and feature requests. If code is checked in without debug instrumentation, it should be fixed by adding instrumentation, or rejected.&lt;br /&gt;
&lt;br /&gt;
For GCC, optimizations and debug symbolication are controlled through two switches: &amp;lt;tt&amp;gt;-O&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-g&amp;lt;/tt&amp;gt;. You should use the following as part of your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; for a minimal debug session:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-O0 -g3 -ggdb&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O0&amp;lt;/tt&amp;gt; turns off optimizations, and &amp;lt;tt&amp;gt;-g3&amp;lt;/tt&amp;gt; ensures maximum debugging information is available for the debug session, including symbolic constants and &amp;lt;tt&amp;gt;#defines&amp;lt;/tt&amp;gt;. You may need to use &amp;lt;tt&amp;gt;-O1&amp;lt;/tt&amp;gt; so some analysis is performed; otherwise, your debug build will miss a number of warnings that your release builds produce. &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; includes extensions to help with a debug session under GDB. For completeness, Jan Krachtovil stated in a private email that &amp;lt;tt&amp;gt;-ggdb&amp;lt;/tt&amp;gt; currently has no effect.&lt;br /&gt;
&lt;br /&gt;
Release builds should also consider the configuration pair of &amp;lt;tt&amp;gt;-mfunction-return=thunk&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-mindirect-branch=thunk&amp;lt;/tt&amp;gt;. These are the &amp;quot;Retpoline&amp;quot; fix, an indirect branch used to thwart speculative-execution CPU vulnerabilities such as Spectre. The CPU cannot tell what code to [speculatively] execute because it is an indirect (as opposed to a direct) branch. This is an extra layer of indirection, like calling a pointer through a pointer.&lt;br /&gt;
&lt;br /&gt;
Debug builds should also define &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is not defined. &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; removes &amp;quot;program diagnostics&amp;quot; and has undesirable behavior and side effects, which are discussed below in more detail. The defines should be present for all code, and not just the program. You use them for all code (your program and included libraries) because you need to know how the libraries fail too (remember, you take the bug report, not the third-party library).&lt;br /&gt;
&lt;br /&gt;
In addition, you should use other relevant flags, such as &amp;lt;tt&amp;gt;-fno-omit-frame-pointer&amp;lt;/tt&amp;gt;. Ensuring a frame pointer exists makes it easier to decode stack traces. Since debug builds are not shipped, it's OK to leave symbols in the executable. Programs with debug information do not suffer performance hits. See, for example, [http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]&lt;br /&gt;
&lt;br /&gt;
Finally, you should ensure your project includes additional diagnostic libraries, such as &amp;lt;tt&amp;gt;dmalloc&amp;lt;/tt&amp;gt; and [http://code.google.com/p/address-sanitizer/ Address Sanitizer]. A comparison of some memory checking tools can be found at [http://code.google.com/p/address-sanitizer/wiki/ComparisonOfMemoryTools Comparison Of Memory Tools]. If you don't include additional diagnostics in debug builds, then you should start using them, since it's OK to find errors you are not looking for.&lt;br /&gt;
&lt;br /&gt;
==== Release Builds ====&lt;br /&gt;
&lt;br /&gt;
Release builds are what your customer receives. They are meant to be run on production hardware and servers, and they should be reliable, secure, and efficient. A stable release build is the product of the hard work and effort during development.&lt;br /&gt;
&lt;br /&gt;
For release builds, you should use the following as part of &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-On -g2&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;-O''n''&amp;lt;/tt&amp;gt; sets optimizations for speed or size (for example, &amp;lt;tt&amp;gt;-Os&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;-O2&amp;lt;/tt&amp;gt;), and &amp;lt;tt&amp;gt;-g2&amp;lt;/tt&amp;gt; ensures debugging information is created.&lt;br /&gt;
&lt;br /&gt;
Debugging information should be stripped from the shipped binary and retained separately, in case a crash report from the field needs symbolication. While not desirable, debug information can be left in place without a performance penalty. See ''[http://gcc.gnu.org/ml/gcc-help/2005-03/msg00032.html How does the gcc -g option affect performance?]'' for details.&lt;br /&gt;
&lt;br /&gt;
Release builds should also define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt;, and ensure &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is not defined. The time for debugging and diagnostics is over, so users get production code with full optimizations, no &amp;quot;program diagnostics&amp;quot;, and other efficiencies. If you can't optimize, or you are performing excessive logging, it usually means the program is not ready for production.&lt;br /&gt;
&lt;br /&gt;
If you have been relying on an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and a subsequent &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;, you have been abusing &amp;quot;program diagnostics&amp;quot;, which have no place in production code. If you want a memory dump, create one yourself, so users don't have to worry about secrets and other sensitive information being written to the filesystem and emailed in plain text.&lt;br /&gt;
&lt;br /&gt;
For Windows, you would use &amp;lt;tt&amp;gt;/Od&amp;lt;/tt&amp;gt; for debug builds; and &amp;lt;tt&amp;gt;/Ox&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;/O2&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;/Os&amp;lt;/tt&amp;gt; for release builds. See Microsoft's [http://msdn.microsoft.com/en-us/library/k1ack8f1.aspx /O Options (Optimize Code)] for details.&lt;br /&gt;
&lt;br /&gt;
==== Test Builds ====&lt;br /&gt;
&lt;br /&gt;
Test builds are used to provide heuristic validation by way of positive and negative test suites. Under a test configuration, all interfaces are tested to ensure they perform to specification and satisfaction. &amp;quot;Satisfaction&amp;quot; is subjective, but it should include no crashing and no trashing of your memory arena, even when faced with negative tests.&lt;br /&gt;
&lt;br /&gt;
Because all interfaces are tested (and not just the public ones), your &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; should include:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;-Dprotected=public -Dprivate=public&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You should also change &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;hidden&amp;quot;)))&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;__attribute__ ((visibility (&amp;quot;default&amp;quot;)))&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Nearly everyone gets a positive test right, so no more needs to be said. The negative self tests are much more interesting, and you should concentrate on trying to make your program fail so you can verify it fails gracefully. Remember, a bad guy is not going to be courteous when he attempts to cause your program to fail. And it's your project that gets egg on its face by way of a bug report or a guest appearance on [http://www.grok.org.uk/full-disclosure/ Full Disclosure] or [http://www.securityfocus.com/archive Bugtraq] - not ''&amp;lt;nowiki&amp;gt;&amp;lt;some library&amp;gt;&amp;lt;/nowiki&amp;gt;'' you included.&lt;br /&gt;
&lt;br /&gt;
=== Auto Tools ===&lt;br /&gt;
&lt;br /&gt;
Auto configuration tools are popular on many Linux and Unix based systems, and the tools include ''Autoconf'', ''Automake'', ''config'', and ''Configure''. The tools work together to produce project files from scripts and template files. After the process completes, your project should be set up and ready to be built with &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
When using auto configuration tools, there are a few files of interest worth mentioning. The files are part of the auto tools chain and include &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; and the various &amp;lt;tt&amp;gt;*.in&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;*.ac&amp;lt;/tt&amp;gt; (autoconf), and &amp;lt;tt&amp;gt;*.am&amp;lt;/tt&amp;gt; (automake) files. At times, you will have to open them, or the resulting makefiles, to tune the &amp;quot;stock&amp;quot; configuration.&lt;br /&gt;
&lt;br /&gt;
There are three downsides to the command line configuration tools in the toolchain: (1) they often ignore user requests, (2) they cannot create configurations, and (3) security is often not a goal.&lt;br /&gt;
&lt;br /&gt;
To demonstrate the first issue, configure your project with the following: &amp;lt;tt&amp;gt;configure CFLAGS=&amp;quot;-Wall -fPIE&amp;quot; CXXFLAGS=&amp;quot;-Wall -fPIE&amp;quot; LDFLAGS=&amp;quot;-pie&amp;quot;&amp;lt;/tt&amp;gt;. You will probably find the auto tools ignored your request, which means the command below will not produce the expected results. As a workaround, you will have to open the &amp;lt;tt&amp;gt;m4&amp;lt;/tt&amp;gt; scripts, &amp;lt;tt&amp;gt;Makefile.in&amp;lt;/tt&amp;gt;, or &amp;lt;tt&amp;gt;Makefile.am&amp;lt;/tt&amp;gt; and fix the configuration.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ configure CFLAGS=&amp;quot;-Wall -Wextra -Wconversion -fPIE -Wno-unused-parameter&lt;br /&gt;
    -Wformat=2 -Wformat-security -fstack-protector-all -Wstrict-overflow&amp;quot;&lt;br /&gt;
    LDFLAGS=&amp;quot;-pie -z,noexecstack -z,noexecheap -z,relro -z,now&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For the second point, you will probably be disappointed to learn [https://lists.gnu.org/archive/html/automake/2012-12/msg00019.html Automake does not support the concept of configurations]. It's not entirely Autoconf's or Automake's fault - ''Make'' and its inability to detect changes is the underlying problem. Specifically, ''Make'' only [http://pubs.opengroup.org/onlinepubs/009695399/utilities/make.html checks modification times of prerequisites and targets], and does not check things like &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;. The net effect is that you will not receive the expected results when you issue &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make test&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Finally, you will probably be disappointed to learn tools such as Autoconf and Automake miss many security related opportunities and ship insecure out of the box. There are a number of compiler switches and linker flags that improve the defensive posture of a program, but they are not 'on' by default. Tools like Autoconf - which are supposed to handle this situation - often provide settings that serve the lowest common denominator.&lt;br /&gt;
&lt;br /&gt;
A recent discussion on the Automake mailing list illuminates the issue: ''[https://lists.gnu.org/archive/html/autoconf/2012-12/msg00038.html Enabling compiler warning flags]''. Attempts to improve default configurations were met with resistance and no action was taken. The resistance is often of the form, &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some useful warning&amp;gt;&amp;lt;/nowiki&amp;gt; also produces false positives&amp;quot; or &amp;quot;&amp;lt;nowiki&amp;gt;&amp;lt;some obscure platform&amp;gt;&amp;lt;/nowiki&amp;gt; does not support &amp;lt;nowiki&amp;gt;&amp;lt;established security feature&amp;gt;&amp;lt;/nowiki&amp;gt;&amp;quot;. It's noteworthy that David Wheeler, the author of ''[http://www.dwheeler.com/secure-programs/ Secure Programming for Linux and Unix HOWTO]'', was one of the folks trying to improve the posture.&lt;br /&gt;
&lt;br /&gt;
=== Makefiles ===&lt;br /&gt;
&lt;br /&gt;
Make is one of the earliest build systems, dating back to the 1970s. It's available on Linux, Mac OS X, and Unix, so you will frequently encounter projects using it. Unfortunately, Make has a number of shortcomings (''[http://aegis.sourceforge.net/auug97.pdf Recursive Make Considered Harmful]'' and ''[http://www.conifersystems.com/whitepapers/gnu-make/ What’s Wrong With GNU make?]''), and can cause some discomfort. Despite the issues with Make, ESAPI C++ uses Make primarily for three reasons: first, it's omnipresent; second, it's easier to manage than the Auto Tools family; and third, &amp;lt;tt&amp;gt;libtool&amp;lt;/tt&amp;gt; was out of the question.&lt;br /&gt;
&lt;br /&gt;
Consider what happens when you type &amp;lt;tt&amp;gt;make debug&amp;lt;/tt&amp;gt; and then &amp;lt;tt&amp;gt;make release&amp;lt;/tt&amp;gt;. Each build requires different &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; due to optimizations and the level of debug support. In your makefile, you would extract the relevant target and set &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; similar to below (taken from the [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/Makefile ESAPI C++ Makefile]):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Makefile&lt;br /&gt;
DEBUG_GOALS = $(filter $(MAKECMDGOALS), debug)&lt;br /&gt;
ifneq ($(DEBUG_GOALS),)&lt;br /&gt;
  WANT_DEBUG := 1&lt;br /&gt;
  WANT_TEST := 0&lt;br /&gt;
  WANT_RELEASE := 0&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_DEBUG),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
  ESAPI_CXXFLAGS += -DDEBUG=1 -UNDEBUG -g3 -ggdb -O0&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_RELEASE),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
  ESAPI_CXXFLAGS += -DNDEBUG=1 -UDEBUG -g -O2&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(WANT_TEST),1)&lt;br /&gt;
  ESAPI_CFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
  ESAPI_CXXFLAGS += -DESAPI_NO_ASSERT=1 -g2 -ggdb -O2 -Dprivate=public -Dprotected=public&lt;br /&gt;
endif&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
# Merge ESAPI flags with user supplied flags. We perform the extra step to ensure &lt;br /&gt;
# user options follow our options, which gives the user's options preference.&lt;br /&gt;
override CFLAGS := $(ESAPI_CFLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(ESAPI_CXXFLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(ESAPI_LDFLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make will first build the program in a debug configuration for a session under the debugger using a rule similar to:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;%.o : %.cpp&lt;br /&gt;
        $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $&amp;lt; -o $@&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When you want the release build, Make will do nothing because it considers everything up to date, despite the fact that &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt; have changed. Hence, your program will actually be in a debug configuration and risk a &amp;lt;tt&amp;gt;SIGABRT&amp;lt;/tt&amp;gt; at runtime because debug instrumentation is present (recall &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; calls &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; when &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined). In essence, you have DoS'd yourself due to &amp;lt;tt&amp;gt;make&amp;lt;/tt&amp;gt;.&lt;br /&gt;
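&lt;br /&gt;
One possible workaround (a sketch, not how ESAPI C++ does it) is to record the flags in a sentinel file that the target also depends on, so a change in &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; invalidates the timestamp comparison:&lt;br /&gt;
&lt;br /&gt;
```shell
# Hypothetical one-file project whose target also depends on a ".flags" sentinel
mkdir -p /tmp/flagfix && cd /tmp/flagfix
printf 'int main(void){return 0;}\n' > hello.c
{
  printf 'hello: hello.c .flags\n'
  printf '\t$(CC) $(CFLAGS) -o hello hello.c\n\n'
  printf '.PHONY: force\n'
  printf '.flags: force\n'
  # rewrite the sentinel only when the recorded flags differ from the current ones
  printf '\t@echo "$(CFLAGS)" | cmp -s - $@ || echo "$(CFLAGS)" > $@\n'
} > Makefile

make CFLAGS="-O0 -g3"        # compiles hello, records the flags
make CFLAGS="-O2 -DNDEBUG"   # recompiles: the sentinel changed
make CFLAGS="-O2 -DNDEBUG"   # now reports the target is up to date
```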
&lt;br /&gt;
In addition, many projects do not honor the user's command line. ESAPI C++ does its best to ensure a user's flags are honored via &amp;lt;tt&amp;gt;override&amp;lt;/tt&amp;gt; as shown above, but other projects do not. For example, consider a project that should be built with Position Independent Executable (PIE or ASLR) enabled and data execution prevention (DEP) enabled. Dismissing user settings combined with insecure out of the box settings (and not picking them up during auto-setup or auto-configure) means a program built with the following will likely have neither defense:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ make CFLAGS=&amp;quot;-fPIE&amp;quot; CXXFLAGS=&amp;quot;-fPIE&amp;quot; LDFLAGS=&amp;quot;-pie -z,noexecstack, -z,noexecheap&amp;quot;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Defenses such as ASLR and DEP are especially important on Linux because [http://linux.die.net/man/5/elf Data Execution - not Prevention - is the norm].&lt;br /&gt;
&lt;br /&gt;
=== Integration ===&lt;br /&gt;
&lt;br /&gt;
Project level integration presents opportunities to harden your program or library with domain specific knowledge. For example, if the platform supports Position Independent Executables (PIE or ASLR) and data execution prevention (DEP), then you should integrate with them. Failing to do so could result in exploitation. As a case in point, see KingCope's 0-days for MySQL in December 2012 (CVE-2012-5579 and CVE-2012-5612, among others). Integration with platform security would have neutered a number of the 0-days.&lt;br /&gt;
&lt;br /&gt;
You also have the opportunity to include helpful libraries that are not needed for business logic support. For example, if you are working on a platform with [http://dmalloc.com DMalloc] or [http://code.google.com/p/address-sanitizer/ Address Sanitizer], you should probably use them in your debug builds. For Ubuntu, DMalloc is available from the package manager and can be installed with &amp;lt;tt&amp;gt;sudo apt-get install libdmalloc5&amp;lt;/tt&amp;gt;. For Apple platforms, it's available as a scheme option (see [[#Clang/Xcode|Clang/Xcode]] below). Address Sanitizer is available in [http://gcc.gnu.org/gcc-4.8/changes.html GCC 4.8 and above] for many platforms.&lt;br /&gt;
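&lt;br /&gt;
As a small illustration of why the diagnostics earn their keep, Address Sanitizer flags a one-byte heap overflow the moment it executes (assuming GCC 4.8 or later with the &amp;lt;tt&amp;gt;libasan&amp;lt;/tt&amp;gt; runtime installed):&lt;br /&gt;
&lt;br /&gt;
```shell
# A one-byte heap overflow that Address Sanitizer catches at runtime
cat > /tmp/asan_demo.c <<'EOF'
#include <stdlib.h>
int main(void) {
  char *p = malloc(8);
  p[8] = 'x';        /* one past the end of the allocation */
  free(p);
  return 0;
}
EOF
gcc -g -O0 -fsanitize=address -o /tmp/asan_demo /tmp/asan_demo.c
/tmp/asan_demo 2>&1 | grep -m1 heap-buffer-overflow   # ASan reports the overflow
```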
&lt;br /&gt;
In addition, project level integration is an opportunity to harden third party libraries you chose to include. Because you chose to include them, you and your users are responsible for them. If you or your users endure a SP800-53 audit, third party libraries will be in scope because the supply chain is included (specifically, item SA-12, Supply Chain Protection). The audits are not limited to those in the US Federal arena - financial institutions perform reviews too. A perfect example of violating this guidance is [http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-1525 CVE-2012-1525], which was due to [http://www.agarri.fr/blog/index.html Adobe's inclusion of a defective Sablotron library].&lt;br /&gt;
&lt;br /&gt;
Another example is including OpenSSL. You know (1) [http://www.schneier.com/paper-ssl-revised.pdf SSLv2 is insecure], (2) [http://www.yaksman.org/~lweith/ssl.pdf SSLv3 is insecure], and (3) [http://arstechnica.com/security/2012/09/crime-hijacks-https-sessions/ compression is insecure] (among others). In addition, suppose you don't use hardware and engines, and only allow static linking. Given the knowledge and specifications, you would configure the OpenSSL library as follows:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ Configure darwin64-x86_64-cc -no-hw -no-engine -no-comp -no-shared -no-dso -no-ssl2 -no-ssl3 --openssldir=…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note Well'': you might want engines, especially on Ivy Bridge microarchitectures (3rd generation Intel Core i5 and i7 processors). To have OpenSSL use the processor's random number generator (via the &amp;lt;tt&amp;gt;rdrand&amp;lt;/tt&amp;gt; instruction), you will need to call OpenSSL's &amp;lt;tt&amp;gt;ENGINE_load_rdrand()&amp;lt;/tt&amp;gt; function and then &amp;lt;tt&amp;gt;ENGINE_set_default&amp;lt;/tt&amp;gt; with &amp;lt;tt&amp;gt;ENGINE_METHOD_RAND&amp;lt;/tt&amp;gt;. See [http://wiki.opensslfoundation.com/index.php/Random_Numbers OpenSSL's Random Numbers] for details.&lt;br /&gt;
&lt;br /&gt;
If you configure without the switches, then you will likely have vulnerable code/libraries and risk failing an audit. If the program is a remote server, then the following command will reveal if compression is active on the channel:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ echo &amp;quot;GET / HTTP/1.0&amp;quot; | openssl s_client -connect &amp;lt;nowiki&amp;gt;example.com:443&amp;lt;/nowiki&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;nm&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;openssl s_client&amp;lt;/tt&amp;gt; will show whether compression is enabled in the client. In fact, any symbol within the &amp;lt;tt&amp;gt;OPENSSL_NO_COMP&amp;lt;/tt&amp;gt; preprocessor guard will bear witness, since &amp;lt;tt&amp;gt;-no-comp&amp;lt;/tt&amp;gt; is translated into a &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; define.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ nm /usr/local/ssl/iphoneos/lib/libcrypto.a 2&amp;gt;/dev/null | egrep -i &amp;quot;(COMP_CTX_new|COMP_CTX_free)&amp;quot;&lt;br /&gt;
0000000000000110 T COMP_CTX_free&lt;br /&gt;
0000000000000000 T COMP_CTX_new&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Even more egregious is the answer given to auditors who specifically ask about configurations and protocols: &amp;quot;we don't use weak/wounded/broken ciphers&amp;quot; or &amp;quot;we follow best practices.&amp;quot; The use of compression tells the auditor that you are using a wounded protocol in an insecure configuration and don't follow best practices. That will likely set off alarm bells, and ensure the auditor digs deeper on more items.&lt;br /&gt;
&lt;br /&gt;
== Preprocessor ==&lt;br /&gt;
&lt;br /&gt;
The preprocessor is crucial to setting up a project for success. The C committee provided one macro - &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; - and the macro can be used to derive a number of configurations and drive engineering processes. Unfortunately, the committee also left many related items to chance, which has resulted in programmers abusing built-in facilities. This section will help you set up your projects to integrate well with other projects and to ensure reliability and security.&lt;br /&gt;
&lt;br /&gt;
There are three topics to discuss when hardening the preprocessor. The first is well defined configurations which produce well defined behaviors, the second is useful behavior from assert, and the third is proper use of macros when integrating vendor code and third party libraries.&lt;br /&gt;
&lt;br /&gt;
=== Configurations ===&lt;br /&gt;
&lt;br /&gt;
To remove ambiguity, you should recognize two configurations: Release and Debug. Release is for production code on live servers, and its behavior is requested via the C/C++ &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; macro. It's also the only macro observed by the C and C++ committees and Posix. Diametrically opposed to Release is Debug. While there is a compelling argument for &amp;lt;tt&amp;gt;!defined(NDEBUG)&amp;lt;/tt&amp;gt;, you should have an explicit macro for the configuration, and that macro should be &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;. This is because vendors and outside libraries use a &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (or similar) macro for their configuration. For example, Carnegie Mellon's Mach kernel uses &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt;, Microsoft's CRT uses [http://msdn.microsoft.com/en-us/library/ww5t02fa%28v=vs.71%29.aspx &amp;lt;tt&amp;gt;_DEBUG&amp;lt;/tt&amp;gt;], and Wind River Workbench uses &amp;lt;tt&amp;gt;DEBUG_MODE&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
In addition to &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (Release) and &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; (Debug), you have two additional cross products: both are defined, or neither is defined. Defining both should be an error, and defining neither should default to a release configuration. Below is from [http://code.google.com/p/owasp-esapi-cplusplus/source/browse/trunk/esapi/EsapiCommon.h ESAPI C++ EsapiCommon.h], which is the configuration file used by all source files:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Only one or the other, but not both&lt;br /&gt;
#if (defined(DEBUG) || defined(_DEBUG)) &amp;amp;&amp;amp; (defined(NDEBUG) || defined(_NDEBUG))&lt;br /&gt;
# error Both DEBUG and NDEBUG are defined.&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
// The only time we switch to debug is when asked. NDEBUG or {nothing} results&lt;br /&gt;
// in release build (fewer surprises at runtime).&lt;br /&gt;
#if defined(DEBUG) || defined(_DEBUG)&lt;br /&gt;
# define ESAPI_BUILD_DEBUG 1&lt;br /&gt;
#else&lt;br /&gt;
# define ESAPI_BUILD_RELEASE 1&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;tt&amp;gt;DEBUG&amp;lt;/tt&amp;gt; is in effect, your code should receive full debug instrumentation, including the full force of assertions.&lt;br /&gt;
&lt;br /&gt;
=== ASSERT ===&lt;br /&gt;
&lt;br /&gt;
Asserts help you create self-debugging code by finding the point of first failure quickly and easily. Asserts should be used throughout your program, including for parameter validation, return value checking, and program state. The &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; will silently guard your code through its lifetime. It will always be there, even when you are not debugging a specific component of a module. If you have thorough code coverage, you will spend less time debugging and more time developing, because programs will debug themselves.&lt;br /&gt;
&lt;br /&gt;
To use asserts effectively, you should assert everything. That includes parameters upon entering a function, return values from function calls, and any program state. Everywhere you place an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement for validation or checking, you should have an assert. Everywhere you have an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; for validation or checking, you should have an &amp;lt;tt&amp;gt;if&amp;lt;/tt&amp;gt; statement. They go hand-in-hand.&lt;br /&gt;
&lt;br /&gt;
If you are still using &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;s, then you have an opportunity for improvement. In the time it takes to write a &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt; statement, you could have written an &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt;. Unlike the &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;NSLog&amp;lt;/tt&amp;gt;, which is often removed when no longer needed, the &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; stays active forever. Remember, this is all about finding the point of first failure quickly so you can spend your time doing other things.&lt;br /&gt;
&lt;br /&gt;
There is one problem with using asserts - [http://pubs.opengroup.org/onlinepubs/009604499/functions/assert.html Posix states &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; should call &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt;] if &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; is '''not''' defined. When debugging, &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; will never be defined, since you want the &amp;quot;program diagnostics&amp;quot; (quote from the Posix description). The behavior makes &amp;lt;tt&amp;gt;assert&amp;lt;/tt&amp;gt; and its accompanying &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; completely useless for development. The result of &amp;quot;program diagnostics&amp;quot; calling &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; due to standard C/C++ behavior is disuse - developers simply don't use them. It's incredibly bad for the development community, because self-debugging programs can help eradicate so many stability problems.&lt;br /&gt;
&lt;br /&gt;
Since self-debugging programs are so powerful, you will have to supply your own assert and signal handler with improved behavior. Your assert will exchange auto-aborting behavior for auto-debugging behavior. The auto-debugging facility will ensure the debugger snaps when a problem is detected, so you will find the point of first failure quickly and easily.&lt;br /&gt;
&lt;br /&gt;
ESAPI C++ supplies its own assert with the behavior described above. In the code below, &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt; raises &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; when in effect or it evaluates to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt; in other cases.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// A debug assert which should be sprinkled liberally. This assert fires and then continues rather&lt;br /&gt;
// than calling abort(). Useful when examining negative test cases from the command line.&lt;br /&gt;
#if (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_STARNIX))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp) {                                    \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; std::endl;                                           \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) {                               \&lt;br /&gt;
    if(!(exp)) {                                                  \&lt;br /&gt;
      std::ostringstream oss;                                     \&lt;br /&gt;
      oss &amp;lt;&amp;lt; &amp;quot;Assertion failed: &amp;quot; &amp;lt;&amp;lt; (char*)(__FILE__) &amp;lt;&amp;lt; &amp;quot;(&amp;quot;     \&lt;br /&gt;
          &amp;lt;&amp;lt; (int)__LINE__ &amp;lt;&amp;lt; &amp;quot;): &amp;quot; &amp;lt;&amp;lt; (char*)(__func__)          \&lt;br /&gt;
          &amp;lt;&amp;lt; &amp;quot;: \&amp;quot;&amp;quot; &amp;lt;&amp;lt; (msg) &amp;lt;&amp;lt; &amp;quot;\&amp;quot;&amp;quot; &amp;lt;&amp;lt; std::endl;                \&lt;br /&gt;
      std::cerr &amp;lt;&amp;lt; oss.str();                                     \&lt;br /&gt;
      raise(SIGTRAP);                                             \&lt;br /&gt;
    }                                                             \&lt;br /&gt;
  }&lt;br /&gt;
#elif (defined(ESAPI_BUILD_DEBUG) &amp;amp;&amp;amp; defined(ESAPI_OS_WINDOWS))&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      assert(exp)&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) assert(exp)&lt;br /&gt;
#else&lt;br /&gt;
#  define ESAPI_ASSERT1(exp)      ((void)(exp))&lt;br /&gt;
#  define ESAPI_ASSERT2(exp, msg) ((void)(exp))&lt;br /&gt;
#endif&lt;br /&gt;
&lt;br /&gt;
#if !defined(ASSERT)&lt;br /&gt;
#  define ASSERT(exp)     ESAPI_ASSERT1(exp)&lt;br /&gt;
#endif&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At program startup, a &amp;lt;tt&amp;gt;SIGTRAP&amp;lt;/tt&amp;gt; handler will be installed if one is not provided by another component:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct DebugTrapHandler&lt;br /&gt;
{&lt;br /&gt;
  DebugTrapHandler()&lt;br /&gt;
  {&lt;br /&gt;
    struct sigaction new_handler, old_handler;&lt;br /&gt;
&lt;br /&gt;
    do&lt;br /&gt;
      {&lt;br /&gt;
        int ret = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, NULL, &amp;amp;old_handler);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        // Don't step on another's handler&lt;br /&gt;
        if (old_handler.sa_handler != NULL) break;&lt;br /&gt;
&lt;br /&gt;
        new_handler.sa_handler = &amp;amp;DebugTrapHandler::NullHandler;&lt;br /&gt;
        new_handler.sa_flags = 0;&lt;br /&gt;
&lt;br /&gt;
        ret = sigemptyset (&amp;amp;new_handler.sa_mask);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
        ret = sigaction (SIGTRAP, &amp;amp;new_handler, NULL);&lt;br /&gt;
        if (ret != 0) break; // Failed&lt;br /&gt;
&lt;br /&gt;
      } while(0);&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  static void NullHandler(int /*unused*/) { }&lt;br /&gt;
&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
// We specify a relatively low priority, to make sure we run before other CTORs&lt;br /&gt;
// http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Attributes.html#C_002b_002b-Attributes&lt;br /&gt;
static const DebugTrapHandler g_dummyHandler __attribute__ ((init_priority (110)));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On a Windows platform, you would call &amp;lt;tt&amp;gt;_set_invalid_parameter_handler&amp;lt;/tt&amp;gt; (and possibly &amp;lt;tt&amp;gt;set_unexpected&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;set_terminate&amp;lt;/tt&amp;gt;) to install a new handler.&lt;br /&gt;
&lt;br /&gt;
Live hosts running production code should always define &amp;lt;tt&amp;gt;NDEBUG&amp;lt;/tt&amp;gt; (i.e., the release configuration), which means they do not assert or auto-abort. Auto-aborting is not acceptable production behavior, and asking for it abuses the purpose of &amp;quot;program diagnostics&amp;quot;. If a program wants a core dump, it should create the dump itself rather than crash.&lt;br /&gt;
&lt;br /&gt;
For more reading on asserting effectively, please see one of John Robbins's books, such as ''[http://www.amazon.com/dp/0735608865 Debugging Applications]''. Robbins is a legendary bug slayer in Windows circles, and he will show you how to do nearly everything, from debugging a simple program to bug slaying in multithreaded programs.&lt;br /&gt;
&lt;br /&gt;
=== Additional Macros ===&lt;br /&gt;
&lt;br /&gt;
Additional macros are those needed to integrate properly and securely with your platform (for example, MFC or Cocoa/CocoaTouch) and libraries (for example, Crypto++ or OpenSSL). Integration can be a challenge because it requires proficiency with your platform and with every included library and framework. The list below illustrates the level of detail you will need when integrating.&lt;br /&gt;
&lt;br /&gt;
Boost is missing from the list because it appears to lack recommendations, additional debug diagnostics, and a hardening guide. See ''[http://stackoverflow.com/questions/14927033/boost-hardening-guide-preprocessor-macros BOOST Hardening Guide (Preprocessor Macros)]'' for details. In addition, Tim Day points to ''[http://boost.2283326.n4.nabble.com/boost-build-should-we-not-define-SECURE-SCL-0-by-default-for-all-msvc-toolsets-td2654710.html &amp;lt;nowiki&amp;gt;[boost.build] should we not define _SECURE_SCL=0 by default for all msvc toolsets&amp;lt;/nowiki&amp;gt;]'' for a discussion of hardening (or the lack thereof).&lt;br /&gt;
&lt;br /&gt;
Beyond what you should define, defining some macros and undefining others should itself be treated as a security-related defect. Examples include &amp;lt;tt&amp;gt;-U_FORTIFY_SOURCE&amp;lt;/tt&amp;gt; on Linux and &amp;lt;tt&amp;gt;_CRT_SECURE_NO_WARNINGS=1&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_SCL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt;, &amp;lt;tt&amp;gt;_ATL_SECURE_NO_WARNINGS&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;STRSAFE_NO_DEPRECATE&amp;lt;/tt&amp;gt; on Windows.&lt;br /&gt;
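One way to enforce this in continuous integration is to scan the captured compile commands for the offending definitions. The sketch below is illustrative: the log file name and the macro list are assumptions to adapt to your build.&lt;br /&gt;
&lt;br /&gt;
```shell
# Fail the build if a compile command weakens the security macros.
# build.log is an assumed capture of the compile commands; extend the
# pattern list to match your platform and libraries.
if grep -E -- '-U_FORTIFY_SOURCE|_CRT_SECURE_NO_WARNINGS|_SCL_SECURE_NO_WARNINGS|_ATL_SECURE_NO_WARNINGS|STRSAFE_NO_DEPRECATE' build.log; then
  echo 'insecure macro definition detected' >&2
  exit 1
fi
```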
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Platform/Library!!Debug!!Release&lt;br /&gt;
|+ Table 1: Additional Platform/Library Macros&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;175pt&amp;quot;|All&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|DEBUG=1&lt;br /&gt;
|width=&amp;quot;250pt&amp;quot;|NDEBUG=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Linux&lt;br /&gt;
|_GLIBCXX_DEBUG=1&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
_GLIBCXX_CONCEPT_CHECKS=1&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
|_FORTIFY_SOURCE=2&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Android&lt;br /&gt;
|NDK_DEBUG=1&lt;br /&gt;
|_FORTIFY_SOURCE=1 (4.2 and above)&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define LOGI(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt logging)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Cocoa/CocoaTouch&lt;br /&gt;
|&lt;br /&gt;
|NS_BLOCK_ASSERTIONS=1&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;tt&amp;gt;#define NSLog(...)&amp;lt;/tt&amp;gt; (define to nothing, preempt ASL)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SafeInt&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
|SAFEINT_DISALLOW_UNSIGNED_NEGATION=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft&lt;br /&gt;
|_DEBUG=1, STRICT,&amp;lt;br&amp;gt;&lt;br /&gt;
_SECURE_SCL=1, _HAS_ITERATOR_DEBUGGING=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
|STRICT&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES=1&amp;lt;br&amp;gt;&lt;br /&gt;
_CRT_SECURE_CPP_OVERLOAD_STANDARD_NAMES_COUNT=1&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|Microsoft ATL &amp;amp; MFC&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
|_SECURE_ATL, _ATL_ALL_WARNINGS&amp;lt;br&amp;gt;&lt;br /&gt;
_ATL_CSTRING_EXPLICIT_CONSTRUCTORS&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|STLPort&lt;br /&gt;
|_STLP_DEBUG=1, _STLP_USE_DEBUG_LIB=1&amp;lt;br&amp;gt;&lt;br /&gt;
_STLP_DEBUG_ALLOC=1, _STLP_DEBUG_UNINITIALIZED=1&lt;br /&gt;
|&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLite&lt;br /&gt;
|SQLITE_DEBUG, SQLITE_MEMDEBUG&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_SECURE_DELETE&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
SQLITE_DEFAULT_FILE_PERMISSIONS=N&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|SQLCipher&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;e&amp;lt;/sup&amp;gt;&lt;br /&gt;
|SQLITE_HAS_CODEC=1&amp;lt;BR&amp;gt;&lt;br /&gt;
SQLITE_TEMP_STORE=3&amp;lt;sup&amp;gt;e&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Be careful with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; when using pre-compiled libraries such as Boost from a distribution. There are ABI incompatibilities, and the result will likely be a crash. You will have to compile Boost with &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt; or omit &amp;lt;tt&amp;gt;_GLIBCXX_DEBUG&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; See [http://gcc.gnu.org/onlinedocs/libstdc++/manual/concept_checking.html Chapter 5, Diagnostics] of the libstdc++ manual for details.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;c&amp;lt;/sup&amp;gt; SQLite secure deletion zeroizes memory on destruction. Define it as required, and always define it for US Federal deployments, since zeroization is required for FIPS 140-2, Level 1.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;d&amp;lt;/sup&amp;gt; ''N'' is 0644 by default, which means everyone has some access.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;e&amp;lt;/sup&amp;gt; Force temporary tables into memory (no unencrypted data to disk).&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
##########################################&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
== Compiler and Linker ==&lt;br /&gt;
&lt;br /&gt;
Compiler writers provide a rich set of warnings from the analysis of code during compilation. Both GCC and Visual Studio have static analysis capabilities that help find mistakes early in the development process. The built-in static analysis capabilities of GCC and Visual Studio are usually sufficient to ensure proper API usage and to catch a number of mistakes, such as using an uninitialized variable or comparing a negative signed int against a positive unsigned int.&lt;br /&gt;
&lt;br /&gt;
As a concrete example (and for those not familiar with C/C++ promotion rules), a warning will be issued if a signed integer is promoted to an unsigned integer and then compared, because a side effect of the promotion is that &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt;! GCC and Visual Studio will not currently catch, for example, SQL injection or other tainted-data usage. For that, you will need a tool designed to perform data flow analysis or taint analysis.&lt;br /&gt;
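A minimal sketch of the pitfall in C (the function name is illustrative); compiling it with &amp;lt;tt&amp;gt;-Wall -Wextra&amp;lt;/tt&amp;gt; elicits the signedness warning:&lt;br /&gt;
&lt;br /&gt;
```c
/* Returns nonzero when a > b under the usual arithmetic conversions.
   Because b is unsigned int, a is converted to unsigned int before the
   comparison, so -1 becomes UINT_MAX and promoted_greater(-1, 1u) is
   true.  gcc -Wall -Wextra reports -Wsign-compare on the comparison. */
int promoted_greater(int a, unsigned int b)
{
    return a > b;
}
```
Here &amp;lt;tt&amp;gt;promoted_greater(-1, 1u)&amp;lt;/tt&amp;gt; returns true - exactly the &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; surprise the warning exists to catch.&lt;br /&gt;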
&lt;br /&gt;
Some in the development community resist static analysis or refute its results. For example, when static analysis warned the Linux kernel's &amp;lt;tt&amp;gt;sys_prctl&amp;lt;/tt&amp;gt; was comparing an unsigned value against less than zero, Jesper Juhl offered a patch to clean up the code. Linus Torvalds howled “No, you don't do this… GCC is crap” (referring to compiling with warnings). For the full discussion, see ''[http://linux.derkeiler.com/Mailing-Lists/Kernel/2006-11/msg08325.html &amp;lt;nowiki&amp;gt;[PATCH] Don't compare unsigned variable for &amp;lt;0 in sys_prctl()&amp;lt;/nowiki&amp;gt;]'' from the Linux Kernel mailing list.&lt;br /&gt;
&lt;br /&gt;
The following sections detail steps for three platforms: first, a typical GNU/Linux distribution offering GCC and Binutils; second, Clang and Xcode; and third, modern Windows platforms.&lt;br /&gt;
&lt;br /&gt;
=== Distribution Hardening ===&lt;br /&gt;
&lt;br /&gt;
Before discussing GCC and Binutils, it is worth pointing out that some of the defenses discussed below are already present in your distribution. Unfortunately, distributions are designed by committee, so what is present is usually only a mild variation of what is available (this way, everyone is mildly offended). For those worried purely about performance, you might be surprised to learn you have already taken the small performance hit without even knowing it.&lt;br /&gt;
&lt;br /&gt;
Linux and BSD distributions often apply some hardening without intervention via ''[http://gcc.gnu.org/onlinedocs/gcc/Spec-Files.html GCC Spec Files]''. If you are using Debian, Ubuntu, Linux Mint and family, see ''[http://wiki.debian.org/Hardening Debian Hardening]''. For Red Hat and Fedora systems, see ''[http://lists.fedoraproject.org/pipermail/devel-announce/2011-August/000821.html New hardened build support (coming) in F16]''. Gentoo users should visit ''[http://www.gentoo.org/proj/en/hardened/ Hardened Gentoo]''.&lt;br /&gt;
&lt;br /&gt;
You can see the settings a distribution uses via &amp;lt;tt&amp;gt;gcc -dumpspecs&amp;lt;/tt&amp;gt;. In the Linux Mint 12 output below, -fstack-protector (but not -fstack-protector-all) is used by default.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;$ gcc -dumpspecs&lt;br /&gt;
…&lt;br /&gt;
*link_ssp: %{fstack-protector:}&lt;br /&gt;
&lt;br /&gt;
*ssp_default: %{!fno-stack-protector:%{!fstack-protector-all: %{!ffreestanding:%{!nostdlib:-fstack-protector}}}}&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The “SSP” above stands for Stack Smashing Protector. SSP is a reimplementation of Hiroaki Etoh's ProPolice work at IBM. See Hiroaki Etoh's patch ''[http://gcc.gnu.org/ml/gcc-patches/2001-06/msg01753.html gcc stack-smashing protector]'' and IBM's ''[http://www.research.ibm.com/trl/projects/security/ssp/ GCC extension for protecting applications from stack-smashing attacks]'' for details.&lt;br /&gt;
&lt;br /&gt;
=== GCC/Binutils ===&lt;br /&gt;
&lt;br /&gt;
GCC (the compiler collection) and Binutils (the assemblers, linkers, and other tools) are separate projects that work together to produce a final executable. Both the compiler and linker offer options to help you write safer and more secure code. The linker will produce code which takes advantage of platform security features offered by the kernel and PaX, such as no-exec stacks and heaps (NX) and Position Independent Executable (PIE).&lt;br /&gt;
&lt;br /&gt;
The table below offers a set of compiler options to build your program. Static analysis warnings help catch mistakes early, while the linker options harden the executable at runtime. In the table below, “GCC” should be loosely taken as “non-ancient distributions.” While the GCC team considers 4.2 ancient, you will still encounter it on Apple and BSD platforms due to changes in GPL licensing around 2007. Refer to ''[http://gcc.gnu.org/onlinedocs/gcc/Option-Summary.html GCC Option Summary]'', ''[http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html Options to Request or Suppress Warnings]'' and ''[http://sourceware.org/binutils/docs-2.21/ld/Options.html Binutils (LD) Command Line Options]'' for usage details.&lt;br /&gt;
&lt;br /&gt;
Worthy of special mention are &amp;lt;tt&amp;gt;-fno-strict-overflow&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fwrapv&amp;lt;/tt&amp;gt;&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;. These flags ensure the compiler does not remove statements that result in overflow or wrap. If your program only runs correctly with these flags, it likely relies on undefined signed-overflow or wrap behavior. In that case, consider using [http://code.google.com/p/safe-iop/ safe-iop] for C or David LeBlanc's [http://safeint.codeplex.com SafeInt] for C++.&lt;br /&gt;
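As a sketch of what checked arithmetic looks like in plain C - assuming the GCC/Clang &amp;lt;tt&amp;gt;__builtin_add_overflow&amp;lt;/tt&amp;gt; builtin; safe-iop and SafeInt wrap similar checks behind portable APIs:&lt;br /&gt;
&lt;br /&gt;
```c
#include <limits.h>

/* Adds a and b into *sum.  Returns 0 on success, or -1 when the
   mathematical result will not fit in an int.  __builtin_add_overflow
   is a GCC/Clang builtin that performs the check without executing the
   signed overflow itself (which would be undefined behavior). */
int checked_add(int a, int b, int *sum)
{
    if (__builtin_add_overflow(a, b, sum))
        return -1;
    return 0;
}
```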
&lt;br /&gt;
For a project compiled and linked with hardened settings, some of those settings can be verified with the [http://www.trapkit.de/tools/checksec.html Checksec] tool written by Tobias Klein. The &amp;lt;tt&amp;gt;checksec.sh&amp;lt;/tt&amp;gt; script is designed to test standard Linux OS and PaX security features being used by an application. See the [http://www.trapkit.de/tools/checksec.html Trapkit] web page for details.&lt;br /&gt;
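As an illustrative sketch (the flag set, file names, and expected report are examples, not a definitive recipe), a hardened build and its verification might look like:&lt;br /&gt;
&lt;br /&gt;
```shell
# Compile and link with warnings plus runtime hardening; match each
# flag to your toolchain version (see Table 2).
gcc -Wall -Wextra -Wformat=2 -Wformat-security \
    -fstack-protector-all -fPIE -O2 main.c -o main \
    -pie -Wl,-z,relro -Wl,-z,now -Wl,-z,noexecstack

# Verify the binary; expect the canary, NX, PIE, and RELRO to be reported.
./checksec.sh --file main
```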
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 2: GCC C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wall -Wextra&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;75t&amp;quot;|GCC&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Enables many warnings (despite their names, all and extra do not turn on all warnings).&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wconversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may alter a value (includes -Wsign-conversion).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-conversion&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for implicit conversions that may change the sign of an integer value, such as assigning a signed integer to an unsigned integer (&amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion!).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wcast-align&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn for a pointer cast to a type which has a different size, causing an invalid alignment and subsequent bus error on ARM processors.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wformat=2 -Wformat-security&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Increases warnings related to possible security defects, including incorrect format specifiers.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-common&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Prevent global variables being simultaneously defined in different object files.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fstack-protector or -fstack-protector-all&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Stack Smashing Protector (SSP). Improves stack layout and adds a guard to detect stack based buffer overflows.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fno-omit-frame-pointer&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Improves backtraces for post-mortem analysis&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wmissing-prototypes and -Wmissing-declarations&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a global function is defined without a prototype or declaration.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-prototypes&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Warn if a function is declared or defined without specifying the argument types.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fstack-check&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC&lt;br /&gt;
|Prevents the stack pointer from moving into another memory region without touching the stack guard page. The &amp;quot;-fstack-check&amp;quot; remediation has some cost: it probes the stack at a 4K stride to ensure the guard page is hit.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wstrict-overflow&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.2&lt;br /&gt;
|Warn about optimizations taken due to &amp;lt;nowiki&amp;gt;[undefined]&amp;lt;/nowiki&amp;gt; signed integer overflow assumptions. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wtrampolines&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.3&lt;br /&gt;
|Warn about trampolines generated for pointers to nested functions. Trampolines require executable stacks.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=address&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/address-sanitizer/ AddressSanitizer], a fast memory error detector. Memory access instructions will be instrumented to help detect heap, stack, and global buffer overflows; as well as use-after-free bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fsanitize=thread&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 4.8&lt;br /&gt;
|Enable [http://code.google.com/p/data-race-test/wiki/ThreadSanitizer ThreadSanitizer], a fast data race detector. Memory access instructions will be instrumented to detect data race bugs.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-mfunction-return=thunk and -mindirect-branch=thunk&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|GCC 7.3, 8.1&lt;br /&gt;
|Enable the &amp;quot;Retpoline&amp;quot; mitigation, an indirect-branch trampoline used to thwart speculative-execution CPU vulnerabilities such as Spectre (branch target injection).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,nodlopen and -Wl,-z,nodump&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.10&lt;br /&gt;
|Reduces the ability of an attacker to load, manipulate, and dump shared objects.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,noexecstack and -Wl,-z,noexecheap&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.14&lt;br /&gt;
|Data Execution Prevention (DEP). ELF headers are marked with PT_GNU_STACK and PT_GNU_HEAP.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,relro&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Global Offset Table (GOT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wl,-z,now&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.15&lt;br /&gt;
|Helps remediate Procedure Linkage Table (PLT) attacks on executables.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIC&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils&lt;br /&gt;
|Position Independent Code. Used for libraries and shared objects. Both -fPIC (compiler) and -shared (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-fPIE&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Binutils 2.16&lt;br /&gt;
|Position Independent Executable (ASLR). Used for programs. Both -fPIE (compiler) and -pie (linker) are required.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt; Unlike Clang and -Weverything, GCC does not provide a switch to truly enable all warnings.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt; -fstack-protector guards functions with high risk objects such as C strings, while -fstack-protector-all guards all objects.&lt;br /&gt;
&lt;br /&gt;
Additional C++ warnings which can be used include the following in Table 3. See ''[http://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html GCC's Options Controlling C++ Dialect]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 3: GCC C++ Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Woverloaded-virtual&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn when a function declaration hides virtual functions from a base class. &lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wreorder&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when the order of member initializers given in the code does not match the order in which they must be executed.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wsign-promo&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when overload resolution chooses a promotion from unsigned or enumerated type to a signed type, over a conversion to an unsigned type of the same size.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wnon-virtual-dtor&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn when a class has virtual functions and an accessible non-virtual destructor.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Weffc++&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn about violations of the following style guidelines from Scott Meyers' ''[http://www.aristeia.com/books.html Effective C++, Second Edition]'' book.&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
And additional Objective C warnings which are often useful include the following. See ''[http://gcc.gnu.org/onlinedocs/gcc/Objective_002dC-and-Objective_002dC_002b_002b-Dialect-Options.html Options Controlling Objective-C and Objective-C++ Dialects]'' for additional options and details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Discussion&lt;br /&gt;
|+ Table 4: GCC Objective C Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;200pt&amp;quot;|&amp;lt;nowiki&amp;gt;-Wstrict-selector-match&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;425pt&amp;quot;|Warn if multiple methods with differing argument and/or return types are found for a given selector when attempting to send a message using this selector to a receiver of type id or Class.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;-Wundeclared-selector&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Warn if a &amp;lt;tt&amp;gt;@selector(…)&amp;lt;/tt&amp;gt; expression referring to an undeclared selector is found. &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
The use of aggressive warnings will produce spurious noise. The noise is a tradeoff - you learn of potential problems at the cost of wading through some chaff. The following switches help reduce spurious noise from the warning system:&lt;br /&gt;
&lt;br /&gt;
* -Wno-unused-parameter (GCC)&lt;br /&gt;
* -Wno-type-limits (GCC 4.3)&lt;br /&gt;
* -Wno-tautological-compare (Clang)&lt;br /&gt;
&lt;br /&gt;
Finally, a simple version-based Makefile example is shown below. It differs from the feature-based makefiles produced by Autotools, which test for a particular feature and then define a symbol or configure a template file. Because not all platforms offer all options and flags, you can pursue one of two strategies: ship with a weakened posture that serves the lowest common denominator, or ship with everything in force and let those missing a feature edit the makefile to accommodate their installation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;CXX=g++&lt;br /&gt;
EGREP = egrep&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GCC_COMPILER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version')&lt;br /&gt;
GCC41_OR_LATER = $(shell $(CXX) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gcc version (4\.[1-9]|[5-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
GNU_LD210_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[0-9]|2\.[2-9])')&lt;br /&gt;
GNU_LD214_OR_LATER = $(shell $(LD) -v 2&amp;gt;&amp;amp;1 | $(EGREP) -i -c '^gnu ld .* (2\.1[4-9]|2\.[2-9])')&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC_COMPILER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wall -Wextra -Wconversion&lt;br /&gt;
  MY_CC_FLAGS += -Wformat=2 -Wformat-security&lt;br /&gt;
  MY_CC_FLAGS += -Wno-unused-parameter&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC41_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fstack-protector-all&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC42_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wstrict-overflow&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GCC43_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -Wtrampolines&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD210_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,nodlopen -z,nodump&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD214_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,noexecstack -z,noexecheap&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD215_OR_LATER),1)&lt;br /&gt;
  MY_LD_FLAGS += -z,relro -z,now&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
ifeq ($(GNU_LD216_OR_LATER),1)&lt;br /&gt;
  MY_CC_FLAGS += -fPIE&lt;br /&gt;
  MY_LD_FLAGS += -pie&lt;br /&gt;
endif&lt;br /&gt;
&lt;br /&gt;
# Use 'override' to honor the user's command line&lt;br /&gt;
override CFLAGS := $(MY_CC_FLAGS) $(CFLAGS)&lt;br /&gt;
override CXXFLAGS := $(MY_CC_FLAGS) $(CXXFLAGS)&lt;br /&gt;
override LDFLAGS := $(MY_LD_FLAGS) $(LDFLAGS)&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Clang/Xcode ===&lt;br /&gt;
&lt;br /&gt;
[http://clang.llvm.org Clang] and [http://llvm.org LLVM] have been aggressively developed since Apple lost its GPL compiler back in 2007 (due to Tivoization, which resulted in GPLv3). Since that time, a number of developers and Google have joined the effort. While Clang will consume most (all?) GCC/Binutils flags and switches, the project supports a number of its own options, including a static analyzer. In addition, Clang is relatively easy to build with additional diagnostics, such as Dr. John Regehr and Peng Li's [http://embed.cs.utah.edu/ioc/ Integer Overflow Checker (IOC)].&lt;br /&gt;
&lt;br /&gt;
IOC is incredibly useful, and has found bugs in a number of projects, from the Linux kernel (&amp;lt;tt&amp;gt;include/linux/bitops.h&amp;lt;/tt&amp;gt;, still unfixed) and SQLite to PHP, Firefox (many still unfixed), LLVM, and Python. Future versions of Clang (3.3 and above) will allow you to enable the checks out of the box with &amp;lt;tt&amp;gt;-fsanitize=integer&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;-fsanitize=shift&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Clang options can be found in the [http://clang.llvm.org/docs/UsersManual.html Clang Compiler User’s Manual]. Clang does include an option to turn on all warnings - &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt;. Use it with care - it will surface a lot of noise along with the issues you missed - but do use it regularly. For example, add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; to production builds and make non-spurious issues a quality gate. Under Xcode, simply add &amp;lt;tt&amp;gt;-Weverything&amp;lt;/tt&amp;gt; to &amp;lt;tt&amp;gt;CFLAGS&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;CXXFLAGS&amp;lt;/tt&amp;gt;.&lt;br /&gt;
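A sketch of that workflow from the command line (the two suppressed warnings are merely examples of checks often triaged as noise):&lt;br /&gt;
&lt;br /&gt;
```shell
# Enable every Clang warning, then silence only the checks you have
# reviewed and judged to be noise for this code base.
clang++ -Weverything \
    -Wno-c++98-compat -Wno-padded \
    main.cpp -o main
```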
&lt;br /&gt;
In addition to compiler warnings, both static analysis and additional security checks can be performed. Reading on Clang's static analysis capabilities can be found at [http://clang-analyzer.llvm.org Clang Static Analyzer]. Figure 1 below shows some of the security checks utilized by Xcode.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-11.png|thumb|450px|Figure 1: Clang/LLVM and Xcode options]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Visual Studio ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a convenient Integrated Development Environment (IDE) for managing solutions and their settings. The section called “Visual Studio Options” discusses options which should be used with Visual Studio, and the section called “Project Properties” demonstrates incorporating those options into a solution's project.&lt;br /&gt;
&lt;br /&gt;
The table below lists the compiler and linker switches which should be used under Visual Studio. Refer to Howard and LeBlanc's Writing Secure Code (Microsoft Press) for a detailed discussion; or ''[http://msdn.microsoft.com/en-us/magazine/cc337897.aspx Protecting Your Code with Visual C++ Defenses]'' in Security Briefs by Michael Howard. In the table below, “Visual Studio” refers to nearly all versions of the development environment, including Visual Studio 5.0 and 6.0.&lt;br /&gt;
&lt;br /&gt;
For a project compiled and linked with hardened settings, those settings can be verified with BinScope. BinScope is a verification tool from Microsoft that analyzes binaries to ensure that they have been built in compliance with Microsoft's Security Development Lifecycle (SDLC) requirements and recommendations. See the ''[https://www.microsoft.com/download/en/details.aspx?id=11910 BinScope Binary Analyzer]'' download page for details.&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
&lt;br /&gt;
|-style=&amp;quot;background:#DADADA&amp;quot;&lt;br /&gt;
!Flag or Switch!!Version!!Discussion&lt;br /&gt;
|+ Table 5: Visual Studio Warning Options&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|width=&amp;quot;150pt&amp;quot;|&amp;lt;nowiki&amp;gt;/W4&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|width=&amp;quot;100pt&amp;quot;|Visual Studio&lt;br /&gt;
|width=&amp;quot;350pt&amp;quot;|Warning level 4, which includes most warnings.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/sdl&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2012&lt;br /&gt;
|Adds recommended Security Development Lifecycle checks including extra security-relevant warnings as errors, and additional secure code-generation features.&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/Wall&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Enable all warnings, including those off by default.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/GS&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Adds a security cookie (guard or canary) on the stack before the return address to detect stack-based buffer overflows.&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/SAFESEH&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2003&lt;br /&gt;
|Safe structured exception handling to remediate SEH overwrites.&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/analyze&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Enterprise code analysis (freely available with Windows SDK for Windows Server 2008 and .NET Framework 3.5).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/NXCOMPAT&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005&lt;br /&gt;
|Data Execution Prevention (DEP).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/dynamicbase&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Address Space Layout Randomization (ASLR).&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;strict_gs_check&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2005 SP1&lt;br /&gt;
|Aggressively applies stack protections to a source file to help detect some categories of stack based buffer overruns.&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;nowiki&amp;gt;/d2guard4 /link /guard:cf&amp;lt;/nowiki&amp;gt;&lt;br /&gt;
|Visual Studio 2015&lt;br /&gt;
|&amp;lt;i&amp;gt;Control Flow Guard&amp;lt;/i&amp;gt; ensures that all indirect calls result in a jump to legal targets. Note that &amp;lt;nowiki&amp;gt;/d2guard4&amp;lt;/nowiki&amp;gt; is a compiler switch and &amp;lt;nowiki&amp;gt;/guard:cf&amp;lt;/nowiki&amp;gt; is a linker switch; the two must be used together. Also note that &amp;lt;i&amp;gt;Control Flow Guard&amp;lt;/i&amp;gt; helps protect against attacks such as heap sprays and no-op sleds as well.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;See Jon Sturgeon's discussion of the switch at ''[https://blogs.msdn.com/b/vcblog/archive/2010/12/14/off-by-default-compiler-warnings-in-visual-c.aspx Off By Default Compiler Warnings in Visual C++]''.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;a&amp;lt;/sup&amp;gt;When using /GS, there are a number of circumstances which affect the inclusion of a security cookie. For example, the guard is not used if there is no buffer in the stack frame, optimizations are disabled, or the function is declared naked or contains inline assembly.&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;sup&amp;gt;b&amp;lt;/sup&amp;gt;&amp;lt;tt&amp;gt;#pragma strict_gs_check(on)&amp;lt;/tt&amp;gt; should be used sparingly, but is recommended in high risk situations, such as when a source file parses input from the internet.&lt;br /&gt;
&lt;br /&gt;
=== Warning Suppression ===&lt;br /&gt;
&lt;br /&gt;
From the tables above, a lot of warnings have been enabled to help detect possible programming mistakes. These potential mistakes are detected by the compiler, which carries a lot of contextual information through its code analysis phase. At times, you will receive spurious warnings because the compiler is not ''that'' smart. It's understandable and even a good thing (how would you like to be out of a job because a program writes its own programs?). At times you will have to learn how to work with the compiler's warning system to suppress warnings. Notice what was not said: turn off the warnings.&lt;br /&gt;
&lt;br /&gt;
Suppressing warnings placates the compiler for spurious noise so you can get to the issues that matter (you are separating the wheat from the chaff). This section will offer some hints and point out some potential minefields. First is an unused parameter (for example, &amp;lt;tt&amp;gt;argc&amp;lt;/tt&amp;gt; or &amp;lt;tt&amp;gt;argv&amp;lt;/tt&amp;gt;). Suppressing unused parameter warnings is especially helpful for C++ and interface programming, where parameters are often unused. For this warning, simply define an &amp;quot;UNUSED_PARAMETER&amp;quot; macro and wrap the parameter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;#define UNUSED_PARAMETER(x) ((void)x)&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int main(int argc, char* argv[])&lt;br /&gt;
{&lt;br /&gt;
    UNUSED_PARAMETER(argc);&lt;br /&gt;
    UNUSED_PARAMETER(argv);&lt;br /&gt;
    …&lt;br /&gt;
}&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A potential minefield lies near &amp;quot;comparing unsigned and signed&amp;quot; values, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will catch it for you. This is because C/C++ promotion rules state the signed value will be promoted to an unsigned value and then compared. That means &amp;lt;tt&amp;gt;-1 &amp;gt; 1&amp;lt;/tt&amp;gt; after promotion! To fix this, you cannot blindly cast - you must first range test the value:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int x = GetX();&lt;br /&gt;
unsigned int y = GetY();&lt;br /&gt;
&lt;br /&gt;
ASSERT(x &amp;gt;= 0);&lt;br /&gt;
if(!(x &amp;gt;= 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? X is negative.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
if(static_cast&amp;lt;unsigned int&amp;gt;(x) &amp;gt; y)&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is greater than y&amp;quot; &amp;lt;&amp;lt; endl;&lt;br /&gt;
else&lt;br /&gt;
    cout &amp;lt;&amp;lt; &amp;quot;x is not greater than y&amp;quot; &amp;lt;&amp;lt; endl;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;x&amp;lt;/tt&amp;gt;. Just run the program and wait for it to tell you there is a problem. If there is a problem, the program will snap the debugger (and more importantly, not call a useless &amp;lt;tt&amp;gt;abort()&amp;lt;/tt&amp;gt; as specified by POSIX). It beats the snot out of &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt;s that are removed when no longer needed or that pollute output.&lt;br /&gt;
&lt;br /&gt;
Another conversion problem you will encounter is conversion between types, and &amp;lt;tt&amp;gt;-Wconversion&amp;lt;/tt&amp;gt; will also catch it for you. The following will always have an opportunity to fail, and should light up like a Christmas tree:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;struct sockaddr_in addr;&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
addr.sin_port = htons(atoi(argv[2]));&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following would probably serve you much better. Notice &amp;lt;tt&amp;gt;atoi&amp;lt;/tt&amp;gt; and friends are not used because they can silently fail. In addition, the code is instrumented so you don't need to waste a lot of time debugging potential problems:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;const char* cstr = GetPortString();&lt;br /&gt;
&lt;br /&gt;
ASSERT(cstr != NULL);&lt;br /&gt;
if(!(cstr != NULL))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port string is not valid.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
istringstream iss(cstr);&lt;br /&gt;
long long t = 0;&lt;br /&gt;
iss &amp;gt;&amp;gt; t;&lt;br /&gt;
&lt;br /&gt;
ASSERT(!(iss.fail()));&lt;br /&gt;
if(iss.fail())&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Failed to read port.&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// Should this be a port above the reserved range ([0-1024] on Unix)?&lt;br /&gt;
ASSERT(t &amp;gt; 0);&lt;br /&gt;
if(!(t &amp;gt; 0))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too small&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
ASSERT(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max()));&lt;br /&gt;
if(!(t &amp;lt; static_cast&amp;lt;long long&amp;gt;(numeric_limits&amp;lt;unsigned int&amp;gt;::max())))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Port is too large&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use port&lt;br /&gt;
unsigned short port = static_cast&amp;lt;unsigned short&amp;gt;(t);&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Again, notice the code above will debug itself - you don't need to set a breakpoint to see if there is a problem with &amp;lt;tt&amp;gt;port&amp;lt;/tt&amp;gt;. This code will continue checking conditions years after being instrumented (for example, if the port is later read from a configuration file). There's no need to remove the &amp;lt;tt&amp;gt;ASSERT&amp;lt;/tt&amp;gt;s as with &amp;lt;tt&amp;gt;printf&amp;lt;/tt&amp;gt; since they are silent guardians.&lt;br /&gt;
&lt;br /&gt;
Another useful suppression trick is to avoid ignoring return values. Not only is it useful to suppress the warning, it's required for correct code. For example, &amp;lt;tt&amp;gt;snprintf&amp;lt;/tt&amp;gt; will alert you to truncations through its return value. You should not make them silent truncations by ignoring the warning or casting to &amp;lt;tt&amp;gt;void&amp;lt;/tt&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;char path[PATH_MAX];&lt;br /&gt;
…&lt;br /&gt;
&lt;br /&gt;
int ret = snprintf(path, sizeof(path), &amp;quot;%s/%s&amp;quot;, GetDirectory(), GetObjectName());&lt;br /&gt;
ASSERT(ret != -1);&lt;br /&gt;
ASSERT(!(ret &amp;gt;= sizeof(path)));&lt;br /&gt;
&lt;br /&gt;
if(ret == -1 || ret &amp;gt;= sizeof(path))&lt;br /&gt;
    throw runtime_error(&amp;quot;WTF??? Unable to build full object name&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
// OK to use path&lt;br /&gt;
…&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The problem is pandemic, and not limited to boring user-land programs. Even projects which offer high integrity code, such as SELinux, suffer silent truncations. The following is from an approved SELinux patch, even though a comment was made that it [http://permalink.gmane.org/gmane.comp.security.selinux/16845 suffered silent truncations in its &amp;lt;tt&amp;gt;security_compute_create_name&amp;lt;/tt&amp;gt; function] from &amp;lt;tt&amp;gt;compute_create.c&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;int security_compute_create_raw(security_context_t scon,&lt;br /&gt;
                                security_context_t tcon,&lt;br /&gt;
                                security_class_t   tclass,&lt;br /&gt;
                                security_context_t * newcon)&lt;br /&gt;
{&lt;br /&gt;
  char path[PATH_MAX];&lt;br /&gt;
  char *buf;&lt;br /&gt;
  size_t size;&lt;br /&gt;
  int fd, ret;&lt;br /&gt;
&lt;br /&gt;
  if (!selinux_mnt) {&lt;br /&gt;
    errno = ENOENT;&lt;br /&gt;
    return -1;&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
  snprintf(path, sizeof path, &amp;quot;%s/create&amp;quot;, selinux_mnt);&lt;br /&gt;
  fd = open(path, O_RDWR);&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Unlike other examples, the above code will not debug itself, and you will have to set breakpoints and trace calls to determine the point of first failure. (And the code above gambles that the truncated file does not exist or is not under an adversary's control by blindly performing the &amp;lt;tt&amp;gt;open&amp;lt;/tt&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
== Runtime ==&lt;br /&gt;
&lt;br /&gt;
The previous sections concentrated on setting up your project for success. This section examines additional hints for running with increased diagnostics and defenses. Not all platforms are created equal: on GNU/Linux it is [http://sourceware.org/ml/binutils/2012-03/msg00309.html difficult to impossible to add hardening to a program] after compiling and static linking, while Windows allows post-build hardening through a download. Remember, the goal is to find the point of first failure quickly so you can improve the reliability and security of the code.&lt;br /&gt;
&lt;br /&gt;
=== Xcode ===&lt;br /&gt;
&lt;br /&gt;
Xcode offers additional [http://developer.apple.com/library/mac/#recipes/xcode_help-scheme_editor/Articles/SchemeDiagnostics.html Application Diagnostics] that can help find memory errors and object use problems. Schemes can be managed through the ''Product'' menu, ''Scheme'' submenu, and then ''Edit Scheme''. From the editor, navigate to the ''Diagnostics'' tab. In the figure below, four additional instruments are enabled for the debugging cycle: Scribble guards, Edge guards, Malloc guards, and Zombies.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-1.png|thumb|450px|Figure 2: Xcode Memory Diagnostics]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is one caveat with using some of the guards: Apple only provides them for the simulator, and not a device. In the past, the guards were available for both devices and simulators.&lt;br /&gt;
&lt;br /&gt;
=== Windows ===&lt;br /&gt;
&lt;br /&gt;
Visual Studio offers a number of debugging aids for use during development. The aids are called [http://msdn.microsoft.com/en-us/library/d21c150d.aspx Managed Debugging Assistants (MDAs)]. You can find the MDAs on the ''Debug'' menu, then ''Exceptions'' submenu. MDAs allow you to tune your debugging experience by, for example, filtering the exceptions for which the debugger should snap. For more details, see Stephen Toub's ''[http://msdn.microsoft.com/en-us/magazine/cc163606.aspx Let The CLR Find Bugs For You With Managed Debugging Assistants]''.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-2.png|thumb|450px|Figure 3: Managed Debugging Assistants]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Finally, for runtime hardening, Microsoft has a helpful tool called EMET, the [http://support.microsoft.com/kb/2458544 Enhanced Mitigation Experience Toolkit]. EMET allows you to apply runtime hardening to an executable that was built without it, which makes it very useful for utilities and other programs that were built without an SDL.&lt;br /&gt;
&lt;br /&gt;
{| align=&amp;quot;center&amp;quot;&lt;br /&gt;
| [[File:toolchan-hardening-3.png|thumb|450px|Figure 4: Windows and EMET]]&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Authors and Editors ==&lt;br /&gt;
&lt;br /&gt;
* Jeffrey Walton - jeffrey, owasp.org&lt;br /&gt;
* Jim Manico - jim, owasp.org&lt;br /&gt;
* Kevin Wall - kevin, owasp.org&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=246804</id>
		<title>Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=246804"/>
				<updated>2019-01-23T23:24:23Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Triple Submit Cookie */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
[[Cross-Site Request Forgery (CSRF)]] is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user’s web browser to perform an unwanted action on a trusted site while the user is authenticated. A CSRF attack works because browser requests automatically include any credentials associated with the site, such as the user's session cookie, IP address, etc. Therefore, if the user is authenticated to the site, the site cannot distinguish between a forged request and a legitimate request sent by the victim. What is needed is a token/identifier that is not accessible to the attacker and is not sent automatically (like cookies are) with the forged requests the attacker initiates. For more information on CSRF, see the OWASP [[Cross-Site Request Forgery (CSRF)|Cross-Site Request Forgery (CSRF) page]].&lt;br /&gt;
&lt;br /&gt;
The impact of a successful CSRF attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or making a purchase with the user’s credentials. In effect, CSRF attacks are used by an attacker to make a target system perform a function via the target's browser, without the user’s knowledge, at least until the unauthorized transaction has been committed.&lt;br /&gt;
&lt;br /&gt;
Impacts of successful CSRF exploits vary greatly based on the privileges of each victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator account, a CSRF attack can compromise the entire web application. Using social engineering, an attacker can embed malicious HTML or JavaScript code into an email or website to request a specific 'task URL'. The task then executes with or without the user's knowledge, either directly or by using a Cross-Site Scripting flaw. For example, see [https://en.wikipedia.org/wiki/Samy_(computer_worm) Samy MySpace Worm].&lt;br /&gt;
&lt;br /&gt;
== What's new? ==&lt;br /&gt;
If you have seen the OWASP [https://www.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;amp;action=history old CSRF prevention cheat sheets], you can observe that a lot has changed in this newer version. One of the major changes is that the “Verifying same origin with standard headers” CSRF defense has been moved to the Defense-in-Depth section, whereas token based mitigation moved to the Primary Defenses section (the technical reasons for this switch are explained under the respective sections). Multiple new sections (HMAC based token protection, auto CSRF mitigation techniques, login CSRF, not so popular CSRF mitigations, and CSRF mitigation myths) were added; new content was added to, and obsolete content removed from, the existing sections; and the security issues/caveats associated with each mitigation were included.&lt;br /&gt;
&lt;br /&gt;
==Warning: No Cross-Site Scripting (XSS) Vulnerabilities ==&lt;br /&gt;
[[Cross-Site Scripting]] is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat all CSRF mitigation techniques available in the market today (except mitigation techniques that involve user interaction, described later in this cheat sheet). This is because an XSS payload can simply read any page on the site using an XMLHttpRequest (direct DOM access can be used if on the same page), obtain the generated token from the response, and include that token with a forged request. This technique is exactly how the [https://en.wikipedia.org/wiki/Samy_(computer_worm) MySpace (Samy) worm] defeated MySpace's anti-CSRF defenses in 2005, which enabled the worm to propagate.&lt;br /&gt;
&lt;br /&gt;
It is imperative that no XSS vulnerabilities are present to ensure that CSRF defenses can't be circumvented. Please see the OWASP [[XSS (Cross Site Scripting) Prevention Cheat Sheet|XSS Prevention Cheat Sheet]] for detailed guidance on how to prevent XSS flaws.  &lt;br /&gt;
&lt;br /&gt;
== Resources that need to be protected from CSRF vulnerability ==&lt;br /&gt;
The following list assumes that you are not violating [http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1 RFC2616], section 9.1.1, by using GET requests for state changing operations. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' If for any reason you violate this rule, you will also need to protect those resources, which are mostly accessed via the default &amp;lt;code&amp;gt;form tag [GET method]&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;href&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;src&amp;lt;/code&amp;gt; attributes.&lt;br /&gt;
&lt;br /&gt;
* Form tags with POST &lt;br /&gt;
* Ajax/XHR calls&lt;br /&gt;
&lt;br /&gt;
== CSRF Defense Recommendations Summary ==&lt;br /&gt;
We recommend token based CSRF defense (either stateful/stateless) as a primary defense to mitigate CSRF in your applications. Only for highly sensitive operations, we also recommend a user interaction based protection (either re-authentication/one-time token, detailed in section 7.5) along with token based mitigation.&lt;br /&gt;
&lt;br /&gt;
As a defense-in-depth measure, consider implementing one mitigation from the Defense-in-Depth Techniques section (you can choose the mitigation that fits your ecosystem considering the issues mentioned under them). These defense-in-depth mitigation techniques are not recommended to be used by themselves (without token based mitigation) for mitigating CSRF in your applications.&lt;br /&gt;
&lt;br /&gt;
== Primary Defense Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Token Based Mitigation ===&lt;br /&gt;
This defense is one of the most popular and recommended methods to mitigate CSRF. It can be achieved either statefully (synchronizer token pattern) or statelessly (encrypted/HMAC based token pattern). See section 6.3 on how to mitigate login CSRF in your applications. For all the mitigations, it is implicit that general security principles should be adhered to:&lt;br /&gt;
* Strong encryption/HMAC functions should be used.&lt;br /&gt;
'''Note:''' You can select any algorithm per your organizational needs. We recommend AES256-GCM for encryption and SHA256/512 for HMAC.&lt;br /&gt;
* Strict key rotation and token lifetime policies should be maintained. Policies can be set according to your organizational needs. Generic key management guidance from OWASP can be found [[Key Management Cheat Sheet|here]].&lt;br /&gt;
&lt;br /&gt;
==== Synchronizer Token Pattern ====&lt;br /&gt;
Any state changing operation requires a secure random token (e.g., a CSRF token) to prevent CSRF attacks. A CSRF token should be unique per user session, be a large random value, and be generated by a cryptographically secure random number generator. The CSRF token is added as a hidden field for forms, as a header/parameter for AJAX calls, and within the URL if the state changing operation occurs via a GET (but see the &amp;quot;Disclosure of Token in URL&amp;quot; section below). The server rejects the requested action if the CSRF token fails validation.&lt;br /&gt;
&lt;br /&gt;
In order to facilitate a &amp;quot;transparent but visible&amp;quot; CSRF solution, developers are encouraged to adopt a pattern similar to [http://www.corej2eepatterns.com/Design/PresoDesign.htm Synchronizer Token Pattern] (The original intention of this synchronizer token pattern was to detect duplicate submissions in forms). The synchronizer token pattern requires the generation of random &amp;quot;challenge&amp;quot; tokens that are associated with the user's current session. These challenge tokens are then inserted within the HTML forms and calls associated with sensitive server-side operations. It is the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks as successful exploitation assumes the attacker knows the randomly generated token for the target victim's session. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' These tokens aren’t like cookies that are automatically sent with forged requests made from your browser from the attacker website. &lt;br /&gt;
&lt;br /&gt;
This is analogous to the attacker being able to guess the target victim's session identifier. &lt;br /&gt;
&lt;br /&gt;
The following describes a general approach to incorporate challenge tokens within the request.&lt;br /&gt;
&lt;br /&gt;
When a Web application formulates a request, the application should include a hidden input parameter with a common name such as &amp;quot;CSRFToken&amp;quot; for forms, or a header/parameter value for Ajax calls. The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness utilized in the data that is hashed to generate the random token.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;html&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;form action=&amp;quot;/transfer.do&amp;quot; method=&amp;quot;post&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;hidden&amp;quot; name=&amp;quot;CSRFToken&amp;quot; &lt;br /&gt;
value=&amp;quot;OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWEwYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZMGYwMGEwOA==&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is used for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the token in the request compared to the token found in the user session. If the token was not found within the request, or the value provided does not match the value within the user session, then the request should be aborted, and the event logged as a potential CSRF attack in progress.&lt;br /&gt;
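&lt;br /&gt;
As a minimal sketch of the pattern just described (the class and method names here are illustrative, not from any OWASP library; wiring the token into the session object and into form rendering is omitted):&lt;br /&gt;
&lt;br /&gt;
```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative helper for the synchronizer token pattern: generate the token
// once per session, store it in the user's session, and compare it against
// the value submitted with each state changing request.
public class CsrfTokenHelper {
    private static final SecureRandom RNG = new SecureRandom();

    // Called once when the session is created; store the result in the session.
    public static String generateToken() {
        byte[] bytes = new byte[32];  // 256 bits from a CSPRNG
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Called on every state changing request; abort and log on failure.
    public static boolean isValid(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) {
            return false;
        }
        // Constant-time comparison so the token cannot be guessed from timing.
        return MessageDigest.isEqual(sessionToken.getBytes(), requestToken.getBytes());
    }
}
```
&lt;br /&gt;
On session creation, call generateToken() and store the result in the session; on each state changing request, pass the stored token and the submitted token to isValid(), rejecting and logging the request on failure.&lt;br /&gt;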
&lt;br /&gt;
To further enhance the security of this proposed design, consider randomizing the CSRF token parameter name and/or value for each request. Implementing this approach results in the generation of per-request tokens as opposed to per-session tokens. This is more secure than per-session tokens as the time range for an attacker to exploit the stolen tokens is minimal. However, this may result in usability concerns. For example, the &amp;quot;Back&amp;quot; button browser capability is often hindered as the previous page may contain a token that is no longer valid. Interaction with this previous page will result in a CSRF false positive security event at the server. A few applications that need high security (such as banks) implement this approach; you have to check what suits your needs. Regardless of the approach taken, developers are encouraged to protect the CSRF token the same way they protect authenticated session identifiers, such as the use of TLS.&lt;br /&gt;
&lt;br /&gt;
'''Existing Synchronizer Implementations'''&lt;br /&gt;
&lt;br /&gt;
Synchronizer token defenses have been built into many frameworks, so we strongly recommend using them first when they are available. External components that add CSRF defenses to existing applications are also recommended. OWASP has the following: &lt;br /&gt;
&lt;br /&gt;
* For Java: OWASP [[CSRF Guard]]&lt;br /&gt;
* For PHP and Apache: [[CSRFProtector Project]]&lt;br /&gt;
&lt;br /&gt;
'''Disclosure of Token in URL'''&lt;br /&gt;
&lt;br /&gt;
Some implementations of synchronizer tokens include the challenge token in GET (URL) requests as well as POST requests. This is often implemented as a result of sensitive server-side operations being invoked as a result of embedded links in the page or other general design patterns. These patterns are often implemented without knowledge of CSRF and an understanding of CSRF prevention design strategies. While this control does help mitigate the risk of CSRF attacks, the unique per-session token is being exposed for GET requests. CSRF tokens in GET requests are potentially leaked at several locations: browser history, log files, network appliances that make a point to log the first line of an HTTP request, and Referer headers if the protected site links to an external site. In the latter case (leaked CSRF token due to the Referer header being parsed by a linked site), it is trivially easy for the linked site to launch a CSRF attack on the protected site, and they will be able to target this attack very effectively, since the Referer header tells them the site as well as the CSRF token. The attack could be run entirely from JavaScript, so that a simple addition of a script tag to the HTML of a site can launch an attack (whether on an originally malicious site or on a hacked site). Additionally, since HTTPS requests from HTTPS contexts will not strip the Referer header (as opposed to HTTPS to HTTP requests) CSRF token leaks via Referer can still happen on HTTPS Applications.&lt;br /&gt;
&lt;br /&gt;
The ideal solution is to only include the CSRF token in POST requests and modify server-side actions that have a state changing effect to only respond to POST requests. This is in fact what &amp;lt;nowiki&amp;gt;RFC 2616&amp;lt;/nowiki&amp;gt; requires for GET requests. If sensitive server-side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests.&lt;br /&gt;
&lt;br /&gt;
In most JavaEE web applications, however, HTTP method scoping is rarely ever utilized when retrieving HTTP parameters from a request. Calls to &amp;quot;HttpServletRequest.getParameter&amp;quot; will return a parameter value regardless of whether it was sent via GET or POST. This is not to say HTTP method scoping cannot be enforced. It can be achieved if a developer explicitly overrides doPost() in the HttpServlet class or leverages framework specific capabilities such as the AbstractFormController class in Spring.&lt;br /&gt;
&lt;br /&gt;
For these cases, attempting to retrofit this pattern in existing applications requires significant development time and cost, and as a temporary measure it may be better to pass CSRF tokens in the URL. Once the application has been fixed to respond to HTTP GET and POST verbs correctly, CSRF tokens for GET requests should be turned off.&lt;br /&gt;
&lt;br /&gt;
==== Encryption based Token Pattern ====&lt;br /&gt;
The Encrypted Token Pattern leverages encryption, rather than comparison, as its method of token validation. It is most suitable for applications that do not want to maintain any state on the server side.&lt;br /&gt;
&lt;br /&gt;
After successful authentication, the server generates a unique token composed of the user's ID, a timestamp value and a [http://en.wikipedia.org/wiki/Cryptographic_nonce nonce], using a unique key available only on the server. This token is returned to the client and embedded in a hidden field for forms, or in a request header/parameter for AJAX requests. On receipt of this request, the server reads and decrypts the token value with the same key used to create the token. The inability to correctly decrypt suggests an intrusion attempt. Once decrypted, the UserId and timestamp contained within the token are validated; the UserId is compared against the currently logged in user, and the timestamp is compared against the current time.&lt;br /&gt;
&lt;br /&gt;
On successful token-decryption, the server has access to parsed values, ideally in the form of [http://en.wikipedia.org/wiki/Claims-based_identity claims]. These claims are processed by comparing the UserId claim to any potentially stored UserId (in a Cookie or Session variable, if the site already contains a means of authentication). The Timestamp is validated against the current time, preventing replay attacks. Alternatively, in the case of a CSRF attack, the server will be unable to decrypt the poisoned token, and can block and log the attack.&lt;br /&gt;
&lt;br /&gt;
This technique addresses some of the shortfalls in other stateless approaches, such as the need to store data in a Cookie, circumventing the Cookie-subdomain and [[HttpOnly]] issues.&lt;br /&gt;
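&lt;br /&gt;
A rough illustrative sketch of this pattern follows (the class name and key handling are hypothetical; a real deployment would load the AES key from a protected key store, and would validate the decrypted UserId and timestamp claims as described above):&lt;br /&gt;
&lt;br /&gt;
```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Illustrative encrypted token pattern using AES-256-GCM. The key exists
// only on the server; the client receives opaque ciphertext.
public class CsrfEncryptedToken {
    private static final SecureRandom RNG = new SecureRandom();
    private static final byte[] KEY = new byte[32];  // AES-256 key, server-side only
    static { RNG.nextBytes(KEY); }

    // Encrypt "userId|timestamp|nonce" and prepend the random IV.
    public static String issue(String userId, long timestamp, String nonce) {
        try {
            byte[] iv = new byte[12];
            RNG.nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, "AES"), new GCMParameterSpec(128, iv));
            byte[] ct = c.doFinal((userId + "|" + timestamp + "|" + nonce).getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getEncoder().encodeToString(out);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Returns the decrypted "userId|timestamp|nonce" claims, or null when the
    // token fails to decrypt (i.e., a possible intrusion attempt to log).
    public static String open(String token) {
        try {
            byte[] in = Base64.getDecoder().decode(token);
            byte[] iv = Arrays.copyOfRange(in, 0, 12);
            byte[] ct = Arrays.copyOfRange(in, 12, in.length);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(KEY, "AES"), new GCMParameterSpec(128, iv));
            return new String(c.doFinal(ct), StandardCharsets.UTF_8);
        } catch (Exception e) {
            return null;
        }
    }
}
```
&lt;br /&gt;
The GCM authentication tag means any tampering with the ciphertext makes decryption fail outright, so a forged or modified token is detected before the claims are even parsed.&lt;br /&gt;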
&lt;br /&gt;
==== HMAC Based Token Pattern ====&lt;br /&gt;
[https://en.wikipedia.org/wiki/HMAC HMAC (hash-based message authentication code)] is a cryptographic function that helps to guarantee integrity and authentication of a message. It is another way that CSRF mitigation can be achieved without maintaining any state at the server and is similar to an encryption token-based pattern with two main differences:&lt;br /&gt;
* Uses a strong HMAC function instead of an encryption function to generate the token&lt;br /&gt;
* Includes an additional field called ‘operation’ that would indicate the purpose of the operation for which you are including the CSRF token (may it be form tag/ajax call) &lt;br /&gt;
(Ex: ‘oneclickpurchase’ (or) buy/asin=SDFH&amp;amp;category=2&amp;amp;quantity=3)&lt;br /&gt;
&lt;br /&gt;
'''Note:''' The fields mentioned in the encryption token pattern (user's ID, a timestamp value, and a nonce) are included here as well. &lt;br /&gt;
&lt;br /&gt;
The operation field mitigates the fact that a hash function generates the same value for the same input across iterations (unlike strong encryption functions, which generate a different value each time they encrypt), so it helps avoid repeated token values across your application. The nonce field serves the same purpose as in the encrypted token pattern (i.e., to avoid rare collisions due to weak cryptographic functions) and acts as a defense-in-depth measure. &lt;br /&gt;
&lt;br /&gt;
Generate the token using HMAC over all four fields mentioned previously (user's ID, a timestamp value, nonce, and operation) and then include it in hidden fields for form tags, or headers/parameters for AJAX calls. Once you receive the HMAC from the client in a request, re-generate the HMAC with the same fields used to create it, and verify that the re-generated HMAC matches the one received from the client. If it does, it is a legitimate user request; if it does not, flag it as a CSRF intrusion and alert your incident response teams. Because an attacker has no visibility into the key used to generate the HMAC, there is no way for them to re-generate a valid token to use in a forged request.&lt;br /&gt;
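A hedged sketch of the HMAC token pattern using the JDK's HmacSHA256. The token format (base64url fields, a &quot;.&quot; separator) and the class name are illustrative assumptions; claim validation (user, freshness, operation) would follow exactly as in the encryption pattern once the HMAC verifies.&lt;br /&gt;

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Token = base64url(fields) + "." + HMAC(key, fields), where fields are
// "userId|timestamp|nonce|operation". The key never leaves the server.
public class HmacCsrfToken {

    static String hmac(byte[] key, String msg) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return Base64.getUrlEncoder()
                     .encodeToString(mac.doFinal(msg.getBytes(StandardCharsets.UTF_8)));
    }

    public static String issue(byte[] key, String userId, String operation) throws Exception {
        long nonce = new SecureRandom().nextLong();
        String fields = userId + "|" + System.currentTimeMillis() + "|" + nonce + "|" + operation;
        return Base64.getUrlEncoder().encodeToString(fields.getBytes(StandardCharsets.UTF_8))
               + "." + hmac(key, fields);
    }

    /** Re-compute the HMAC over the received fields; compare in constant time. */
    public static boolean validate(byte[] key, String token) {
        try {
            String[] parts = token.split("\\.", 2);
            String fields = new String(Base64.getUrlDecoder().decode(parts[0]),
                                       StandardCharsets.UTF_8);
            return MessageDigest.isEqual(
                hmac(key, fields).getBytes(StandardCharsets.UTF_8),
                parts[1].getBytes(StandardCharsets.UTF_8));
        } catch (Exception e) {
            return false;   // malformed token => flag as CSRF intrusion
        }
    }
}
```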
&lt;br /&gt;
=== Auto CSRF Mitigation Techniques ===&lt;br /&gt;
Though token-based mitigation is widely used (stateful with the synchronizer token and stateless with the encrypted/HMAC token), the major problem with these techniques is the human tendency to forget things at times. If a developer forgets to add the token to any state-changing operation, they make the application vulnerable to CSRF. To avoid this, you can try to automate the process of adding tokens to CSRF-vulnerable resources (mentioned earlier in this document). You can achieve this by doing the following:&lt;br /&gt;
* Write wrappers (that auto-add tokens when used) around default form tags/AJAX calls and educate your developers to use those wrappers instead of the standard tags. Though this approach is better than depending purely on developers to add tokens, it is still vulnerable to the human tendency to forget things. [https://docs.spring.io/spring-security/site/docs/3.2.0.CI-SNAPSHOT/reference/html/csrf.html Spring Security] uses this technique to add CSRF tokens by default when a custom &amp;lt;form:form&amp;gt; tag is used; you can opt to use it after verifying that it is enabled and properly configured in the Spring Security version you are using.&lt;br /&gt;
* Write a hook (that captures the traffic and adds tokens to CSRF-vulnerable resources before rendering to customers) in your organizational web rendering frameworks. Because it is hard to analyze whether a particular response performs a state change (and thus needs a token), you might want to include tokens in all CSRF-vulnerable resources (e.g., include tokens in all POST responses). This is one recommended approach, but you need to consider the performance costs it might incur.&lt;br /&gt;
* Get the tokens automatically added on the client side when the page is rendered in the user’s browser, with the help of a client-side script (this approach is used by [[CSRF Guard]]). You need to consider any possible JavaScript hijacking attacks.&lt;br /&gt;
We recommend researching whether the framework you are using has an option to achieve CSRF protection by default before trying to build a custom auto-tokening system. For example, .NET has [https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery?view=aspnetcore-2.1 in-built protection] that adds tokens to CSRF-vulnerable resources. You are responsible for proper configuration (such as key management and token management) before using these in-built CSRF protections that auto-token CSRF-vulnerable resources.&lt;br /&gt;
&lt;br /&gt;
=== Login CSRF ===&lt;br /&gt;
Most developers tend to ignore CSRF vulnerabilities on login forms, assuming that CSRF is not applicable there because the user is not yet authenticated. That assumption is false. A CSRF vulnerability can still occur on a login form where the user is not authenticated, but its impact/risk profile is quite different from that of a general CSRF vulnerability (when a user is authenticated).&lt;br /&gt;
&lt;br /&gt;
With a CSRF vulnerability on a login form, an attacker can log a victim into the attacker's account and learn behavior from the victim's searches. For more information about login CSRF and other risks, see section 3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper.&lt;br /&gt;
&lt;br /&gt;
Login CSRF can be mitigated by creating pre-sessions (sessions before a user is authenticated) and including tokens in the login form. You can use any of the techniques mentioned above to generate tokens. Pre-sessions can be transitioned to real sessions once the user is authenticated. This technique is described in [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery section 4.1].&lt;br /&gt;
&lt;br /&gt;
If sub-domains under your master domain are treated as untrusted in your threat model, login CSRF is difficult to mitigate. In such cases, a strict subdomain- and path-level Referer header validation (detailed in section 7.1) can mitigate CSRF on login forms to an extent; this is viable because most login pages are served over HTTPS (so the Referer is not stripped) and are linked from home pages.&lt;br /&gt;
&lt;br /&gt;
== Defense-In-Depth Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Verifying origin with standard headers ===&lt;br /&gt;
This defense technique is specifically proposed in section 5.0 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. This paper first proposed the creation of the Origin header and its use as a CSRF defense mechanism.&lt;br /&gt;
&lt;br /&gt;
There are two steps to this mitigation, both of which rely on examining an HTTP request header value.&lt;br /&gt;
&lt;br /&gt;
1. Determine the origin the request is coming from (source origin). This can be done via the Origin and/or Referer header.&lt;br /&gt;
&lt;br /&gt;
2. Determine the origin the request is going to (target origin).&lt;br /&gt;
&lt;br /&gt;
On the server side, we verify whether the two match. If they do, we accept the request as legitimate (meaning it’s a same-origin request); if they don’t, we discard it (meaning the request originated cross-domain). The reliability of these headers comes from the fact that they cannot be altered programmatically (using JavaScript in an XSS), as they fall under the [https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name forbidden headers] list (i.e., only browsers can set them).&lt;br /&gt;
&lt;br /&gt;
====Identifying Source Origin (via Origin/Referer header) ====&lt;br /&gt;
'''Checking the Origin Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is present, verify that its value matches the target origin. Unlike the Referer, the Origin header will be present in HTTP requests that originate from an HTTPS URL.&lt;br /&gt;
&lt;br /&gt;
'''Checking the Referer Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is not present, verify the hostname in the Referer header matches the target origin. This method of CSRF mitigation is also commonly used with unauthenticated requests, such as requests made prior to establishing a session state, which is required to keep track of a synchronization token.&lt;br /&gt;
&lt;br /&gt;
In both cases, make sure the target origin check is strong. For example, if your site is &amp;quot;site.com&amp;quot;, make sure &amp;quot;site.com.attacker.com&amp;quot; does not pass your origin check (i.e., match against the entire origin, including everything after the hostname, rather than a prefix).&lt;br /&gt;
&lt;br /&gt;
If neither of these headers is present, you can either accept or block the request. We recommend '''blocking'''. Alternatively, you might want to log all such instances, monitor their use cases/behavior, and start blocking such requests only once you have enough confidence.&lt;br /&gt;
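The source-origin checks above can be sketched as a pair of plain string helpers (the class name and the use of `java.net.URI` for parsing are illustrative choices): the key point is matching the entire origin so that a prefix like &quot;site.com&quot; cannot be satisfied by &quot;site.com.attacker.com&quot;.&lt;br /&gt;

```java
import java.net.URI;

// Illustrative origin comparison: reduce a header value (Origin or Referer)
// to scheme://host[:port] and require an exact, full-origin match.
public class OriginCheck {

    /** Reduce a URL (e.g. a Referer value) to its origin: scheme://host[:port]. */
    static String toOrigin(String url) throws Exception {
        URI u = new URI(url);
        return u.getScheme() + "://" + u.getHost()
               + (u.getPort() == -1 ? "" : ":" + u.getPort());
    }

    /**
     * Match the ENTIRE origin, never a prefix, so that
     * "https://site.com.attacker.com" cannot pass for "https://site.com".
     */
    public static boolean sameOrigin(String sourceUrl, String targetOrigin) {
        try {
            return toOrigin(sourceUrl).equalsIgnoreCase(toOrigin(targetOrigin));
        } catch (Exception e) {
            return false;   // unparseable header value => reject the request
        }
    }
}
```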
&lt;br /&gt;
==== Identifying the Target Origin ====&lt;br /&gt;
You might think it’s easy to determine the target origin, but it frequently is not. The first thought is to simply grab the target origin (i.e., its hostname and port number) from the URL in the request. However, the application server frequently sits behind one or more proxies, and the original URL is different from the URL the app server actually receives. If your application server is directly accessed by its users, then using the origin in the URL is fine and you're all set.&lt;br /&gt;
&lt;br /&gt;
If you are behind a proxy, there are a number of options to consider.&lt;br /&gt;
* '''Configure your application to simply know its target origin:''' It’s your application, so you can find its target origin and set that value in some server configuration entry. This is the most secure approach, as the value is defined server side and is therefore trusted. However, it might be problematic to maintain if your application is deployed in many places, e.g., dev, test, QA, production, and possibly multiple production instances. Setting the correct value for each of these situations might be difficult, but if you can do it via some central configuration and have your instances fetch the value from it, that's great! ('''Note:''' Make sure the centralized configuration store is maintained securely, because a major part of your CSRF defense depends on it.)&lt;br /&gt;
&lt;br /&gt;
* '''Use the Host header value:''' If you prefer that the application find its own target so it doesn't have to be configured for each deployed instance, we recommend using the Host family of headers. The Host header's purpose is to contain the target origin of the request. But if your app server sits behind a proxy, the Host header value is most likely changed by the proxy to the target origin of the URL behind the proxy, which is different from the original URL. This modified Host header origin won't match the source origin in the original Origin or Referer headers.&lt;br /&gt;
&lt;br /&gt;
* '''Use the X-Forwarded-Host header value:''' To avoid the issue of the proxy altering the Host header, there is another header, X-Forwarded-Host, whose purpose is to contain the original Host header value the proxy received. Most proxies pass along the original Host header value in the X-Forwarded-Host header, so that header value is likely to be the target origin value you need to compare to the source origin in the Origin or Referer header.&lt;br /&gt;
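The order of preference just described can be sketched as a small helper (class name and the plain `Map` of headers are stand-ins for your framework's request API): a configured value first, then X-Forwarded-Host, and the raw Host header only when no proxy is involved.&lt;br /&gt;

```java
import java.util.Map;

// Hypothetical target-origin resolution following the options listed above.
public class TargetOrigin {
    public static String resolve(String configured, Map<String, String> headers) {
        if (configured != null) {
            return configured;                     // server-side config: most trusted
        }
        String xfh = headers.get("X-Forwarded-Host");
        if (xfh != null) {
            return xfh.split(",")[0].trim();       // first value = original Host seen by proxy
        }
        return headers.get("Host");                // only safe without a proxy in front
    }
}
```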
&lt;br /&gt;
In earlier versions of this cheat sheet, this mitigation was treated as a primary defense. For the reasons mentioned below, it has been moved to the Defense-in-Depth section.&lt;br /&gt;
&lt;br /&gt;
By definition, this mitigation works only when the Origin or Referer header is present in the request. Though these headers are included the '''majority''' of the time, there are a few use cases where they are not (most of them for legitimate reasons, such as safeguarding user privacy or accommodating the browser ecosystem). Some of these use cases are:&lt;br /&gt;
* Internet Explorer 11 does not add the Origin header on a CORS request across sites of a trusted zone. The Referer header will remain the only indication of the UI origin. See the following references in stackoverflow [https://stackoverflow.com/questions/20784209/internet-explorer-11-does-not-add-the-origin-header-on-a-cors-request here] and [https://github.com/silverstripe/silverstripe-graphql/issues/118 here].&lt;br /&gt;
* In an instance following a [https://stackoverflow.com/questions/22397072/are-there-any-browsers-that-set-the-origin-header-to-null-for-privacy-sensitiv 302 redirect cross-origin], Origin is not included in the redirected request because that may be considered sensitive information that should not be sent to the other origin.&lt;br /&gt;
* There are some [https://wiki.mozilla.org/Security/Origin#Privacy-Sensitive_Contexts privacy contexts] where Origin is set to “null”. For example, see [https://www.google.com/search?q=origin+header+sent+null+value+site%3Astackoverflow.com&amp;amp;oq=origin+header+sent+null+value+site%3Astackoverflow.com here].&lt;br /&gt;
* The Origin header is included in all cross-origin requests, but for same-origin requests most browsers only include it in POST/DELETE/PUT requests. '''Note:''' Although it is not ideal, many developers use GET requests to do state-changing operations.&lt;br /&gt;
* The Referer header is no exception: there are multiple use cases where it is omitted as well ([https://stackoverflow.com/questions/6880659/in-what-cases-will-http-referer-be-empty &amp;lt;nowiki&amp;gt;[1]&amp;lt;/nowiki&amp;gt;], [https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer &amp;lt;nowiki&amp;gt;[2]&amp;lt;/nowiki&amp;gt;], [https://en.wikipedia.org/wiki/HTTP_referer#Referer_hiding &amp;lt;nowiki&amp;gt;[3]&amp;lt;/nowiki&amp;gt;], [https://seclab.stanford.edu/websec/csrf/csrf.pdf &amp;lt;nowiki&amp;gt;[4]&amp;lt;/nowiki&amp;gt;] and [https://www.google.com/search?q=referer+header+sent+null+value+site:stackoverflow.com &amp;lt;nowiki&amp;gt;[5]&amp;lt;/nowiki&amp;gt;]). Load balancers, proxies, and embedded network devices are also well known to strip the Referer header for privacy reasons when logging requests.&lt;br /&gt;
&lt;br /&gt;
Though exceptions can be written into your source and target origin check logic for the cases above, there is currently no central repository referencing all such use cases (and even if there were one, keeping it up to date would be a problem). Each browser might also handle these use cases differently (browsers are known to handle things differently for their own ecosystems; IE not sending the Origin header within a trusted zone is one such example). Rejecting requests that do not contain Origin and/or Referer headers might sound like a good idea, but it can impact legitimate users. Keeping this system in monitoring mode, investigating use cases such as those above, and then adding them to your exception logic is a process you may consider to make this defense more stable in your environment.&lt;br /&gt;
&lt;br /&gt;
This CSRF defense relies on browser behavior, which can change at times (for example, when new privacy contexts are discovered), so you have to keep your validation logic updated. With token-based mitigation, by contrast, you have full control over the CSRF mitigation: if browsers were to alter CSRF tokens, they would literally be changing the HTML content of rendered pages (which no browser would ever want to do!).&lt;br /&gt;
&lt;br /&gt;
When there is an XSS vulnerability on a page of an application protected with Origin and/or Referer header checks, the effort required to exploit state-changing operations (which are typically vulnerable to CSRF) on other pages is very low (grab the parameters and forge a request, since the Origin and Referer headers are included by default by browsers) compared to token-based mitigation (where the attacker needs to download the target page, parse the DOM for the token, construct a forged request, and send it to the server).&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Although the concept of an origin header stemmed from [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper that references robust CSRF defenses, the initial [https://tools.ietf.org/html/rfc6454 origin header RFC] does not reference mitigating CSRF in any way (another [https://tools.ietf.org/id/draft-abarth-origin-03.html draft version] does, however).&lt;br /&gt;
&lt;br /&gt;
=== Double Submit Cookie ===&lt;br /&gt;
If maintaining state for the CSRF token on the server side is problematic, an alternative defense is the double submit cookie technique. This technique is easy to implement and is stateless. In this technique, we send a random value both in a cookie and as a request parameter, and the server verifies whether the cookie value and request value match. When a user visits (even before authenticating, to prevent login CSRF), the site should generate a cryptographically strong pseudorandom value and set it as a cookie on the user's machine, separate from the session identifier. The site then requires that every transaction request include this pseudorandom value as a hidden form value (or other request parameter/header). If the two match on the server side, the server accepts the request as legitimate; if they don’t, it rejects the request.&lt;br /&gt;
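A minimal sketch of the mechanics just described, using only the JDK (the class name is illustrative; wiring the value into the cookie and hidden field is framework-specific and omitted): one cryptographically strong random value, issued once, then compared in constant time on each transaction request.&lt;br /&gt;

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Illustrative double submit helper: newToken() produces the value to set both
// as the CSRF cookie and as the hidden form field; matches() is the server-side
// comparison on each state-changing request.
public class DoubleSubmit {
    private static final SecureRandom RNG = new SecureRandom();

    /** 256-bit random value, base64url-encoded. */
    public static String newToken() {
        byte[] b = new byte[32];
        RNG.nextBytes(b);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(b);
    }

    /** Accept the request only if cookie and parameter carry the same token. */
    public static boolean matches(String cookieValue, String paramValue) {
        if (cookieValue == null || paramValue == null) {
            return false;                           // missing either side => reject
        }
        return MessageDigest.isEqual(cookieValue.getBytes(), paramValue.getBytes());
    }
}
```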
&lt;br /&gt;
There’s a belief that this technique would work because a cross origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy. This means that while an attacker can force a victim to send any value with a malicious CSRF request, the attacker will be unable to modify or read the value stored in the cookie (with which the server compares the token value).&lt;br /&gt;
&lt;br /&gt;
There are a couple of drawbacks associated with the assumptions made here, namely trusting subdomains and relying on proper configuration of the whole site to accept HTTPS connections only. The [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf Blackhat talk] by Rich Lundeen references these drawbacks.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;''With double submit, if an attacker can write a cookie they can obviously defeat the protection. And again, writing cookies is significantly easier then reading them. The fact that cookies can be written is difficult for many people to understand. After all, doesn't the same origin policy specify that one domain cannot access cookies from another domain? However, there are two common scenarios where writing cookies across domains is possible:''&lt;br /&gt;
&lt;br /&gt;
''a) While it's true that hellokitty.marketing.example.com cannot read cookies or access the DOM from secure.example.com because of the same origin policy, hellokitty.marketing.example.com can write cookies to the parent domain (example.com), and these cookies are then consumed by secure.example.com (secure.example.com has no good way to distinguish which site set the cookie). Additionally, there are methods of forcing secure.example.com to always accept your cookie first. What this means is that XSS in hellokitty.marketing.example.com is able to overwrite cookies in secure.example.com.''&lt;br /&gt;
&lt;br /&gt;
''b) If an attacker is in the middle, they can usually force a request to the same domain over HTTP. If an application is hosted at &amp;lt;nowiki&amp;gt;https://secure.example.com&amp;lt;/nowiki&amp;gt;, even if the cookies are set with the secure flag, a man in the middle can force connections to &amp;lt;nowiki&amp;gt;http://secure.example.com&amp;lt;/nowiki&amp;gt; and set (overwrite) any arbitrary cookies (even though the secure flag prevents the attacker from reading those cookies). Even if the HSTS header is set on the server and the browser visiting the site supports HSTS (this would prevent a man in the middle from forcing plain text HTTP requests) unless the HSTS header is set in a way that includes all subdomains, a man in the middle can simply force a request to a separate subdomain and overwrite cookies similar to 1. In other words, as long as &amp;lt;nowiki&amp;gt;http://hellokitty.marketing.example.com&amp;lt;/nowiki&amp;gt; doesn't force HTTPS, then an attacker can overwrite cookies on any example.com subdomain.''&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So, unless you are sure that your subdomains are fully secured and only accept HTTPS connections (we believe it’s difficult to guarantee at large enterprises), you should not rely on the Double Submit Cookie technique as a primary mitigation for CSRF.&lt;br /&gt;
&lt;br /&gt;
A variant of the double submit cookie that can mitigate both of the risks mentioned above is to include the token in an encrypted cookie - often within the authentication cookie - and then, at the server side, match it (after decrypting the authentication cookie) against the token in the hidden form field or parameter/header for AJAX calls. This works because a subdomain has no way to overwrite a properly crafted encrypted cookie without the necessary information, such as the encryption key.&lt;br /&gt;
&lt;br /&gt;
=== SameSite Cookie Attribute ===&lt;br /&gt;
SameSite is a cookie attribute (similar to [[HttpOnly]], Secure, etc.) introduced by Google to mitigate CSRF attacks. It is defined in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 this] Internet Draft. The attribute prevents the browser from sending the cookie along with cross-site requests. Possible values for this attribute are lax and strict.&lt;br /&gt;
&lt;br /&gt;
The strict value will prevent the cookie from being sent by the browser to the target site in all cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user will not be able to access the project. A bank website, however, most likely doesn't want to allow any transactional pages to be linked from external sites, so the strict flag would be most appropriate there.&lt;br /&gt;
&lt;br /&gt;
The lax value provides a reasonable balance between security and usability for websites that want to maintain a user's logged-in session after the user arrives from an external link. In the GitHub scenario above, the session cookie would be allowed when following a regular link from an external website, while being blocked in CSRF-prone request methods such as POST. The only cross-site requests allowed in lax mode are top-level navigations that also use “safe” HTTP methods (more details [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7.1 here]).&lt;br /&gt;
&lt;br /&gt;
Example of cookies using this attribute:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Strict&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Lax&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Support for this attribute in different browsers is increasing, but some browsers have yet to adopt it. As of August 2018, the SameSite attribute was supported by browsers used by 68.92% of Internet users (detailed statistics are [https://caniuse.com/#feat=same-site-cookie-attribute here]).&lt;br /&gt;
&lt;br /&gt;
Though this technique seems efficient in mitigating CSRF attacks, it is still in its early stages (in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft]) and does not have full browser support, as mentioned above. Google’s [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft] also mentions a couple of cases where attackers can make forged requests appear to be same-site requests (thus causing SameSite cookies to be sent).&lt;br /&gt;
&lt;br /&gt;
Considering the factors above, SameSite is not recommended as a primary defense. Google agrees with this stance and strongly encourages developers to deploy server-side defenses, such as tokens, to mitigate CSRF more fully.&lt;br /&gt;
&lt;br /&gt;
=== Use of Custom Request Headers ===&lt;br /&gt;
&lt;br /&gt;
Adding CSRF tokens, a double submit cookie and value, an encrypted token, or another defense that involves changing the UI can frequently be complex or otherwise problematic. An alternate defense that is particularly well suited for AJAX/XHR endpoints is the use of a custom request header. This defense relies on the [https://en.wikipedia.org/wiki/Same-origin_policy same-origin policy (SOP)] restriction that only JavaScript can be used to add a custom header, and only within its origin. By default, browsers do not allow JavaScript to make cross-origin requests.&lt;br /&gt;
&lt;br /&gt;
A particularly attractive custom header and value to use is “X-Requested-With: XMLHttpRequest” because most JavaScript libraries already add this header to requests they generate by default. However, some do not. For example, AngularJS used to, but does not anymore. For more information, see [https://github.com/angular/angular.js/commit/3a75b1124d062f64093a90b26630938558909e8d their rationale] and how to add it back.&lt;br /&gt;
&lt;br /&gt;
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints in order to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive to REST services. You can always add your own custom header and value if that is preferred.&lt;br /&gt;
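The check itself is a one-liner; the sketch below uses a plain `Map` of headers in place of the servlet API purely for illustration (the class name is hypothetical). Any state-changing AJAX request lacking the expected header is rejected.&lt;br /&gt;

```java
import java.util.Map;

// Illustrative custom-header check for AJAX endpoints. A cross-origin page
// cannot add this header without triggering a CORS pre-flight, which the
// server can refuse.
public class CustomHeaderCheck {
    public static boolean allowed(Map<String, String> headers) {
        return "XMLHttpRequest".equals(headers.get("X-Requested-With"));
    }
}
```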
&lt;br /&gt;
This defense technique is specifically discussed in section 4.3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. However, bypasses of this defense using Flash were documented as early as 2008 and again as recently as 2015 by Mathias Karlsson to [https://hackerone.com/reports/44146 exploit a CSRF flaw in Vimeo]. A Flash attack can't spoof the Origin or Referer headers, so by checking both of them we believe this combination of checks should prevent Flash-based CSRF bypasses (if any come up in the future). &lt;br /&gt;
&lt;br /&gt;
Besides any possible future bypasses such as Flash, using a static header makes it easy to exploit other state-changing operations in the application (similar to the earlier explanation of why exploitation is easier with the Origin/Referer header check than with token-based mitigation). Including a random token instead of a static header value is more or less equivalent to the token-based approach described in the Primary Defenses section. Developers also need to consider that, in an application with both AJAX calls and form tags, this technique only protects the AJAX calls from CSRF; you would still need to protect &amp;lt;form&amp;gt; tags with approaches described in this document, such as tokens, because setting custom headers on form tags is not possible directly. The CORS configuration should also be robust for this solution to work effectively (as custom headers on requests coming from other domains trigger a pre-flight CORS check).&lt;br /&gt;
&lt;br /&gt;
=== User Interaction Based CSRF Defense ===&lt;br /&gt;
&lt;br /&gt;
While none of the techniques referenced here require user interaction, sometimes it’s easier or more appropriate to involve the user in the transaction to prevent unauthorized operations (forged via CSRF or otherwise). The following are some examples of techniques that can act as a strong CSRF defense when implemented correctly:&lt;br /&gt;
* Re-Authentication (password or stronger)&lt;br /&gt;
* One-time Token&lt;br /&gt;
* CAPTCHA&lt;br /&gt;
While these are very strong CSRF defenses, they create a significant impact on the user experience. For applications that need high security for some operations (password change, money transfer, etc.), these techniques should be used along with token-based mitigation. Please note that tokens by themselves can mitigate CSRF; developers should use these techniques only to achieve additional security for highly sensitive operations.&lt;br /&gt;
&lt;br /&gt;
== Not So Popular CSRF Mitigations ==&lt;br /&gt;
&lt;br /&gt;
=== Triple Submit Cookie ===&lt;br /&gt;
This mitigation was proposed by [https://www.owasp.org/images/e/e6/AppSecEU2012_Wilander.pdf John Wilander in 2012 at OWASP Appsec Research]. This technique adds an additional step to the double submit cookie approach by verifying whether the request contains two cookies with the same name (note that an attacker needs to write an additional cookie to bypass the double submit cookie mitigation). Though it mitigates the issues discussed in the double submit cookie bypass, it introduces new problems, such as cookie jar overflow (more details [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf here] and [https://webstersprodigy.net/2012/08/03/analysis-of-john-wilanders-triple-submit-cookies/ here]). We have not been able to find any real-world implementations of this mitigation so far.&lt;br /&gt;
&lt;br /&gt;
=== Content-Type Header Validation ===&lt;br /&gt;
This technique is better known than the triple submit cookie mitigation. First of all, this header is not designed for security (initial RFC [https://tools.ietf.org/html/rfc1049 here], later well defined in [https://www.ietf.org/rfc/rfc2045.txt this] RFC) but only to let receiving agents know the type of data they will be handling, so that they can invoke the corresponding parsers. The pre-flighting behavior of this header (a pre-flight occurs if the header has a value other than application/x-www-form-urlencoded, multipart/form-data, or text/plain) is what is treated as a CSRF mitigation: all requests are forced to have a header value that triggers a pre-flight (such as application/json), and the server side can reject cross-origin requests with CORS/SOP during this pre-flight.&lt;br /&gt;
&lt;br /&gt;
This approach has two main problems: first, it mandates that all requests carry a header value that forces a pre-flight regardless of the real use case; second, the technique relies on a feature that was not designed for security to mitigate a security vulnerability. When a bug was discovered in the Chrome API, browser architects even considered removing this pre-flighting behavior. Because this header was not designed as a security control, architects can re-design it to better cater to its primary purpose. In the future, new content-type header values may be introduced (to better support various use cases), which could put systems relying on this header for CSRF mitigation in trouble. For more information, see [https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2017/september/common-csrf-prevention-misconceptions/ Common CSRF Prevention Misconceptions].&lt;br /&gt;
&lt;br /&gt;
== CSRF Mitigation Myths ==&lt;br /&gt;
The following techniques are often presumed to be CSRF mitigations, but none of them fully or actually mitigates a CSRF vulnerability:&lt;br /&gt;
* '''CORS''': CORS is a mechanism designed to relax the Same-Origin Policy when cross-origin communication between sites is required. It is not designed to prevent CSRF attacks, nor does it.&lt;br /&gt;
* '''Using HTTPS''': Using HTTPS has nothing to do with the protection from CSRF attacks. Resources that are under HTTPS are still vulnerable to CSRF if proper CSRF mitigations described above are not included.&lt;br /&gt;
* More myths can be found [[Cross-Site Request Forgery (CSRF)|here]]&lt;br /&gt;
&lt;br /&gt;
== Personal Safety CSRF Tips for Users ==&lt;br /&gt;
Since CSRF vulnerabilities are reportedly widespread, we recommend using the following best practices to mitigate risk.  &lt;br /&gt;
&lt;br /&gt;
# Logoff immediately after using a Web application.&lt;br /&gt;
# Do not allow your browser to save username/passwords, and do not allow sites to “remember” your login.&lt;br /&gt;
# Do not use the same browser to access sensitive applications and to surf the Internet freely (tabbed browsing).&lt;br /&gt;
# The use of plugins such as No-Script makes POST based CSRF vulnerabilities difficult to exploit. This is because JavaScript is used to automatically submit the form when the exploit is loaded. Without JavaScript, the attacker would have to trick the user into submitting the form manually.&lt;br /&gt;
&lt;br /&gt;
Integrated HTML-enabled mail/browser and newsreader/browser environments pose additional risks since simply viewing a mail message or a news message might lead to the execution of an attack. &lt;br /&gt;
== Implementation reference example  ==&lt;br /&gt;
The following JEE web filter provides a reference example for some of the concepts described in this cheat sheet. It implements the following stateless mitigations ([https://github.com/aramrami/OWASP-CSRFGuard OWASP CSRFGuard] covers a stateful approach).&lt;br /&gt;
* Verifying same origin with standard headers&lt;br /&gt;
* Double submit cookie&lt;br /&gt;
* SameSite cookie attribute&lt;br /&gt;
'''Please note''' that it only acts as a reference sample and is not complete (for example, it doesn’t have a block to direct the control flow when the origin and referer header checks succeed, nor does it perform port/host/protocol-level validation of the referer header). Developers are recommended to build their complete mitigation on top of this reference sample. Developers should also implement standard authentication and authorization checks before checking for CSRF.&lt;br /&gt;
&lt;br /&gt;
Source is also located [https://github.com/righettod/poc-csrf here] and provides a runnable POC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.slf4j.Logger;&lt;br /&gt;
import org.slf4j.LoggerFactory;&lt;br /&gt;
&lt;br /&gt;
import javax.servlet.Filter;&lt;br /&gt;
import javax.servlet.FilterChain;&lt;br /&gt;
import javax.servlet.FilterConfig;&lt;br /&gt;
import javax.servlet.ServletException;&lt;br /&gt;
import javax.servlet.ServletRequest;&lt;br /&gt;
import javax.servlet.ServletResponse;&lt;br /&gt;
import javax.servlet.annotation.WebFilter;&lt;br /&gt;
import javax.servlet.http.Cookie;&lt;br /&gt;
import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
import javax.servlet.http.HttpServletResponseWrapper;&lt;br /&gt;
import javax.xml.bind.DatatypeConverter;&lt;br /&gt;
import java.io.IOException;&lt;br /&gt;
import java.net.MalformedURLException;&lt;br /&gt;
import java.net.URL;&lt;br /&gt;
import java.security.SecureRandom;&lt;br /&gt;
import java.util.Arrays;&lt;br /&gt;
&lt;br /&gt;
/**&lt;br /&gt;
 * Filter in charge of validating each incoming HTTP request for the expected headers and CSRF token.&lt;br /&gt;
 * It is called for all requests to the backend destination.&lt;br /&gt;
 *&lt;br /&gt;
 * We use the approach in which:&lt;br /&gt;
 * - The CSRF token is changed after each valid HTTP exchange&lt;br /&gt;
 * - The custom Header name for the CSRF token transmission is fixed&lt;br /&gt;
 * - A CSRF token is associated with a backend service URI in order to support multiple parallel Ajax requests from the same application&lt;br /&gt;
 * - The CSRF cookie name is the backend service name prefixed with a fixed prefix&lt;br /&gt;
 *&lt;br /&gt;
 * For this POC we show the &amp;quot;access denied&amp;quot; reason in the response, but production code should only return a generic message!&lt;br /&gt;
 *&lt;br /&gt;
 * @see &amp;quot;https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://wiki.mozilla.org/Security/Origin&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://chloe.re/2016/04/13/goodbye-csrf-samesite-to-the-rescue/&amp;quot;&lt;br /&gt;
 */&lt;br /&gt;
@WebFilter(&amp;quot;/backend/*&amp;quot;)&lt;br /&gt;
public class CSRFValidationFilter implements Filter {&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * JVM param name used to define the target origin&lt;br /&gt;
     */&lt;br /&gt;
    public static final String TARGET_ORIGIN_JVM_PARAM_NAME = &amp;quot;target.origin&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Name of the custom HTTP header used to transmit the CSRF token and also to prefix &lt;br /&gt;
     * the CSRF cookie for the expected backend service&lt;br /&gt;
     */&lt;br /&gt;
    private static final String CSRF_TOKEN_NAME = &amp;quot;X-TOKEN&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Logger&lt;br /&gt;
     */&lt;br /&gt;
    private static final Logger LOG = LoggerFactory.getLogger(CSRFValidationFilter.class);&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Application expected deployment domain: named &amp;quot;Target Origin&amp;quot; in OWASP CSRF article&lt;br /&gt;
     */&lt;br /&gt;
    private URL targetOrigin;&lt;br /&gt;
&lt;br /&gt;
    /***&lt;br /&gt;
     * Secure generator&lt;br /&gt;
     */&lt;br /&gt;
    private final SecureRandom secureRandom = new SecureRandom();&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {&lt;br /&gt;
        HttpServletRequest httpReq = (HttpServletRequest) request;&lt;br /&gt;
        HttpServletResponse httpResp = (HttpServletResponse) response;&lt;br /&gt;
        String accessDeniedReason;&lt;br /&gt;
&lt;br /&gt;
        /* STEP 1: Verifying Same Origin with Standard Headers */&lt;br /&gt;
        //Try to get the source from the &amp;quot;Origin&amp;quot; header&lt;br /&gt;
        String source = httpReq.getHeader(&amp;quot;Origin&amp;quot;);&lt;br /&gt;
        if (this.isBlank(source)) {&lt;br /&gt;
            //If empty then fallback on &amp;quot;Referer&amp;quot; header&lt;br /&gt;
            source = httpReq.getHeader(&amp;quot;Referer&amp;quot;);&lt;br /&gt;
            //If this one is empty too then we trace the event and we block the request (recommendation of the article)...&lt;br /&gt;
            if (this.isBlank(source)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: ORIGIN and REFERER request headers are both absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
                return;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //Compare the source against the expected target origin&lt;br /&gt;
        URL sourceURL = new URL(source);&lt;br /&gt;
        if (!this.targetOrigin.getProtocol().equals(sourceURL.getProtocol()) || !this.targetOrigin.getHost().equals(sourceURL.getHost()) &lt;br /&gt;
		|| this.targetOrigin.getPort() != sourceURL.getPort()) {&lt;br /&gt;
            //One of the parts does not match, so we trace the event and block the request&lt;br /&gt;
            accessDeniedReason = String.format(&amp;quot;CSRFValidationFilter: Protocol/Host/Port do not fully match so we block the request! (%s != %s) &amp;quot;, &lt;br /&gt;
				this.targetOrigin, sourceURL);&lt;br /&gt;
            LOG.warn(accessDeniedReason);&lt;br /&gt;
            httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            return;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        /* STEP 2: Verifying CSRF token using &amp;quot;Double Submit Cookie&amp;quot; approach */&lt;br /&gt;
        //If the CSRF token cookie is absent from the request then we provide one in the response but stop processing at this stage.&lt;br /&gt;
        //This is how the initial token is supplied&lt;br /&gt;
        Cookie tokenCookie = null;&lt;br /&gt;
        if (httpReq.getCookies() != null) {&lt;br /&gt;
            String csrfCookieExpectedName = this.determineCookieName(httpReq);&lt;br /&gt;
            tokenCookie = Arrays.stream(httpReq.getCookies()).filter(c -&amp;gt; c.getName().equals(csrfCookieExpectedName)).findFirst().orElse(null);&lt;br /&gt;
        }&lt;br /&gt;
        if (tokenCookie == null || this.isBlank(tokenCookie.getValue())) {&lt;br /&gt;
            LOG.info(&amp;quot;CSRFValidationFilter: CSRF cookie absent or value is null/empty so we provide one and return an HTTP NO_CONTENT response !&amp;quot;);&lt;br /&gt;
            //Add the CSRF token cookie and header&lt;br /&gt;
            this.addTokenCookieAndHeader(httpReq, httpResp);&lt;br /&gt;
            //Set response state to &amp;quot;204 No Content&amp;quot; in order to allow the requester to clearly identify an initial response providing the initial CSRF token&lt;br /&gt;
            httpResp.setStatus(HttpServletResponse.SC_NO_CONTENT);&lt;br /&gt;
        } else {&lt;br /&gt;
            //If the cookie is present then we pass to validation phase&lt;br /&gt;
            //Get token from the custom HTTP header (part under control of the requester)&lt;br /&gt;
            String tokenFromHeader = httpReq.getHeader(CSRF_TOKEN_NAME);&lt;br /&gt;
            //If empty then we trace the event and we block the request&lt;br /&gt;
            if (this.isBlank(tokenFromHeader)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header is absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else if (!tokenFromHeader.equals(tokenCookie.getValue())) {&lt;br /&gt;
                //Verify that token from header and one from cookie are the same&lt;br /&gt;
                //Here is not the case so we trace the event and we block the request&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header and via Cookie are not equals so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else {&lt;br /&gt;
                //The token from the header and the one from the cookie match&lt;br /&gt;
                //So we let the request reach the target component (ServiceServlet, JSP...) and add a new token to the response on the way back&lt;br /&gt;
                HttpServletResponseWrapper httpRespWrapper = new HttpServletResponseWrapper(httpResp);&lt;br /&gt;
                chain.doFilter(request, httpRespWrapper);&lt;br /&gt;
                //Add the CSRF token cookie and header&lt;br /&gt;
                this.addTokenCookieAndHeader(httpReq, httpRespWrapper);&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void init(FilterConfig filterConfig) throws ServletException {&lt;br /&gt;
        //To ease configuration, we load the expected target origin from a JVM property&lt;br /&gt;
        //Reconfiguration only requires an application restart, which is generally acceptable&lt;br /&gt;
        try {&lt;br /&gt;
            this.targetOrigin = new URL(System.getProperty(TARGET_ORIGIN_JVM_PARAM_NAME));&lt;br /&gt;
        } catch (MalformedURLException e) {&lt;br /&gt;
            LOG.error(&amp;quot;Cannot init the filter !&amp;quot;, e);&lt;br /&gt;
            throw new ServletException(e);&lt;br /&gt;
        }&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter init, set expected target origin to '{}'.&amp;quot;, this.targetOrigin);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void destroy() {&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter shutdown&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Check if a string is null or empty (including containing only spaces)&lt;br /&gt;
     *&lt;br /&gt;
     * @param s Source string&lt;br /&gt;
     * @return TRUE if source string is null or empty (including containing only spaces)&lt;br /&gt;
     */&lt;br /&gt;
    private boolean isBlank(String s) {&lt;br /&gt;
        return s == null || s.trim().isEmpty();&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Generate a new CSRF token&lt;br /&gt;
     *&lt;br /&gt;
     * @return The token as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String generateToken() {&lt;br /&gt;
        byte[] buffer = new byte[50];&lt;br /&gt;
        this.secureRandom.nextBytes(buffer);&lt;br /&gt;
        return DatatypeConverter.printHexBinary(buffer);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Determine the name of the CSRF cookie for the targeted backend service&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest Source HTTP request&lt;br /&gt;
     * @return The name of the cookie as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String determineCookieName(HttpServletRequest httpRequest) {&lt;br /&gt;
        String backendServiceName = httpRequest.getRequestURI().replaceAll(&amp;quot;/&amp;quot;, &amp;quot;-&amp;quot;);&lt;br /&gt;
        return CSRF_TOKEN_NAME + &amp;quot;-&amp;quot; + backendServiceName;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Add the CSRF token cookie and header to the provided HTTP response object&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest  Source HTTP request&lt;br /&gt;
     * @param httpResponse HTTP response object to update&lt;br /&gt;
     */&lt;br /&gt;
    private void addTokenCookieAndHeader(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {&lt;br /&gt;
        //Get new token&lt;br /&gt;
        String token = this.generateToken();&lt;br /&gt;
        //Add the cookie manually because the current Cookie class implementation does not support the &amp;quot;SameSite&amp;quot; attribute&lt;br /&gt;
        //We leave the addition of the &amp;quot;Secure&amp;quot; cookie attribute to the reverse proxy rewriting...&lt;br /&gt;
        //Here we lock the cookie against JS access and use the new SameSite attribute protection&lt;br /&gt;
        String cookieSpec = String.format(&amp;quot;%s=%s; Path=%s; HttpOnly; SameSite=Strict&amp;quot;, this.determineCookieName(httpRequest), token, httpRequest.getRequestURI());&lt;br /&gt;
        httpResponse.addHeader(&amp;quot;Set-Cookie&amp;quot;, cookieSpec);&lt;br /&gt;
        //Add a response header to give the JS code access to the token&lt;br /&gt;
        httpResponse.setHeader(CSRF_TOKEN_NAME, token);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Authors and Primary Editors  ==&lt;br /&gt;
Manideep Konakandla (Amazon Application Security Team) - http://www.manideepk.com&lt;br /&gt;
&lt;br /&gt;
Dave Wichers - dave.wichers[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Paul Petefish - https://www.linkedin.com/in/paulpetefish&lt;br /&gt;
&lt;br /&gt;
Eric Sheridan - eric.sheridan[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Dominique Righetto - dominique.righetto[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat Sheets ==&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;br /&gt;
[[Category:Popular]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=246803</id>
		<title>Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=246803"/>
				<updated>2019-01-23T23:18:31Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
[[Cross-Site Request Forgery (CSRF)]] is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user’s web browser to perform an unwanted action on a trusted site while the user is authenticated. A CSRF attack works because browser requests automatically include any credentials associated with the site, such as the user's session cookie, IP address, etc. Therefore, if the user is authenticated to the site, the site cannot distinguish between a forged request and a legitimate request sent by the victim. Mitigation therefore requires a token/identifier that is not accessible to the attacker and is not sent automatically (as cookies are) with forged requests that the attacker initiates. For more information on CSRF, see the OWASP [[Cross-Site Request Forgery (CSRF)|Cross-Site Request Forgery (CSRF) page]].&lt;br /&gt;
&lt;br /&gt;
The impact of a successful CSRF attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or making a purchase with the user’s credentials. In effect, CSRF attacks are used by an attacker to make a target system perform a function via the target's browser, without the user’s knowledge, at least until the unauthorized transaction has been committed.&lt;br /&gt;
&lt;br /&gt;
Impacts of successful CSRF exploits vary greatly based on the privileges of each victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator, a CSRF attack can compromise the entire web application. Using social engineering, an attacker can embed malicious HTML or JavaScript code into an email or website to request a specific 'task URL'. The task then executes with or without the user's knowledge, either directly or by using a Cross-Site Scripting flaw. For example, see the [https://en.wikipedia.org/wiki/Samy_(computer_worm) Samy MySpace Worm].&lt;br /&gt;
&lt;br /&gt;
== What's new? ==&lt;br /&gt;
If you have seen the OWASP [https://www.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;amp;action=history old CSRF prevention cheat sheets], you will notice that a lot has changed in this newer version. One of the major changes is that the “Verifying same origin with standard headers” CSRF defense has been moved to the Defense-in-Depth section, whereas token based mitigation has moved to the Primary Defenses section (the technical reasons for this switch are explained in the respective sections). Multiple new sections (HMAC based token protection, automated CSRF mitigation techniques, login CSRF, less popular CSRF mitigations, and CSRF mitigation myths) were added, alongside new content in, and the removal of obsolete content from, the existing sections. Security issues/caveats associated with each mitigation were also included.&lt;br /&gt;
&lt;br /&gt;
==Warning: No Cross-Site Scripting (XSS) Vulnerabilities ==&lt;br /&gt;
[[Cross-Site Scripting]] is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat all CSRF mitigation techniques available in the market today (except mitigation techniques that involve user interaction, described later in this cheat sheet). This is because an XSS payload can simply read any page on the site using an XMLHttpRequest (or direct DOM access, if on the same page), obtain the generated token from the response, and include that token with a forged request.  This technique is exactly how the [https://en.wikipedia.org/wiki/Samy_(computer_worm) MySpace (Samy) worm] defeated MySpace's anti-CSRF defenses in 2005, which enabled the worm to propagate.&lt;br /&gt;
&lt;br /&gt;
It is imperative that no XSS vulnerabilities are present to ensure that CSRF defenses can't be circumvented. Please see the OWASP [[XSS (Cross Site Scripting) Prevention Cheat Sheet|XSS Prevention Cheat Sheet]] for detailed guidance on how to prevent XSS flaws.  &lt;br /&gt;
&lt;br /&gt;
== Resources that need to be protected from CSRF vulnerability ==&lt;br /&gt;
The following list assumes that you are not violating [http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1 RFC2616], section 9.1.1, by using GET requests for state changing operations. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' If for any reason you do violate it, you will also need to protect those resources; forged GET requests are mostly achieved with the default &amp;lt;code&amp;gt;form tag [GET method]&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;href&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;src&amp;lt;/code&amp;gt; attributes.  &lt;br /&gt;
&lt;br /&gt;
* Form tags with POST &lt;br /&gt;
* Ajax/XHR calls&lt;br /&gt;
&lt;br /&gt;
== CSRF Defense Recommendations Summary ==&lt;br /&gt;
We recommend a token based CSRF defense (either stateful or stateless) as the primary defense to mitigate CSRF in your applications. For highly sensitive operations only, we also recommend user interaction based protection (either re-authentication or a one-time token, detailed in section 7.5) along with token based mitigation.&lt;br /&gt;
&lt;br /&gt;
As a defense-in-depth measure, consider implementing one mitigation from the Defense-in-Depth Techniques section (you can choose the mitigation that fits your ecosystem, considering the issues mentioned under each). These defense-in-depth techniques should not be used by themselves (without token based mitigation) to mitigate CSRF in your applications.&lt;br /&gt;
&lt;br /&gt;
== Primary Defense Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Token Based Mitigation ===&lt;br /&gt;
This defense is one of the most popular and recommended methods to mitigate CSRF. It can be achieved either statefully (synchronizer token pattern) or statelessly (encrypted/HMAC based token pattern). See section 6.3 on how to mitigate login CSRF in your applications. For all of these mitigations, it is implicit that general security principles should be adhered to:&lt;br /&gt;
* Strong encryption/HMAC functions should be used. &lt;br /&gt;
'''Note:''' You can select any algorithm per your organizational needs. We recommend AES256-GCM for encryption and SHA256/512 for HMAC.&lt;br /&gt;
* Strict key rotation and token lifetime policies should be maintained. Policies can be set according to your organizational needs. Generic key management guidance from OWASP can be found [[Key Management Cheat Sheet|here]].&lt;br /&gt;
&lt;br /&gt;
==== Synchronizer Token Pattern ====&lt;br /&gt;
Any state changing operation requires a secure random token (e.g., a CSRF token) to prevent CSRF attacks. A CSRF token should be unique per user session, be a large random value, and be generated by a cryptographically secure random number generator. The CSRF token is added as a hidden field for forms, as a header or parameter for AJAX calls, and within the URL if the state changing operation occurs via GET (see the &amp;quot;Disclosure of Token in URL&amp;quot; section below). The server rejects the requested action if the CSRF token fails validation.&lt;br /&gt;
&lt;br /&gt;
In order to facilitate a &amp;quot;transparent but visible&amp;quot; CSRF solution, developers are encouraged to adopt a pattern similar to [http://www.corej2eepatterns.com/Design/PresoDesign.htm Synchronizer Token Pattern] (The original intention of this synchronizer token pattern was to detect duplicate submissions in forms). The synchronizer token pattern requires the generation of random &amp;quot;challenge&amp;quot; tokens that are associated with the user's current session. These challenge tokens are then inserted within the HTML forms and calls associated with sensitive server-side operations. It is the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks as successful exploitation assumes the attacker knows the randomly generated token for the target victim's session. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' Unlike cookies, these tokens are not automatically sent with forged requests made by your browser from the attacker's website. &lt;br /&gt;
&lt;br /&gt;
This is analogous to the attacker being able to guess the target victim's session identifier. &lt;br /&gt;
&lt;br /&gt;
The following describes a general approach to incorporate challenge tokens within the request.&lt;br /&gt;
&lt;br /&gt;
When a Web application formulates a request, the application should include a hidden input parameter with a common name such as &amp;quot;CSRFToken&amp;quot; for forms, or send the token as a header/parameter value for Ajax calls. The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness in the data that is hashed to generate the random token.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;html&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;form action=&amp;quot;/transfer.do&amp;quot; method=&amp;quot;post&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;hidden&amp;quot; name=&amp;quot;CSRFToken&amp;quot; &lt;br /&gt;
value=&amp;quot;OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWEwYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZMGYwMGEwOA==&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is used for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the token in the request compared to the token found in the user session. If the token was not found within the request, or the value provided does not match the value within the user session, then the request should be aborted, and the event logged as a potential CSRF attack in progress.&lt;br /&gt;
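As an illustrative sketch of the per-session generation and validation just described (class and method names are hypothetical; a real application would store the token in the container-managed HttpSession rather than a map):&lt;br /&gt;

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the synchronizer token pattern: one token per session,
// generated with a CSPRNG, stored server-side, and compared on each request.
public class SynchronizerTokenDemo {

    private static final SecureRandom RANDOM = new SecureRandom();
    // Stands in for HttpSession storage: sessionId -> CSRF token
    private static final Map<String, String> TOKEN_STORE = new ConcurrentHashMap<>();

    /** Generate the token once for the session and remember it server-side. */
    public static String issueToken(String sessionId) {
        byte[] buffer = new byte[32];
        RANDOM.nextBytes(buffer);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(buffer);
        TOKEN_STORE.put(sessionId, token);
        return token;
    }

    /** Validate the submitted token against the stored one (constant-time compare). */
    public static boolean isValid(String sessionId, String submittedToken) {
        String expected = TOKEN_STORE.get(sessionId);
        if (expected == null || submittedToken == null) {
            return false;
        }
        return MessageDigest.isEqual(expected.getBytes(), submittedToken.getBytes());
    }

    public static void main(String[] args) {
        String token = issueToken("session-42");
        System.out.println(isValid("session-42", token));    // true: genuine request
        System.out.println(isValid("session-42", "forged")); // false: reject and log
    }
}
```

A missing or mismatched token should be treated as a potential CSRF attack in progress and logged, as described above.&lt;br /&gt;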
&lt;br /&gt;
To further enhance the security of this proposed design, consider randomizing the CSRF token parameter name and/or value for each request. Implementing this approach results in the generation of per-request tokens as opposed to per-session tokens. This is more secure than per-session tokens because the time window in which an attacker can exploit a stolen token is minimal. However, this may result in usability concerns. For example, the &amp;quot;Back&amp;quot; button browser capability is often hindered as the previous page may contain a token that is no longer valid. Interaction with this previous page will result in a CSRF false positive security event at the server. Applications that need very high security (such as banks) typically implement this per-request approach; check what suits your needs. Regardless of the approach taken, developers are encouraged to protect the CSRF token the same way they protect authenticated session identifiers, such as with the use of TLS.&lt;br /&gt;
&lt;br /&gt;
'''Existing Synchronizer Implementations'''&lt;br /&gt;
&lt;br /&gt;
Synchronizer token defenses have been built into many frameworks, so we strongly recommend using them first when they are available. External components that add CSRF defenses to existing applications are also recommended. OWASP has the following: &lt;br /&gt;
&lt;br /&gt;
* For Java: OWASP [[CSRF Guard]]&lt;br /&gt;
* For PHP and Apache: [[CSRFProtector Project]]&lt;br /&gt;
&lt;br /&gt;
'''Disclosure of Token in URL'''&lt;br /&gt;
&lt;br /&gt;
Some implementations of synchronizer tokens include the challenge token in GET (URL) requests as well as POST requests. This is often implemented as a result of sensitive server-side operations being invoked by embedded links in the page or other general design patterns. These patterns are often implemented without knowledge of CSRF and an understanding of CSRF prevention design strategies. While this control does help mitigate the risk of CSRF attacks, the unique per-session token is exposed for GET requests. CSRF tokens in GET requests are potentially leaked at several locations: browser history, log files, network appliances that log the first line of an HTTP request, and Referer headers if the protected site links to an external site. In the latter case (a CSRF token leaked because the Referer header is parsed by a linked site), it is trivially easy for the linked site to launch a CSRF attack on the protected site, and it can target the attack very effectively, since the Referer header tells it the site as well as the CSRF token. The attack could be run entirely from JavaScript, so that the simple addition of a script tag to the HTML of a site can launch an attack (whether on an originally malicious site or on a hacked site). Additionally, since HTTPS requests from HTTPS contexts will not strip the Referer header (as opposed to HTTPS to HTTP requests), CSRF token leaks via the Referer header can still happen on HTTPS applications.&lt;br /&gt;
&lt;br /&gt;
The ideal solution is to only include the CSRF token in POST requests and to modify server-side actions that have a state changing effect to only respond to POST requests. This is in fact what &amp;lt;nowiki&amp;gt;RFC 2616&amp;lt;/nowiki&amp;gt; requires for GET requests. If sensitive server-side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests.&lt;br /&gt;
&lt;br /&gt;
In most JavaEE web applications, however, HTTP method scoping is rarely utilized when retrieving HTTP parameters from a request. Calls to &amp;quot;HttpServletRequest.getParameter&amp;quot; will return a parameter value regardless of whether it was sent via GET or POST. This is not to say HTTP method scoping cannot be enforced: it can be achieved if a developer explicitly overrides doPost() in the HttpServlet class or leverages framework-specific capabilities such as the AbstractFormController class in Spring.&lt;br /&gt;
&lt;br /&gt;
For these cases, attempting to retrofit this pattern in existing applications requires significant development time and cost, and as a temporary measure it may be better to pass CSRF tokens in the URL. Once the application has been fixed to respond to HTTP GET and POST verbs correctly, CSRF tokens for GET requests should be turned off.&lt;br /&gt;
&lt;br /&gt;
==== Encryption based Token Pattern ====&lt;br /&gt;
The Encrypted Token Pattern leverages encryption, rather than comparison, as its method of token validation. It is most suitable for applications that do not want to maintain any state on the server side. &lt;br /&gt;
&lt;br /&gt;
After successful authentication, the server generates a unique token composed of the user's ID, a timestamp value and a [http://en.wikipedia.org/wiki/Cryptographic_nonce nonce], using a unique key available only on the server. This token is returned to the client and embedded in a hidden field for forms, or in a request header/parameter for AJAX requests. On receipt of this request, the server reads and decrypts the token value with the same key used to create the token. The inability to correctly decrypt suggests an intrusion attempt. Once decrypted, the UserId and timestamp contained within the token are validated; the UserId is compared against the currently logged in user, and the timestamp is compared against the current time.&lt;br /&gt;
&lt;br /&gt;
On successful token-decryption, the server has access to parsed values, ideally in the form of [http://en.wikipedia.org/wiki/Claims-based_identity claims]. These claims are processed by comparing the UserId claim to any potentially stored UserId (in a Cookie or Session variable, if the site already contains a means of authentication). The Timestamp is validated against the current time, preventing replay attacks. Alternatively, in the case of a CSRF attack, the server will be unable to decrypt the poisoned token, and can block and log the attack.&lt;br /&gt;
&lt;br /&gt;
This technique addresses some of the shortfalls in other stateless approaches, such as the need to store data in a Cookie, circumventing the Cookie-subdomain and [[HttpOnly]] issues.&lt;br /&gt;
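&lt;br /&gt;
The pattern above can be sketched as follows. This is an illustrative sketch, not an authoritative implementation: the class and method names, the pipe-delimited plaintext layout, and the choice of AES-GCM as the server-side cipher are all assumptions made for the example.&lt;br /&gt;
&lt;br /&gt;
```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

class EncryptedTokenPattern {
    private static final SecureRandom RNG = new SecureRandom();

    // Hypothetical helper: the key must live only on the server.
    static SecretKey newKey() {
        try {
            return KeyGenerator.getInstance("AES").generateKey();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Issue a token: encrypt "userId|timestamp|nonce" under the server-side key.
    static String issueToken(SecretKey key, String userId) {
        try {
            byte[] nonce = new byte[8];
            RNG.nextBytes(nonce);
            String plain = userId + "|" + System.currentTimeMillis() + "|"
                    + Base64.getUrlEncoder().encodeToString(nonce);
            byte[] iv = new byte[12]; // fresh IV per token
            RNG.nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = c.doFinal(plain.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length];
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getUrlEncoder().encodeToString(out);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Validate: decrypt with the same key, then check the UserId and freshness.
    static boolean validateToken(SecretKey key, String token, String expectedUserId, long maxAgeMillis) {
        try {
            byte[] in = Base64.getUrlDecoder().decode(token);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, in, 0, 12));
            String plain = new String(c.doFinal(in, 12, in.length - 12), StandardCharsets.UTF_8);
            String[] parts = plain.split("\\|");
            long issuedAt = Long.parseLong(parts[1]);
            return parts[0].equals(expectedUserId)
                    && System.currentTimeMillis() - issuedAt <= maxAgeMillis;
        } catch (Exception e) {
            return false; // failure to decrypt suggests a forged or tampered token
        }
    }
}
```

Using an authenticated mode such as GCM means tampering is detected at decryption time, which maps directly onto the "inability to correctly decrypt suggests an intrusion attempt" behavior described above.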
&lt;br /&gt;
==== HMAC Based Token Pattern ====&lt;br /&gt;
[https://en.wikipedia.org/wiki/HMAC HMAC (hash-based message authentication code)] is a cryptographic function that helps to guarantee integrity and authentication of a message. It is another way that CSRF mitigation can be achieved without maintaining any state at the server and is similar to an encryption token-based pattern with two main differences:&lt;br /&gt;
* Uses a strong HMAC function instead of an encryption function to generate the token&lt;br /&gt;
* Includes an additional field called ‘operation’ that indicates the purpose of the operation for which the CSRF token is included (whether a form tag or an AJAX call) &lt;br /&gt;
(Ex: ‘oneclickpurchase’ (or) buy/asin=SDFH&amp;amp;category=2&amp;amp;quantity=3)&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Fields mentioned in encryption token pattern (user's ID, a timestamp value and a nonce) are included. &lt;br /&gt;
&lt;br /&gt;
The operation field mitigates the fact that a hash function generates the same value for the same input on every invocation (unlike strong encryption functions, which produce different ciphertexts each time they encrypt). It therefore helps avoid repeated token values across your application. The nonce field serves the same purpose as in the encrypted token pattern (i.e., to avoid rare collisions due to weak cryptographic functions, acting as a defense-in-depth measure). &lt;br /&gt;
&lt;br /&gt;
Generate the token using HMAC over all four fields mentioned previously (user's ID, a timestamp value, nonce, and operation), and then include it in hidden fields for form tags or in headers/parameters for AJAX calls. Once you receive the HMAC from the client in a request, re-generate the HMAC with the same fields you used to generate it, and then verify that the re-generated HMAC matches the HMAC received from the client. If it does, it is a legitimate user request; if it does not, flag it as a CSRF intrusion and alert your incident response teams. Because an attacker has no visibility into the key used to generate the HMAC, there is no way for them to re-generate it for use in a forged request.&lt;br /&gt;
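&lt;br /&gt;
A minimal sketch of this generate/verify flow, assuming HmacSHA256 and a pipe-delimited message over the four fields (the class and method names are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

class HmacTokenPattern {
    // Token = HMAC-SHA256(key, userId|timestamp|nonce|operation).
    static String generate(byte[] key, String userId, long timestamp, String nonce, String operation) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            String msg = userId + "|" + timestamp + "|" + nonce + "|" + operation;
            return Base64.getUrlEncoder().encodeToString(mac.doFinal(msg.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Recompute server side and compare in constant time.
    static boolean verify(byte[] key, String received, String userId, long timestamp, String nonce, String operation) {
        String expected = generate(key, userId, timestamp, nonce, operation);
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                received.getBytes(StandardCharsets.UTF_8));
    }
}
```

Note how changing only the operation field invalidates the token, which is exactly what prevents a token issued for one action from being replayed against another.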
&lt;br /&gt;
=== Auto CSRF Mitigation Techniques ===&lt;br /&gt;
Though token-based mitigation is widely used (stateful with the synchronizer token and stateless with the encrypted/HMAC token), the major problem with these techniques is the human tendency to forget things at times. If a developer forgets to add the token to any state changing operation, they make the application vulnerable to CSRF. To avoid this, you can try to automate the process of adding tokens to CSRF vulnerable resources (mentioned earlier in this document). You can achieve this by doing the following:&lt;br /&gt;
* Write wrappers (that auto-add tokens when used) around default form tags/AJAX calls and educate your developers to use those wrappers instead of standard tags. Though this approach is better than depending purely on developers to add tokens, it is still vulnerable to the human tendency to forget things. [https://docs.spring.io/spring-security/site/docs/3.2.0.CI-SNAPSHOT/reference/html/csrf.html Spring Security] uses this technique to add CSRF tokens by default when a custom &amp;lt;form:form&amp;gt; tag is used; you can opt to use it after verifying that it is enabled and properly configured in the Spring Security version you are using.&lt;br /&gt;
* Write a hook (that would capture the traffic and add tokens to CSRF vulnerable resources before rendering to customers) in your organizational web rendering frameworks. Because it is hard to analyze when a particular response is doing any state change (and thus needing a token), you might want to include tokens in all CSRF vulnerable resources (ex: include tokens in all POST responses). This is one recommended approach, but you need to consider the performance costs it might incur.&lt;br /&gt;
* Get the tokens automatically added on the client side when the page is being rendered in user’s browser, with help of a client side script (this approach is used by [[CSRF Guard]]). You need to consider any possible JavaScript hijacking attacks.&lt;br /&gt;
We recommend researching whether the framework you are using has an option to achieve CSRF protection by default before trying to build your own auto-tokening system. For example, .NET has [https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery?view=aspnetcore-2.1 in-built protection] that adds tokens to CSRF vulnerable resources. You are responsible for proper configuration (such as key management and token management) before using these in-built CSRF protections that auto-add tokens to CSRF vulnerable resources.&lt;br /&gt;
&lt;br /&gt;
=== Login CSRF ===&lt;br /&gt;
Most developers tend to ignore CSRF vulnerabilities on login forms, assuming that CSRF is not applicable there because the user is not yet authenticated. That assumption is false. A CSRF vulnerability can still occur on login forms where the user is not authenticated, but the impact and risk are quite different from those of a general CSRF vulnerability (where a user is authenticated).&lt;br /&gt;
&lt;br /&gt;
With a CSRF vulnerability on a login form, an attacker can make a victim log in to the attacker's account and then learn the victim's behavior from their searches. For more information about login CSRF and other risks, see section 3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper.&lt;br /&gt;
&lt;br /&gt;
Login CSRF can be mitigated by creating pre-sessions (sessions before a user is authenticated) and including tokens in the login form. You can use any of the techniques mentioned above to generate tokens. Pre-sessions can be transitioned to real sessions once the user is authenticated. This technique is described in [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery], section 4.1.&lt;br /&gt;
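&lt;br /&gt;
A minimal sketch of the pre-session idea, assuming an in-memory store and hypothetical names; a real application would tie this to its session framework and cookie handling:&lt;br /&gt;
&lt;br /&gt;
```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

class PreSessionStore {
    // Pre-sessions: anonymous sessions created before login so the login
    // form itself can carry a CSRF token.
    private final Map<String, String> tokensBySession = new HashMap<>();

    String createPreSession() {
        String sessionId = UUID.randomUUID().toString();
        tokensBySession.put(sessionId, UUID.randomUUID().toString());
        return sessionId;
    }

    String tokenFor(String sessionId) {
        return tokensBySession.get(sessionId);
    }

    // On successful login, validate the submitted token, then rotate the
    // session id (also guarding against session fixation) and issue a fresh
    // token for the authenticated session.
    String promote(String preSessionId, String submittedToken) {
        String expected = tokensBySession.remove(preSessionId);
        if (expected == null || !expected.equals(submittedToken)) {
            return null; // possible login CSRF attempt
        }
        String realSessionId = UUID.randomUUID().toString();
        tokensBySession.put(realSessionId, UUID.randomUUID().toString());
        return realSessionId;
    }
}
```

Rotating the identifier at the pre-session to real-session transition is the step section 4.1 of the Stanford paper relies on.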
&lt;br /&gt;
If sub-domains under your master domain are treated as untrusted in your threat model, login CSRF is difficult to mitigate. In these cases, a strict subdomain- and path-level Referer header validation (detailed in section 7.1) can be used to mitigate CSRF on login forms to an extent; this works because most login pages are served over HTTPS (so the Referer is not stripped) and are linked from home pages.&lt;br /&gt;
&lt;br /&gt;
== Defense-In-Depth Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Verifying origin with standard headers ===&lt;br /&gt;
This defense technique is specifically proposed in section 5.0 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. This paper first proposed the creation of the Origin header and its use as a CSRF defense mechanism.&lt;br /&gt;
&lt;br /&gt;
There are two steps to this mitigation, both of which rely on examining an HTTP request header value.&lt;br /&gt;
&lt;br /&gt;
1. Determine the origin the request is coming from (source origin). This can be done via the Origin and/or Referer headers.&lt;br /&gt;
&lt;br /&gt;
2. Determine the origin the request is going to (target origin).&lt;br /&gt;
&lt;br /&gt;
On the server side, we verify whether the two match. If they do, we accept the request as legitimate (meaning it is a same-origin request); if they don't, we discard it (meaning the request originated cross-domain). The reliability of these headers comes from the fact that they cannot be altered programmatically (e.g., using JavaScript in an XSS attack), as they fall under the [https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name forbidden headers] list (i.e., only browsers can set them).&lt;br /&gt;
&lt;br /&gt;
====Identifying Source Origin (via Origin/Referer header) ====&lt;br /&gt;
'''Checking the Origin Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is present, verify that its value matches the target origin. Unlike the Referer, the Origin header will be present in HTTP requests that originate from an HTTPS URL.&lt;br /&gt;
&lt;br /&gt;
'''Checking the Referer Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is not present, verify the hostname in the Referer header matches the target origin. This method of CSRF mitigation is also commonly used with unauthenticated requests, such as requests made prior to establishing a session state, which is required to keep track of a synchronization token.&lt;br /&gt;
&lt;br /&gt;
In both cases, make sure the target origin check is strong. For example, if your site is &amp;quot;site.com&amp;quot;, make sure &amp;quot;site.com.attacker.com&amp;quot; does not pass your origin check (i.e., match against the entire origin, not just a prefix of it).&lt;br /&gt;
&lt;br /&gt;
If neither of these headers is present, you can either accept or block the request. We recommend '''blocking'''. Alternatively, you might want to log all such instances, monitor their use cases/behavior, and then start blocking requests only once you have enough confidence.&lt;br /&gt;
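&lt;br /&gt;
The source-origin checks above can be sketched as follows. This is a simplified illustration with hypothetical names; default ports are normalized so an origin without an explicit port still matches, and a missing host or an unparsable value is treated as a mismatch:&lt;br /&gt;
&lt;br /&gt;
```java
import java.net.URI;

class OriginCheck {
    // Compare the source origin (Origin header, falling back to Referer)
    // against the expected target origin. Matching the full
    // scheme://host:port triple prevents "site.com.attacker.com"
    // from passing a naive prefix check.
    static boolean isAllowed(String originHeader, String refererHeader, String targetOrigin) {
        String source = originHeader != null ? originHeader : refererHeader;
        if (source == null) {
            return false; // neither header present: blocking is recommended
        }
        try {
            URI s = URI.create(source);
            URI t = URI.create(targetOrigin);
            return s.getScheme() != null && s.getScheme().equals(t.getScheme())
                    && s.getHost() != null && s.getHost().equals(t.getHost())
                    && port(s) == port(t);
        } catch (Exception e) {
            return false; // malformed header value: reject
        }
    }

    private static int port(URI u) {
        if (u.getPort() != -1) return u.getPort();
        return "https".equals(u.getScheme()) ? 443 : 80;
    }
}
```

Comparing parsed components rather than doing string prefix matching is what makes the &amp;quot;site.com.attacker.com&amp;quot; bypass fail.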
&lt;br /&gt;
==== Identifying the Target Origin ====&lt;br /&gt;
You might think it’s easy to determine the target origin, but it frequently isn't. The first thought is to simply grab the target origin (i.e., its hostname and port number) from the URL in the request. However, the application server frequently sits behind one or more proxies, and the original URL is different from the URL the app server actually receives. If your application server is directly accessed by its users, then using the origin in the URL is fine and you're all set.&lt;br /&gt;
&lt;br /&gt;
If you are behind a proxy, there are a number of options to consider.&lt;br /&gt;
* '''Configure your application to simply know its target origin:''' It’s your application, so you can find its target origin and set that value in some server configuration entry. This is the most secure approach, as the value is defined server side and is therefore trusted. However, this might be difficult to maintain if your application is deployed in many places, e.g., dev, test, QA, production, and possibly multiple production instances. Setting the correct value for each of these environments might be difficult, but if you can do it via some central configuration and have your instances grab the value from it, that's great! ('''Note:''' Make sure the centralized configuration store is maintained securely, because a major part of your CSRF defense depends on it.)&lt;br /&gt;
&lt;br /&gt;
* '''Use the Host header value:''' If you prefer that the application find its own target so it doesn't have to be configured for each deployed instance, we recommend using the Host family of headers. The Host header's purpose is to contain the target origin of the request. But, if your app server is sitting behind a proxy, the Host header value is most likely changed by the proxy to the target origin of the URL behind the proxy, which is different than the original URL. This modified Host header origin won't match the source origin in the original Origin or Referer headers.&lt;br /&gt;
&lt;br /&gt;
* '''Use the X-Forwarded-Host header value:''' To avoid the issue of the proxy altering the Host header, there is another header called X-Forwarded-Host, whose purpose is to contain the original Host header value the proxy received. Most proxies will pass along the original Host header value in the X-Forwarded-Host header. So that header value is likely to be the target origin value you need to compare to the source origin in the Origin or Referer header.&lt;br /&gt;
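&lt;br /&gt;
The three options above can be combined into a resolution order like the following sketch (hypothetical names; explicit server-side configuration wins whenever it is present, since it is the trusted value):&lt;br /&gt;
&lt;br /&gt;
```java
import java.util.Map;

class TargetOrigin {
    // Resolve the target origin: prefer explicit configuration, then
    // X-Forwarded-Host (the original Host as received by the proxy),
    // then the Host header itself.
    static String resolve(String configuredOrigin, Map<String, String> headers, String scheme) {
        if (configuredOrigin != null) {
            return configuredOrigin; // most trustworthy: defined server side
        }
        String host = headers.get("X-Forwarded-Host");
        if (host == null) {
            host = headers.get("Host");
        }
        return host == null ? null : scheme + "://" + host;
    }
}
```

If `resolve` returns null, the request lacks enough information to determine the target origin, and blocking it is the safer default.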
&lt;br /&gt;
Earlier versions of this cheat sheet treated this mitigation as a primary defense. For the reasons outlined below, it has been moved to the Defense-in-Depth section.&lt;br /&gt;
&lt;br /&gt;
As is implicit, this mitigation only works when the Origin or Referer headers are present in requests. Though these headers are included the '''majority''' of the time, there are a few use cases where they are not (most of them for legitimate reasons, such as safeguarding user privacy or fitting the browser ecosystem). The following lists some of those use cases:&lt;br /&gt;
* Internet Explorer 11 does not add the Origin header on a CORS request across sites of a trusted zone. The Referer header will remain the only indication of the UI origin. See the following references in stackoverflow [https://stackoverflow.com/questions/20784209/internet-explorer-11-does-not-add-the-origin-header-on-a-cors-request here] and [https://github.com/silverstripe/silverstripe-graphql/issues/118 here].&lt;br /&gt;
* In an instance following a [https://stackoverflow.com/questions/22397072/are-there-any-browsers-that-set-the-origin-header-to-null-for-privacy-sensitiv 302 redirect cross-origin], Origin is not included in the redirected request because that may be considered sensitive information that should not be sent to the other origin.&lt;br /&gt;
* There are some [https://wiki.mozilla.org/Security/Origin#Privacy-Sensitive_Contexts privacy contexts] where Origin is set to “null”. For example, see [https://www.google.com/search?q=origin+header+sent+null+value+site%3Astackoverflow.com&amp;amp;oq=origin+header+sent+null+value+site%3Astackoverflow.com here].&lt;br /&gt;
* The Origin header is included in all cross-origin requests, but for same-origin requests, most browsers only include it in POST/DELETE/PUT requests. '''Note:''' Although it is not ideal, many developers use GET requests to do state changing operations.&lt;br /&gt;
* The Referer header is no exception: there are multiple use cases where it is omitted as well ([https://stackoverflow.com/questions/6880659/in-what-cases-will-http-referer-be-empty &amp;lt;nowiki&amp;gt;[1]&amp;lt;/nowiki&amp;gt;], [https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer &amp;lt;nowiki&amp;gt;[2]&amp;lt;/nowiki&amp;gt;], [https://en.wikipedia.org/wiki/HTTP_referer#Referer_hiding &amp;lt;nowiki&amp;gt;[3]&amp;lt;/nowiki&amp;gt;], [https://seclab.stanford.edu/websec/csrf/csrf.pdf &amp;lt;nowiki&amp;gt;[4]&amp;lt;/nowiki&amp;gt;] and [https://www.google.com/search?q=referer+header+sent+null+value+site:stackoverflow.com &amp;lt;nowiki&amp;gt;[5]&amp;lt;/nowiki&amp;gt;]). Load balancers, proxies and embedded network devices are also well known to strip the Referer header for privacy reasons when logging requests.&lt;br /&gt;
&lt;br /&gt;
Though exceptions can be written for the above cases in your source and target origin check logic, there is currently no central repository that references all such use cases (and even if there were one, keeping it up to date would be a problem). Each browser might also handle these cases differently (browsers are known to behave differently depending on their ecosystem; the IE example of not sending the Origin header within a trusted zone is one such case). Rejecting requests that do not contain Origin and/or Referer headers might sound like a good idea, but it can impact legitimate users. Keeping this check in monitoring mode, investigating use cases such as those stated above, and then adding them to your exception logic is a process you may consider to make this defense more stable in your environment.&lt;br /&gt;
&lt;br /&gt;
This CSRF defense relies on browser behavior that can change over time, for example when new privacy contexts are introduced, in which case you have to keep your validation logic updated. With token based mitigation, in contrast, you have full control over the CSRF mitigation: for browsers to alter CSRF tokens, they would literally have to change the HTML content of rendered pages (which no browser would ever want to do!).&lt;br /&gt;
&lt;br /&gt;
When there is an XSS vulnerability on a page of an application protected with Origin and/or Referer header checks, the level of effort required to exploit state changing operations (that are typically vulnerable to CSRF) on other pages is very low (grab the parameters and forge a request, since the Origin and Referer headers are included by browsers by default), compared to token based mitigation (where the attacker needs to download the target page, parse the DOM for the token, construct a forged request, and send it to the server).&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Although the concept of an origin header stemmed from [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper that references robust CSRF defenses, the initial [https://tools.ietf.org/html/rfc6454 origin header RFC] does not reference mitigating CSRF in any way (another [https://tools.ietf.org/id/draft-abarth-origin-03.html draft version] does, however).&lt;br /&gt;
&lt;br /&gt;
=== Double Submit Cookie ===&lt;br /&gt;
If maintaining the state for a CSRF token on the server side is problematic, an alternative defense is the double submit cookie technique. This technique is easy to implement and is stateless. In this technique, we send a random value both in a cookie and as a request parameter, with the server verifying that the cookie value and the request value match. When a user visits (even before authenticating, to prevent login CSRF), the site should generate a (cryptographically strong) pseudorandom value and set it as a cookie on the user's machine, separate from the session identifier. The site then requires that every transaction request include this pseudorandom value as a hidden form value (or other request parameter/header). If the two match on the server side, the server accepts the request as legitimate; if they don’t, it rejects the request.&lt;br /&gt;
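&lt;br /&gt;
A minimal sketch of the technique (hypothetical names; the actual cookie and form-field plumbing is omitted):&lt;br /&gt;
&lt;br /&gt;
```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

class DoubleSubmitCookie {
    private static final SecureRandom RNG = new SecureRandom();

    // Generate the pseudorandom value that is set both as a cookie and
    // as a hidden form field (or other request parameter/header).
    static String newToken() {
        byte[] b = new byte[32];
        RNG.nextBytes(b);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(b);
    }

    // The request is legitimate only if the cookie value and the submitted
    // parameter match; compare in constant time.
    static boolean matches(String cookieValue, String parameterValue) {
        if (cookieValue == null || parameterValue == null) {
            return false;
        }
        return MessageDigest.isEqual(
                cookieValue.getBytes(StandardCharsets.UTF_8),
                parameterValue.getBytes(StandardCharsets.UTF_8));
    }
}
```

No server-side storage is needed: the server only compares the two copies of the value it receives back, which is what makes the technique stateless.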
&lt;br /&gt;
There’s a belief that this technique would work because a cross origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy. This means that while an attacker can force a victim to send any value with a malicious CSRF request, the attacker will be unable to modify or read the value stored in the cookie (with which the server compares the token value).&lt;br /&gt;
&lt;br /&gt;
There are a couple of drawbacks to the assumptions made here, namely the problem of trusting subdomains and of properly configuring the whole site to accept HTTPS connections only. The [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf Blackhat talk] by Rich Lundeen describes these drawbacks:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;''With double submit, if an attacker can write a cookie they can obviously defeat the protection. And again, writing cookies is significantly easier then reading them. The fact that cookies can be written is difficult for many people to understand. After all, doesn't the same origin policy specify that one domain cannot access cookies from another domain? However, there are two common scenarios where writing cookies across domains is possible:''&lt;br /&gt;
&lt;br /&gt;
''a) While it's true that hellokitty.marketing.example.com cannot read cookies or access the DOM from secure.example.com because of the same origin policy, hellokitty.marketing.example.com can write cookies to the parent domain (example.com), and these cookies are then consumed by secure.example.com (secure.example.com has no good way to distinguish which site set the cookie). Additionally, there are methods of forcing secure.example.com to always accept your cookie first. What this means is that XSS in hellokitty.marketing.example.com is able to overwrite cookies in secure.example.com.''&lt;br /&gt;
&lt;br /&gt;
''b) If an attacker is in the middle, they can usually force a request to the same domain over HTTP. If an application is hosted at &amp;lt;nowiki&amp;gt;https://secure.example.com&amp;lt;/nowiki&amp;gt;, even if the cookies are set with the secure flag, a man in the middle can force connections to &amp;lt;nowiki&amp;gt;http://secure.example.com&amp;lt;/nowiki&amp;gt; and set (overwrite) any arbitrary cookies (even though the secure flag prevents the attacker from reading those cookies). Even if the HSTS header is set on the server and the browser visiting the site supports HSTS (this would prevent a man in the middle from forcing plain text HTTP requests) unless the HSTS header is set in a way that includes all subdomains, a man in the middle can simply force a request to a separate subdomain and overwrite cookies similar to 1. In other words, as long as &amp;lt;nowiki&amp;gt;http://hellokitty.marketing.example.com&amp;lt;/nowiki&amp;gt; doesn't force HTTPS, then an attacker can overwrite cookies on any example.com subdomain.''&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So, unless you are sure that your subdomains are fully secured and only accept HTTPS connections (we believe it’s difficult to guarantee at large enterprises), you should not rely on the Double Submit Cookie technique as a primary mitigation for CSRF.&lt;br /&gt;
&lt;br /&gt;
A variant of the double submit cookie that mitigates both of the risks mentioned above is to include the token in an encrypted cookie - often within the authentication cookie - and then, on the server side, match it (after decrypting the authentication cookie) with the token in a hidden form field or in a parameter/header for AJAX calls. This works because a subdomain has no way to overwrite a properly crafted encrypted cookie without the necessary secrets, such as the encryption key.&lt;br /&gt;
&lt;br /&gt;
=== SameSite Cookie Attribute ===&lt;br /&gt;
SameSite is a cookie attribute (similar to [[HttpOnly]], Secure etc.) introduced by Google to mitigate CSRF attacks. It is defined in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 this] Internet Draft. This attribute helps in preventing the browser from sending cookies along with cross-site requests. Possible values for this attribute are lax or strict.&lt;br /&gt;
&lt;br /&gt;
The strict value will prevent the cookie from being sent by the browser to the target site in all cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user will not be able to access the project. A bank website, however, most likely doesn't want to allow any transactional pages to be linked from external sites, so the strict flag would be most appropriate there.&lt;br /&gt;
&lt;br /&gt;
The default lax value provides a reasonable balance between security and usability for websites that want to maintain a user's logged-in session after the user arrives from an external link. In the above GitHub scenario, the session cookie would be allowed when following a regular link from an external website, while blocking it in CSRF-prone request methods such as POST. The only cross-site requests allowed in lax mode are top-level navigations that also use “safe” HTTP methods (more details [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7.1 here]).&lt;br /&gt;
&lt;br /&gt;
Example of cookies using this attribute:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Strict&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Lax&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Support for this attribute is increasing, but there are still browsers that have yet to adopt it. As of August 2018, the SameSite attribute is supported by browsers used by 68.92% of Internet users (detailed statistics are [https://caniuse.com/#feat=same-site-cookie-attribute here]).&lt;br /&gt;
&lt;br /&gt;
Though this technique seems efficient in mitigating CSRF attacks, it is still in its early stages (in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft]) and does not have full browser support, as mentioned above. Google’s [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft] also mentions a couple of cases where forged requests can be simulated by attackers as same-site requests (thus causing SameSite cookies to be sent).&lt;br /&gt;
&lt;br /&gt;
Considering the factors above, SameSite is not recommended as a primary defense. Google agrees with this stance and strongly encourages developers to deploy server-side defenses such as tokens to mitigate CSRF more fully.&lt;br /&gt;
&lt;br /&gt;
=== Use of Custom Request Headers ===&lt;br /&gt;
&lt;br /&gt;
Adding CSRF tokens, a double submit cookie and value, encrypted token, or other defense that involves changing the UI can frequently be complex or otherwise problematic. An alternate defense that is particularly well suited for AJAX/XHR endpoints is the use of a custom request header. This defense relies on the [https://en.wikipedia.org/wiki/Same-origin_policy same-origin policy (SOP)] restriction that only JavaScript can be used to add a custom header, and only within its origin. By default, browsers do not allow JavaScript to make cross origin requests.&lt;br /&gt;
&lt;br /&gt;
A particularly attractive custom header and value to use is “X-Requested-With: XMLHttpRequest” because most JavaScript libraries already add this header to requests they generate by default. However, some do not. For example, AngularJS used to, but does not anymore. For more information, see [https://github.com/angular/angular.js/commit/3a75b1124d062f64093a90b26630938558909e8d their rationale] and how to add it back.&lt;br /&gt;
&lt;br /&gt;
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints in order to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive to REST services. You can always add your own custom header and value if that is preferred.&lt;br /&gt;
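&lt;br /&gt;
Such a check can be sketched as follows. This is a simplified illustration with hypothetical names; it assumes that safe methods (GET/HEAD/OPTIONS) are not used for state changing operations, which your application must actually guarantee:&lt;br /&gt;
&lt;br /&gt;
```java
class CustomHeaderCheck {
    // Reject any state-changing AJAX request that lacks the custom header.
    // SOP prevents a cross-origin page from adding this header without
    // triggering a CORS pre-flight, which the server can refuse.
    static boolean isAllowed(String method, String xRequestedWith) {
        if ("GET".equals(method) || "HEAD".equals(method) || "OPTIONS".equals(method)) {
            return true; // assumed safe, non-state-changing methods
        }
        return "XMLHttpRequest".equals(xRequestedWith);
    }
}
```

In a JEE application this logic would typically live in a servlet filter applied to all AJAX endpoints, before any request handling occurs.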
&lt;br /&gt;
This defense technique is specifically discussed in section 4.3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. However, bypasses of this defense using Flash were documented as early as 2008, and again as recently as 2015 by Mathias Karlsson to [https://hackerone.com/reports/44146 exploit a CSRF flaw in Vimeo]. A Flash attack can't spoof the Origin or Referer headers, so by checking both of them we believe this combination of checks should prevent Flash-based CSRF bypasses (if any come up in the future). &lt;br /&gt;
&lt;br /&gt;
Besides possible future bypasses such as Flash, using a static header value makes it easier to exploit other state changing operations in the application (similar to the earlier explanation of why exploitation is easier against Origin/Referer checks than against token based mitigation). Including a random token instead of a static header value is more or less equivalent to the token based approach described in the Primary Defenses section. Developers also need to consider that if this approach is used in an application with both AJAX calls and form tags, it protects only the AJAX calls; you would still need to protect &amp;lt;form&amp;gt; tags with approaches described in this document, such as tokens, because setting custom headers on form tags is not possible directly. Also, the CORS configuration must be robust for this solution to work effectively (custom headers on requests coming from other domains trigger a pre-flight CORS check).&lt;br /&gt;
&lt;br /&gt;
=== User Interaction Based CSRF Defense ===&lt;br /&gt;
&lt;br /&gt;
While none of the techniques referenced here require user interaction, sometimes it’s easier or more appropriate to involve the user in the transaction to prevent unauthorized operations (forged via CSRF or otherwise). The following are some examples of techniques that can act as a strong CSRF defense when implemented correctly.&lt;br /&gt;
* Re-Authentication (password or stronger)&lt;br /&gt;
* One-time Token&lt;br /&gt;
* CAPTCHA&lt;br /&gt;
While these are very strong CSRF defenses, they have a significant impact on the user experience. For applications that need high security for certain operations (password change, money transfer, etc.), these techniques should be used along with token based mitigation. Please note that tokens by themselves can mitigate CSRF; developers should use these techniques only to achieve additional security for highly sensitive operations.&lt;br /&gt;
&lt;br /&gt;
== Not So Popular CSRF Mitigations ==&lt;br /&gt;
&lt;br /&gt;
=== Triple Submit Cookie ===&lt;br /&gt;
This mitigation was proposed by John Wilander at OWASP AppSec Research 2012. It adds an additional step to the double submit cookie approach by verifying whether the request contains two cookies with the same name (note that an attacker needs to write an additional cookie to bypass the double submit cookie mitigation). Though it mitigates the issues discussed in the double submit cookie bypass, it introduces new problems such as cookie jar overflow (details and further issues [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf here] and [https://webstersprodigy.net/2012/08/03/analysis-of-john-wilanders-triple-submit-cookies/ here]). We have not been able to find any real-world implementations of this mitigation so far.&lt;br /&gt;
&lt;br /&gt;
=== Content-Type Header Validation ===&lt;br /&gt;
This technique is better known than the triple submit cookie mitigation. In the first place, this header was not designed for security (initial RFC [https://tools.ietf.org/html/rfc1049 here], later well defined in [https://www.ietf.org/rfc/rfc2045.txt this] RFC) but only to let receiving agents know the type of data they will be handling, so that they can invoke the corresponding parsers. The pre-flighting behavior tied to this header (a pre-flight occurs if the header has a value other than application/x-www-form-urlencoded, multipart/form-data, or text/plain) is what gets treated as a CSRF mitigation: all requests are forced to use a header value that triggers a pre-flight (such as application/json), and the server side can then reject cross-origin requests via CORS/SOP during that pre-flight.&lt;br /&gt;
&lt;br /&gt;
This approach has two main problems. First, it mandates that all requests carry a header value that forces a pre-flight, regardless of the real use case. Second, it relies on a feature that was not designed for security to mitigate a security vulnerability. When a bug was discovered in the Chrome API, browser architects even considered removing this pre-flighting behavior. Because this header was not designed as a security control, architects may re-design it to better serve its primary purpose; in the future, new content-type values may be introduced (to better support various use cases), which could put systems relying on this header for CSRF mitigation in trouble. For more information, see [https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2017/september/common-csrf-prevention-misconceptions/ Common CSRF Prevention Misconceptions].&lt;br /&gt;
&lt;br /&gt;
== CSRF Mitigation Myths ==&lt;br /&gt;
The following techniques are presumed to be CSRF mitigations, but none of them actually mitigates a CSRF vulnerability.&lt;br /&gt;
* '''CORS''': CORS is a mechanism designed to relax the Same-Origin-Policy when cross-origin communication between sites is required. It is not designed to prevent CSRF attacks, and does not.&lt;br /&gt;
* '''Using HTTPS''': Using HTTPS by itself provides no protection from CSRF attacks. Resources served over HTTPS are still vulnerable to CSRF if the proper CSRF mitigations described above are not in place.&lt;br /&gt;
* More myths can be found [[Cross-Site Request Forgery (CSRF)|here]]&lt;br /&gt;
&lt;br /&gt;
== Personal Safety CSRF Tips for Users ==&lt;br /&gt;
Since CSRF vulnerabilities are reportedly widespread, we recommend using the following best practices to mitigate risk.  &lt;br /&gt;
&lt;br /&gt;
# Logoff immediately after using a Web application.&lt;br /&gt;
# Do not allow your browser to save username/passwords, and do not allow sites to “remember” your login.&lt;br /&gt;
# Do not use the same browser to access sensitive applications and to surf the Internet freely (tabbed browsing).&lt;br /&gt;
# The use of plugins such as No-Script makes POST based CSRF vulnerabilities difficult to exploit. This is because JavaScript is used to automatically submit the form when the exploit is loaded. Without JavaScript, the attacker would have to trick the user into submitting the form manually.&lt;br /&gt;
&lt;br /&gt;
Integrated HTML-enabled mail/browser and newsreader/browser environments pose additional risks since simply viewing a mail message or a news message might lead to the execution of an attack. &lt;br /&gt;
== Implementation reference example  ==&lt;br /&gt;
The following JEE web filter provides a reference example for some of the concepts described in this cheat sheet. It implements the following stateless mitigations ([https://github.com/aramrami/OWASP-CSRFGuard OWASP CSRFGuard] covers a stateful approach).&lt;br /&gt;
* Verifying same origin with standard headers&lt;br /&gt;
* Double submit cookie&lt;br /&gt;
* SameSite cookie attribute&lt;br /&gt;
'''Please note''' that it only acts as a reference sample and is not complete (for example: it doesn’t have a block to direct the control flow when the origin and referer header check succeeds, nor does it perform a port/host/protocol level validation of the referer header). Developers are recommended to build their complete mitigation on top of this reference sample. Developers should also implement standard authentication or authorization checks before checking for CSRF.&lt;br /&gt;
&lt;br /&gt;
Source is also located [https://github.com/righettod/poc-csrf here] and provides a runnable POC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.slf4j.Logger;&lt;br /&gt;
import org.slf4j.LoggerFactory;&lt;br /&gt;
&lt;br /&gt;
import javax.servlet.Filter;&lt;br /&gt;
import javax.servlet.FilterChain;&lt;br /&gt;
import javax.servlet.FilterConfig;&lt;br /&gt;
import javax.servlet.ServletException;&lt;br /&gt;
import javax.servlet.ServletRequest;&lt;br /&gt;
import javax.servlet.ServletResponse;&lt;br /&gt;
import javax.servlet.annotation.WebFilter;&lt;br /&gt;
import javax.servlet.http.Cookie;&lt;br /&gt;
import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
import javax.servlet.http.HttpServletResponseWrapper;&lt;br /&gt;
import javax.xml.bind.DatatypeConverter;&lt;br /&gt;
import java.io.IOException;&lt;br /&gt;
import java.net.MalformedURLException;&lt;br /&gt;
import java.net.URL;&lt;br /&gt;
import java.security.SecureRandom;&lt;br /&gt;
import java.util.Arrays;&lt;br /&gt;
&lt;br /&gt;
/**&lt;br /&gt;
 * Filter in charge of validating the headers and CSRF token of each incoming HTTP request.&lt;br /&gt;
 * It is called for all requests to the backend destination.&lt;br /&gt;
 *&lt;br /&gt;
 * We use the approach in which:&lt;br /&gt;
 * - The CSRF token is changed after each valid HTTP exchange&lt;br /&gt;
 * - The custom Header name for the CSRF token transmission is fixed&lt;br /&gt;
 * - A CSRF token is associated with a backend service URI in order to support multiple parallel Ajax requests from the same application&lt;br /&gt;
 * - The CSRF cookie name is the backend service name prefixed with a fixed prefix&lt;br /&gt;
 *&lt;br /&gt;
 * Here, for the POC, we show the &amp;quot;access denied&amp;quot; reason in the response, but production code should only return a generic message !&lt;br /&gt;
 *&lt;br /&gt;
 * @see &amp;quot;https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://wiki.mozilla.org/Security/Origin&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://chloe.re/2016/04/13/goodbye-csrf-samesite-to-the-rescue/&amp;quot;&lt;br /&gt;
 */&lt;br /&gt;
@WebFilter(&amp;quot;/backend/*&amp;quot;)&lt;br /&gt;
public class CSRFValidationFilter implements Filter {&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * JVM param name used to define the target origin&lt;br /&gt;
     */&lt;br /&gt;
    public static final String TARGET_ORIGIN_JVM_PARAM_NAME = &amp;quot;target.origin&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Name of the custom HTTP header used to transmit the CSRF token and also to prefix &lt;br /&gt;
     * the CSRF cookie for the expected backend service&lt;br /&gt;
     */&lt;br /&gt;
    private static final String CSRF_TOKEN_NAME = &amp;quot;X-TOKEN&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Logger&lt;br /&gt;
     */&lt;br /&gt;
    private static final Logger LOG = LoggerFactory.getLogger(CSRFValidationFilter.class);&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Application expected deployment domain: named &amp;quot;Target Origin&amp;quot; in OWASP CSRF article&lt;br /&gt;
     */&lt;br /&gt;
    private URL targetOrigin;&lt;br /&gt;
&lt;br /&gt;
    /***&lt;br /&gt;
     * Secure generator&lt;br /&gt;
     */&lt;br /&gt;
    private final SecureRandom secureRandom = new SecureRandom();&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {&lt;br /&gt;
        HttpServletRequest httpReq = (HttpServletRequest) request;&lt;br /&gt;
        HttpServletResponse httpResp = (HttpServletResponse) response;&lt;br /&gt;
        String accessDeniedReason;&lt;br /&gt;
&lt;br /&gt;
        /* STEP 1: Verifying Same Origin with Standard Headers */&lt;br /&gt;
        //Try to get the source from the &amp;quot;Origin&amp;quot; header&lt;br /&gt;
        String source = httpReq.getHeader(&amp;quot;Origin&amp;quot;);&lt;br /&gt;
        if (this.isBlank(source)) {&lt;br /&gt;
            //If empty then fallback on &amp;quot;Referer&amp;quot; header&lt;br /&gt;
            source = httpReq.getHeader(&amp;quot;Referer&amp;quot;);&lt;br /&gt;
            //If this one is empty too then we trace the event and we block the request (recommendation of the article)...&lt;br /&gt;
            if (this.isBlank(source)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: ORIGIN and REFERER request headers are both absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
                return;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //Compare the source against the expected target origin&lt;br /&gt;
        URL sourceURL = new URL(source);&lt;br /&gt;
        if (!this.targetOrigin.getProtocol().equals(sourceURL.getProtocol()) || !this.targetOrigin.getHost().equals(sourceURL.getHost()) &lt;br /&gt;
		|| this.targetOrigin.getPort() != sourceURL.getPort()) {&lt;br /&gt;
            //One of the parts does not match so we trace the event and we block the request&lt;br /&gt;
            accessDeniedReason = String.format(&amp;quot;CSRFValidationFilter: Protocol/Host/Port do not fully match so we block the request! (%s != %s) &amp;quot;, &lt;br /&gt;
				this.targetOrigin, sourceURL);&lt;br /&gt;
            LOG.warn(accessDeniedReason);&lt;br /&gt;
            httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            return;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        /* STEP 2: Verifying CSRF token using &amp;quot;Double Submit Cookie&amp;quot; approach */&lt;br /&gt;
        //If the CSRF token cookie is absent from the request then we provide one in the response but we stop the process at this stage.&lt;br /&gt;
        //This way we implement the initial provisioning of the token&lt;br /&gt;
        Cookie tokenCookie = null;&lt;br /&gt;
        if (httpReq.getCookies() != null) {&lt;br /&gt;
            String csrfCookieExpectedName = this.determineCookieName(httpReq);&lt;br /&gt;
            tokenCookie = Arrays.stream(httpReq.getCookies()).filter(c -&amp;gt; c.getName().equals(csrfCookieExpectedName)).findFirst().orElse(null);&lt;br /&gt;
        }&lt;br /&gt;
        if (tokenCookie == null || this.isBlank(tokenCookie.getValue())) {&lt;br /&gt;
            LOG.info(&amp;quot;CSRFValidationFilter: CSRF cookie absent or value is null/empty so we provide one and return an HTTP NO_CONTENT response !&amp;quot;);&lt;br /&gt;
            //Add the CSRF token cookie and header&lt;br /&gt;
            this.addTokenCookieAndHeader(httpReq, httpResp);&lt;br /&gt;
            //Set response state to &amp;quot;204 No Content&amp;quot; in order to allow the requester to clearly identify an initial response providing the initial CSRF token&lt;br /&gt;
            httpResp.setStatus(HttpServletResponse.SC_NO_CONTENT);&lt;br /&gt;
        } else {&lt;br /&gt;
            //If the cookie is present then we pass to validation phase&lt;br /&gt;
            //Get token from the custom HTTP header (part under control of the requester)&lt;br /&gt;
            String tokenFromHeader = httpReq.getHeader(CSRF_TOKEN_NAME);&lt;br /&gt;
            //If empty then we trace the event and we block the request&lt;br /&gt;
            if (this.isBlank(tokenFromHeader)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header is absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else if (!tokenFromHeader.equals(tokenCookie.getValue())) {&lt;br /&gt;
                //Verify that the token from the header and the one from the cookie are the same&lt;br /&gt;
                //Here it is not the case so we trace the event and we block the request&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header and via Cookie are not equals so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else {&lt;br /&gt;
                //The token from the header and the one from the cookie match&lt;br /&gt;
                //So we let the request reach the target component (ServiceServlet, jsp...) and add a new token when we get the response back&lt;br /&gt;
                HttpServletResponseWrapper httpRespWrapper = new HttpServletResponseWrapper(httpResp);&lt;br /&gt;
                chain.doFilter(request, httpRespWrapper);&lt;br /&gt;
                //Add the CSRF token cookie and header&lt;br /&gt;
                this.addTokenCookieAndHeader(httpReq, httpRespWrapper);&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void init(FilterConfig filterConfig) throws ServletException {&lt;br /&gt;
        //To ease the configuration, we load the expected target origin from a JVM property&lt;br /&gt;
        //Reconfiguration only requires an application restart, which is generally acceptable&lt;br /&gt;
        try {&lt;br /&gt;
            this.targetOrigin = new URL(System.getProperty(TARGET_ORIGIN_JVM_PARAM_NAME));&lt;br /&gt;
        } catch (MalformedURLException e) {&lt;br /&gt;
            LOG.error(&amp;quot;Cannot init the filter !&amp;quot;, e);&lt;br /&gt;
            throw new ServletException(e);&lt;br /&gt;
        }&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter init, set expected target origin to '{}'.&amp;quot;, this.targetOrigin);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void destroy() {&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter shutdown&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Check if a string is null or empty (including containing only spaces)&lt;br /&gt;
     *&lt;br /&gt;
     * @param s Source string&lt;br /&gt;
     * @return TRUE if source string is null or empty (including containing only spaces)&lt;br /&gt;
     */&lt;br /&gt;
    private boolean isBlank(String s) {&lt;br /&gt;
        return s == null || s.trim().isEmpty();&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Generate a new CSRF token&lt;br /&gt;
     *&lt;br /&gt;
     * @return The token as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String generateToken() {&lt;br /&gt;
        byte[] buffer = new byte[50];&lt;br /&gt;
        this.secureRandom.nextBytes(buffer);&lt;br /&gt;
        return DatatypeConverter.printHexBinary(buffer);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Determine the name of the CSRF cookie for the targeted backend service&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest Source HTTP request&lt;br /&gt;
     * @return The name of the cookie as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String determineCookieName(HttpServletRequest httpRequest) {&lt;br /&gt;
        String backendServiceName = httpRequest.getRequestURI().replaceAll(&amp;quot;/&amp;quot;, &amp;quot;-&amp;quot;);&lt;br /&gt;
        return CSRF_TOKEN_NAME + &amp;quot;-&amp;quot; + backendServiceName;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Add the CSRF token cookie and header to the provided HTTP response object&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest  Source HTTP request&lt;br /&gt;
     * @param httpResponse HTTP response object to update&lt;br /&gt;
     */&lt;br /&gt;
    private void addTokenCookieAndHeader(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {&lt;br /&gt;
        //Get new token&lt;br /&gt;
        String token = this.generateToken();&lt;br /&gt;
        //Add the cookie manually because the current Cookie class implementation does not support the &amp;quot;SameSite&amp;quot; attribute&lt;br /&gt;
        //We leave the addition of the &amp;quot;Secure&amp;quot; cookie attribute to the reverse proxy rewriting rules...&lt;br /&gt;
        //Here we lock the cookie from JS access and we use the new SameSite attribute protection&lt;br /&gt;
        String cookieSpec = String.format(&amp;quot;%s=%s; Path=%s; HttpOnly; SameSite=Strict&amp;quot;, this.determineCookieName(httpRequest), token, httpRequest.getRequestURI());&lt;br /&gt;
        httpResponse.addHeader(&amp;quot;Set-Cookie&amp;quot;, cookieSpec);&lt;br /&gt;
        //Add a response header to give the JS code access to the token&lt;br /&gt;
        httpResponse.setHeader(CSRF_TOKEN_NAME, token);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Authors and Primary Editors  ==&lt;br /&gt;
Manideep Konakandla (Amazon Application Security Team) - http://www.manideepk.com&lt;br /&gt;
&lt;br /&gt;
Dave Wichers - dave.wichers[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Paul Petefish - https://www.linkedin.com/in/paulpetefish&lt;br /&gt;
&lt;br /&gt;
Eric Sheridan - eric.sheridan[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Dominique Righetto - dominique.righetto[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat Sheets ==&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;br /&gt;
[[Category:Popular]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=246802</id>
		<title>Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=246802"/>
				<updated>2019-01-23T23:01:47Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
[[Cross-Site Request Forgery (CSRF)]] is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user’s web browser to perform an unwanted action on a trusted site when the user is authenticated. A CSRF attack works because browser requests automatically include any credentials associated with the site, such as the user's session cookie, IP address, etc. Therefore, if the user is authenticated to the site, the site cannot distinguish between a forged and a legitimate request sent by the victim. We would need a token/identifier that is not accessible to the attacker and is not automatically sent (as cookies are) with forged requests that the attacker initiates. For more information on CSRF, see the OWASP [[Cross-Site Request Forgery (CSRF)|Cross-Site Request Forgery (CSRF) page]].&lt;br /&gt;
&lt;br /&gt;
The impact of a successful CSRF attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or making a purchase with the user’s credentials. In effect, CSRF attacks are used by an attacker to make a target system perform a function via the target's browser, without the user’s knowledge, at least until the unauthorized transaction has been committed.&lt;br /&gt;
&lt;br /&gt;
Impacts of successful CSRF exploits vary greatly based on the privileges of each victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator account, a CSRF attack can compromise the entire web application. Using social engineering, an attacker can embed malicious HTML or JavaScript code into an email or website to request a specific 'task URL'. The task then executes with or without the user's knowledge, either directly or by using a Cross-Site Scripting flaw. For example, see [https://en.wikipedia.org/wiki/Samy_(computer_worm) Samy MySpace Worm].&lt;br /&gt;
&lt;br /&gt;
== What's new? ==&lt;br /&gt;
If you have seen the OWASP [https://www.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;amp;action=history old CSRF prevention cheat sheets], you can observe that a lot has changed in this newer version. One of the major changes is that the “Verifying same origin with standard headers” CSRF defense has been moved to the Defense in Depth section, whereas token based mitigation moved to the Primary Defense section (the technical reasons for this switch are given under the respective sections). Multiple new sections (HMAC based token protection, auto CSRF mitigation techniques, login CSRF, not so popular CSRF mitigations and CSRF mitigation myths) were added, besides adding new content to and removing obsolete content from the existing sections. Security issues/caveats associated with each mitigation were also included.&lt;br /&gt;
&lt;br /&gt;
==Warning: No Cross-Site Scripting (XSS) Vulnerabilities ==&lt;br /&gt;
[[Cross-Site Scripting]] is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat all CSRF mitigation techniques available in the market today (except mitigation techniques that involve user interaction, described later in this cheat sheet). This is because an XSS payload can simply read any page on the site using an XMLHttpRequest (direct DOM access can be done, if on the same page), obtain the generated token from the response, and include that token with a forged request.  This technique is exactly how the [https://en.wikipedia.org/wiki/Samy_(computer_worm) MySpace (Samy) worm] defeated MySpace's anti-CSRF defenses in 2005, which enabled the worm to propagate.&lt;br /&gt;
&lt;br /&gt;
It is imperative that no XSS vulnerabilities are present to ensure that CSRF defenses can't be circumvented. Please see the OWASP [[XSS (Cross Site Scripting) Prevention Cheat Sheet|XSS Prevention Cheat Sheet]] for detailed guidance on how to prevent XSS flaws.  &lt;br /&gt;
&lt;br /&gt;
== Resources that need to be protected from CSRF vulnerability ==&lt;br /&gt;
The following list assumes that you are not violating [http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1 RFC2616], section 9.1.1, by using GET requests for state changing operations. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' If for any reason you do, you would also need to protect those resources, which are mostly reached via the default &amp;lt;code&amp;gt;form tag [GET method]&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;href&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;src&amp;lt;/code&amp;gt; attributes.  &lt;br /&gt;
&lt;br /&gt;
* Form tags with POST &lt;br /&gt;
* Ajax/XHR calls&lt;br /&gt;
&lt;br /&gt;
== CSRF Defense Recommendations Summary ==&lt;br /&gt;
We recommend token based CSRF defense (either stateful or stateless) as a primary defense to mitigate CSRF in your applications. For highly sensitive operations only, we also recommend a user interaction based protection (either re-authentication or a one-time token, detailed in section 7.5) along with token based mitigation.&lt;br /&gt;
&lt;br /&gt;
As a defense-in-depth measure, consider implementing one mitigation from Defense in Depth Mitigations section (you can choose the mitigation that fits your ecosystem considering the issues mentioned under them). These defense-in-depth mitigation techniques are not recommended to be used by themselves (without token based mitigation) for mitigating CSRF in your applications.&lt;br /&gt;
&lt;br /&gt;
== Primary Defense Technique ==&lt;br /&gt;
&lt;br /&gt;
=== Token Based Mitigation ===&lt;br /&gt;
This defense is one of the most popular and recommended methods to mitigate CSRF. It can be achieved either statefully (synchronizer token pattern) or statelessly (encrypted/hash based token pattern). See section 4.3 on how to mitigate login CSRF in your applications. For all the mitigations, it is implicit that general security principles should be adhered to:&lt;br /&gt;
* Strong encryption/HMAC functions should be used. &lt;br /&gt;
'''Note:''' You can select any algorithm per your organizational needs. We recommend AES256-GCM for encryption and SHA256/512 for HMAC.&lt;br /&gt;
* Strict key rotation and token lifetime policies should be maintained. Policies can be set according to your organizational needs. Generic key management guidance from OWASP can be found [[Key Management Cheat Sheet|here]].&lt;br /&gt;
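Under the principles above, an HMAC based token can be sketched roughly as follows. The class name, method names, and token layout (timestamp plus Base64url tag) are illustrative assumptions, not a prescribed format; only the use of an HMAC-SHA256 over session-bound data with a server-side key reflects the recommendation:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;

public class HmacCsrfToken {

    // HMAC-SHA256 over "sessionId!timestamp" with a key known only to the server.
    public static String generate(byte[] key, String sessionId, long timestamp) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            byte[] tag = mac.doFinal((sessionId + "!" + timestamp).getBytes(StandardCharsets.UTF_8));
            // The timestamp travels with the token so the server can recompute and expire it
            return timestamp + "." + Base64.getUrlEncoder().withoutPadding().encodeToString(tag);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static boolean validate(byte[] key, String sessionId, String token, long maxAgeMillis) {
        String[] parts = token.split("\\.", 2);
        long ts = Long.parseLong(parts[0]);
        if (System.currentTimeMillis() - ts > maxAgeMillis) {
            return false; // token lifetime policy: expired
        }
        String expected = generate(key, sessionId, ts);
        // Constant-time comparison to avoid timing side channels
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                token.getBytes(StandardCharsets.UTF_8));
    }
}
```

Because the server only needs the key (not a stored token) to validate, this variant is stateless; key rotation then invalidates all outstanding tokens at once.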
&lt;br /&gt;
==== Synchronizer Token Pattern ====&lt;br /&gt;
Any state changing operation requires a secure random token (e.g., a CSRF token) to prevent CSRF attacks. A CSRF token should be unique per user session, a large random value, and generated by a cryptographically secure random number generator. The CSRF token is added as a hidden field for forms, as a header/parameter for AJAX calls, and within the URL if the state changing operation occurs via a GET (see the &amp;quot;Disclosure of Token in URL&amp;quot; section below). The server rejects the requested action if the CSRF token fails validation.&lt;br /&gt;
&lt;br /&gt;
In order to facilitate a &amp;quot;transparent but visible&amp;quot; CSRF solution, developers are encouraged to adopt a pattern similar to [http://www.corej2eepatterns.com/Design/PresoDesign.htm Synchronizer Token Pattern] (The original intention of this synchronizer token pattern was to detect duplicate submissions in forms). The synchronizer token pattern requires the generation of random &amp;quot;challenge&amp;quot; tokens that are associated with the user's current session. These challenge tokens are then inserted within the HTML forms and calls associated with sensitive server-side operations. It is the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks as successful exploitation assumes the attacker knows the randomly generated token for the target victim's session. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' These tokens aren’t like cookies, which are automatically sent by your browser with forged requests originating from the attacker’s website. &lt;br /&gt;
&lt;br /&gt;
This is analogous to the attacker being able to guess the target victim's session identifier. &lt;br /&gt;
&lt;br /&gt;
The following describes a general approach to incorporate challenge tokens within the request.&lt;br /&gt;
&lt;br /&gt;
When a Web application formulates a request, the application should include a hidden input parameter with a common name such as &amp;quot;CSRFToken&amp;quot; for forms, or a header/parameter value for Ajax calls. The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness utilized in the data that is hashed to generate the random token.&lt;br /&gt;
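A minimal sketch of the SecureRandom-based generation mentioned above (the class and method names are illustrative, not from the cheat sheet):

```java
import java.security.SecureRandom;
import java.util.Base64;

public class TokenGenerator {

    private static final SecureRandom RANDOM = new SecureRandom();

    // 32 random bytes (256 bits) from a CSPRNG, Base64url-encoded so the
    // token is safe to place in a hidden form field, header, or parameter.
    public static String newToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```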
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;html&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;form action=&amp;quot;/transfer.do&amp;quot; method=&amp;quot;post&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;hidden&amp;quot; name=&amp;quot;CSRFToken&amp;quot; &lt;br /&gt;
value=&amp;quot;OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWEwYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZMGYwMGEwOA==&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is used for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the token in the request compared to the token found in the user session. If the token was not found within the request, or the value provided does not match the value within the user session, then the request should be aborted, and the event logged as a potential CSRF attack in progress.&lt;br /&gt;
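The server-side comparison described above can be sketched as follows (a hypothetical helper; the session lookup itself is omitted). MessageDigest.isEqual is used so the comparison runs in constant time:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class TokenValidator {

    // Compares the token submitted with the request against the token
    // stored in the user's session.
    public static boolean isValid(String tokenFromRequest, String tokenFromSession) {
        if (tokenFromRequest == null || tokenFromSession == null) {
            return false; // absent token: abort the request and log a potential CSRF attempt
        }
        // MessageDigest.isEqual is constant-time, unlike String.equals
        return MessageDigest.isEqual(
                tokenFromRequest.getBytes(StandardCharsets.UTF_8),
                tokenFromSession.getBytes(StandardCharsets.UTF_8));
    }
}
```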
&lt;br /&gt;
To further enhance the security of this proposed design, consider randomizing the CSRF token parameter name and/or value for each request. Implementing this approach results in the generation of per-request tokens as opposed to per-session tokens. This is more secure than per-session tokens, as the time range for an attacker to exploit a stolen token is minimal. However, this may result in usability concerns. For example, the &amp;quot;Back&amp;quot; button browser capability is often hindered, as the previous page may contain a token that is no longer valid. Interaction with this previous page will result in a CSRF false positive security event at the server. A few applications that need high security (such as banks) implement this approach. You have to check what suits your needs. Regardless of the approach taken, developers are encouraged to protect the CSRF token the same way they protect authenticated session identifiers, such as the use of TLS.&lt;br /&gt;
&lt;br /&gt;
'''Existing Synchronizer Implementations'''&lt;br /&gt;
&lt;br /&gt;
Synchronizer token defenses have been built into many frameworks, so we strongly recommend using them first when they are available. External components that add CSRF defenses to existing applications are also recommended. OWASP has the following: &lt;br /&gt;
&lt;br /&gt;
* For Java: OWASP [[CSRF Guard]]&lt;br /&gt;
* For PHP and Apache: [[CSRFProtector Project]]&lt;br /&gt;
&lt;br /&gt;
'''Disclosure of Token in URL'''&lt;br /&gt;
&lt;br /&gt;
Some implementations of synchronizer tokens include the challenge token in GET (URL) requests as well as POST requests. This is often implemented as a result of sensitive server-side operations being invoked as a result of embedded links in the page or other general design patterns. These patterns are often implemented without knowledge of CSRF and an understanding of CSRF prevention design strategies. While this control does help mitigate the risk of CSRF attacks, the unique per-session token is being exposed for GET requests. CSRF tokens in GET requests are potentially leaked at several locations: browser history, log files, network appliances that make a point to log the first line of an HTTP request, and Referer headers if the protected site links to an external site. In the latter case (leaked CSRF token due to the Referer header being parsed by a linked site), it is trivially easy for the linked site to launch a CSRF attack on the protected site, and they will be able to target this attack very effectively, since the Referer header tells them the site as well as the CSRF token. The attack could be run entirely from JavaScript, so that a simple addition of a script tag to the HTML of a site can launch an attack (whether on an originally malicious site or on a hacked site). Additionally, since HTTPS requests from HTTPS contexts will not strip the Referer header (as opposed to HTTPS to HTTP requests) CSRF token leaks via Referer can still happen on HTTPS Applications.&lt;br /&gt;
&lt;br /&gt;
The ideal solution is to only include the CSRF token in POST requests and modify server-side actions that have a state changing effect to only respond to POST requests. This is in fact what &amp;lt;nowiki&amp;gt;RFC 2616&amp;lt;/nowiki&amp;gt; requires for GET requests. If sensitive server-side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests.&lt;br /&gt;
&lt;br /&gt;
In most JavaEE web applications, however, HTTP method scoping is rarely utilized when retrieving HTTP parameters from a request. Calls to &amp;quot;HttpServletRequest.getParameter&amp;quot; will return a parameter value regardless of whether the request was a GET or a POST. This is not to say HTTP method scoping cannot be enforced. It can be achieved if a developer explicitly overrides doPost() in the HttpServlet class or leverages framework-specific capabilities such as the AbstractFormController class in Spring.&lt;br /&gt;
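&lt;br /&gt;
The method scoping described above can be sketched as follows (a minimal, framework-agnostic illustration; the class, method, and parameter names are hypothetical):&lt;br /&gt;

```java
public class TransferEndpoint {

    // A state-changing action is honored only for POST requests carrying the
    // expected CSRF token; GET (and every other verb) is refused outright, so
    // no token ever needs to appear in a URL. Names here are illustrative.
    public static boolean allowStateChange(String httpMethod,
                                           String csrfTokenParam,
                                           String expectedToken) {
        if (!"POST".equals(httpMethod)) {
            return false;
        }
        return csrfTokenParam != null && csrfTokenParam.equals(expectedToken);
    }
}
```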
&lt;br /&gt;
For these cases, attempting to retrofit this pattern in existing applications requires significant development time and cost, and as a temporary measure it may be better to pass CSRF tokens in the URL. Once the application has been fixed to respond to HTTP GET and POST verbs correctly, CSRF tokens for GET requests should be turned off.&lt;br /&gt;
&lt;br /&gt;
==== Encryption based Token Pattern ====&lt;br /&gt;
The Encrypted Token Pattern leverages encryption, rather than comparison, as its method of token validation. It is most suitable for applications that do not want to maintain any state at the server side. &lt;br /&gt;
&lt;br /&gt;
After successful authentication, the server generates a unique token composed of the user's ID, a timestamp value and a [http://en.wikipedia.org/wiki/Cryptographic_nonce nonce], using a unique key available only on the server. This token is returned to the client and embedded in a hidden field for forms, or in a request header/parameter for AJAX requests. On receipt of this request, the server reads and decrypts the token value with the same key used to create the token. The inability to correctly decrypt suggests an intrusion attempt. Once decrypted, the UserId and timestamp contained within the token are validated; the UserId is compared against the currently logged-in user, and the timestamp is compared against the current time.&lt;br /&gt;
&lt;br /&gt;
On successful token-decryption, the server has access to parsed values, ideally in the form of [http://en.wikipedia.org/wiki/Claims-based_identity claims]. These claims are processed by comparing the UserId claim to any potentially stored UserId (in a Cookie or Session variable, if the site already contains a means of authentication). The Timestamp is validated against the current time, preventing replay attacks. Alternatively, in the case of a CSRF attack, the server will be unable to decrypt the poisoned token, and can block and log the attack.&lt;br /&gt;
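&lt;br /&gt;
A minimal sketch of the token creation and validation steps described above, assuming AES-GCM as the cipher and a pipe-delimited plaintext layout (neither of which is prescribed here; class and field names are illustrative):&lt;br /&gt;

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;

public class EncryptedTokenPattern {
    private static final SecureRandom RNG = new SecureRandom();

    // Builds "userId|timestamp|nonce" and encrypts it under a key available
    // only on the server. The resulting token is what gets embedded in the
    // hidden form field or AJAX request header/parameter.
    public static String createToken(SecretKey key, String userId, long timestamp) {
        try {
            byte[] nonce = new byte[8];
            RNG.nextBytes(nonce);
            String plain = userId + "|" + timestamp + "|"
                    + Base64.getEncoder().encodeToString(nonce);
            byte[] iv = new byte[12];
            RNG.nextBytes(iv);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ct = cipher.doFinal(plain.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[iv.length + ct.length]; // prepend IV for decryption
            System.arraycopy(iv, 0, out, 0, iv.length);
            System.arraycopy(ct, 0, out, iv.length, ct.length);
            return Base64.getEncoder().encodeToString(out);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Decrypts with the same key, then validates the embedded UserId against
    // the logged-in user and the timestamp against the current time. Any
    // decryption failure is treated as a possible intrusion attempt.
    public static boolean validateToken(SecretKey key, String token,
                                        String expectedUserId, long now, long maxAgeMillis) {
        try {
            byte[] raw = Base64.getDecoder().decode(token);
            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, raw, 0, 12));
            byte[] plain = cipher.doFinal(raw, 12, raw.length - 12);
            String[] parts = new String(plain, StandardCharsets.UTF_8).split("\\|");
            return parts[0].equals(expectedUserId)
                    && now - Long.parseLong(parts[1]) <= maxAgeMillis;
        } catch (Exception e) {
            return false;
        }
    }
}
```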
&lt;br /&gt;
This technique addresses some of the shortfalls in other stateless approaches, such as the need to store data in a Cookie, circumventing the Cookie-subdomain and [[HttpOnly]] issues.&lt;br /&gt;
&lt;br /&gt;
==== HMAC Based Token Pattern ====&lt;br /&gt;
[https://en.wikipedia.org/wiki/HMAC HMAC (hash-based message authentication code)] is a cryptographic function that helps to guarantee integrity and authentication of a message. It is another way that CSRF mitigation can be achieved without maintaining any state at the server and is similar to an encryption token-based pattern with two main differences:&lt;br /&gt;
* Uses a strong HMAC function instead of an encryption function to generate the token&lt;br /&gt;
* Includes an additional field called ‘operation’ that indicates the purpose of the operation for which you are including the CSRF token (whether it be a form tag or an AJAX call) &lt;br /&gt;
(e.g., ‘oneclickpurchase’ or buy/asin=SDFH&amp;amp;category=2&amp;amp;quantity=3)&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Fields mentioned in encryption token pattern (user's ID, a timestamp value and a nonce) are included. &lt;br /&gt;
&lt;br /&gt;
The operation field helps mitigate the fact that a hash function generates the same value for the same input on every invocation (unlike strong encryption functions, which generate different ciphertexts each time they encrypt). It therefore helps avoid having repeated token values across your application. The nonce field serves the same purpose as in the encrypted token pattern (i.e., it avoids rare collisions due to weak cryptographic functions and acts as a defense-in-depth measure). &lt;br /&gt;
&lt;br /&gt;
Generate the token using HMAC over all four fields mentioned previously (user's ID, a timestamp value, nonce, and operation) and then include it in hidden fields for form tags, or in headers/parameters for AJAX calls. Once you receive the HMAC from the client in a request, re-generate the HMAC with the same fields that were used to generate it, and then verify that the re-generated HMAC matches the HMAC received from the client. If it does, it is a legitimate user request; if it does not, flag it as a CSRF intrusion and alert your incident response teams. Because an attacker has no visibility into the key (or the fields) used to generate the HMAC, there is no way for them to re-generate it for use in a forged request.&lt;br /&gt;
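&lt;br /&gt;
The generate-and-verify steps above can be sketched as follows (the pipe delimiter, Base64 encoding, and class name are illustrative assumptions, not prescribed by this document):&lt;br /&gt;

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class HmacTokenPattern {

    // HMAC-SHA256 over the four fields named above: user's ID, timestamp,
    // nonce, and operation.
    public static String createToken(byte[] key, String userId, long timestamp,
                                     String nonce, String operation) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            String message = userId + "|" + timestamp + "|" + nonce + "|" + operation;
            return Base64.getEncoder()
                    .encodeToString(mac.doFinal(message.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Server side: re-generate the HMAC from the same fields and compare it,
    // in constant time, with the value received from the client.
    public static boolean verify(byte[] key, String received, String userId,
                                 long timestamp, String nonce, String operation) {
        byte[] expected = createToken(key, userId, timestamp, nonce, operation)
                .getBytes(StandardCharsets.UTF_8);
        return MessageDigest.isEqual(expected, received.getBytes(StandardCharsets.UTF_8));
    }
}
```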
&lt;br /&gt;
=== Auto CSRF Mitigation Techniques ===&lt;br /&gt;
Though the technique of mitigating CSRF with tokens is widely used (stateful with the synchronizer token and stateless with the encrypted/HMAC token), the major problem associated with these techniques is the human tendency to forget things at times. If a developer forgets to add the token to any state-changing operation, they make the application vulnerable to CSRF. To avoid this, you can try to automate the process of adding tokens to CSRF-vulnerable resources (mentioned earlier in this document). You can achieve this by doing the following:&lt;br /&gt;
* Write wrappers (that auto-add tokens when used) around default form tags/AJAX calls and educate your developers to use those wrappers instead of standard tags. Though this approach is better than depending purely on developers to add tokens, it is still vulnerable to the human tendency to forget things. [https://docs.spring.io/spring-security/site/docs/3.2.0.CI-SNAPSHOT/reference/html/csrf.html Spring Security] uses this technique to add CSRF tokens by default when a custom &amp;lt;form:form&amp;gt; tag is used, which you can opt to use after verifying that it is enabled and properly configured in the Spring Security version you are using.&lt;br /&gt;
* Write a hook (that would capture the traffic and add tokens to CSRF vulnerable resources before rendering to customers) in your organizational web rendering frameworks. Because it is hard to analyze when a particular response is doing any state change (and thus needing a token), you might want to include tokens in all CSRF vulnerable resources (ex: include tokens in all POST responses). This is one recommended approach, but you need to consider the performance costs it might incur.&lt;br /&gt;
* Get the tokens automatically added on the client side when the page is being rendered in the user’s browser, with the help of a client-side script (this approach is used by [[CSRF Guard]]). You need to consider any possible JavaScript hijacking attacks.&lt;br /&gt;
We recommend researching if the framework you are using has an option to achieve CSRF protection by default before trying to build your custom auto tokening system. For example, .NET has an [https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery?view=aspnetcore-2.1 in-built protection] that adds token to CSRF vulnerable resources. You are responsible for proper configuration (such as key management and token management) before using these in-built CSRF protections that do auto tokening to CSRF vulnerable resources.&lt;br /&gt;
&lt;br /&gt;
=== Login CSRF ===&lt;br /&gt;
Most developers tend to ignore CSRF vulnerabilities on login forms, assuming that CSRF is not applicable there because the user is not authenticated at that stage. That assumption is false. A CSRF vulnerability can still occur on login forms where the user is not authenticated, but the impact/risk view for it is quite different from that of a general CSRF vulnerability (where a user is authenticated).&lt;br /&gt;
&lt;br /&gt;
With a CSRF vulnerability on the login form, an attacker can make a victim log in to the attacker's account and learn behavior from the victim's searches. For more information about login CSRF and other risks, see section 3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper.&lt;br /&gt;
&lt;br /&gt;
Login CSRF can be mitigated by creating pre-sessions (sessions before a user is authenticated) and including tokens in login form. You can use any of the techniques mentioned above to generate tokens. Pre-sessions can be transitioned to real sessions once the user is authenticated. This technique is described in [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery section 4.1].&lt;br /&gt;
&lt;br /&gt;
If sub-domains under your master domain are not treated as trusted in your threat model, it is difficult to mitigate login CSRF. In these cases, a strict subdomain- and path-level Referer header validation (detailed in section 6.1) can be used to mitigate CSRF on login forms to an extent; this works because most login pages are served over HTTPS (so the Referer is not stripped) and are also linked from home pages.&lt;br /&gt;
&lt;br /&gt;
== Defense In Depth Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Verifying origin with standard headers ===&lt;br /&gt;
This defense technique is specifically proposed in section 5.0 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. This paper first proposed the creation of the Origin header and its use as a CSRF defense mechanism.&lt;br /&gt;
&lt;br /&gt;
There are two steps to this mitigation, both of which rely on examining an HTTP request header value.&lt;br /&gt;
&lt;br /&gt;
1. Determining the origin the request is coming from (source origin). This can be done via the Origin and/or Referer header.&lt;br /&gt;
&lt;br /&gt;
2. Determining the origin the request is going to (target origin).&lt;br /&gt;
&lt;br /&gt;
On the server side, we verify whether the two match. If they do, we accept the request as legitimate (meaning it’s a same-origin request), and if they don’t, we discard the request (meaning it originated cross-domain). The reliability of these headers comes from the fact that they cannot be altered programmatically (using JavaScript in an XSS), as they fall under the [https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name forbidden headers] list (i.e., only browsers can set them).&lt;br /&gt;
&lt;br /&gt;
====Identifying Source Origin (via Origin/Referer header) ====&lt;br /&gt;
'''Checking the Origin Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is present, verify that its value matches the target origin. Unlike the Referer, the Origin header will be present in HTTP requests that originate from an HTTPS URL.&lt;br /&gt;
&lt;br /&gt;
'''Checking the Referer Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is not present, verify the hostname in the Referer header matches the target origin. This method of CSRF mitigation is also commonly used with unauthenticated requests, such as requests made prior to establishing a session state, which is required to keep track of a synchronization token.&lt;br /&gt;
&lt;br /&gt;
In both cases, make sure the target origin check is strong. For example, if your site is &amp;quot;site.com&amp;quot; make sure &amp;quot;site.com.attacker.com&amp;quot; does not pass your origin check (i.e., match through the trailing/after the origin to make sure you are matching against the entire origin).&lt;br /&gt;
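&lt;br /&gt;
A sketch of a strict source/target origin comparison along these lines (the helper names are hypothetical):&lt;br /&gt;

```java
import java.net.URI;

public class OriginCheck {

    // Compare the full origin (scheme://host[:port]) for exact equality, so a
    // source of "https://site.com.attacker.com" never matches "https://site.com".
    public static boolean isSameOrigin(String sourceOrigin, String targetOrigin) {
        if (sourceOrigin == null || targetOrigin == null) {
            return false;
        }
        return sourceOrigin.equals(targetOrigin); // exact match, never startsWith
    }

    // Derive a source origin from a Referer URL when the Origin header is absent.
    public static String originOfReferer(String referer) {
        try {
            URI uri = URI.create(referer);
            String port = uri.getPort() == -1 ? "" : ":" + uri.getPort();
            return uri.getScheme() + "://" + uri.getHost() + port;
        } catch (Exception e) {
            return null; // an unparseable Referer yields no source origin
        }
    }
}
```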
&lt;br /&gt;
If neither of these headers is present, you can either accept or block the request. We recommend '''blocking'''. Alternatively, you might want to log all such instances, monitor their use cases/behavior, and start blocking requests only once you have enough confidence.&lt;br /&gt;
&lt;br /&gt;
==== Identifying the Target Origin ====&lt;br /&gt;
You might think it’s easy to determine the target origin, but it’s frequently not. The first thought is to simply grab the target origin (i.e., its hostname and port #) from the URL in the request. However, the application server is frequently sitting behind one or more proxies and the original URL is different from the URL the app server actually receives. If your application server is directly accessed by its users, then using the origin in the URL is fine and you're all set.&lt;br /&gt;
&lt;br /&gt;
If you are behind a proxy, there are a number of options to consider.&lt;br /&gt;
* '''Configure your application to simply know its target origin:''' It’s your application, so you can find its target origin and set that value in some server configuration entry. This would be the most secure approach, as it is defined server side and is therefore a trusted value. However, this might be problematic to maintain if your application is deployed in many places, e.g., dev, test, QA, production, and possibly multiple production instances. Setting the correct value for each of these situations might be difficult, but if you can do it via some central configuration that your instances pull the value from, that's great! ('''Note:''' Make sure the centralized configuration store is maintained securely, because a major part of your CSRF defense depends on it.)&lt;br /&gt;
&lt;br /&gt;
* '''Use the Host header value:''' If you prefer that the application find its own target so it doesn't have to be configured for each deployed instance, we recommend using the Host family of headers. The Host header's purpose is to contain the target origin of the request. But, if your app server is sitting behind a proxy, the Host header value is most likely changed by the proxy to the target origin of the URL behind the proxy, which is different than the original URL. This modified Host header origin won't match the source origin in the original Origin or Referer headers.&lt;br /&gt;
&lt;br /&gt;
* '''Use the X-Forwarded-Host header value:''' To avoid the issue of proxy altering the host header, there is another header called X-Forwarded-Host, whose purpose is to contain the original Host header value the proxy received. Most proxies will pass along the original Host header value in the X-Forwarded-Host header. So that header value is likely to be the target origin value you need to compare to the source origin in the Origin or Referer header.&lt;br /&gt;
&lt;br /&gt;
In earlier versions of the CSRF Cheat Sheet, this mitigation was treated as a primary defense. For the reasons mentioned below, it has now been moved to the Defense-in-Depth section.&lt;br /&gt;
&lt;br /&gt;
As is implicit, this mitigation works properly only when the Origin or Referer headers are present in requests. Though these headers are included the '''majority''' of the time, there are a few use cases where they are not included (most of them for legitimate reasons, such as safeguarding user privacy or catering to differences in the browser ecosystem). The following lists some use cases:&lt;br /&gt;
* Internet Explorer 11 does not add the Origin header on a CORS request across sites of a trusted zone. The Referer header will remain the only indication of the UI origin. See the following references in stackoverflow [https://stackoverflow.com/questions/20784209/internet-explorer-11-does-not-add-the-origin-header-on-a-cors-request here] and [https://github.com/silverstripe/silverstripe-graphql/issues/118 here].&lt;br /&gt;
* In an instance following a [https://stackoverflow.com/questions/22397072/are-there-any-browsers-that-set-the-origin-header-to-null-for-privacy-sensitiv 302 redirect cross-origin], Origin is not included in the redirected request because that may be considered sensitive information that should not be sent to the other origin.&lt;br /&gt;
* There are some [https://wiki.mozilla.org/Security/Origin#Privacy-Sensitive_Contexts privacy contexts] where Origin is set to “null”. For example, see the search results [https://www.google.com/search?q=origin+header+sent+null+value+site%3Astackoverflow.com&amp;amp;oq=origin+header+sent+null+value+site%3Astackoverflow.com here].&lt;br /&gt;
* The Origin header is included for all cross-origin requests, but for same-origin requests most browsers only include it in POST/DELETE/PUT requests. '''Note:''' Although it is not ideal, many developers use GET requests to do state-changing operations.&lt;br /&gt;
* The Referer header is no exception. There are multiple use cases where the Referer header is omitted as well ([https://stackoverflow.com/questions/6880659/in-what-cases-will-http-referer-be-empty &amp;lt;nowiki&amp;gt;[1]&amp;lt;/nowiki&amp;gt;], [https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer &amp;lt;nowiki&amp;gt;[2]&amp;lt;/nowiki&amp;gt;], [https://en.wikipedia.org/wiki/HTTP_referer#Referer_hiding &amp;lt;nowiki&amp;gt;[3]&amp;lt;/nowiki&amp;gt;], [https://seclab.stanford.edu/websec/csrf/csrf.pdf &amp;lt;nowiki&amp;gt;[4]&amp;lt;/nowiki&amp;gt;] and [https://www.google.com/search?q=referer+header+sent+null+value+site:stackoverflow.com &amp;lt;nowiki&amp;gt;[5]&amp;lt;/nowiki&amp;gt;]). Load balancers, proxies and embedded network devices are also well known to strip the Referer header for privacy reasons when logging requests.&lt;br /&gt;
&lt;br /&gt;
Though exceptions can be written into your source and target origin check logic for the cases above, there is currently no central repository that references all such use cases (and even if there were one, keeping it up to date would be a problem). Each browser might also handle these use cases differently, since browsers are known to handle things differently depending on their ecosystem (IE not sending the Origin header within a trusted zone is one such example). Rejecting requests that do not contain Origin and/or Referer headers might sound like a good idea, but it can impact legitimate users. Keeping this system in monitoring mode, investigating use cases such as those stated above, and then adding them into your exception logic is a process you may consider to make this defense more stable in your environment.&lt;br /&gt;
&lt;br /&gt;
This CSRF defense also relies on browser behavior that can change at times, for example when new privacy contexts are discovered; in such situations you have to keep your validation logic updated, whereas with token-based mitigation you have full control over the CSRF mitigation. For browsers to alter CSRF tokens, they would literally have to change the HTML content of rendered pages (which no browser would ever want to do!).&lt;br /&gt;
&lt;br /&gt;
When there is an XSS vulnerability on a page of an application protected with Origin and/or Referer header checks, the level of effort required to exploit state-changing operations (that are typically vulnerable to CSRF) on other pages is very low (grab the parameters and forge a request, as the Origin and Referer headers are included by default by browsers), compared to token-based mitigation (where the attacker needs to download the target page, parse the DOM for the token, construct a forged request, and send it to the server).&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Although the concept of an origin header stemmed from [https://seclab.stanford.edu/websec/csrf/csrf.pdf the Stanford CSRF] paper that references robust CSRF defenses, the initial [https://tools.ietf.org/html/rfc6454 origin header RFC] does not reference mitigating CSRF in any way (another [https://tools.ietf.org/id/draft-abarth-origin-03.html draft version] does, however).&lt;br /&gt;
&lt;br /&gt;
=== Double Submit Cookie ===&lt;br /&gt;
If maintaining the state for the CSRF token at the server side is problematic, an alternative defense is the double submit cookie technique. This technique is easy to implement and is stateless. In this technique, we send a random value in both a cookie and as a request parameter, with the server verifying that the cookie value and request value match. When a user visits (even before authenticating, to prevent login CSRF), the site should generate a (cryptographically strong) pseudorandom value and set it as a cookie on the user's machine, separate from the session identifier. The site then requires that every transaction request include this pseudorandom value as a hidden form value (or other request parameter/header). If the two match at the server side, the server accepts the request as legitimate, and if they don’t, it rejects the request.&lt;br /&gt;
&lt;br /&gt;
There’s a belief that this technique would work because a cross origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy. This means that while an attacker can force a victim to send any value with a malicious CSRF request, the attacker will be unable to modify or read the value stored in the cookie (with which the server compares the token value).&lt;br /&gt;
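&lt;br /&gt;
The generate-and-compare steps above can be sketched as follows (names are illustrative):&lt;br /&gt;

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class DoubleSubmitCookie {
    private static final SecureRandom RNG = new SecureRandom();

    // A cryptographically strong pseudorandom value; the site would set this
    // as a cookie and also embed it as a hidden form value.
    public static String newToken() {
        byte[] bytes = new byte[32];
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Server side: the request is legitimate only when the cookie value and
    // the submitted request value match (compared in constant time).
    public static boolean matches(String cookieValue, String requestValue) {
        if (cookieValue == null || requestValue == null) {
            return false;
        }
        return MessageDigest.isEqual(cookieValue.getBytes(), requestValue.getBytes());
    }
}
```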
&lt;br /&gt;
There are a couple of drawbacks associated with the assumptions made here, namely the need to trust sub-domains and to properly configure the whole site to accept HTTPS connections only. The [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf Blackhat talk] by Rich Lundeen references these drawbacks.&lt;br /&gt;
&lt;br /&gt;
&amp;quot;''With double submit, if an attacker can write a cookie they can obviously defeat the protection. And again, writing cookies is significantly easier then reading them. The fact that cookies can be written is difficult for many people to understand. After all, doesn't the same origin policy specify that one domain cannot access cookies from another domain? However, there are two common scenarios where writing cookies across domains is possible:''&lt;br /&gt;
&lt;br /&gt;
''a) While it's true that hellokitty.marketing.example.com cannot read cookies or access the DOM from secure.example.com because of the same origin policy, hellokitty.marketing.example.com can write cookies to the parent domain (example.com), and these cookies are then consumed by secure.example.com (secure.example.com has no good way to distinguish which site set the cookie). Additionally, there are methods of forcing secure.example.com to always accept your cookie first. What this means is that XSS in hellokitty.marketing.example.com is able to overwrite cookies in secure.example.com.''&lt;br /&gt;
&lt;br /&gt;
''b) If an attacker is in the middle, they can usually force a request to the same domain over HTTP. If an application is hosted at &amp;lt;nowiki&amp;gt;https://secure.example.com&amp;lt;/nowiki&amp;gt;, even if the cookies are set with the secure flag, a man in the middle can force connections to &amp;lt;nowiki&amp;gt;http://secure.example.com&amp;lt;/nowiki&amp;gt; and set (overwrite) any arbitrary cookies (even though the secure flag prevents the attacker from reading those cookies). Even if the HSTS header is set on the server and the browser visiting the site supports HSTS (this would prevent a man in the middle from forcing plaintext HTTP requests) unless the HSTS header is set in a way that includes all subdomains, a man in the middle can simply force a request to a separate subdomain and overwrite cookies similar to 1. In other words, as long as &amp;lt;nowiki&amp;gt;http://hellokitty.marketing.example.com&amp;lt;/nowiki&amp;gt; doesn't force HTTPS, then an attacker can overwrite cookies on any example.com subdomain.''&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So, unless you are sure that your subdomains are fully secured and only accept HTTPS connections (we believe it’s difficult to guarantee at large enterprises), you should not rely on the Double Submit Cookie technique as a primary mitigation for CSRF.&lt;br /&gt;
&lt;br /&gt;
A variant of the double submit cookie that can mitigate both of the risks mentioned above is to include the token in an encrypted cookie - often within the authentication cookie - and then, at the server side, match it (after decrypting the authentication cookie) against the token in the hidden form field or parameter/header for AJAX calls. This works because a sub-domain has no way to overwrite a properly crafted encrypted cookie without the necessary information, such as the encryption key.&lt;br /&gt;
&lt;br /&gt;
=== Samesite Cookie Attribute ===&lt;br /&gt;
SameSite is a cookie attribute (similar to [[HttpOnly]], Secure etc.) introduced by Google to mitigate CSRF attacks. It is defined in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 this] Internet Draft. This attribute helps in preventing the browser from sending cookies along with cross-site requests. Possible values for this attribute are lax or strict.&lt;br /&gt;
&lt;br /&gt;
The strict value will prevent the cookie from being sent by the browser to the target site in all cross-site browsing context, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user will not be able to access the project. A bank website however most likely doesn't want to allow any transactional pages to be linked from external sites, so the strict flag would be most appropriate.&lt;br /&gt;
&lt;br /&gt;
The default lax value provides a reasonable balance between security and usability for websites that want to maintain a user's logged-in session after the user arrives from an external link. In the above GitHub scenario, the session cookie would be allowed when following a regular link from an external website, while blocking it in CSRF-prone request methods such as POST. The only cross-site requests allowed in lax mode are those that are top-level navigations and also use “safe” HTTP methods (more details [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7.1 here]).&lt;br /&gt;
&lt;br /&gt;
Example of cookies using this attribute:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Strict&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Lax&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Support for this attribute in different browsers is increasing but there are still browsers that need to adopt this. As of August 2018, SameSite attribute is on browsers used by 68.92% of Internet users (detailed statistics are [https://caniuse.com/#feat=same-site-cookie-attribute here]).&lt;br /&gt;
&lt;br /&gt;
Though this technique seems to be efficient in mitigating CSRF attacks, it is still in its early stages (in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft]) and does not have full browser support, as mentioned above. Google’s [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft] also mentions a couple of cases where forged requests can be simulated by attackers as same-site requests (thus allowing SameSite cookies to be sent).&lt;br /&gt;
&lt;br /&gt;
Considering the factors above, it is not recommended to be used as a primary defense. Google agrees with this stance and strongly encourages developers to deploy server-side defenses such as tokens to mitigate CSRF more fully.&lt;br /&gt;
&lt;br /&gt;
=== Use of Custom Request Headers ===&lt;br /&gt;
&lt;br /&gt;
Adding CSRF tokens, a double submit cookie and value, encrypted token, or other defense that involves changing the UI can frequently be complex or otherwise problematic. An alternate defense that is particularly well suited for AJAX/XHR endpoints is the use of a custom request header. This defense relies on the [https://en.wikipedia.org/wiki/Same-origin_policy same-origin policy (SOP)] restriction that only JavaScript can be used to add a custom header, and only within its origin. By default, browsers do not allow JavaScript to make cross origin requests.&lt;br /&gt;
&lt;br /&gt;
A particularly attractive custom header and value to use is “X-Requested-With: XMLHttpRequest” because most JavaScript libraries already add this header to requests they generate by default. However, some do not. For example, AngularJS used to, but does not anymore. For more information, see [https://github.com/angular/angular.js/commit/3a75b1124d062f64093a90b26630938558909e8d their rationale] and how to add it back.&lt;br /&gt;
&lt;br /&gt;
If this is the case for your system, you can simply verify the presence of this header and value on all your server side AJAX endpoints in order to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server side state, which is particularly attractive to REST services. You can always add your own custom header and value if that is preferred.&lt;br /&gt;
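&lt;br /&gt;
Such a server-side check can be sketched as follows (a minimal illustration; the class name is hypothetical):&lt;br /&gt;

```java
public class CustomHeaderCheck {

    // Server-side gate for AJAX endpoints: accept the request only when the
    // custom header arrives with the expected value. Per the same-origin
    // policy, cross-origin pages cannot attach this header with a simple
    // (non-preflighted) request.
    public static boolean isAllowed(String xRequestedWithHeader) {
        return "XMLHttpRequest".equals(xRequestedWithHeader);
    }
}
```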
&lt;br /&gt;
This defense technique is specifically discussed in section 4.3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. However, bypasses of this defense using Flash were documented as early as 2008 and again as recently as 2015 by Mathias Karlsson to [https://hackerone.com/reports/44146 exploit a CSRF flaw in Vimeo]. A Flash attack can't spoof the Origin or Referer headers, so by checking both of them we believe this combination of checks should prevent Flash bypass CSRF attacks (if any comes up in future). &lt;br /&gt;
&lt;br /&gt;
Besides possible future bypasses such as Flash, using a static header value makes it easy to exploit other state-changing operations in the application (similar to the previous explanation of why exploitation is easier with the origin/referer header check than with token-based mitigation). Including a random token instead of a static header value is more or less equivalent to the token-based approach described in the primary defense section. Developers also need to consider that if this approach is used in an application with both AJAX calls and form tags, it only protects the AJAX calls from CSRF; you would still need to protect &amp;lt;form&amp;gt; tags with approaches described in this document, such as tokens, since setting custom headers on form tags is not possible directly. Also, the CORS configuration should be robust to make this solution work effectively (as custom headers for requests coming from other domains trigger a pre-flight CORS check).&lt;br /&gt;
&lt;br /&gt;
=== User Interaction Based CSRF Defense ===&lt;br /&gt;
&lt;br /&gt;
While none of the techniques referenced here require any user interaction, sometimes it’s easier or more appropriate to involve the user in the transaction to prevent unauthorized operations (forged via CSRF or otherwise). The following are some examples of techniques that can act as strong CSRF defenses when implemented correctly.&lt;br /&gt;
* Re-Authentication (password or stronger)&lt;br /&gt;
* One-time Token&lt;br /&gt;
* CAPTCHA&lt;br /&gt;
While these are very strong CSRF defenses, they create a significant impact on the user experience. For applications that need high security for some operations (password change, money transfer, etc.), these techniques should be used along with token-based mitigation. Please note that tokens by themselves can mitigate CSRF; developers should use these techniques only to achieve additional security for their highly sensitive operations.&lt;br /&gt;
&lt;br /&gt;
== Not So Popular CSRF Mitigations ==&lt;br /&gt;
&lt;br /&gt;
=== Triple Submit Cookie ===&lt;br /&gt;
This mitigation was proposed by John Wilander in 2012 at OWASP AppSec Research. This technique adds an additional step to the double submit cookie approach by verifying whether the request contains two cookies with the same name (note that an attacker needs to write an additional cookie to bypass the double submit cookie mitigation). Though it mitigates the issues discussed in the bypass of the double submit cookie, it introduces new problems such as cookie jar overflow (more details on these issues [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf here] and [https://webstersprodigy.net/2012/08/03/analysis-of-john-wilanders-triple-submit-cookies/ here]). We have not been able to find any real-world implementations of this mitigation so far.&lt;br /&gt;
&lt;br /&gt;
=== Content-Type Header Validation ===&lt;br /&gt;
This technique is better known than the triple submit cookie mitigation. First of all, this header was not designed for security (initial RFC [https://tools.ietf.org/html/rfc1049 here], later well-defined in [https://www.ietf.org/rfc/rfc2045.txt this] RFC) but only to let receiving agents know the type of data they will be handling, so that they can invoke the corresponding parsers. The pre-flighting behavior tied to this header (browsers pre-flight if the header has a value other than application/x-www-form-urlencoded, multipart/form-data, or text/plain) is what is treated as a CSRF mitigation: all requests are forced to carry a header value that triggers a pre-flight (such as application/json), and the server can reject cross-origin requests with CORS/SOP during this pre-flight.&lt;br /&gt;
&lt;br /&gt;
This approach has two main problems. One is that it mandates a header value that forces a pre-flight on all requests regardless of the real use case; the other is that it relies on a feature that was not designed for security to mitigate a security vulnerability. When a bug was discovered in the Chrome API, browser architects even considered removing this pre-flighting behavior. Because this header was not designed as a security control, architects can re-design it to better cater to its primary purpose. In the future, new content-type header values may be introduced (to better support various use cases), which could break systems relying on this header for CSRF mitigation. For more information, see [https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2017/september/common-csrf-prevention-misconceptions/ Common CSRF Prevention Misconceptions].&lt;br /&gt;
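For illustration only, given the caveats above, the pre-flight-forcing check could be sketched as follows; the class and method names are hypothetical.&lt;br /&gt;

```java
// Illustration only (see the caveats above): accept state changing requests
// only when their Content-Type is NOT one of the three "simple" media types
// that browsers may send cross-origin without a CORS pre-flight.
import java.util.Arrays;
import java.util.List;

public class ContentTypeCheck {

    // Media types that do NOT trigger a pre-flight
    private static final List<String> SIMPLE_TYPES = Arrays.asList(
            "application/x-www-form-urlencoded", "multipart/form-data", "text/plain");

    // Returns true when the Content-Type would have forced a pre-flight,
    // meaning a cross-origin sender had to pass the server's CORS policy
    public static boolean forcesPreflight(String contentType) {
        if (contentType == null) {
            return false;
        }
        // Drop parameters such as "; charset=UTF-8" before comparing
        String mediaType = contentType.split(";")[0].trim().toLowerCase();
        return !SIMPLE_TYPES.contains(mediaType);
    }
}
```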
&lt;br /&gt;
== CSRF Mitigation Myths ==&lt;br /&gt;
The following techniques are often presumed to be CSRF mitigations, but none of them actually mitigates a CSRF vulnerability.&lt;br /&gt;
* '''CORS''': CORS is a mechanism designed to relax the Same-Origin-Policy when cross-origin communication between sites is required. It is not designed to prevent CSRF attacks, and it does not.&lt;br /&gt;
* '''Using HTTPS''': Using HTTPS does nothing to protect against CSRF attacks. Resources served over HTTPS are still vulnerable to CSRF if the proper mitigations described above are not in place.&lt;br /&gt;
* More myths can be found [[Cross-Site Request Forgery (CSRF)|here]]&lt;br /&gt;
&lt;br /&gt;
== Personal Safety CSRF Tips for Users ==&lt;br /&gt;
Since CSRF vulnerabilities are reportedly widespread, we recommend using the following best practices to mitigate risk.  &lt;br /&gt;
&lt;br /&gt;
# Logoff immediately after using a Web application.&lt;br /&gt;
# Do not allow your browser to save username/passwords, and do not allow sites to “remember” your login.&lt;br /&gt;
# Do not use the same browser to access sensitive applications and to surf the Internet freely (tabbed browsing).&lt;br /&gt;
# The use of browser extensions such as NoScript makes POST based CSRF vulnerabilities difficult to exploit. This is because JavaScript is used to automatically submit the form when the exploit is loaded. Without JavaScript, the attacker would have to trick the user into submitting the form manually.&lt;br /&gt;
&lt;br /&gt;
Integrated HTML-enabled mail/browser and newsreader/browser environments pose additional risks since simply viewing a mail message or a news message might lead to the execution of an attack. &lt;br /&gt;
== Implementation reference example  ==&lt;br /&gt;
The following JEE web filter provides an example reference for some of the concepts described in this cheat sheet. It implements the following stateless mitigations ([https://github.com/aramrami/OWASP-CSRFGuard OWASP CSRFGuard] covers a stateful approach).&lt;br /&gt;
* Verifying same origin with standard headers&lt;br /&gt;
* Double submit cookie&lt;br /&gt;
* SameSite cookie attribute&lt;br /&gt;
'''Please note''' that it only acts as a reference sample and is not complete (for example, it doesn’t have a block to direct the control flow when the origin and referer header check succeeds, nor does it have port/host/protocol level validation for the referer header). Developers are recommended to build their complete mitigation on top of this reference sample. Developers should also implement standard authentication or authorization checks before checking for CSRF.&lt;br /&gt;
&lt;br /&gt;
Source is also located [https://github.com/righettod/poc-csrf here] and provides a runnable POC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.slf4j.Logger;&lt;br /&gt;
import org.slf4j.LoggerFactory;&lt;br /&gt;
&lt;br /&gt;
import javax.servlet.Filter;&lt;br /&gt;
import javax.servlet.FilterChain;&lt;br /&gt;
import javax.servlet.FilterConfig;&lt;br /&gt;
import javax.servlet.ServletException;&lt;br /&gt;
import javax.servlet.ServletRequest;&lt;br /&gt;
import javax.servlet.ServletResponse;&lt;br /&gt;
import javax.servlet.annotation.WebFilter;&lt;br /&gt;
import javax.servlet.http.Cookie;&lt;br /&gt;
import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
import javax.servlet.http.HttpServletResponseWrapper;&lt;br /&gt;
import javax.xml.bind.DatatypeConverter;&lt;br /&gt;
import java.io.IOException;&lt;br /&gt;
import java.net.MalformedURLException;&lt;br /&gt;
import java.net.URL;&lt;br /&gt;
import java.security.SecureRandom;&lt;br /&gt;
import java.util.Arrays;&lt;br /&gt;
&lt;br /&gt;
/**&lt;br /&gt;
 * Filter in charge of validating each incoming HTTP request about Headers and CSRF token.&lt;br /&gt;
 * It is called for all requests to the backend destination.&lt;br /&gt;
 *&lt;br /&gt;
 * We use the approach in which:&lt;br /&gt;
 * - The CSRF token is changed after each valid HTTP exchange&lt;br /&gt;
 * - The custom Header name for the CSRF token transmission is fixed&lt;br /&gt;
 * - A CSRF token is associated with a backend service URI in order to enable support for multiple parallel Ajax requests from the same application&lt;br /&gt;
 * - The CSRF cookie name is the backend service name prefixed with a fixed prefix&lt;br /&gt;
 *&lt;br /&gt;
 * Here for the POC we show the &amp;quot;access denied&amp;quot; reason in the response, but production code should only return a generic message!&lt;br /&gt;
 *&lt;br /&gt;
 * @see &amp;quot;https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://wiki.mozilla.org/Security/Origin&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://chloe.re/2016/04/13/goodbye-csrf-samesite-to-the-rescue/&amp;quot;&lt;br /&gt;
 */&lt;br /&gt;
@WebFilter(&amp;quot;/backend/*&amp;quot;)&lt;br /&gt;
public class CSRFValidationFilter implements Filter {&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * JVM param name used to define the target origin&lt;br /&gt;
     */&lt;br /&gt;
    public static final String TARGET_ORIGIN_JVM_PARAM_NAME = &amp;quot;target.origin&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Name of the custom HTTP header used to transmit the CSRF token and also to prefix &lt;br /&gt;
     * the CSRF cookie for the expected backend service&lt;br /&gt;
     */&lt;br /&gt;
    private static final String CSRF_TOKEN_NAME = &amp;quot;X-TOKEN&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Logger&lt;br /&gt;
     */&lt;br /&gt;
    private static final Logger LOG = LoggerFactory.getLogger(CSRFValidationFilter.class);&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Application expected deployment domain: named &amp;quot;Target Origin&amp;quot; in OWASP CSRF article&lt;br /&gt;
     */&lt;br /&gt;
    private URL targetOrigin;&lt;br /&gt;
&lt;br /&gt;
    /***&lt;br /&gt;
     * Secure generator&lt;br /&gt;
     */&lt;br /&gt;
    private final SecureRandom secureRandom = new SecureRandom();&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {&lt;br /&gt;
        HttpServletRequest httpReq = (HttpServletRequest) request;&lt;br /&gt;
        HttpServletResponse httpResp = (HttpServletResponse) response;&lt;br /&gt;
        String accessDeniedReason;&lt;br /&gt;
&lt;br /&gt;
        /* STEP 1: Verifying Same Origin with Standard Headers */&lt;br /&gt;
        //Try to get the source from the &amp;quot;Origin&amp;quot; header&lt;br /&gt;
        String source = httpReq.getHeader(&amp;quot;Origin&amp;quot;);&lt;br /&gt;
        if (this.isBlank(source)) {&lt;br /&gt;
            //If empty then fallback on &amp;quot;Referer&amp;quot; header&lt;br /&gt;
            source = httpReq.getHeader(&amp;quot;Referer&amp;quot;);&lt;br /&gt;
            //If this one is empty too then we trace the event and we block the request (recommendation of the article)...&lt;br /&gt;
            if (this.isBlank(source)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: ORIGIN and REFERER request headers are both absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
                return;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //Compare the source against the expected target origin&lt;br /&gt;
        URL sourceURL = new URL(source);&lt;br /&gt;
        if (!this.targetOrigin.getProtocol().equals(sourceURL.getProtocol()) || !this.targetOrigin.getHost().equals(sourceURL.getHost()) &lt;br /&gt;
		|| this.targetOrigin.getPort() != sourceURL.getPort()) {&lt;br /&gt;
            //One of the parts does not match, so we trace the event and we block the request&lt;br /&gt;
            accessDeniedReason = String.format(&amp;quot;CSRFValidationFilter: Protocol/Host/Port do not fully match so we block the request! (%s != %s) &amp;quot;, &lt;br /&gt;
				this.targetOrigin, sourceURL);&lt;br /&gt;
            LOG.warn(accessDeniedReason);&lt;br /&gt;
            httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            return;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        /* STEP 2: Verifying CSRF token using &amp;quot;Double Submit Cookie&amp;quot; approach */&lt;br /&gt;
        //If CSRF token cookie is absent from the request then we provide one in response but we stop the process at this stage.&lt;br /&gt;
        //Using this way we implement the first providing of token&lt;br /&gt;
        Cookie tokenCookie = null;&lt;br /&gt;
        if (httpReq.getCookies() != null) {&lt;br /&gt;
            String csrfCookieExpectedName = this.determineCookieName(httpReq);&lt;br /&gt;
            tokenCookie = Arrays.stream(httpReq.getCookies()).filter(c -&amp;gt; c.getName().equals(csrfCookieExpectedName)).findFirst().orElse(null);&lt;br /&gt;
        }&lt;br /&gt;
        if (tokenCookie == null || this.isBlank(tokenCookie.getValue())) {&lt;br /&gt;
            LOG.info(&amp;quot;CSRFValidationFilter: CSRF cookie absent or value is null/empty so we provide one and return an HTTP NO_CONTENT response !&amp;quot;);&lt;br /&gt;
            //Add the CSRF token cookie and header&lt;br /&gt;
            this.addTokenCookieAndHeader(httpReq, httpResp);&lt;br /&gt;
            //Set response state to &amp;quot;204 No Content&amp;quot; in order to allow the requester to clearly identify an initial response providing the initial CSRF token&lt;br /&gt;
            httpResp.setStatus(HttpServletResponse.SC_NO_CONTENT);&lt;br /&gt;
        } else {&lt;br /&gt;
            //If the cookie is present then we pass to validation phase&lt;br /&gt;
            //Get token from the custom HTTP header (part under control of the requester)&lt;br /&gt;
            String tokenFromHeader = httpReq.getHeader(CSRF_TOKEN_NAME);&lt;br /&gt;
            //If empty then we trace the event and we block the request&lt;br /&gt;
            if (this.isBlank(tokenFromHeader)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header is absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else if (!tokenFromHeader.equals(tokenCookie.getValue())) {&lt;br /&gt;
                //Verify that the token from the header and the one from the cookie are the same&lt;br /&gt;
                //Here it is not the case, so we trace the event and we block the request&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header and via Cookie are not equal so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else {&lt;br /&gt;
                //The token from the header and the one from the cookie match&lt;br /&gt;
                //so we let the request reach the target component (ServiceServlet, JSP...) and add a new token when we get the response back&lt;br /&gt;
                HttpServletResponseWrapper httpRespWrapper = new HttpServletResponseWrapper(httpResp);&lt;br /&gt;
                chain.doFilter(request, httpRespWrapper);&lt;br /&gt;
                //Add the CSRF token cookie and header&lt;br /&gt;
                this.addTokenCookieAndHeader(httpReq, httpRespWrapper);&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void init(FilterConfig filterConfig) throws ServletException {&lt;br /&gt;
        //To ease the configuration, we load the expected target origin from a JVM property&lt;br /&gt;
        //Reconfiguration only requires an application restart, which is generally acceptable&lt;br /&gt;
        try {&lt;br /&gt;
            this.targetOrigin = new URL(System.getProperty(TARGET_ORIGIN_JVM_PARAM_NAME));&lt;br /&gt;
        } catch (MalformedURLException e) {&lt;br /&gt;
            LOG.error(&amp;quot;Cannot init the filter !&amp;quot;, e);&lt;br /&gt;
            throw new ServletException(e);&lt;br /&gt;
        }&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter init, set expected target origin to '{}'.&amp;quot;, this.targetOrigin);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void destroy() {&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter shutdown&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Check if a string is null or empty (including containing only spaces)&lt;br /&gt;
     *&lt;br /&gt;
     * @param s Source string&lt;br /&gt;
     * @return TRUE if source string is null or empty (including containing only spaces)&lt;br /&gt;
     */&lt;br /&gt;
    private boolean isBlank(String s) {&lt;br /&gt;
        return s == null || s.trim().isEmpty();&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Generate a new CSRF token&lt;br /&gt;
     *&lt;br /&gt;
     * @return The token as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String generateToken() {&lt;br /&gt;
        byte[] buffer = new byte[50];&lt;br /&gt;
        this.secureRandom.nextBytes(buffer);&lt;br /&gt;
        return DatatypeConverter.printHexBinary(buffer);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Determine the name of the CSRF cookie for the targeted backend service&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest Source HTTP request&lt;br /&gt;
     * @return The name of the cookie as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String determineCookieName(HttpServletRequest httpRequest) {&lt;br /&gt;
        String backendServiceName = httpRequest.getRequestURI().replaceAll(&amp;quot;/&amp;quot;, &amp;quot;-&amp;quot;);&lt;br /&gt;
        return CSRF_TOKEN_NAME + &amp;quot;-&amp;quot; + backendServiceName;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Add the CSRF token cookie and header to the provided HTTP response object&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest  Source HTTP request&lt;br /&gt;
     * @param httpResponse HTTP response object to update&lt;br /&gt;
     */&lt;br /&gt;
    private void addTokenCookieAndHeader(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {&lt;br /&gt;
        //Get new token&lt;br /&gt;
        String token = this.generateToken();&lt;br /&gt;
        //Add the cookie manually because the current Cookie class implementation does not support the &amp;quot;SameSite&amp;quot; attribute&lt;br /&gt;
        //We leave the adding of the &amp;quot;Secure&amp;quot; cookie attribute to the reverse proxy rewriting...&lt;br /&gt;
        //Here we lock the cookie from JS access and we use the new SameSite attribute protection&lt;br /&gt;
        String cookieSpec = String.format(&amp;quot;%s=%s; Path=%s; HttpOnly; SameSite=Strict&amp;quot;, this.determineCookieName(httpRequest), token, httpRequest.getRequestURI());&lt;br /&gt;
        httpResponse.addHeader(&amp;quot;Set-Cookie&amp;quot;, cookieSpec);&lt;br /&gt;
        //Add a custom response header to give the JS code access to the token&lt;br /&gt;
        httpResponse.setHeader(CSRF_TOKEN_NAME, token);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Authors and Primary Editors  ==&lt;br /&gt;
Manideep Konakandla (Amazon Application Security Team) - http://www.manideepk.com&lt;br /&gt;
&lt;br /&gt;
Dave Wichers - dave.wichers[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Paul Petefish - https://www.linkedin.com/in/paulpetefish&lt;br /&gt;
&lt;br /&gt;
Eric Sheridan - eric.sheridan[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Dominique Righetto - dominique.righetto[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
== Other Cheat Sheets ==&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;br /&gt;
[[Category:Popular]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=246801</id>
		<title>Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;diff=246801"/>
				<updated>2019-01-23T22:38:50Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* CSRF Defense Recommendations Summary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
[[Cross-Site Request Forgery (CSRF)]] is a type of attack that occurs when a malicious web site, email, blog, instant message, or program causes a user’s web browser to perform an unwanted action on a trusted site when the user is authenticated. A CSRF attack works because browser requests automatically include all credentials associated with the site, such as the user's session cookie, IP address, etc. Therefore, if the user is authenticated to the site, the site cannot distinguish between forged and legitimate requests sent by the victim. What is needed is a token/identifier that is not accessible to the attacker and is not sent automatically (the way cookies are) with forged requests that the attacker initiates. For more information on CSRF, see the OWASP [[Cross-Site Request Forgery (CSRF)|Cross-Site Request Forgery (CSRF) page]].&lt;br /&gt;
&lt;br /&gt;
The impact of a successful CSRF attack is limited to the capabilities exposed by the vulnerable application. For example, this attack could result in a transfer of funds, changing a password, or making a purchase with the user’s credentials. In effect, CSRF attacks are used by an attacker to make a target system perform a function via the target's browser, without the user’s knowledge, at least until the unauthorized transaction has been committed.&lt;br /&gt;
&lt;br /&gt;
Impacts of successful CSRF exploits vary greatly based on the privileges of each victim. When targeting a normal user, a successful CSRF attack can compromise end-user data and their associated functions. If the targeted end user is an administrator account, a CSRF attack can compromise the entire web application. Using social engineering, an attacker can embed malicious HTML or JavaScript code into an email or website to request a specific 'task URL'. The task then executes with or without the user's knowledge, either directly or by using a Cross-Site Scripting flaw. For example, see [https://en.wikipedia.org/wiki/Samy_(computer_worm) Samy MySpace Worm].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== What's new? ==&lt;br /&gt;
If you have seen the OWASP [https://www.owasp.org/index.php?title=Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;amp;action=history old CSRF prevention cheat sheets], you can observe that a lot has changed in this newer version. One of the major changes is that the “Verifying same origin with standard headers” CSRF defense has been moved to the Defense in Depth section, whereas token based mitigation moved to the Primary Defense section (technical reasons for this switch were added under the respective sections). Multiple new sections (HMAC based token protection, auto CSRF mitigation techniques, login CSRF, not so popular CSRF mitigations, and CSRF mitigation myths) were added, in addition to adding new content and removing obsolete content from the existing sections. Security issues/caveats associated with each mitigation are also included.&lt;br /&gt;
&lt;br /&gt;
==Warning: No Cross-Site Scripting (XSS) Vulnerabilities ==&lt;br /&gt;
[[Cross-Site Scripting]] is not necessary for CSRF to work. However, any cross-site scripting vulnerability can be used to defeat all CSRF mitigation techniques available in the market today (except mitigation techniques that involve user interaction, described later in this cheat sheet). This is because an XSS payload can simply read any page on the site using an XMLHttpRequest (direct DOM access can be used if on the same page), obtain the generated token from the response, and include that token with a forged request. This technique is exactly how the [https://en.wikipedia.org/wiki/Samy_(computer_worm) MySpace (Samy) worm] defeated MySpace's anti-CSRF defenses in 2005, which enabled the worm to propagate.&lt;br /&gt;
&lt;br /&gt;
It is imperative that no XSS vulnerabilities are present to ensure that CSRF defenses can't be circumvented. Please see the OWASP [[XSS (Cross Site Scripting) Prevention Cheat Sheet|XSS Prevention Cheat Sheet]] for detailed guidance on how to prevent XSS flaws.  &lt;br /&gt;
&lt;br /&gt;
== Resources that need to be protected from CSRF vulnerability ==&lt;br /&gt;
The following list assumes that you are not violating [http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1 RFC2616], section 9.1.1, by using GET requests for state changing operations. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' If for any reason you violate this guideline, you would also need to protect those resources, which are mostly triggered via the default &amp;lt;code&amp;gt;form tag [GET method]&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;href&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;src&amp;lt;/code&amp;gt; attributes.  &lt;br /&gt;
&lt;br /&gt;
* Form tags with POST &lt;br /&gt;
* Ajax/XHR calls&lt;br /&gt;
&lt;br /&gt;
== CSRF Defense Recommendations Summary ==&lt;br /&gt;
We recommend token based CSRF defense (either stateful or stateless) as a primary defense to mitigate CSRF in your applications. Only for highly sensitive operations do we also recommend user interaction based protection (either re-authentication or a one-time token, detailed in section 7.5) along with token based mitigation.&lt;br /&gt;
&lt;br /&gt;
As a defense-in-depth measure, consider implementing one mitigation from the Defense in Depth Mitigations section (choose the mitigation that fits your ecosystem, considering the issues mentioned under each). These defense-in-depth mitigation techniques should not be used by themselves (without token based mitigation) to mitigate CSRF in your applications.&lt;br /&gt;
&lt;br /&gt;
== Primary Defense Technique ==&lt;br /&gt;
&lt;br /&gt;
=== Token Based Mitigation ===&lt;br /&gt;
This defense is one of the most popular and recommended methods to mitigate CSRF. It can be implemented either statefully (synchronizer token pattern) or statelessly (encrypted/HMAC based token pattern). See section 4.3 on how to mitigate login CSRF in your applications. For all the mitigations, it is implicit that general security principles should be adhered to:&lt;br /&gt;
* Strong encryption/HMAC functions should be used.&lt;br /&gt;
'''Note:''' You can select any algorithm per your organizational needs. We recommend AES256-GCM for encryption and SHA256/512 for HMAC.&lt;br /&gt;
* Strict key rotation and token lifetime policies should be maintained. Policies can be set according to your organizational needs. Generic key management guidance from OWASP can be found [[Key Management Cheat Sheet|here]].&lt;br /&gt;
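The HMAC based variant mentioned above can be sketched as follows. This is a hedged, minimal example, not an official OWASP implementation: the class name and the token layout (Base64 MAC, a separator, then the timestamp) are assumptions, and a production version would also reject tokens whose timestamp is too old.&lt;br /&gt;

```java
// Hedged HMAC-SHA256 sketch of the stateless token approach. The server
// re-computes the MAC on each request, so no per-session token storage is
// needed; the timestamp enables a token lifetime policy (freshness check
// omitted here for brevity).
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class HmacCsrfToken {

    // MAC over sessionId + timestamp, using a server-only key
    public static String generate(byte[] key, String sessionId, long timestamp) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        byte[] raw = mac.doFinal((sessionId + "!" + timestamp).getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(raw) + "!" + timestamp;
    }

    // Recompute the MAC from the submitted timestamp and compare in constant time
    public static boolean validate(byte[] key, String sessionId, String token) throws Exception {
        String[] parts = token.split("!");
        if (parts.length != 2) {
            return false;
        }
        String expected = generate(key, sessionId, Long.parseLong(parts[1]));
        return MessageDigest.isEqual(expected.getBytes(StandardCharsets.UTF_8),
                token.getBytes(StandardCharsets.UTF_8));
    }
}
```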
&lt;br /&gt;
==== Synchronizer Token Pattern ====&lt;br /&gt;
Any state changing operation requires a secure random token (e.g., a CSRF token) to prevent CSRF attacks. A CSRF token should be unique per user session, should be a large random value, and should be generated by a cryptographically secure random number generator. The CSRF token is added as a hidden field for forms, as a header/parameter for AJAX calls, and within the URL if the state changing operation occurs via a GET (see the &amp;quot;Disclosure of Token in URL&amp;quot; section below). The server rejects the requested action if the CSRF token fails validation.&lt;br /&gt;
&lt;br /&gt;
In order to facilitate a &amp;quot;transparent but visible&amp;quot; CSRF solution, developers are encouraged to adopt a pattern similar to [http://www.corej2eepatterns.com/Design/PresoDesign.htm Synchronizer Token Pattern] (The original intention of this synchronizer token pattern was to detect duplicate submissions in forms). The synchronizer token pattern requires the generation of random &amp;quot;challenge&amp;quot; tokens that are associated with the user's current session. These challenge tokens are then inserted within the HTML forms and calls associated with sensitive server-side operations. It is the responsibility of the server application to verify the existence and correctness of this token. By including a challenge token with each request, the developer has a strong control to verify that the user actually intended to submit the desired requests. Inclusion of a required security token in HTTP requests associated with sensitive business functions helps mitigate CSRF attacks as successful exploitation assumes the attacker knows the randomly generated token for the target victim's session. &lt;br /&gt;
&lt;br /&gt;
'''Note:''' These tokens aren’t like cookies that are automatically sent with forged requests made from your browser from the attacker website. &lt;br /&gt;
&lt;br /&gt;
This is analogous to the attacker being able to guess the target victim's session identifier. &lt;br /&gt;
&lt;br /&gt;
The following describes a general approach to incorporate challenge tokens within the request.&lt;br /&gt;
&lt;br /&gt;
When a Web application formulates a request, the application should include a hidden input parameter with a common name such as &amp;quot;CSRFToken&amp;quot; for forms, or a header/parameter value for Ajax calls. The value of this token must be randomly generated such that it cannot be guessed by an attacker. Consider leveraging the java.security.SecureRandom class for Java applications to generate a sufficiently long random token. Alternative generation algorithms include the use of 256-bit BASE64 encoded hashes. Developers that choose this generation algorithm must make sure that there is randomness and uniqueness in the data that is hashed to generate the random token.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;html&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;form action=&amp;quot;/transfer.do&amp;quot; method=&amp;quot;post&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;input type=&amp;quot;hidden&amp;quot; name=&amp;quot;CSRFToken&amp;quot; &lt;br /&gt;
value=&amp;quot;OWY4NmQwODE4ODRjN2Q2NTlhMmZlYWEwYzU1YWQwMTVhM2JmNGYxYjJiMGI4MjJjZDE1ZDZMGYwMGEwOA==&amp;quot;&amp;gt;&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/form&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is used for each subsequent request until the session expires. When a request is issued by the end-user, the server-side component must verify the existence and validity of the token in the request compared to the token found in the user session. If the token was not found within the request, or the value provided does not match the value within the user session, then the request should be aborted, and the event logged as a potential CSRF attack in progress.&lt;br /&gt;
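The generation and validation steps described above can be sketched as follows (a minimal illustration, not a drop-in implementation; class and method names are hypothetical). Note the constant-time comparison, which avoids leaking token contents through timing.&lt;br /&gt;

```java
// Minimal sketch of per-session synchronizer token handling (hypothetical
// names): generate the token once with SecureRandom, store it in the
// HttpSession, and compare the submitted copy in constant time.
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class SynchronizerToken {

    private static final SecureRandom RANDOM = new SecureRandom();

    // 256 bits of randomness, URL-safe Base64 for easy embedding in forms/headers
    public static String newToken() {
        byte[] buffer = new byte[32];
        RANDOM.nextBytes(buffer);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(buffer);
    }

    // sessionToken: value stored in the user's session at generation time
    // requestToken: value submitted in the hidden field/header of the request
    public static boolean isValid(String sessionToken, String requestToken) {
        if (sessionToken == null || requestToken == null) {
            return false;
        }
        return MessageDigest.isEqual(sessionToken.getBytes(), requestToken.getBytes());
    }
}
```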
&lt;br /&gt;
To further enhance the security of this proposed design, consider randomizing the CSRF token parameter name and/or value for each request. Implementing this approach results in the generation of per-request tokens as opposed to per-session tokens. This is more secure than per-session tokens as the time range for an attacker to exploit stolen tokens is minimal. However, this may result in usability concerns. For example, the &amp;quot;Back&amp;quot; button browser capability is often hindered as the previous page may contain a token that is no longer valid. Interaction with this previous page will result in a CSRF false positive security event at the server. A few applications that need high security (such as banks) typically implement this approach. You have to check what suits your needs. Regardless of the approach taken, developers are encouraged to protect the CSRF token the same way they protect authenticated session identifiers, such as by using TLS.&lt;br /&gt;
&lt;br /&gt;
'''Existing Synchronizer Implementations'''&lt;br /&gt;
&lt;br /&gt;
Synchronizer token defenses have been built into many frameworks, so we strongly recommend using them first when they are available. External components that add CSRF defenses to existing applications are also recommended. OWASP has the following: &lt;br /&gt;
&lt;br /&gt;
* For Java: OWASP [[CSRF Guard]]&lt;br /&gt;
* For PHP and Apache: [[CSRFProtector Project]]&lt;br /&gt;
&lt;br /&gt;
'''Disclosure of Token in URL'''&lt;br /&gt;
&lt;br /&gt;
Some implementations of synchronizer tokens include the challenge token in GET (URL) requests as well as POST requests. This is often implemented as a result of sensitive server-side operations being invoked as a result of embedded links in the page or other general design patterns. These patterns are often implemented without knowledge of CSRF and an understanding of CSRF prevention design strategies. While this control does help mitigate the risk of CSRF attacks, the unique per-session token is being exposed for GET requests. CSRF tokens in GET requests are potentially leaked at several locations: browser history, log files, network appliances that make a point to log the first line of an HTTP request, and Referer headers if the protected site links to an external site. In the latter case (leaked CSRF token due to the Referer header being parsed by a linked site), it is trivially easy for the linked site to launch a CSRF attack on the protected site, and they will be able to target this attack very effectively, since the Referer header tells them the site as well as the CSRF token. The attack could be run entirely from JavaScript, so that a simple addition of a script tag to the HTML of a site can launch an attack (whether on an originally malicious site or on a hacked site). Additionally, since HTTPS requests from HTTPS contexts will not strip the Referer header (as opposed to HTTPS to HTTP requests) CSRF token leaks via Referer can still happen on HTTPS Applications.&lt;br /&gt;
&lt;br /&gt;
The ideal solution is to include the CSRF token only in POST requests and to modify server-side actions that have a state-changing effect to respond only to POST requests. This is, in fact, what &amp;lt;nowiki&amp;gt;RFC 2616&amp;lt;/nowiki&amp;gt; recommends for GET requests. If sensitive server-side actions are guaranteed to only ever respond to POST requests, then there is no need to include the token in GET requests.&lt;br /&gt;
&lt;br /&gt;
In most JavaEE web applications, however, HTTP method scoping is rarely utilized when retrieving HTTP parameters from a request. Calls to &amp;quot;HttpServletRequest.getParameter&amp;quot; will return a parameter value regardless of whether it was sent via GET or POST. This is not to say that HTTP method scoping cannot be enforced: it can be achieved if a developer explicitly overrides doPost() in the HttpServlet class, or leverages framework-specific capabilities such as the AbstractFormController class in Spring.&lt;br /&gt;
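As a rough, language-neutral illustration of HTTP method scoping (a sketch, not servlet or Spring code; the Request class and transfer_funds action below are hypothetical), a parameter accessor can refuse to serve a state-changing parameter unless the request arrived via POST:&lt;br /&gt;

```python
# Hypothetical sketch of HTTP method scoping. Unlike an accessor that
# returns a parameter regardless of HTTP verb, this one is scoped to
# the POST body and refuses state-changing access on GET requests.

class Request:
    def __init__(self, method, get_params=None, post_params=None):
        self.method = method.upper()
        self._get = get_params or {}
        self._post = post_params or {}

    def post_param(self, name):
        # Only serve the parameter when the request came in as POST.
        if self.method != "POST":
            raise PermissionError("state-changing action requires POST")
        return self._post.get(name)

def transfer_funds(request):
    # A state-changing action: reads its input via the POST-scoped accessor,
    # so invoking it through a GET link fails.
    amount = request.post_param("amount")
    return f"transferred {amount}"
```

With this scoping in place, a forged GET link cannot trigger the transfer, so the CSRF token no longer needs to appear in URLs.&lt;br /&gt;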
&lt;br /&gt;
For these cases, attempting to retrofit this pattern in existing applications requires significant development time and cost, and as a temporary measure it may be better to pass CSRF tokens in the URL. Once the application has been fixed to respond to HTTP GET and POST verbs correctly, CSRF tokens for GET requests should be turned off.&lt;br /&gt;
&lt;br /&gt;
==== Encryption based Token Pattern ====&lt;br /&gt;
The Encrypted Token Pattern validates tokens by decryption rather than by comparison. It is most suitable for applications that do not want to maintain any state on the server side. &lt;br /&gt;
&lt;br /&gt;
After successful authentication, the server generates a unique token composed of the user's ID, a timestamp and a [http://en.wikipedia.org/wiki/Cryptographic_nonce nonce], encrypted using a key available only on the server. This token is returned to the client and embedded in a hidden field for forms, or in a request header/parameter for AJAX requests. On receipt of a request, the server decrypts the token value with the same key used to create it. Inability to decrypt correctly suggests an intrusion attempt. Once decrypted, the UserId and timestamp contained within the token are validated: the UserId is compared against the currently logged-in user, and the timestamp is compared against the current time.&lt;br /&gt;
&lt;br /&gt;
On successful token-decryption, the server has access to parsed values, ideally in the form of [http://en.wikipedia.org/wiki/Claims-based_identity claims]. These claims are processed by comparing the UserId claim to any potentially stored UserId (in a Cookie or Session variable, if the site already contains a means of authentication). The Timestamp is validated against the current time, preventing replay attacks. Alternatively, in the case of a CSRF attack, the server will be unable to decrypt the poisoned token, and can block and log the attack.&lt;br /&gt;
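The flow above can be sketched as follows. This is a minimal illustration, assuming the third-party Python 'cryptography' package (its Fernet recipe provides authenticated encryption); the function names are illustrative, not from any standard:&lt;br /&gt;

```python
# Sketch of the Encrypted Token Pattern using the (assumed available)
# 'cryptography' package. Key, function names, and max_age are illustrative.
import json
import secrets
import time

from cryptography.fernet import Fernet

KEY = Fernet.generate_key()   # in production: a key held only on the server
fernet = Fernet(KEY)

def issue_token(user_id):
    # Token claims: user's ID, a timestamp, and a nonce, as described above.
    payload = {"uid": user_id, "ts": time.time(), "nonce": secrets.token_hex(8)}
    return fernet.encrypt(json.dumps(payload).encode())

def validate_token(token, current_user_id, max_age_seconds=600):
    try:
        claims = json.loads(fernet.decrypt(token))
    except Exception:
        return False  # failure to decrypt suggests a forged/poisoned token
    if claims["uid"] != current_user_id:
        return False  # UserId claim must match the logged-in user
    return (time.time() - claims["ts"]) <= max_age_seconds  # replay window
```

A forged request carries a token the server cannot decrypt (or whose claims do not match), so it can be blocked and logged without any server-side token storage.&lt;br /&gt;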
&lt;br /&gt;
This technique addresses some of the shortfalls of other stateless approaches, such as the need to store data in a cookie, sidestepping the cookie-subdomain and HttpOnly issues.&lt;br /&gt;
&lt;br /&gt;
==== HMAC Based Token Pattern ====&lt;br /&gt;
[https://en.wikipedia.org/wiki/HMAC HMAC (hash-based message authentication code)] is a cryptographic function that helps to guarantee integrity and authentication of a message. It is another way that CSRF mitigation can be achieved without maintaining any state at the server and is similar to an encryption token-based pattern with two main differences:&lt;br /&gt;
* Uses a strong HMAC function instead of an encryption function to generate the token&lt;br /&gt;
* Includes an additional field called ‘operation’ that indicates the purpose of the operation for which the CSRF token is being included (whether a form tag or an AJAX call) &lt;br /&gt;
(Ex: ‘oneclickpurchase’ (or) buy/asin=SDFH&amp;amp;category=2&amp;amp;quantity=3)&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Fields mentioned in encryption token pattern (user's ID, a timestamp value and a nonce) are included. &lt;br /&gt;
&lt;br /&gt;
The operation field mitigates the fact that a hash function generates the same value for the same input every time (unlike strong encryption functions, which generate different ciphertexts each time the same data is encrypted). It therefore helps avoid repeated token values across your application. The nonce field serves the same purpose as in the encrypted token pattern (i.e., to avoid rare collisions due to weak cryptographic functions) and acts as a defense-in-depth measure. &lt;br /&gt;
&lt;br /&gt;
Generate the token using an HMAC over all four fields mentioned previously (user's ID, a timestamp, a nonce, and the operation) and then include it in hidden fields for form tags, or in headers/parameters for AJAX calls. Once you receive the HMAC from the client in a request, re-generate the HMAC from the same fields used to create it, and verify that the re-generated HMAC matches the one received from the client. If it does, it is a legitimate user request; if it does not, flag it as a CSRF intrusion and alert your incident response teams. Because an attacker has no visibility into the key used to generate the HMAC, there is no way for them to re-generate it for use in a forged request.&lt;br /&gt;
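The generate/verify cycle can be sketched with the standard library alone. This is an illustrative sketch (field layout, separator, and function names are assumptions, not a standard), using SHA-256 as the HMAC hash:&lt;br /&gt;

```python
# Sketch of the HMAC Based Token Pattern using only the Python stdlib.
# The "ts|nonce|mac" wire format and function names are illustrative.
import hashlib
import hmac
import secrets
import time

SECRET_KEY = secrets.token_bytes(32)  # held only on the server

def make_csrf_token(user_id, operation, ts=None, nonce=None):
    ts = ts if ts is not None else int(time.time())
    nonce = nonce or secrets.token_hex(8)
    msg = f"{user_id}|{ts}|{nonce}|{operation}".encode()
    mac = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    # ts and nonce travel with the token so the server can re-compute the HMAC
    return f"{ts}|{nonce}|{mac}"

def verify_csrf_token(token, user_id, operation):
    try:
        ts, nonce, mac = token.split("|")
    except ValueError:
        return False
    expected = hmac.new(SECRET_KEY,
                        f"{user_id}|{ts}|{nonce}|{operation}".encode(),
                        hashlib.sha256).hexdigest()
    # constant-time comparison avoids leaking the MAC via timing
    return hmac.compare_digest(mac, expected)
```

Note how changing either the user or the operation invalidates the token, which is what scopes each token to one purpose.&lt;br /&gt;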
&lt;br /&gt;
=== Auto CSRF Mitigation Techniques ===&lt;br /&gt;
Though token-based mitigation is widely used (stateful with the synchronizer token, stateless with the encrypted/HMAC token), the major problem with these techniques is the human tendency to forget things. If a developer forgets to add the token to any state-changing operation, the application becomes vulnerable to CSRF. To avoid this, you can try to automate the process of adding tokens to CSRF-vulnerable resources (mentioned earlier in this document). You can achieve this by doing the following:&lt;br /&gt;
* Write wrappers (that auto-add tokens when used) around default form tags/AJAX calls and educate your developers to use those wrappers instead of the standard tags. Though this approach is better than depending purely on developers to add tokens, it is still vulnerable to developers forgetting. [https://docs.spring.io/spring-security/site/docs/3.2.0.CI-SNAPSHOT/reference/html/csrf.html Spring Security] uses this technique to add CSRF tokens by default when a custom &amp;lt;form:form&amp;gt; tag is used; you can opt to use it after verifying that it is enabled and properly configured in the Spring Security version you are using.&lt;br /&gt;
* Write a hook (that would capture the traffic and add tokens to CSRF vulnerable resources before rendering to customers) in your organizational web rendering frameworks. Because it is hard to analyze when a particular response is doing any state change (and thus needing a token), you might want to include tokens in all CSRF vulnerable resources (ex: include tokens in all POST responses). This is one recommended approach, but you need to consider the performance costs it might incur.&lt;br /&gt;
* Get the tokens automatically added on the client side when the page is being rendered in the user's browser, with the help of a client-side script (this approach is used by [[CSRF Guard]]). You need to consider any possible JavaScript hijacking attacks.&lt;br /&gt;
We recommend researching whether the framework you are using has an option to achieve CSRF protection by default before trying to build a custom auto-tokening system. For example, .NET has [https://docs.microsoft.com/en-us/aspnet/core/security/anti-request-forgery?view=aspnetcore-2.1 built-in protection] that adds tokens to CSRF-vulnerable resources. You are responsible for proper configuration (such as key management and token management) before using these built-in CSRF protections that auto-token CSRF-vulnerable resources.&lt;br /&gt;
&lt;br /&gt;
=== Login CSRF ===&lt;br /&gt;
Most developers tend to ignore CSRF vulnerabilities on login forms, assuming that CSRF is not applicable there because the user is not yet authenticated. That assumption is false. A CSRF vulnerability can still occur on login forms where the user is not authenticated, but the impact and risk are quite different from those of a general CSRF vulnerability (when a user is authenticated).&lt;br /&gt;
&lt;br /&gt;
With a CSRF vulnerability on a login form, an attacker can make a victim log in as the attacker and then learn the victim's behavior from their searches. For more information about login CSRF and other risks, see section 3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf this] paper.&lt;br /&gt;
&lt;br /&gt;
Login CSRF can be mitigated by creating pre-sessions (sessions before a user is authenticated) and including tokens in the login form. You can use any of the techniques mentioned above to generate tokens. Pre-sessions can be transitioned to real sessions once the user is authenticated. This technique is described in [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery], section 4.1.&lt;br /&gt;
&lt;br /&gt;
If sub-domains under your master domain are treated as untrusted in your threat model, login CSRF is difficult to mitigate. In these cases, strict validation of the Referer header at the subdomain and path level (detailed in section 6.1) can mitigate CSRF on login forms to an extent; this works because most login pages are served over HTTPS (so the Referer is not stripped) and are also linked from home pages.&lt;br /&gt;
&lt;br /&gt;
== Defense In Depth Techniques ==&lt;br /&gt;
&lt;br /&gt;
=== Verifying origin with standard headers ===&lt;br /&gt;
This defense technique is specifically proposed in section 5.0 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. This paper first proposed the creation of the Origin header and its use as a CSRF defense mechanism.&lt;br /&gt;
&lt;br /&gt;
There are two steps to this mitigation, both of which rely on examining an HTTP request header value.&lt;br /&gt;
&lt;br /&gt;
1. Determine the origin the request is coming from (source origin). This can be done via the Origin and/or Referer headers.&lt;br /&gt;
&lt;br /&gt;
2. Determine the origin the request is going to (target origin).&lt;br /&gt;
&lt;br /&gt;
On the server side, we verify whether the two match. If they do, we accept the request as legitimate (meaning it's a same-origin request); if they don't, we discard it (meaning the request originated cross-domain). These headers can be relied upon because they cannot be altered programmatically (e.g., using JavaScript in an XSS attack): they fall under the [https://developer.mozilla.org/en-US/docs/Glossary/Forbidden_header_name forbidden headers] list (i.e., only browsers can set them).&lt;br /&gt;
&lt;br /&gt;
====Identifying Source Origin (via Origin/Referer header) ====&lt;br /&gt;
'''Checking the Origin Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is present, verify that its value matches the target origin. Unlike the Referer, the Origin header will be present in HTTP requests that originate from an HTTPS URL.&lt;br /&gt;
&lt;br /&gt;
'''Checking the Referer Header'''&lt;br /&gt;
&lt;br /&gt;
If the Origin header is not present, verify that the hostname in the Referer header matches the target origin. This method of CSRF mitigation is also commonly used with unauthenticated requests, such as requests made prior to establishing the session state that is required to keep track of a synchronization token.&lt;br /&gt;
&lt;br /&gt;
In both cases, make sure the target origin check is strict. For example, if your site is &amp;quot;site.com&amp;quot;, make sure &amp;quot;site.com.attacker.com&amp;quot; does not pass your origin check (i.e., match against the entire origin, not just a prefix).&lt;br /&gt;
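A minimal sketch of such a strict comparison (function names are illustrative; it compares scheme, full hostname, and port rather than a hostname prefix):&lt;br /&gt;

```python
# Sketch of a strict origin comparison. Matching the full (scheme, host, port)
# triple prevents "site.com.attacker.com" from passing a "site.com" check.
from urllib.parse import urlsplit

def _default_port(scheme):
    return {"http": 80, "https": 443}.get(scheme)

def same_origin(source_origin, target_origin):
    src = urlsplit(source_origin)
    tgt = urlsplit(target_origin)
    return (src.scheme, src.hostname, src.port or _default_port(src.scheme)) == \
           (tgt.scheme, tgt.hostname, tgt.port or _default_port(tgt.scheme))
```

Substring checks like `"site.com" in origin` are exactly the kind of prefix/suffix matching this function is meant to avoid.&lt;br /&gt;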
&lt;br /&gt;
If neither of these headers is present, you can either accept or block the request. We recommend '''blocking'''. Alternatively, you might want to log all such instances, monitor their use cases/behavior, and then start blocking requests only once you have enough confidence.&lt;br /&gt;
&lt;br /&gt;
==== Identifying the Target Origin ====&lt;br /&gt;
You might think it's easy to determine the target origin, but it frequently isn't. The first thought is to simply grab the target origin (i.e., its hostname and port) from the URL in the request. However, the application server frequently sits behind one or more proxies, and the original URL differs from the URL the app server actually receives. If your application server is directly accessed by its users, then using the origin from the URL is fine, and you're all set.&lt;br /&gt;
&lt;br /&gt;
If you are behind a proxy, there are a number of options to consider.&lt;br /&gt;
* '''Configure your application to simply know its target origin:''' It's your application, so you can find its target origin and set that value in some server configuration entry. This is the most secure approach, as the value is defined server-side and is therefore trusted. However, it might be problematic to maintain if your application is deployed in many places, e.g., dev, test, QA, production, and possibly multiple production instances. Setting the correct value for each of these situations might be difficult, but if you can do it via some central configuration that your instances pull the value from, that's great! ('''Note:''' Make sure the centralized configuration store is maintained securely, because a major part of your CSRF defense depends on it.)&lt;br /&gt;
&lt;br /&gt;
* '''Use the Host header value:''' If you prefer that the application find its own target, so it doesn't have to be configured for each deployed instance, we recommend using the Host family of headers. The Host header's purpose is to contain the target origin of the request. But if your app server sits behind a proxy, the Host header value is most likely changed by the proxy to the target origin of the URL behind it, which differs from the original URL. This modified Host header origin won't match the source origin in the original Origin or Referer headers.&lt;br /&gt;
&lt;br /&gt;
* '''Use the X-Forwarded-Host header value:''' To avoid the issue of the proxy altering the Host header, there is another header, X-Forwarded-Host, whose purpose is to contain the original Host header value the proxy received. Most proxies pass along the original Host header value in the X-Forwarded-Host header, so that header value is likely to be the target origin value you need to compare to the source origin in the Origin or Referer header.&lt;br /&gt;
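Putting the three options above together, one hypothetical resolution order might look like the following sketch (the function name, header dict, and default scheme are assumptions for illustration):&lt;br /&gt;

```python
# Sketch of target-origin resolution, in decreasing order of trust:
# 1. explicit server-side configuration (most trustworthy)
# 2. X-Forwarded-Host, carrying the original Host header past a proxy
# 3. Host header (may have been rewritten by a proxy)
def resolve_target_origin(headers, configured_origin=None, scheme="https"):
    if configured_origin:
        return configured_origin
    host = headers.get("X-Forwarded-Host") or headers.get("Host")
    return f"{scheme}://{host}" if host else None
```

Whatever order you choose, the result is then compared against the source origin taken from the Origin or Referer header.&lt;br /&gt;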
&lt;br /&gt;
Earlier versions of this cheat sheet treated this mitigation as a primary defense. For the reasons mentioned below, it has been moved to the Defense in Depth section.&lt;br /&gt;
&lt;br /&gt;
Implicitly, this mitigation works only when the Origin or Referer headers are present in requests. Though these headers are included the '''majority''' of the time, there are a few use cases where they are not (most of them legitimate, to safeguard user privacy or to accommodate browser ecosystems). The following lists some use cases:&lt;br /&gt;
* Internet Explorer 11 does not add the Origin header on a CORS request across sites of a trusted zone. The Referer header will remain the only indication of the UI origin. See the following references in stackoverflow [https://stackoverflow.com/questions/20784209/internet-explorer-11-does-not-add-the-origin-header-on-a-cors-request here] and [https://github.com/silverstripe/silverstripe-graphql/issues/118 here].&lt;br /&gt;
* In an instance following a [https://stackoverflow.com/questions/22397072/are-there-any-browsers-that-set-the-origin-header-to-null-for-privacy-sensitiv 302 redirect cross-origin], Origin is not included in the redirected request because that may be considered sensitive information that should not be sent to the other origin.&lt;br /&gt;
* There are some [https://wiki.mozilla.org/Security/Origin#Privacy-Sensitive_Contexts privacy contexts] where Origin is set to “null”. For example, see the search results [https://www.google.com/search?q=origin+header+sent+null+value+site%3Astackoverflow.com&amp;amp;oq=origin+header+sent+null+value+site%3Astackoverflow.com here].&lt;br /&gt;
* The Origin header is included in all cross-origin requests, but for same-origin requests, most browsers only include it in POST/DELETE/PUT requests. '''Note:''' Although it is not ideal, many developers use GET requests for state-changing operations.&lt;br /&gt;
* The Referer header is no exception: there are multiple use cases where it is omitted as well ([https://stackoverflow.com/questions/6880659/in-what-cases-will-http-referer-be-empty &amp;lt;nowiki&amp;gt;[1]&amp;lt;/nowiki&amp;gt;], [https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referer &amp;lt;nowiki&amp;gt;[2]&amp;lt;/nowiki&amp;gt;], [https://en.wikipedia.org/wiki/HTTP_referer#Referer_hiding &amp;lt;nowiki&amp;gt;[3]&amp;lt;/nowiki&amp;gt;], [https://seclab.stanford.edu/websec/csrf/csrf.pdf &amp;lt;nowiki&amp;gt;[4]&amp;lt;/nowiki&amp;gt;] and [https://www.google.com/search?q=referrer+header+sent+null+value+site:stackoverflow.com &amp;lt;nowiki&amp;gt;[5]&amp;lt;/nowiki&amp;gt;]). Load balancers, proxies and embedded network devices are also well known to strip the Referer header before logging requests, for privacy reasons.&lt;br /&gt;
&lt;br /&gt;
Though exceptions can be written into your source and target origin check logic for the cases above, there is currently no central repository that references all such use cases (and even if there were one, keeping it up to date would be a problem). Each browser might also handle these use cases differently (browsers are known to behave differently depending on their ecosystem; IE not sending the Origin header within a trusted zone is one such example). Rejecting requests that do not contain Origin and/or Referer headers might sound like a good idea, but it can impact legitimate users. Running this check in monitoring mode, investigating use cases such as those above, and then adding them to your exception logic is a process you may consider, to make this defense more stable in your environment.&lt;br /&gt;
&lt;br /&gt;
This CSRF defense relies on browser behavior that can change over time; for example, when new privacy contexts are discovered, you have to keep your validation logic updated. With token-based mitigation, by contrast, you have full control over the CSRF mitigation: for browsers to alter CSRF tokens, they would literally have to change the HTML content of rendered pages (which no browser would ever want to do!).&lt;br /&gt;
&lt;br /&gt;
When there is an XSS vulnerability on a page of an application protected only by Origin and/or Referer header checks, the effort required to exploit state-changing operations (the ones typically vulnerable to CSRF) on other pages is very low: grab the parameters and forge a request, since browsers include the Origin and Referer headers by default. Compare this to token-based mitigation, where the attacker needs to download the target page, parse the DOM for the token, construct the forged request, and send it to the server.&lt;br /&gt;
&lt;br /&gt;
'''Note:''' Although the concept of the Origin header stemmed from [https://seclab.stanford.edu/websec/csrf/csrf.pdf this] paper on robust CSRF defenses, the initial [https://tools.ietf.org/html/rfc6454 Origin header RFC] does not mention CSRF mitigation in any way (another [https://tools.ietf.org/id/draft-abarth-origin-03.html draft version] does, however).&lt;br /&gt;
&lt;br /&gt;
=== Double Submit Cookie ===&lt;br /&gt;
If maintaining state for the CSRF token on the server side is problematic, an alternative defense is the double submit cookie technique. This technique is easy to implement and is stateless. In this technique, we send a random value both in a cookie and as a request parameter, with the server verifying that the cookie value and the request value match. When a user visits the site (even before authenticating, to prevent login CSRF), the site should generate a (cryptographically strong) pseudorandom value and set it as a cookie on the user's machine, separate from the session identifier. The site then requires every transaction request to include this pseudorandom value as a hidden form value (or other request parameter/header). If the two match on the server side, the server accepts the request as legitimate; if they don't, it rejects the request.&lt;br /&gt;
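The server-side check reduces to comparing two copies of the same value. A minimal sketch (function names are illustrative):&lt;br /&gt;

```python
# Sketch of the double submit cookie check. The same random value is set
# as a cookie and echoed back in a hidden form field / request parameter.
import hmac
import secrets

def issue_double_submit_value():
    # cryptographically strong pseudorandom value, set as a cookie
    # separate from the session identifier
    return secrets.token_urlsafe(32)

def check_double_submit(cookie_value, request_param_value):
    if not cookie_value or not request_param_value:
        return False
    # constant-time comparison avoids leaking the value via timing
    return hmac.compare_digest(cookie_value, request_param_value)
```

Note this sketch does not address the cookie-overwrite attacks discussed below; those are properties of how cookies are scoped, not of the comparison itself.&lt;br /&gt;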
&lt;br /&gt;
There’s a belief that this technique would work because a cross origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy. This means that while an attacker can force a victim to send any value with a malicious CSRF request, the attacker will be unable to modify or read the value stored in the cookie (with which the server compares the token value).&lt;br /&gt;
&lt;br /&gt;
The assumptions made here have a couple of drawbacks, chiefly the need to trust subdomains and to properly configure the whole site to accept HTTPS connections only. The [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf Black Hat talk] by Rich Lundeen describes these drawbacks:&lt;br /&gt;
&lt;br /&gt;
&amp;quot;''With double submit, if an attacker can write a cookie they can obviously defeat the protection. And again, writing cookies is significantly easier then reading them. The fact that cookies can be written is difficult for many people to understand. After all, doesn't the same origin policy specify that one domain cannot access cookies from another domain? However, there are two common scenarios where writing cookies across domains is possible:''&lt;br /&gt;
&lt;br /&gt;
''a)   While it's true that hellokitty.marketing.example.com cannot read cookies or access the DOM from secure.example.com because of the same origin policy, hellokitty.marketing.example.com can write cookies to the parent domain (example.com), and these cookies are then consumed by secure.example.com (secure.example.com has no good way to distinguish which site set the cookie). Additionally, there are methods of forcing secure.example.com to always accept your cookie first. What this means is that XSS in hellokitty.marketing.example.com is able to overwrite cookies in secure.example.com.''&lt;br /&gt;
&lt;br /&gt;
''b)   If an attacker is in the middle, they can usually force a request to the same domain over HTTP. If an application is hosted at &amp;lt;nowiki&amp;gt;https://secure.example.com&amp;lt;/nowiki&amp;gt;, even if the cookies are set with the secure flag, a man in the middle can force connections to &amp;lt;nowiki&amp;gt;http://secure.example.com&amp;lt;/nowiki&amp;gt; and set (overwrite) any arbitrary cookies (even though the secure flag prevents the attacker from reading those cookies). Even if the HSTS header is set on the server and the browser visiting the site supports HSTS (this would prevent a man in the middle from forcing plaintext HTTP requests) unless the HSTS header is set in a way that includes all subdomains, a man in the middle can simply force a request to a separate subdomain and overwrite cookies similar to 1. In other words, as long as &amp;lt;nowiki&amp;gt;http://hellokitty.marketing.example.com&amp;lt;/nowiki&amp;gt; doesn't force https, then an attacker can overwrite cookies on any example.com subdomain.''&amp;quot;&lt;br /&gt;
&lt;br /&gt;
So, unless you are sure that your subdomains are fully secured and only accept HTTPS connections (we believe this is difficult to guarantee at large enterprises), you should not rely on the double submit cookie technique as a primary mitigation for CSRF.&lt;br /&gt;
&lt;br /&gt;
A variant of the double submit cookie that mitigates both of the risks mentioned above is to include the token in an encrypted cookie - often the authentication cookie - and then, on the server side, match it (after decrypting the authentication cookie) against the token in the hidden form field or the parameter/header for AJAX calls. This works because a subdomain has no way to overwrite a properly crafted encrypted cookie without the necessary information, such as the encryption key.&lt;br /&gt;
&lt;br /&gt;
=== Samesite Cookie Attribute ===&lt;br /&gt;
SameSite is a cookie attribute (similar to HttpOnly, Secure, etc.) introduced by Google to mitigate CSRF attacks. It is defined in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 this] Internet Draft. This attribute prevents the browser from sending the cookie along with cross-site requests. Possible values for this attribute are lax and strict.&lt;br /&gt;
&lt;br /&gt;
The strict value will prevent the cookie from being sent by the browser to the target site in all cross-site browsing contexts, even when following a regular link. For example, for a GitHub-like website this would mean that if a logged-in user follows a link to a private GitHub project posted on a corporate discussion forum or email, GitHub will not receive the session cookie and the user will not be able to access the project. A bank website, however, most likely doesn't want to allow any transactional pages to be linked from external sites, so the strict flag would be most appropriate there.&lt;br /&gt;
&lt;br /&gt;
The default lax value provides a reasonable balance between security and usability for websites that want to maintain a user's logged-in session after the user arrives from an external link. In the above GitHub scenario, the session cookie would be allowed when following a regular link from an external website, while being blocked in CSRF-prone request methods such as POST. The only cross-site requests allowed in lax mode are top-level navigations that also use &amp;quot;safe&amp;quot; HTTP methods (more details [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7.1 here]).&lt;br /&gt;
&lt;br /&gt;
Example of cookies using this attribute:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Strict&lt;br /&gt;
Set-Cookie: JSESSIONID=xxxxx; SameSite=Lax&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Support for this attribute in different browsers is increasing, but some browsers have yet to adopt it. As of August 2018, the SameSite attribute was supported by the browsers of 68.92% of Internet users (detailed statistics [https://caniuse.com/#feat=same-site-cookie-attribute here]).&lt;br /&gt;
&lt;br /&gt;
Though this technique appears effective at mitigating CSRF attacks, it is still in its early stages (in [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft]) and does not yet have full browser support, as mentioned above. Google's [https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-02#section-5.3.7 draft] also mentions a couple of cases where forged requests can be made to appear as same-site requests (thus causing SameSite cookies to be sent).&lt;br /&gt;
&lt;br /&gt;
Considering the factors above, SameSite is not recommended as a primary defense. Google agrees with this stance and strongly encourages developers to deploy server-side defenses such as tokens to mitigate CSRF more fully.&lt;br /&gt;
&lt;br /&gt;
=== Use of Custom Request Headers ===&lt;br /&gt;
&lt;br /&gt;
Adding CSRF tokens, a double submit cookie and value, an encrypted token, or another defense that involves changing the UI can frequently be complex or otherwise problematic. An alternate defense that is particularly well suited for AJAX/XHR endpoints is the use of a custom request header. This defense relies on the [https://en.wikipedia.org/wiki/Same-origin_policy same-origin policy (SOP)] restriction that only JavaScript can be used to add a custom header, and only within its own origin. By default, browsers do not allow JavaScript to attach custom headers to cross-origin requests.&lt;br /&gt;
&lt;br /&gt;
A particularly attractive custom header and value to use is &amp;quot;X-Requested-With: XMLHttpRequest&amp;quot;, because most JavaScript libraries already add this header to the requests they generate by default. However, some do not; for example, AngularJS used to, but no longer does. For more information, see [https://github.com/angular/angular.js/commit/3a75b1124d062f64093a90b26630938558909e8d their rationale] and how to add it back.&lt;br /&gt;
&lt;br /&gt;
If this is the case for your system, you can simply verify the presence of this header and value on all your server-side AJAX endpoints to protect against CSRF attacks. This approach has the double advantage of usually requiring no UI changes and not introducing any server-side state, which is particularly attractive for REST services. You can always add your own custom header and value if that is preferred.&lt;br /&gt;
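The server-side check is a one-liner. A minimal sketch (the function name and header dict are illustrative):&lt;br /&gt;

```python
# Sketch of the custom request header check for AJAX endpoints.
# A cross-origin page cannot attach this header without triggering a
# CORS preflight, which the server can refuse.
def is_ajax_request_allowed(headers):
    return headers.get("X-Requested-With") == "XMLHttpRequest"
```

Apply the check to every state-changing AJAX endpoint; requests lacking the header are rejected before any processing.&lt;br /&gt;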
&lt;br /&gt;
This defense technique is specifically discussed in section 4.3 of [https://seclab.stanford.edu/websec/csrf/csrf.pdf Robust Defenses for Cross-Site Request Forgery]. However, bypasses of this defense using Flash were documented as early as 2008, and again as recently as 2015, when Mathias Karlsson used one to [https://hackerone.com/reports/44146 exploit a CSRF flaw in Vimeo]. A Flash attack can't spoof the Origin or Referer headers, so by checking both of them we believe this combination of checks should prevent Flash-based CSRF bypasses (should any come up in the future). &lt;br /&gt;
&lt;br /&gt;
Beyond possible future bypasses such as Flash, using a static header value makes it easier to exploit other state-changing operations in the application (similar to the earlier explanation of why exploitation is easier with the Origin/Referer header check than with token-based mitigation). Including a random token instead of a static header value is more or less equivalent to the token-based approach described in the primary defense section. Developers also need to consider that, in an application with both AJAX calls and form tags, this technique only protects the AJAX calls from CSRF; you would still need to protect &amp;lt;form&amp;gt; tags with approaches described in this document, such as tokens, because setting custom headers on form tags is not directly possible. Finally, the CORS configuration must also be robust for this solution to work effectively (as custom headers on requests coming from other domains trigger a pre-flight CORS check).&lt;br /&gt;
&lt;br /&gt;
=== User Interaction Based CSRF Defense ===&lt;br /&gt;
&lt;br /&gt;
While none of the techniques referenced here require user interaction, sometimes it's easier or more appropriate to involve the user in the transaction to prevent unauthorized operations (forged via CSRF or otherwise). The following are some examples of techniques that can act as strong CSRF defenses when implemented correctly.&lt;br /&gt;
* Re-Authentication (password or stronger)&lt;br /&gt;
* One-time Token&lt;br /&gt;
* CAPTCHA&lt;br /&gt;
While these are very strong CSRF defenses, they have a significant impact on the user experience. For applications that need high security for some operations (password change, money transfer, etc.), these techniques should be used along with token-based mitigation. Note that tokens by themselves can mitigate CSRF; developers should use these techniques only to achieve additional security for their highly sensitive operations.&lt;br /&gt;
&lt;br /&gt;
== Not So Popular CSRF Mitigations ==&lt;br /&gt;
&lt;br /&gt;
=== Triple Submit Cookie ===&lt;br /&gt;
This mitigation was proposed by John Wilander at OWASP AppSec Research 2012. It adds an additional step to the double submit cookie approach by verifying whether the request contains two cookies with the same name (note that an attacker needs to write an additional cookie to bypass the double submit cookie mitigation). Though it mitigates the issues discussed in the double submit cookie bypass, it introduces new problems such as cookie jar overflow (further details [https://media.blackhat.com/eu-13/briefings/Lundeen/bh-eu-13-deputies-still-confused-lundeen-wp.pdf here] and [https://webstersprodigy.net/2012/08/03/analysis-of-john-wilanders-triple-submit-cookies/ here]). We have not been able to find any real-world implementations of this mitigation so far.&lt;br /&gt;
&lt;br /&gt;
=== Content-Type Header Validation ===&lt;br /&gt;
This technique is better known than the triple submit cookie mitigation. First of all, this header was not designed for security (see the initial RFC [https://tools.ietf.org/html/rfc1049 here], later refined in [https://www.ietf.org/rfc/rfc2045.txt this] RFC) but only to let receiving agents know the type of data they are handling, so that they can invoke the corresponding parsers. The pre-flighting behavior tied to this header (a pre-flight occurs if the header has a value other than application/x-www-form-urlencoded, multipart/form-data, or text/plain) is what is treated as a CSRF mitigation: all requests are forced to carry a header value that triggers a pre-flight (such as application/json), and the server side can reject cross-origin requests with CORS/SOP during this pre-flight.&lt;br /&gt;
&lt;br /&gt;
This approach has two main problems. First, it mandates that all requests carry a header value that forces a pre-flight, regardless of the real use case. Second, it relies on a feature that was not designed for security in order to mitigate a security vulnerability. When a bug was discovered in the Chrome API, browser architects even considered removing this pre-flighting behavior. Because this header was not designed as a security control, architects may redesign it to better serve its primary purpose. In the future, new content-type header values may be introduced (to better support various use cases), which could put systems relying on this header for CSRF mitigation in trouble. For more information, see [https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2017/september/common-csrf-prevention-misconceptions/ Common CSRF Prevention Misconceptions].&lt;br /&gt;
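The safelist behavior described above can be sketched as a small helper. The names are hypothetical; the three &amp;quot;simple&amp;quot; content types are those that do not trigger a CORS pre-flight:&lt;br /&gt;

```java
import java.util.Arrays;
import java.util.List;

public class ContentTypePreflight {

    // Content types CORS treats as "simple"; anything else on a cross-origin
    // request triggers a pre-flight OPTIONS check.
    private static final List<String> SIMPLE_TYPES = Arrays.asList(
            "application/x-www-form-urlencoded", "multipart/form-data", "text/plain");

    // Returns true when a cross-origin request with this Content-Type would be pre-flighted.
    public static boolean triggersPreflight(String contentType) {
        if (contentType == null) {
            return false;
        }
        // Strip parameters such as "; charset=UTF-8" before comparing.
        String mediaType = contentType.split(";")[0].trim().toLowerCase();
        return !SIMPLE_TYPES.contains(mediaType);
    }

    public static void main(String[] args) {
        System.out.println(triggersPreflight("application/json")); // prints true
        System.out.println(triggersPreflight("text/plain"));       // prints false
    }
}
```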
&lt;br /&gt;
== CSRF Mitigation Myths ==&lt;br /&gt;
The following techniques are often presumed to be CSRF mitigations, but none of them actually mitigates a CSRF vulnerability on its own.&lt;br /&gt;
* '''CORS''': CORS is a mechanism designed to relax the Same-Origin-Policy when cross-origin communication between sites is required. It is neither designed to prevent, nor does it prevent, CSRF attacks.&lt;br /&gt;
* '''Using HTTPS''': Using HTTPS by itself provides no protection from CSRF attacks. Resources served over HTTPS are still vulnerable to CSRF if the proper mitigations described above are not in place.&lt;br /&gt;
* More myths can be found [[Cross-Site Request Forgery (CSRF)|here]]&lt;br /&gt;
&lt;br /&gt;
== Personal Safety CSRF Tips for Users ==&lt;br /&gt;
Since CSRF vulnerabilities are reportedly widespread, we recommend using the following best practices to mitigate risk.  &lt;br /&gt;
&lt;br /&gt;
1. Logoff immediately after using a Web application.&amp;lt;br /&amp;gt;&lt;br /&gt;
2. Do not allow your browser to save username/passwords, and do not allow sites to “remember” your login.&amp;lt;br /&amp;gt;&lt;br /&gt;
3. Do not use the same browser to access sensitive applications and to surf the Internet freely (tabbed browsing).&amp;lt;br /&amp;gt;&lt;br /&gt;
4. The use of plugins such as No-Script makes POST based CSRF vulnerabilities difficult to exploit. This is because JavaScript is used to automatically submit the form when the exploit is loaded. Without JavaScript, the attacker would have to trick the user into submitting the form manually.&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Integrated HTML-enabled mail/browser and newsreader/browser environments pose additional risks since simply viewing a mail message or a news message might lead to the execution of an attack. &lt;br /&gt;
== Implementation reference example  ==&lt;br /&gt;
The following JEE web filter provides a reference example for some of the concepts described in this cheat sheet. It implements the following stateless mitigations ([https://github.com/aramrami/OWASP-CSRFGuard OWASP CSRFGuard] covers a stateful approach).&lt;br /&gt;
* Verifying same origin with standard headers&lt;br /&gt;
* Double submit cookie&lt;br /&gt;
* SameSite cookie attribute&lt;br /&gt;
'''Please note''' that it only acts as a reference sample and is not complete (for example, it doesn't have a branch to direct the control flow when the origin and referrer header check succeeds, nor does it perform port/host/protocol level validation of the referrer header). Developers are recommended to build their complete mitigation on top of this reference sample. Developers should also implement standard authentication and authorization checks before checking for CSRF.&lt;br /&gt;
&lt;br /&gt;
The source is also available [https://github.com/righettod/poc-csrf here] as a runnable POC.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.slf4j.Logger;&lt;br /&gt;
import org.slf4j.LoggerFactory;&lt;br /&gt;
&lt;br /&gt;
import javax.servlet.Filter;&lt;br /&gt;
import javax.servlet.FilterChain;&lt;br /&gt;
import javax.servlet.FilterConfig;&lt;br /&gt;
import javax.servlet.ServletException;&lt;br /&gt;
import javax.servlet.ServletRequest;&lt;br /&gt;
import javax.servlet.ServletResponse;&lt;br /&gt;
import javax.servlet.annotation.WebFilter;&lt;br /&gt;
import javax.servlet.http.Cookie;&lt;br /&gt;
import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
import javax.servlet.http.HttpServletResponseWrapper;&lt;br /&gt;
import javax.xml.bind.DatatypeConverter;&lt;br /&gt;
import java.io.IOException;&lt;br /&gt;
import java.net.MalformedURLException;&lt;br /&gt;
import java.net.URL;&lt;br /&gt;
import java.security.SecureRandom;&lt;br /&gt;
import java.util.Arrays;&lt;br /&gt;
&lt;br /&gt;
/**&lt;br /&gt;
 * Filter in charge of validating each incoming HTTP request for the expected headers and CSRF token.&lt;br /&gt;
 * It is called for all requests to the backend destination.&lt;br /&gt;
 *&lt;br /&gt;
 * We use the approach in which:&lt;br /&gt;
 * - The CSRF token is changed after each valid HTTP exchange&lt;br /&gt;
 * - The custom Header name for the CSRF token transmission is fixed&lt;br /&gt;
 * - A CSRF token is associated with a backend service URI in order to support multiple parallel Ajax requests from the same application&lt;br /&gt;
 * - The CSRF cookie name is the backend service name prefixed with a fixed prefix&lt;br /&gt;
 *&lt;br /&gt;
 * Here, for the POC, we show the &amp;quot;access denied&amp;quot; reason in the response, but production code should only return a generic message!&lt;br /&gt;
 *&lt;br /&gt;
 * @see &amp;quot;https://www.owasp.org/index.php/Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://wiki.mozilla.org/Security/Origin&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie&amp;quot;&lt;br /&gt;
 * @see &amp;quot;https://chloe.re/2016/04/13/goodbye-csrf-samesite-to-the-rescue/&amp;quot;&lt;br /&gt;
 */&lt;br /&gt;
@WebFilter(&amp;quot;/backend/*&amp;quot;)&lt;br /&gt;
public class CSRFValidationFilter implements Filter {&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * JVM param name used to define the target origin&lt;br /&gt;
     */&lt;br /&gt;
    public static final String TARGET_ORIGIN_JVM_PARAM_NAME = &amp;quot;target.origin&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Name of the custom HTTP header used to transmit the CSRF token and also to prefix &lt;br /&gt;
     * the CSRF cookie for the expected backend service&lt;br /&gt;
     */&lt;br /&gt;
    private static final String CSRF_TOKEN_NAME = &amp;quot;X-TOKEN&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Logger&lt;br /&gt;
     */&lt;br /&gt;
    private static final Logger LOG = LoggerFactory.getLogger(CSRFValidationFilter.class);&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Application expected deployment domain: named &amp;quot;Target Origin&amp;quot; in OWASP CSRF article&lt;br /&gt;
     */&lt;br /&gt;
    private URL targetOrigin;&lt;br /&gt;
&lt;br /&gt;
    /***&lt;br /&gt;
     * Secure generator&lt;br /&gt;
     */&lt;br /&gt;
    private final SecureRandom secureRandom = new SecureRandom();&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {&lt;br /&gt;
        HttpServletRequest httpReq = (HttpServletRequest) request;&lt;br /&gt;
        HttpServletResponse httpResp = (HttpServletResponse) response;&lt;br /&gt;
        String accessDeniedReason;&lt;br /&gt;
&lt;br /&gt;
        /* STEP 1: Verifying Same Origin with Standard Headers */&lt;br /&gt;
        //Try to get the source from the &amp;quot;Origin&amp;quot; header&lt;br /&gt;
        String source = httpReq.getHeader(&amp;quot;Origin&amp;quot;);&lt;br /&gt;
        if (this.isBlank(source)) {&lt;br /&gt;
            //If empty then fallback on &amp;quot;Referer&amp;quot; header&lt;br /&gt;
            source = httpReq.getHeader(&amp;quot;Referer&amp;quot;);&lt;br /&gt;
            //If this one is empty too then we trace the event and we block the request (recommendation of the article)...&lt;br /&gt;
            if (this.isBlank(source)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: ORIGIN and REFERER request headers are both absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
                return;&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        //Compare the source against the expected target origin&lt;br /&gt;
        URL sourceURL = new URL(source);&lt;br /&gt;
        if (!this.targetOrigin.getProtocol().equals(sourceURL.getProtocol()) || !this.targetOrigin.getHost().equals(sourceURL.getHost()) &lt;br /&gt;
		|| this.targetOrigin.getPort() != sourceURL.getPort()) {&lt;br /&gt;
            //One of the parts does not match, so we trace the event and block the request&lt;br /&gt;
            accessDeniedReason = String.format(&amp;quot;CSRFValidationFilter: Protocol/Host/Port do not fully match so we block the request! (%s != %s)&amp;quot;, &lt;br /&gt;
				this.targetOrigin, sourceURL);&lt;br /&gt;
            LOG.warn(accessDeniedReason);&lt;br /&gt;
            httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            return;&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        /* STEP 2: Verifying CSRF token using &amp;quot;Double Submit Cookie&amp;quot; approach */&lt;br /&gt;
        //If the CSRF token cookie is absent from the request then we provide one in the response and stop processing at this stage.&lt;br /&gt;
        //This is how the initial token is issued&lt;br /&gt;
        Cookie tokenCookie = null;&lt;br /&gt;
        if (httpReq.getCookies() != null) {&lt;br /&gt;
            String csrfCookieExpectedName = this.determineCookieName(httpReq);&lt;br /&gt;
            tokenCookie = Arrays.stream(httpReq.getCookies()).filter(c -&amp;gt; c.getName().equals(csrfCookieExpectedName)).findFirst().orElse(null);&lt;br /&gt;
        }&lt;br /&gt;
        if (tokenCookie == null || this.isBlank(tokenCookie.getValue())) {&lt;br /&gt;
            LOG.info(&amp;quot;CSRFValidationFilter: CSRF cookie absent or value is null/empty so we provide one and return an HTTP NO_CONTENT response !&amp;quot;);&lt;br /&gt;
            //Add the CSRF token cookie and header&lt;br /&gt;
            this.addTokenCookieAndHeader(httpReq, httpResp);&lt;br /&gt;
            //Set response state to &amp;quot;204 No Content&amp;quot; in order to allow the requester to clearly identify an initial response providing the initial CSRF token&lt;br /&gt;
            httpResp.setStatus(HttpServletResponse.SC_NO_CONTENT);&lt;br /&gt;
        } else {&lt;br /&gt;
            //If the cookie is present then we pass to validation phase&lt;br /&gt;
            //Get token from the custom HTTP header (part under control of the requester)&lt;br /&gt;
            String tokenFromHeader = httpReq.getHeader(CSRF_TOKEN_NAME);&lt;br /&gt;
            //If empty then we trace the event and we block the request&lt;br /&gt;
            if (this.isBlank(tokenFromHeader)) {&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header is absent/empty so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else if (!tokenFromHeader.equals(tokenCookie.getValue())) {&lt;br /&gt;
                //Verify that token from header and one from cookie are the same&lt;br /&gt;
                //Here is not the case so we trace the event and we block the request&lt;br /&gt;
                accessDeniedReason = &amp;quot;CSRFValidationFilter: Token provided via HTTP Header and via Cookie are not equals so we block the request !&amp;quot;;&lt;br /&gt;
                LOG.warn(accessDeniedReason);&lt;br /&gt;
                httpResp.sendError(HttpServletResponse.SC_FORBIDDEN, accessDeniedReason);&lt;br /&gt;
            } else {&lt;br /&gt;
                //Token from the header and token from the cookie match&lt;br /&gt;
                //So we let the request reach the target component (ServiceServlet, jsp...) and add a new token when the response comes back&lt;br /&gt;
                HttpServletResponseWrapper httpRespWrapper = new HttpServletResponseWrapper(httpResp);&lt;br /&gt;
                chain.doFilter(request, httpRespWrapper);&lt;br /&gt;
                //Add the CSRF token cookie and header&lt;br /&gt;
                this.addTokenCookieAndHeader(httpReq, httpRespWrapper);&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void init(FilterConfig filterConfig) throws ServletException {&lt;br /&gt;
        //To ease configuration, we load the expected target origin from a JVM property&lt;br /&gt;
        //Reconfiguration only requires an application restart, which is generally acceptable&lt;br /&gt;
        try {&lt;br /&gt;
            this.targetOrigin = new URL(System.getProperty(TARGET_ORIGIN_JVM_PARAM_NAME));&lt;br /&gt;
        } catch (MalformedURLException e) {&lt;br /&gt;
            LOG.error(&amp;quot;Cannot init the filter !&amp;quot;, e);&lt;br /&gt;
            throw new ServletException(e);&lt;br /&gt;
        }&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter init, set expected target origin to '{}'.&amp;quot;, this.targetOrigin);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     */&lt;br /&gt;
    @Override&lt;br /&gt;
    public void destroy() {&lt;br /&gt;
        LOG.info(&amp;quot;CSRFValidationFilter: Filter shutdown&amp;quot;);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Check if a string is null or empty (including containing only spaces)&lt;br /&gt;
     *&lt;br /&gt;
     * @param s Source string&lt;br /&gt;
     * @return TRUE if source string is null or empty (including containing only spaces)&lt;br /&gt;
     */&lt;br /&gt;
    private boolean isBlank(String s) {&lt;br /&gt;
        return s == null || s.trim().isEmpty();&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Generate a new CSRF token&lt;br /&gt;
     *&lt;br /&gt;
     * @return The token as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String generateToken() {&lt;br /&gt;
        byte[] buffer = new byte[50];&lt;br /&gt;
        this.secureRandom.nextBytes(buffer);&lt;br /&gt;
        return DatatypeConverter.printHexBinary(buffer);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Determine the name of the CSRF cookie for the targeted backend service&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest Source HTTP request&lt;br /&gt;
     * @return The name of the cookie as a string&lt;br /&gt;
     */&lt;br /&gt;
    private String determineCookieName(HttpServletRequest httpRequest) {&lt;br /&gt;
        String backendServiceName = httpRequest.getRequestURI().replaceAll(&amp;quot;/&amp;quot;, &amp;quot;-&amp;quot;);&lt;br /&gt;
        return CSRF_TOKEN_NAME + &amp;quot;-&amp;quot; + backendServiceName;&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * Add the CSRF token cookie and header to the provided HTTP response object&lt;br /&gt;
     *&lt;br /&gt;
     * @param httpRequest  Source HTTP request&lt;br /&gt;
     * @param httpResponse HTTP response object to update&lt;br /&gt;
     */&lt;br /&gt;
    private void addTokenCookieAndHeader(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {&lt;br /&gt;
        //Get new token&lt;br /&gt;
        String token = this.generateToken();&lt;br /&gt;
        //Add the cookie manually because the current Cookie class implementation does not support the &amp;quot;SameSite&amp;quot; attribute&lt;br /&gt;
        //We delegate adding the &amp;quot;Secure&amp;quot; cookie attribute to the reverse proxy rewriting...&lt;br /&gt;
        //Here we lock the cookie from JS access and use the new SameSite attribute protection&lt;br /&gt;
        String cookieSpec = String.format(&amp;quot;%s=%s; Path=%s; HttpOnly; SameSite=Strict&amp;quot;, this.determineCookieName(httpRequest), token, httpRequest.getRequestURI());&lt;br /&gt;
        httpResponse.addHeader(&amp;quot;Set-Cookie&amp;quot;, cookieSpec);&lt;br /&gt;
        //Add a header to give the JS code access to the token&lt;br /&gt;
        httpResponse.setHeader(CSRF_TOKEN_NAME, token);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Authors and Primary Editors  ==&lt;br /&gt;
Manideep Konakandla (Amazon Application Security Team) - http://www.manideepk.com&lt;br /&gt;
&lt;br /&gt;
Dave Wichers - dave.wichers[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Paul Petefish - https://www.linkedin.com/in/paulpetefish&lt;br /&gt;
&lt;br /&gt;
Eric Sheridan - eric.sheridan[at]owasp.org&lt;br /&gt;
&lt;br /&gt;
Dominique Righetto - dominique.righetto[at]owasp.org&amp;lt;br /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Other Cheatsheets ==&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;br /&gt;
[[Category:Popular]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_Prevention_Cheat_Sheet&amp;diff=246800</id>
		<title>Cross-Site Request Forgery Prevention Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Cross-Site_Request_Forgery_Prevention_Cheat_Sheet&amp;diff=246800"/>
				<updated>2019-01-23T22:37:11Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Redirected page to Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;#redirect [[Cross-Site_Request_Forgery_(CSRF)_Prevention_Cheat_Sheet]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=246776</id>
		<title>Free for Open Source Application Security Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=246776"/>
				<updated>2019-01-23T16:31:43Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Introduction ==&lt;br /&gt;
OWASP's mission is to help the world improve the security of its software. One of the best ways OWASP can do that is to help Open Source developers improve the software they are producing that everyone else relies on. As such, the following lists of '''automated vulnerability detection tools''' that are '''free for open source''' projects have been gathered together here to raise awareness of their availability.&lt;br /&gt;
&lt;br /&gt;
We would encourage open source projects to use the following types of tools to improve the security and quality of their code:&lt;br /&gt;
* Static Application Security Testing ([[SAST]]) Tools &lt;br /&gt;
* Dynamic Application Security Testing ([[DAST]]) Tools - (Primarily for web apps)&lt;br /&gt;
* Interactive Application Security Testing (IAST) Tools - (Primarily for web apps and web APIs)&lt;br /&gt;
* Keeping Open Source libraries up-to-date (to avoid [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]])&lt;br /&gt;
* Static Code Quality Tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' &amp;lt;b&amp;gt;OWASP does not endorse any of the Vendors or Scanning Tools by listing them below. They are simply listed if we believe they are free for use by open source projects. We have made every effort to provide this information as accurately as possible. If you are the vendor of a free for open source tool and think this information is incomplete or incorrect, please send an e-mail to dave.wichers (at) owasp.org and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free for Open Source Tools ==&lt;br /&gt;
Tools that are free for open source projects in each of the above categories are listed below.&lt;br /&gt;
&lt;br /&gt;
=== SAST Tools ===&lt;br /&gt;
OWASP already maintains a page of known SAST tools: [[Source Code Analysis Tools]], which includes a list of those that are &amp;quot;Open Source or Free Tools Of This Type&amp;quot;. Any such tools could certainly be used. One such cloud service that looks promising is:&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM.com] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
In addition, we are aware of the following commercial SAST tools that are free for Open Source projects:&lt;br /&gt;
* [https://scan.coverity.com/ Coverity Scan Static Analysis] - Can be integrated with Travis-CI so scans run automatically with online resources. Supports over a dozen programming languages as documented in the section [https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html Comprehensive support for these programming languages and frameworks].&lt;br /&gt;
* [https://www.softwaresecured.com/reshift reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and uses machine learning to give a prediction on false positives. Supports Java with future support for NodeJS and JavaScript planned for sometime in 2019. If you go to the Pricing section on this page, it says it is free for public repositories.&lt;br /&gt;
&lt;br /&gt;
=== DAST Tools ===&lt;br /&gt;
If your project has a web application component, we recommend running automated scans against it to look for vulnerabilities. OWASP maintains a page of known DAST Tools: [[:Category:Vulnerability Scanning Tools|Vulnerability Scanning Tools]], and the '''Licence''' column on this page indicates which of those tools have free capabilities. Our primary recommendation is to use one of these:&lt;br /&gt;
* [[OWASP Zed Attack Proxy Project|OWASP ZAP]] - A full featured free and open source DAST tool that includes both automated scanning for vulnerabilities and tools to assist expert manual web app pen testing.&lt;br /&gt;
** The ZAP team has also been working hard to make it easier to integrate ZAP into your CI/CD pipeline. (e.g., here's a [https://www.we45.com/blog/how-to-integrate-zap-into-jenkins-ci-pipeline-we45-blog blog post on how to integrate ZAP with Jenkins]).&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - Arachni is a commercially supported scanner, but it's free for most use cases, including scanning open source projects.&lt;br /&gt;
We are not aware of any other commercial grade tools that offer their full featured DAST product free for open source projects.&lt;br /&gt;
&lt;br /&gt;
=== IAST Tools ===&lt;br /&gt;
IAST tools are typically geared to analyze Web Applications and Web APIs, but that is vendor specific. There may be IAST products that can perform good security analysis on non-web applications as well.&lt;br /&gt;
&lt;br /&gt;
We are aware of only one IAST Tool that is free for open source at this time:&lt;br /&gt;
* [https://www.contrastsecurity.com/contrast-community-edition Contrast Community Edition (CE)] - Fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java only.&lt;br /&gt;
&lt;br /&gt;
=== Open Source Software (OSS) Security Tools ===&lt;br /&gt;
OSS refers to the open source libraries or components that application developers leverage to quickly develop new applications and add features to existing apps. Gartner refers to the analysis of the security of these components as software composition analysis (SCA). So OSS Analysis and SCA are the same thing.&lt;br /&gt;
&lt;br /&gt;
OWASP recommends that all software projects generally try to keep the libraries they use as up-to-date as possible to reduce the likelihood of [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]]. There are two recommended approaches for this:&lt;br /&gt;
&lt;br /&gt;
==== Keeping Your Libraries Updated ====&lt;br /&gt;
Using the latest version of each library is recommended because security issues are frequently fixed 'silently' by the component maintainer. By silently, we mean without publishing a [https://cve.mitre.org/ CVE] for the security fix.&lt;br /&gt;
* [https://www.mojohaus.org/versions-maven-plugin/ Maven Versions plugin]&lt;br /&gt;
** For Maven projects, can be used to generate a report of all dependencies used and when upgrades are available for them. Either a direct report, or part of the overall project documentation using: mvn site.&lt;br /&gt;
* Dependabot - https://dependabot.com/&lt;br /&gt;
** A GitHub only service that creates pull requests to keep your dependencies up-to-date. It automatically generates a pull request for each dependency you can upgrade, which you can then ignore, or accept, as you like. It supports tons of languages.&lt;br /&gt;
** Recommended for all open source projects maintained on GitHub!&lt;br /&gt;
&lt;br /&gt;
==== Detecting Known Vulnerable Components ====&lt;br /&gt;
As an alternative, or in addition to, trying to keep all your components up-to-date, a project can specifically monitor whether any of the components they use have known vulnerable components.&lt;br /&gt;
&lt;br /&gt;
Free tools of this type:&lt;br /&gt;
* OWASP has its own free open source tool [[OWASP Dependency Check]] that is free for anyone to use.&lt;br /&gt;
* GitHub: Security alerts for vulnerable dependencies - https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/&lt;br /&gt;
** A native GitHub feature that reports known vulnerable dependencies in your GitHub projects. Supports: Java, .NET, JavaScript, Ruby, and Python. Your GitHub projects are automatically signed up for this service.&lt;br /&gt;
Commercial tools of this type that are free for open source:&lt;br /&gt;
* Contrast Community Edition (CE) (mentioned earlier) also has both Known Vulnerable Component detection and Available Updates reporting for OSS. CE supports Java only.&lt;br /&gt;
* Snyk - https://www.snyk.io - Supports Node.js, Ruby, Java, Python, Scala, Golang, .NET, PHP - Latest list here: https://snyk.io/docs&lt;br /&gt;
** A Commercial tool that identifies vulnerable components and integrates with numerous CI/CD pipelines. It is free for open source: https://snyk.io/plans&lt;br /&gt;
** If you don't want to grant Snyk write access to your repo (since it can auto-create pull requests), you can use the Command Line Interface (CLI) instead. See: https://snyk.io/docs/using-snyk. If you do this and want it to be free, you have to configure Snyk so it knows it's open source: https://support.snyk.io/snyk-cli/how-can-i-set-a-snyk-cli-project-as-open-source&lt;br /&gt;
*** Another benefit of using the Snyk CLI is that it won't auto create Pull requests for you (which makes these 'issues' more public than you might prefer)&lt;br /&gt;
** They also provide detailed information and remediation guidance for known vulnerabilities here: https://snyk.io/vuln&lt;br /&gt;
* SourceClear - https://www.sourceclear.com/ - Supports: Java, Ruby, JavaScript, Python, Objective C, GO, PHP&lt;br /&gt;
** They have a free trial right from their [https://www.sourceclear.com/ home page]. When the 30 day trial expires, it converts into a free &amp;quot;Personal Account&amp;quot; per: &amp;quot;Upgrade at any time to get the features that matter most to you, or choose the Personal plan when your trial ends.&amp;quot; Personal Account described here: https://www.sourceclear.com/pricing/&lt;br /&gt;
** They also make their component vulnerability data (for publicly known vulns) free to search: https://www.sourceclear.com/vulnerability-database/search#_ (Very useful when trying to research a particular library)&lt;br /&gt;
* WhiteSource Bolt - Supports 200+ programming languages. https://www.whitesourcesoftware.com/&lt;br /&gt;
** Azure version: https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt&lt;br /&gt;
** GitHub version: https://github.com/apps/whitesource-bolt-for-github Available starting in Nov. 2018.&lt;br /&gt;
&lt;br /&gt;
=== Code Quality tools ===&lt;br /&gt;
Quality has a significant correlation to security. As such, we recommend open source projects also consider using good code quality tools. A few that we are aware of are:&lt;br /&gt;
* SpotBugs (https://github.com/spotbugs/spotbugs) - Open source code quality tool for Java&lt;br /&gt;
** This is the active fork for FindBugs, so if you use Findbugs, you should switch to this.&lt;br /&gt;
** SpotBugs users should add the FindSecBugs plugin (http://find-sec-bugs.github.io/) to their SpotBugs setup, as it significantly improves on the very basic security checking native to SpotBugs.&lt;br /&gt;
&lt;br /&gt;
* SonarQube (https://www.sonarqube.org/)&lt;br /&gt;
** This is a commercially supported, very popular, free (and commercial) code quality tool. It includes most if not all the FindSecBugs security rules plus lots more for quality, including a free, internet online CI setup to run it against your open source projects. SonarQube supports numerous languages: https://www.sonarqube.org/features/multi-languages/&lt;br /&gt;
&lt;br /&gt;
Please let us know if you are aware of any other high quality application security tools that are free for open source (or simply add them to this page). We are particularly interested in identifying and listing commercial tools that are free for open source, as they tend to be better and easier to use than open source (free) tools. If you are aware of any missing from this list, please add them, or let us know (dave.wichers (at) owasp.org) and we'll confirm they are free, and add them for you. Please encourage your favorite commercial tool vendor to make their tool free for open source projects as well!!&lt;br /&gt;
&lt;br /&gt;
Finally, please forward this page to the open source projects you rely on and encourage them to use these free tools!&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=246775</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=246775"/>
				<updated>2019-01-23T16:27:41Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Edit reshift tool description.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Static_Code_Analysis | Source code analysis]] tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during development itself, the IDE is a powerful place to employ such tools, as it gives developers immediate feedback on issues they might be introducing as they write code. This immediate feedback is far more useful than finding the same vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL injection flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: it must support your programming language, though this is usually not a key differentiator once it does.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to set up and use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per organization, per application, or per line of code analyzed. Consulting licenses frequently differ from end-user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
Disclaimer: &amp;lt;b&amp;gt;The tools listed below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them here.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think that this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino, and Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security-specific plugin for SpotBugs that significantly improves SpotBugs's ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] - Scans C and C++&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more.  ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PreFast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last update 2006.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits on .NET applications. It will find SQL injection, LDAP injection, XXE, cryptographic weaknesses, XSS, and more.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
&lt;br /&gt;
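As a toy illustration of the "banned function" style of check that tools such as Flawfinder and VisualCodeGrepper above perform far more thoroughly, here is a grep-based sketch. The file path and function list are hypothetical examples; real tools parse code, rank risk, and suppress false positives:&lt;br /&gt;

```shell
# Write a tiny C file, then flag calls to classically dangerous functions.
printf 'int main(void){ char d[8]; gets(d); strcpy(d, "x"); return 0; }\n' > /tmp/demo.c
# Report each matching line with its line number.
grep -nE 'gets|strcpy|strcat|sprintf' /tmp/demo.c
```

This sketches only the crudest pattern-matching layer; it has no notion of data flow, so it cannot tell a safe call from an exploitable one.&lt;br /&gt;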
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation; has capability to generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST and mobile security testing as well as OpenSource library known vulnerability detection as a cloud service. &lt;br /&gt;
* [http://www-01.ibm.com/software/rational/products/appscan/source/ AppScan Source] (IBM)&lt;br /&gt;
* [http://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://buguroo.com/products/bugblast-next-gen-appsec-platform/bugscout-sca bugScout] (Buguroo Offensive Security)&lt;br /&gt;
* [http://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages.&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [https://www.grammatech.com/products/codesonar CodeSonar] tool that supports C, C++, Java and C# and maps against the OWASP top 10 vulnerabilities.&lt;br /&gt;
* [http://www.contrastsecurity.com/ Contrast] (Contrast Security) - Contrast performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis to provide code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [http://www8.hp.com/us/en/software-solutions/static-code-analysis-sast/ Fortify] (Micro Focus, formerly HP)&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis Klocwork] (Klocwork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ PT Application Inspector] combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation for high accuracy rate with minimum false positives; has a unique capability to generate special test queries (exploits) to verify detected vulnerabilities during SAST analysis; integrates with CI/CD, VCS, etc. PT AI helps to easily understand, verify, and fix flaws; has a simple UI; is highly automated and easy to use. Supported languages are Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others.&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as a Visual Studio IDE extension, Azure DevOps extension, and Command Line (CLI) executable.&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://www.softwaresecured.com/reshift reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and machine learning to predict false positives. Supports Java, with support for NodeJS and JavaScript planned for 2019.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code for insecure coding and configurations automatically, as an IDE plugin for Eclipse, IntelliJ, and Visual Studio. Supports Java, .NET, PHP, and JavaScript&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) - Seeker performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis with simulated attacks to provide code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=246421</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=246421"/>
				<updated>2019-01-07T18:11:06Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Static_Code_Analysis | Source code analysis]] tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during development itself, the IDE is a powerful place to employ such tools, as it gives developers immediate feedback on issues they might be introducing as they write code. This immediate feedback is far more useful than finding the same vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL injection flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: it must support your programming language, though this is usually not a key differentiator once it does.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to set up and use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per organization, per application, or per line of code analyzed. Consulting licenses frequently differ from end-user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
Disclaimer: &amp;lt;b&amp;gt;The tools listed below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them here.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think that this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino, and Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security-specific plugin for SpotBugs that significantly improves SpotBugs's ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] - Scans C and C++&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more.  ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PreFast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last update 2006.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits on .NET applications. It will find SQL injection, LDAP injection, XXE, cryptographic weaknesses, XSS, and more.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
&lt;br /&gt;
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation; has capability to generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST and mobile security testing as well as OpenSource library known vulnerability detection as a cloud service. &lt;br /&gt;
* [http://www-01.ibm.com/software/rational/products/appscan/source/ AppScan Source] (IBM)&lt;br /&gt;
* [http://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://buguroo.com/products/bugblast-next-gen-appsec-platform/bugscout-sca bugScout] (Buguroo Offensive Security)&lt;br /&gt;
* [http://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages.&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [https://www.grammatech.com/products/codesonar CodeSonar] tool that supports C, C++, Java and C# and maps against the OWASP top 10 vulnerabilities.&lt;br /&gt;
* [http://www.contrastsecurity.com/ Contrast] (Contrast Security) - Contrast performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis to provide code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [http://www8.hp.com/us/en/software-solutions/static-code-analysis-sast/ Fortify] (Micro Focus, formerly HP)&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis Klocwork] (Klocwork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ PT Application Inspector] combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation for high accuracy rate with minimum false positives; has a unique capability to generate special test queries (exploits) to verify detected vulnerabilities during SAST analysis; integrates with CI/CD, VCS, etc. PT AI helps to easily understand, verify, and fix flaws; has a simple UI; is highly automated and easy to use. Supported languages are Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others.&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as a Visual Studio IDE extension, Azure DevOps extension, and Command Line (CLI) executable.&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://www.softwaresecured.com/reshift reshift] - A CI/CD tool that uses static code analysis to scan for vulnerabilities and machine learning to predict false positives.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code for insecure coding and configurations automatically, as an IDE plugin for Eclipse, IntelliJ, and Visual Studio. Supports Java, .NET, PHP, and JavaScript&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) - Seeker performs Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis with simulated attacks to provide code-level results without relying on static analysis.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Category:Vulnerability_Scanning_Tools&amp;diff=246349</id>
		<title>Category:Vulnerability Scanning Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Category:Vulnerability_Scanning_Tools&amp;diff=246349"/>
				<updated>2019-01-03T15:25:04Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Tools Listing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description  ==&lt;br /&gt;
&lt;br /&gt;
Web Application Vulnerability Scanners are automated tools that scan web applications, normally from the outside, to look for security vulnerabilities such as [[Cross-site scripting]], [[SQL Injection]], [[Command Injection]], [[Path Traversal]] and insecure server configuration. This category of tools is frequently referred to as [https://www.techopedia.com/definition/30958/dynamic-application-security-testing-dast Dynamic Application Security Testing] (DAST) Tools. A large number of both commercial and open source tools of this type are available and all of these tools have their own strengths and weaknesses.  If you are interested in the effectiveness of DAST tools, check out the OWASP [[Benchmark]] project, which is scientifically measuring the effectiveness of all types of vulnerability detection tools, including DAST.&lt;br /&gt;
&lt;br /&gt;
Here we provide a list of vulnerability scanning tools currently available in the market.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' The tools listed in the table below are presented in alphabetical order. &amp;lt;b&amp;gt;OWASP does not endorse any of the vendors or scanning tools by listing them in the table below. We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think this information is incomplete or incorrect, please send an e-mail to our [mailto:owasp_ha_vulnerability_scanner_project@lists.owasp.org mailing list] and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OWASP is aware of the [http://sectooladdict.blogspot.com/ '''Web Application Vulnerability Scanner Evaluation Project (WAVSEP)''']. WAVSEP is completely unrelated to OWASP and we do not endorse its results, nor any of the DAST tools it evaluates. However, the results provided by WAVSEP may be helpful to someone interested in researching or selecting free and/or commercial DAST tools for their projects. This project has far more detail on DAST tools and their features than this OWASP DAST page.&lt;br /&gt;
&lt;br /&gt;
== Tools Listing  ==&lt;br /&gt;
&lt;br /&gt;
{{:Template:OWASP Tool Headings}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.acunetix.com/ Acunetix WVS] || tool_owner = Acunetix || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] || tool_owner = IBM || tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www-03.ibm.com/software/products/en/appscan-standard AppScan] || tool_owner = IBM || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.trustwave.com/Products/Application-Security/App-Scanner-Family/App-Scanner-Enterprise/ App Scanner] || tool_owner = Trustwave || tool_licence = Commercial || tool_platforms = Windows }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.rapid7.com/products/appspider/ AppSpider] || tool_owner = Rapid7 || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://apptrana.indusface.com/basic/ AppTrana Website Security Scan] || tool_owner = AppTrana || tool_licence = Commercial / Free Trial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.arachni-scanner.com/ Arachni] || tool_owner = Arachni|| tool_licence = Free for most use cases || tool_platforms = Most platforms supported}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.scanmyserver.com/ AVDS] || tool_owner = Beyond Security || tool_licence = Commercial / Free (Limited Capability)|| tool_platforms = SaaS }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.blueclosure.com BlueClosure BC Detect] || tool_owner = BlueClosure || tool_licence = Commercial, 2 weeks trial || tool_platforms = Most platforms supported}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.portswigger.net/ Burp Suite] || tool_owner = PortSwigger || tool_licence = Commercial / Free (Limited Capability)|| tool_platforms = Most platforms supported }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://contrastsecurity.com Contrast] || tool_owner = Contrast Security || tool_licence = Commercial / Free (Full featured for 1 App) || tool_platforms = SaaS or On-Premises }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://detectify.com/ Detectify] || tool_owner = Detectify || tool_licence = Commercial || tool_platforms = SaaS }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.digifort.se/en/scanner Digifort- Inspect] || tool_owner = Digifort|| tool_licence = Commercial || tool_platforms = SaaS }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.edgescan.com/ edgescan] || tool_owner = edgescan|| tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.gamasec.com/Gamascan.aspx GamaScan] || tool_owner = GamaSec || tool_licence = Commercial || tool_platforms = Windows }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://rgaucher.info/beta/grabber/ Grabber] || tool_owner = Romain Gaucher || tool_licence = Open Source || tool_platforms = Python 2.4, BeautifulSoup and PyXML}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://gravityscan.com/ Gravityscan] || tool_owner = Defiant, Inc. || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://sourceforge.net/p/grendel/code/ci/c59780bfd41bdf34cc13b27bc3ce694fd3cb7456/tree/ Grendel-Scan] || tool_owner = David Byrne || tool_licence = Open Source || tool_platforms = Windows, Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.golismero.com GoLismero] || tool_owner = GoLismero Team || tool_licence = GPLv2.0 || tool_platforms = Windows, Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.ikare-monitoring.com/ IKare] || tool_owner = ITrust || tool_licence = Commercial || tool_platforms = N/A }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.htbridge.com/immuniweb/ ImmuniWeb] || tool_owner = High-Tech Bridge || tool_licence = Commercial  / Free (Limited Capability)|| tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.indusface.com/index.php/products/web-application-scanning Indusface Web Application Scanning] || tool_owner = Indusface || tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.nstalker.com/ N-Stealth] || tool_owner = N-Stalker || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.tenable.com/products/tenable-io/web-application-scanning/ Nessus] || tool_owner = Tenable || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.mavitunasecurity.com/ Netsparker] || tool_owner = MavitunaSecurity || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.rapid7.com/products/nexpose-community-edition.jsp Nexpose] || tool_owner = Rapid7 || tool_licence = Commercial / Free (Limited Capability)|| tool_platforms = Windows/Linux}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.cirt.net/nikto2 Nikto] || tool_owner = CIRT || tool_licence = Open Source|| tool_platforms = Unix/Linux}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.milescan.com/ ParosPro] || tool_owner = MileSCAN || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://probely.com Probe.ly] || tool_owner = Probe.ly || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.websecurify.com/desktop/proxy.html Proxy.app] || tool_owner = Websecurify || tool_licence = Commercial || tool_platforms = Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.qualys.com/products/qg_suite/was/ QualysGuard] || tool_owner = Qualys || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.beyondtrust.com/Products/RetinaNetworkSecurityScanner/ Retina] || tool_owner = BeyondTrust || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.orvant.com Securus] || tool_owner = Orvant, Inc || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.whitehatsec.com/home/services/services.html Sentinel] || tool_owner = WhiteHat Security || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.parasoft.com/products/article.jsp?articleId=3169&amp;amp;redname=webtesting&amp;amp;referred=webtesting SOATest] || tool_owner = Parasoft || tool_licence = Commercial || tool_platforms = Windows / Linux / Solaris}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.tinfoilsecurity.com Tinfoil Security] || tool_owner = Tinfoil Security, Inc. || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = SaaS or On-Premises}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.trustwave.com/external-vulnerability-scanning.php Trustkeeper Scanner] || tool_owner = Trustwave SpiderLabs || tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://subgraph.com/vega/ Vega] || tool_owner = Subgraph || tool_licence = Open Source || tool_platforms = Windows, Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://wapiti.sourceforge.net/ Wapiti] || tool_owner = Informática Gesfor || tool_licence = Open Source || tool_platforms = Windows, Unix/Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.defensecode.com/webscanner.php Web Security Scanner] || tool_owner = DefenseCode || tool_licence = Commercial || tool_platforms = On-Premises}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.tripwire.com/it-security-software/enterprise-vulnerability-management/web-application-vulnerability-scanning/ WebApp360] || tool_owner = TripWire || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://webcookies.org WebCookies] || tool_owner = WebCookies || tool_licence = Free|| tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www8.hp.com/us/en/software-solutions/software.html?compURI=1341991#.Uuf0KBAo4iw WebInspect] || tool_owner = HP || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.websecurify.com/desktop/webreaver.html WebReaver] || tool_owner = Websecurify || tool_licence = Commercial || tool_platforms = Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.german-websecurity.com/en/products/webscanservice/product-details/overview/ WebScanService] || tool_owner = German Web Security || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://suite.websecurify.com/ Websecurify Suite] || tool_owner = Websecurify || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = Windows, Linux, Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.sensepost.com/research/wikto/ Wikto] || tool_owner = Sensepost || tool_licence = Open Source || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.w3af.org/ w3af] || tool_owner = w3af.org || tool_licence = GPLv2.0 || tool_platforms = Linux and Mac}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.owasp.org/index.php/OWASP_Xenotix_XSS_Exploit_Framework Xenotix XSS Exploit Framework] || tool_owner = OWASP || tool_licence = Open Source || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project Zed Attack Proxy] || tool_owner = OWASP || tool_licence = Open Source || tool_platforms = Windows, Unix/Linux and Macintosh}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References  ==&lt;br /&gt;
&lt;br /&gt;
*[[Source_Code_Analysis_Tools | SAST Tools]] - OWASP page with similar information on Static Application Security Testing (SAST) Tools&lt;br /&gt;
*http://sectooladdict.blogspot.com/ - Web Application Vulnerability Scanner Evaluation Project (WAVSEP)&lt;br /&gt;
*http://projects.webappsec.org/Web-Application-Security-Scanner-Evaluation-Criteria - v1.0 (2009)&lt;br /&gt;
*http://www.slideshare.net/lbsuto/accuracy-and-timecostsofwebappscanners - White Paper: Analyzing the Accuracy and Time Costs of Web Application Security Scanners - By Larry Suto (2010)&lt;br /&gt;
*http://samate.nist.gov/index.php/Web_Application_Vulnerability_Scanners.html - NIST home page which links to: NIST Special Publication 500-269: Software Assurance Tools: Web Application Security Scanner Functional Specification Version 1.0 (21 August, 2007)&lt;br /&gt;
*http://www.softwareqatest.com/qatweb1.html#SECURITY - A list of Web Site Security Test Tools. (Has both DAST and SAST tools)&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Tools_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Dependency_Check&amp;diff=246336</id>
		<title>OWASP Dependency Check</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Dependency_Check&amp;diff=246336"/>
				<updated>2019-01-02T21:46:28Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Quick Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Main=&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:90px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File: flagship_big.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
==OWASP Dependency-Check==&lt;br /&gt;
&lt;br /&gt;
Dependency-Check is a utility that identifies project dependencies and checks whether any of them have known, publicly disclosed vulnerabilities. Currently, Java and .NET are supported; experimental support has been added for Ruby, Node.js, and Python, with limited support for C/C++ build systems (autoconf and cmake). The tool can be part of a solution to [[Top_10-2017_A9-Using_Components_with_Known_Vulnerabilities | OWASP Top 10 2017 A9-Using Components with Known Vulnerabilities]], previously known as [[Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities | OWASP Top 10 2013 A9-Using Components with Known Vulnerabilities]].&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
The OWASP Top 10 2013 contains a new entry: [[Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities | A9-Using Components with Known Vulnerabilities]]. Dependency Check can currently be used to scan applications (and their dependent libraries) to identify any known vulnerable components.&lt;br /&gt;
&lt;br /&gt;
The problem with using known vulnerable components was described very well in a paper by Jeff Williams and Arshan Dabirsiaghi titled, &amp;quot;[https://cdn2.hubspot.net/hub/203759/file-1100864196-pdf/docs/Contrast_-_Insecure_Libraries_2014.pdf Unfortunate Reality of Insecure Libraries]&amp;quot;. The gist of the paper is that we as a development community include third party libraries in our applications that contain well known published vulnerabilities (such as those at the [https://nvd.nist.gov/vuln/search National Vulnerability Database]).&lt;br /&gt;
&lt;br /&gt;
Dependency-check has a command line interface, a Maven plugin, an Ant task, and a Jenkins plugin. The core engine contains a series of analyzers that inspect the project dependencies and collect pieces of information about them (referred to as evidence within the tool). The evidence is then used to identify the [https://nvd.nist.gov/products/cpe Common Platform Enumeration (CPE)] for a given dependency. If a CPE is identified, the associated [https://cve.mitre.org/ Common Vulnerability and Exposure (CVE)] entries are listed in a report.&lt;br /&gt;
&lt;br /&gt;
Dependency-check automatically updates itself using the [https://nvd.nist.gov/vuln/data-feeds NVD Data Feeds] hosted by NIST. '''IMPORTANT NOTE:''' The initial download of the data may take ten minutes or more. If you run the tool at least once every seven days, only a small XML file needs to be downloaded to keep the local copy of the data current.&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;padding-left:25px;width:200px;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
Version 4.0.2&lt;br /&gt;
* [https://dl.bintray.com/jeremy-long/owasp/dependency-check-4.0.2-release.zip Command Line]&lt;br /&gt;
* [https://dl.bintray.com/jeremy-long/owasp/dependency-check-ant-4.0.2-release.zip Ant Task]&lt;br /&gt;
* [https://search.maven.org/#artifactdetails%7Corg.owasp%7Cdependency-check-maven%7C4.0.2%7Cmaven-plugin Maven Plugin]&lt;br /&gt;
* [https://search.maven.org/#artifactdetails%7Corg.owasp%7Cdependency-check-gradle%7C4.0.2%7Cgradle-plugin Gradle Plugin]&lt;br /&gt;
* [https://plugins.jenkins.io/dependency-check-jenkins-plugin Jenkins Plugin]&lt;br /&gt;
* [https://brew.sh/ Mac Homebrew]:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;brew update &amp;amp;&amp;amp; brew install dependency-check&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other Plugins&lt;br /&gt;
* [https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22net.vonbuchholtz%22%20a%3A%22sbt-dependency-check%22 sbt Plugin]&lt;br /&gt;
* [https://github.com/livingsocial/lein-dependency-check lein-dependency-check]&lt;br /&gt;
&lt;br /&gt;
== Integrations ==&lt;br /&gt;
* [https://github.com/stevespringett/dependency-check-sonar-plugin SonarQube Plugin]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jeremylong/DependencyCheck github]&lt;br /&gt;
* [https://github.com/jeremylong/dependency-check-gradle gradle source]&lt;br /&gt;
* [https://github.com/albuch/sbt-dependency-check sbt source]&lt;br /&gt;
* [https://github.com/jenkinsci/dependency-check-plugin jenkins source]&lt;br /&gt;
* [https://www.ohloh.net/p/dependencycheck Ohloh]&lt;br /&gt;
* [https://bintray.com/jeremy-long/owasp Bintray]&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/ Documentation (on GitHub)]&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
* [mailto:dependency-check+subscribe@googlegroups.com Subscribe]&lt;br /&gt;
* [mailto:dependency-check@googlegroups.com Post]&lt;br /&gt;
* [https://groups.google.com/forum/#!forum/dependency-check Archived Posts]&lt;br /&gt;
&lt;br /&gt;
== Presentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/general/dependency-check.pdf dependency-check (PDF)]&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/general/dependency-check.pptx dependency-check  (PPTX)]&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | rowspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-flagship-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Flagship_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Cc-button-y-sa-small.png|link=https://creativecommons.org/licenses/by-sa/3.0/]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
==Volunteers==&lt;br /&gt;
Dependency-Check is developed by a team of volunteers. The primary contributors to date have been:&lt;br /&gt;
&lt;br /&gt;
* [[User:Jeremy Long|Jeremy Long]]&lt;br /&gt;
* [[User:Steve Springett|Steve Springett]]&lt;br /&gt;
* [[User:Will Stranathan|Will Stranathan]]&lt;br /&gt;
&lt;br /&gt;
= Road Map and Getting Involved =&lt;br /&gt;
As of March 2015, the top priorities are:&lt;br /&gt;
* Resolving all open [https://github.com/jeremylong/DependencyCheck/issues?state=open github issues/feature requests]&lt;br /&gt;
* Improving analysis for .NET DLLs&lt;br /&gt;
&lt;br /&gt;
Involvement in the development and promotion of dependency-check is actively encouraged! You do not have to be a security expert in order to contribute. How you can help:&lt;br /&gt;
* Use the tool&lt;br /&gt;
* Provide feedback via the [https://groups.google.com/forum/?fromgroups#!forum/dependency-check mailing list] or by creating [https://github.com/jeremylong/DependencyCheck/issues?state=open github issues] (both bugs and feature requests are encouraged)&lt;br /&gt;
* The project source code is hosted on [https://github.com/jeremylong/DependencyCheck/ github] - if you are so inclined, fork it and submit pull requests!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs&amp;gt;&amp;lt;/headertabs&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Project]]  &lt;br /&gt;
[[Category:OWASP_Builders]] &lt;br /&gt;
[[Category:OWASP_Defenders]]  &lt;br /&gt;
[[Category:OWASP_Document]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Dependency_Check&amp;diff=246335</id>
		<title>OWASP Dependency Check</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Dependency_Check&amp;diff=246335"/>
				<updated>2019-01-02T21:45:38Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Main=&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:90px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File: flagship_big.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
==OWASP Dependency-Check==&lt;br /&gt;
&lt;br /&gt;
Dependency-Check is a utility that identifies project dependencies and checks whether any of them have known, publicly disclosed vulnerabilities. Currently, Java and .NET are supported; experimental support has been added for Ruby, Node.js, and Python, with limited support for C/C++ build systems (autoconf and cmake). The tool can be part of a solution to [[Top_10-2017_A9-Using_Components_with_Known_Vulnerabilities | OWASP Top 10 2017 A9-Using Components with Known Vulnerabilities]], previously known as [[Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities | OWASP Top 10 2013 A9-Using Components with Known Vulnerabilities]].&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
The OWASP Top 10 2013 contains a new entry: [[Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities | A9-Using Components with Known Vulnerabilities]]. Dependency Check can currently be used to scan applications (and their dependent libraries) to identify any known vulnerable components.&lt;br /&gt;
&lt;br /&gt;
The problem with using known vulnerable components was described very well in a paper by Jeff Williams and Arshan Dabirsiaghi titled, &amp;quot;[https://cdn2.hubspot.net/hub/203759/file-1100864196-pdf/docs/Contrast_-_Insecure_Libraries_2014.pdf Unfortunate Reality of Insecure Libraries]&amp;quot;. The gist of the paper is that we as a development community include third party libraries in our applications that contain well known published vulnerabilities (such as those at the [https://nvd.nist.gov/vuln/search National Vulnerability Database]).&lt;br /&gt;
&lt;br /&gt;
Dependency-check has a command line interface, a Maven plugin, an Ant task, and a Jenkins plugin. The core engine contains a series of analyzers that inspect the project dependencies and collect pieces of information about them (referred to as evidence within the tool). The evidence is then used to identify the [https://nvd.nist.gov/products/cpe Common Platform Enumeration (CPE)] for a given dependency. If a CPE is identified, the associated [https://cve.mitre.org/ Common Vulnerability and Exposure (CVE)] entries are listed in a report.&lt;br /&gt;
&lt;br /&gt;
Dependency-check automatically updates itself using the [https://nvd.nist.gov/vuln/data-feeds NVD Data Feeds] hosted by NIST. '''IMPORTANT NOTE:''' The initial download of the data may take ten minutes or more. If you run the tool at least once every seven days, only a small XML file needs to be downloaded to keep the local copy of the data current.&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;padding-left:25px;width:200px;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
Version 4.0.2&lt;br /&gt;
* [https://dl.bintray.com/jeremy-long/owasp/dependency-check-4.0.2-release.zip Command Line]&lt;br /&gt;
* [https://dl.bintray.com/jeremy-long/owasp/dependency-check-ant-4.0.2-release.zip Ant Task]&lt;br /&gt;
* [https://search.maven.org/#artifactdetails%7Corg.owasp%7Cdependency-check-maven%7C4.0.2%7Cmaven-plugin Maven Plugin]&lt;br /&gt;
* [https://search.maven.org/#artifactdetails%7Corg.owasp%7Cdependency-check-gradle%7C4.0.2%7Cgradle-plugin Gradle Plugin]&lt;br /&gt;
* [https://plugins.jenkins.io/dependency-check-jenkins-plugin Jenkins Plugin]&lt;br /&gt;
* [https://brew.sh/ Mac Homebrew]:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;brew update &amp;amp;&amp;amp; brew install dependency-check&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other Plugins&lt;br /&gt;
* [https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22net.vonbuchholtz%22%20a%3A%22sbt-dependency-check%22 sbt Plugin]&lt;br /&gt;
* [https://github.com/livingsocial/lein-dependency-check lein-dependency-check]&lt;br /&gt;
&lt;br /&gt;
== Integrations ==&lt;br /&gt;
* [https://github.com/stevespringett/dependency-check-sonar-plugin SonarQube Plugin]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jeremylong/DependencyCheck github]&lt;br /&gt;
* [https://github.com/jeremylong/dependency-check-gradle gradle source]&lt;br /&gt;
* [https://github.com/albuch/sbt-dependency-check sbt source]&lt;br /&gt;
* [https://github.com/jenkinsci/dependency-check-plugin jenkins source]&lt;br /&gt;
* [https://www.ohloh.net/p/dependencycheck Ohloh]&lt;br /&gt;
* [https://bintray.com/jeremy-long/owasp Bintray]&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/ Documentation (on GitHub)]&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
* [mailto:dependency-check+subscribe@googlegroups.com Subscribe]&lt;br /&gt;
* [mailto:dependency-check@googlegroups.com Post]&lt;br /&gt;
* [https://groups.google.com/forum/#!forum/dependency-check Archived Posts]&lt;br /&gt;
&lt;br /&gt;
== Presentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/general/dependency-check.pdf dependency-check (PDF)]&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/general/dependency-check.pptx dependency-check  (PPTX)]&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | rowspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-flagship-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Flagship_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Cc-button-y-sa-small.png|link=https://creativecommons.org/licenses/by-sa/3.0/]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
==Volunteers==&lt;br /&gt;
Dependency-Check is developed by a team of volunteers. The primary contributors to date have been:&lt;br /&gt;
&lt;br /&gt;
* [[User:Jeremy Long|Jeremy Long]]&lt;br /&gt;
* [[User:Steve Springett|Steve Springett]]&lt;br /&gt;
* [[User:Will Stranathan|Will Stranathan]]&lt;br /&gt;
&lt;br /&gt;
= Road Map and Getting Involved =&lt;br /&gt;
As of March 2015, the top priorities are:&lt;br /&gt;
* Resolving all open [https://github.com/jeremylong/DependencyCheck/issues?state=open github issues/feature requests]&lt;br /&gt;
* Improving analysis for .NET DLLs&lt;br /&gt;
&lt;br /&gt;
Involvement in the development and promotion of dependency-check is actively encouraged! You do not have to be a security expert in order to contribute. How you can help:&lt;br /&gt;
* Use the tool&lt;br /&gt;
* Provide feedback via the [https://groups.google.com/forum/?fromgroups#!forum/dependency-check mailing list] or by creating [https://github.com/jeremylong/DependencyCheck/issues?state=open github issues] (both bugs and feature requests are encouraged)&lt;br /&gt;
* The project source code is hosted on [https://github.com/jeremylong/DependencyCheck/ github] - if you are so inclined fork it and provide push requests!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs&amp;gt;&amp;lt;/headertabs&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Project]]  &lt;br /&gt;
[[Category:OWASP_Builders]] &lt;br /&gt;
[[Category:OWASP_Defenders]]  &lt;br /&gt;
[[Category:OWASP_Document]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Dependency_Check&amp;diff=246334</id>
		<title>OWASP Dependency Check</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Dependency_Check&amp;diff=246334"/>
				<updated>2019-01-02T21:41:39Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Quick Download */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Main=&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:90px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File: flagship_big.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
==OWASP Dependency-Check==&lt;br /&gt;
&lt;br /&gt;
Dependency-Check is a utility that identifies project dependencies and checks whether any of them have known, publicly disclosed vulnerabilities. Currently, Java and .NET are supported; experimental support has been added for Ruby, Node.js, and Python, with limited support for C/C++ build systems (autoconf and cmake). The tool can be part of a solution to [[Top_10-2017_A9-Using_Components_with_Known_Vulnerabilities | OWASP Top 10 2017 A9-Using Components with Known Vulnerabilities]], previously known as [[Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities | OWASP Top 10 2013 A9-Using Components with Known Vulnerabilities]].&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
The OWASP Top 10 2013 contains a new entry: [[Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities | A9-Using Components with Known Vulnerabilities]]. Dependency-check can currently be used to scan applications (and their dependent libraries) to identify any known vulnerable components.&lt;br /&gt;
&lt;br /&gt;
The problem with using known vulnerable components was described very well in a paper by Jeff Williams and Arshan Dabirsiaghi titled, &amp;quot;[https://cdn2.hubspot.net/hub/203759/file-1100864196-pdf/docs/Contrast_-_Insecure_Libraries_2014.pdf Unfortunate Reality of Insecure Libraries]&amp;quot;. The gist of the paper is that we as a development community include third party libraries in our applications that contain well known published vulnerabilities (such as those at the [https://nvd.nist.gov/vuln/search National Vulnerability Database]).&lt;br /&gt;
&lt;br /&gt;
Dependency-check has a command line interface, a Maven plugin, an Ant task, and a Jenkins plugin. The core engine contains a series of analyzers that inspect the project dependencies and collect pieces of information about them (referred to as evidence within the tool). The evidence is then used to identify the [https://nvd.nist.gov/products/cpe Common Platform Enumeration (CPE)] for a given dependency. If a CPE is identified, the associated [http://cve.mitre.org/ Common Vulnerability and Exposure (CVE)] entries are listed in a report.&lt;br /&gt;
&lt;br /&gt;
Dependency-check automatically updates itself using the [https://nvd.nist.gov/vuln/data-feeds NVD Data Feeds] hosted by NIST. '''IMPORTANT NOTE:''' The initial download of the data may take ten minutes or more; however, if you run the tool at least once every seven days, only a small XML file needs to be downloaded to keep the local copy of the data current.&lt;br /&gt;
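As a sketch of typical command line usage (the project name and paths below are placeholders, not part of the tool's documentation):

```shell
# Scan an application's libraries and write an HTML report.
# "MyApp" and both paths are illustrative -- adjust for your project.
dependency-check.sh --project "MyApp" --scan /path/to/myapp/lib --out /path/to/reports --format HTML
```

The first run triggers the full NVD data download described above; subsequent runs within seven days only fetch incremental updates.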
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;padding-left:25px;width:200px;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
Version 4.0.2&lt;br /&gt;
* [https://dl.bintray.com/jeremy-long/owasp/dependency-check-4.0.2-release.zip Command Line]&lt;br /&gt;
* [https://dl.bintray.com/jeremy-long/owasp/dependency-check-ant-4.0.2-release.zip Ant Task]&lt;br /&gt;
* [https://search.maven.org/#artifactdetails%7Corg.owasp%7Cdependency-check-maven%7C4.0.2%7Cmaven-plugin Maven Plugin]&lt;br /&gt;
* [https://search.maven.org/#artifactdetails%7Corg.owasp%7Cdependency-check-gradle%7C4.0.2%7Cgradle-plugin Gradle Plugin]&lt;br /&gt;
* [https://plugins.jenkins.io/dependency-check-jenkins-plugin Jenkins Plugin]&lt;br /&gt;
* [https://brew.sh/ Mac Homebrew]:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;brew update &amp;amp;&amp;amp; brew install dependency-check&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other Plugins&lt;br /&gt;
* [https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22net.vonbuchholtz%22%20a%3A%22sbt-dependency-check%22 sbt Plugin]&lt;br /&gt;
* [https://github.com/livingsocial/lein-dependency-check lein-dependency-check]&lt;br /&gt;
&lt;br /&gt;
== Integrations ==&lt;br /&gt;
* [https://github.com/stevespringett/dependency-check-sonar-plugin SonarQube Plugin]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jeremylong/DependencyCheck github]&lt;br /&gt;
* [https://github.com/jeremylong/dependency-check-gradle gradle source]&lt;br /&gt;
* [https://github.com/albuch/sbt-dependency-check sbt source]&lt;br /&gt;
* [https://github.com/jenkinsci/dependency-check-plugin jenkins source]&lt;br /&gt;
* [https://www.ohloh.net/p/dependencycheck Ohloh]&lt;br /&gt;
* [https://bintray.com/jeremy-long/owasp Bintray]&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/ Documentation (on GitHub)]&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
* [mailto:dependency-check+subscribe@googlegroups.com Subscribe]&lt;br /&gt;
* [mailto:dependency-check@googlegroups.com Post]&lt;br /&gt;
* [https://groups.google.com/forum/#!forum/dependency-check Archived Posts]&lt;br /&gt;
&lt;br /&gt;
== Presentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/general/dependency-check.pdf dependency-check (PDF)]&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/general/dependency-check.pptx dependency-check  (PPTX)]&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | rowspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-flagship-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Flagship_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Cc-button-y-sa-small.png|link=http://creativecommons.org/licenses/by-sa/3.0/]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
==Volunteers==&lt;br /&gt;
Dependency-Check is developed by a team of volunteers. The primary contributors to date have been:&lt;br /&gt;
&lt;br /&gt;
* [[User:Jeremy Long|Jeremy Long]]&lt;br /&gt;
* [[User:Steve Springett|Steve Springett]]&lt;br /&gt;
* [[User:Will Stranathan|Will Stranathan]]&lt;br /&gt;
&lt;br /&gt;
= Road Map and Getting Involved =&lt;br /&gt;
As of March 2015, the top priorities are:&lt;br /&gt;
* Resolving all open [https://github.com/jeremylong/DependencyCheck/issues?state=open github issues/feature requests]&lt;br /&gt;
* Improving analysis for .NET DLLs&lt;br /&gt;
&lt;br /&gt;
Involvement in the development and promotion of dependency-check is actively encouraged!&lt;br /&gt;
You do not have to be a security expert in order to contribute. How you can help:&lt;br /&gt;
* Use the tool&lt;br /&gt;
* Provide feedback via the [https://groups.google.com/forum/?fromgroups#!forum/dependency-check mailing list] or by creating [https://github.com/jeremylong/DependencyCheck/issues?state=open github issues] (both bugs and feature requests are encouraged)&lt;br /&gt;
* The project source code is hosted on [https://github.com/jeremylong/DependencyCheck/ github] - if you are so inclined, fork it and submit pull requests!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs&amp;gt;&amp;lt;/headertabs&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Project]]  &lt;br /&gt;
[[Category:OWASP_Builders]] &lt;br /&gt;
[[Category:OWASP_Defenders]]  &lt;br /&gt;
[[Category:OWASP_Document]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=OWASP_Dependency_Check&amp;diff=246333</id>
		<title>OWASP Dependency Check</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=OWASP_Dependency_Check&amp;diff=246333"/>
				<updated>2019-01-02T21:36:28Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Main=&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:90px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File: flagship_big.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
==OWASP Dependency-Check==&lt;br /&gt;
&lt;br /&gt;
Dependency-Check is a utility that identifies project dependencies and checks whether they have any known, publicly disclosed vulnerabilities. Currently, Java and .NET are supported; experimental support has been added for Ruby, Node.js, and Python, with limited support for C/C++ build systems (autoconf and cmake). The tool can be part of a solution to [[Top_10-2017_A9-Using_Components_with_Known_Vulnerabilities | OWASP Top 10 2017 A9-Using Components with Known Vulnerabilities]], previously known as [[Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities | OWASP Top 10 2013 A9-Using Components with Known Vulnerabilities]].&lt;br /&gt;
&lt;br /&gt;
==Introduction==&lt;br /&gt;
&lt;br /&gt;
The OWASP Top 10 2013 contains a new entry: [[Top_10_2013-A9-Using_Components_with_Known_Vulnerabilities | A9-Using Components with Known Vulnerabilities]]. Dependency-check can currently be used to scan applications (and their dependent libraries) to identify any known vulnerable components.&lt;br /&gt;
&lt;br /&gt;
The problem with using known vulnerable components was described very well in a paper by Jeff Williams and Arshan Dabirsiaghi titled, &amp;quot;[https://cdn2.hubspot.net/hub/203759/file-1100864196-pdf/docs/Contrast_-_Insecure_Libraries_2014.pdf Unfortunate Reality of Insecure Libraries]&amp;quot;. The gist of the paper is that we as a development community include third party libraries in our applications that contain well known published vulnerabilities (such as those at the [https://nvd.nist.gov/vuln/search National Vulnerability Database]).&lt;br /&gt;
&lt;br /&gt;
Dependency-check has a command line interface, a Maven plugin, an Ant task, and a Jenkins plugin. The core engine contains a series of analyzers that inspect the project dependencies and collect pieces of information about them (referred to as evidence within the tool). The evidence is then used to identify the [https://nvd.nist.gov/products/cpe Common Platform Enumeration (CPE)] for the given dependency. If a CPE is identified, the associated [http://cve.mitre.org/ Common Vulnerability and Exposure (CVE)] entries are listed in a report.&lt;br /&gt;
&lt;br /&gt;
Dependency-check automatically updates itself using the [https://nvd.nist.gov/vuln/data-feeds NVD Data Feeds] hosted by NIST. '''IMPORTANT NOTE:''' The initial download of the data may take ten minutes or more; however, if you run the tool at least once every seven days, only a small XML file needs to be downloaded to keep the local copy of the data current.&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;padding-left:25px;width:200px;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
Version 4.0.2&lt;br /&gt;
* [https://dl.bintray.com/jeremy-long/owasp/dependency-check-4.0.2-release.zip Command Line]&lt;br /&gt;
* [https://dl.bintray.com/jeremy-long/owasp/dependency-check-ant-4.0.2-release.zip Ant Task]&lt;br /&gt;
* [https://search.maven.org/#artifactdetails%7Corg.owasp%7Cdependency-check-maven%7C4.0.2%7Cmaven-plugin Maven Plugin]&lt;br /&gt;
* [https://search.maven.org/#artifactdetails%7Corg.owasp%7Cdependency-check-gradle%7C4.0.2%7Cgradle-plugin Gradle Plugin]&lt;br /&gt;
* [https://plugins.jenkins.io/dependency-check-jenkins-plugin Jenkins Plugin]&lt;br /&gt;
* [https://brew.sh/ Mac Homebrew]:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;brew update &amp;amp;&amp;amp; brew install dependency-check&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Other Plugins&lt;br /&gt;
* [https://search.maven.org/#search%7Cga%7C1%7Cg%3A%22net.vonbuchholtz%22%20a%3A%22sbt-dependency-check%22 sbt Plugin]&lt;br /&gt;
* [https://github.com/livingsocial/lein-dependency-check lein-dependency-check]&lt;br /&gt;
&lt;br /&gt;
== Integrations ==&lt;br /&gt;
* [https://github.com/stevespringett/dependency-check-sonar-plugin SonarQube Plugin]&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [https://github.com/jeremylong/DependencyCheck github]&lt;br /&gt;
* [https://github.com/jeremylong/dependency-check-gradle gradle source]&lt;br /&gt;
* [https://github.com/albuch/sbt-dependency-check sbt source]&lt;br /&gt;
* [https://github.com/jenkinsci/dependency-check-plugin jenkins source]&lt;br /&gt;
* [https://www.ohloh.net/p/dependencycheck Ohloh]&lt;br /&gt;
* [https://bintray.com/jeremy-long/owasp Bintray]&lt;br /&gt;
&lt;br /&gt;
== Documentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/ Documentation (on GitHub)]&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
* [mailto:dependency-check+subscribe@googlegroups.com Subscribe]&lt;br /&gt;
* [mailto:dependency-check@googlegroups.com Post]&lt;br /&gt;
* [https://groups.google.com/forum/#!forum/dependency-check Archived Posts]&lt;br /&gt;
&lt;br /&gt;
== Presentation ==&lt;br /&gt;
&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/general/dependency-check.pdf dependency-check (PDF)]&lt;br /&gt;
* [https://jeremylong.github.io/DependencyCheck/general/dependency-check.pptx dependency-check  (PPTX)]&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | rowspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-flagship-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Flagship_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; | [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Cc-button-y-sa-small.png|link=http://creativecommons.org/licenses/by-sa/3.0/]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot; | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
==Volunteers==&lt;br /&gt;
Dependency-Check is developed by a team of volunteers. The primary contributors to date have been:&lt;br /&gt;
&lt;br /&gt;
* [[User:Jeremy Long|Jeremy Long]]&lt;br /&gt;
* [[User:Steve Springett|Steve Springett]]&lt;br /&gt;
* [[User:Will Stranathan|Will Stranathan]]&lt;br /&gt;
&lt;br /&gt;
= Road Map and Getting Involved =&lt;br /&gt;
As of March 2015, the top priorities are:&lt;br /&gt;
* Resolving all open [https://github.com/jeremylong/DependencyCheck/issues?state=open github issues/feature requests]&lt;br /&gt;
* Improving analysis for .NET DLLs&lt;br /&gt;
&lt;br /&gt;
Involvement in the development and promotion of dependency-check is actively encouraged!&lt;br /&gt;
You do not have to be a security expert in order to contribute. How you can help:&lt;br /&gt;
* Use the tool&lt;br /&gt;
* Provide feedback via the [https://groups.google.com/forum/?fromgroups#!forum/dependency-check mailing list] or by creating [https://github.com/jeremylong/DependencyCheck/issues?state=open github issues] (both bugs and feature requests are encouraged)&lt;br /&gt;
* The project source code is hosted on [https://github.com/jeremylong/DependencyCheck/ github] - if you are so inclined, fork it and submit pull requests!&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs&amp;gt;&amp;lt;/headertabs&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP Project]]  &lt;br /&gt;
[[Category:OWASP_Builders]] &lt;br /&gt;
[[Category:OWASP_Defenders]]  &lt;br /&gt;
[[Category:OWASP_Document]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=245977</id>
		<title>Benchmark</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=245977"/>
				<updated>2018-12-12T04:16:35Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Main = &lt;br /&gt;
 &amp;lt;div style=&amp;quot;width:100%;height:100px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Lab_big.jpg|link=OWASP_Project_Stages#tab.3DLab_Projects]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== OWASP Benchmark Project  ==&lt;br /&gt;
The OWASP Benchmark for Security Automation (OWASP Benchmark) is a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services (henceforth simply referred to as 'tools'). Without the ability to measure these tools, it is difficult to understand their strengths and weaknesses, and compare them to each other. Each version of the OWASP Benchmark contains thousands of test cases that are fully runnable and exploitable, each of which maps to the appropriate CWE number for that vulnerability.&lt;br /&gt;
&lt;br /&gt;
You can use the OWASP Benchmark with [[Source_Code_Analysis_Tools | Static Application Security Testing (SAST)]] tools, [[:Category:Vulnerability_Scanning_Tools | Dynamic Application Security Testing (DAST)]] tools like OWASP [[ZAP]], and Interactive Application Security Testing (IAST) tools. The Benchmark is implemented in Java; future versions may expand to include other languages.&lt;br /&gt;
&lt;br /&gt;
==Benchmark Project Scoring Philosophy==&lt;br /&gt;
&lt;br /&gt;
Security tools (SAST, DAST, and IAST) are amazing when they find a complex vulnerability in your code. But widespread misunderstanding of which specific vulnerabilities automated tools actually cover often leaves end users with a false sense of security.&lt;br /&gt;
&lt;br /&gt;
We are on a quest to measure just how good these tools are at discovering and properly diagnosing security problems in applications. We rely on the [http://en.wikipedia.org/wiki/Receiver_operating_characteristic long history] of military and medical evaluation of detection technology as a foundation for our research. Therefore, the test suite tests both real and fake vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
There are four possible test outcomes in the Benchmark:&lt;br /&gt;
&lt;br /&gt;
# Tool correctly identifies a real vulnerability (True Positive - TP)&lt;br /&gt;
# Tool fails to identify a real vulnerability (False Negative - FN)&lt;br /&gt;
# Tool correctly ignores a false alarm (True Negative - TN)&lt;br /&gt;
# Tool fails to ignore a false alarm (False Positive - FP)&lt;br /&gt;
&lt;br /&gt;
We can learn a lot about a tool from these four metrics. Consider a tool that simply flags every line of code as vulnerable. It will perfectly identify all real vulnerabilities, but it will also have a 100% false positive rate and thus adds no value. Similarly, a tool that reports absolutely nothing will have zero false positives, but will also identify zero real vulnerabilities, and is equally worthless. You can even imagine a tool that flips a coin to decide whether to report each test case as vulnerable; the result would be roughly 50% true positives and 50% false positives. We need a way to distinguish valuable security tools from these trivial ones.&lt;br /&gt;
&lt;br /&gt;
The line connecting these points, from (0,0) to (100,100), roughly translates to &amp;quot;random guessing.&amp;quot; The ultimate measure of a security tool is how much better it can do than this line. The diagram below shows how we evaluate security tools against the Benchmark.&lt;br /&gt;
&lt;br /&gt;
[[File:Wbe guide.png]]&lt;br /&gt;
&lt;br /&gt;
A point plotted on this chart provides a visual indication of how well a tool did considering both the True Positives the tool reported, as well as the False Positives it reported. We also want to compute an individual score for that point in the range 0 - 100, which we call the Benchmark Accuracy Score.&lt;br /&gt;
&lt;br /&gt;
The Benchmark Accuracy Score is essentially a [https://en.wikipedia.org/wiki/Youden%27s_J_statistic Youden Index], a standard way of summarizing the accuracy of a set of tests. Youden's index is one of the oldest measures of diagnostic accuracy. It is a global measure of test performance, used to evaluate the overall discriminative power of a diagnostic procedure and to compare it with other tests. Youden's index is calculated by deducting 1 from the sum of a test's sensitivity and specificity, each expressed as a fraction rather than a percentage: (sensitivity + specificity) – 1. For a test with poor diagnostic accuracy, Youden's index equals 0; for a perfect test, it equals 1.&lt;br /&gt;
&lt;br /&gt;
  So for example, if a tool has a True Positive Rate (TPR) of .98 (i.e., 98%) &lt;br /&gt;
    and False Positive Rate (FPR) of .05 (i.e., 5%)&lt;br /&gt;
  Sensitivity = TPR (.98)&lt;br /&gt;
  Specificity = 1-FPR (.95)&lt;br /&gt;
  So the Youden Index is (.98+.95) - 1 = .93&lt;br /&gt;
  &lt;br /&gt;
  And this would equate to a Benchmark score of 93 (since we normalize this to the range 0 - 100)&lt;br /&gt;
&lt;br /&gt;
On the graph, the Benchmark Score is the length of the line from the point down to the diagonal “guessing” line. Note that a Benchmark Score can actually be negative if the point falls below the line, which happens when the False Positive Rate is higher than the True Positive Rate.&lt;br /&gt;
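The calculation above can be sketched in a few lines of Python (the function name is illustrative, not part of the Benchmark codebase):

```python
def benchmark_score(tpr, fpr):
    """Youden index J = sensitivity + specificity - 1, scaled to 0-100.

    A negative result means the tool scored below the random-guessing line.
    """
    sensitivity = tpr          # True Positive Rate
    specificity = 1.0 - fpr    # 1 minus the False Positive Rate
    return (sensitivity + specificity - 1.0) * 100.0

# The worked example above: TPR = 0.98 and FPR = 0.05 give a score of about 93.
print(benchmark_score(0.98, 0.05))
```

Note that the scaled form reduces to (TPR - FPR) * 100, which is exactly the "distance from the guessing line" described below the diagram.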
&lt;br /&gt;
==Benchmark Validity==&lt;br /&gt;
&lt;br /&gt;
The Benchmark tests are not exactly like real applications. The tests are derived from coding patterns observed in real applications, but the majority of them are considerably '''simpler''' than real applications. That is, most real world applications will be considerably harder to successfully analyze than the OWASP Benchmark Test Suite. Although the tests are based on real code, it is possible that some tests may have coding patterns that don't occur frequently in real code.&lt;br /&gt;
&lt;br /&gt;
Remember, we are trying to test the capabilities of the tools and make them explicit, so that users can make informed decisions about what tools to use, how to use them, and what results to expect.  This is exactly aligned with the OWASP mission to make application security visible.&lt;br /&gt;
&lt;br /&gt;
==Generating Benchmark Scores==&lt;br /&gt;
&lt;br /&gt;
Anyone can use this Benchmark to evaluate vulnerability detection tools. The basic steps are:&lt;br /&gt;
# Download the Benchmark from GitHub&lt;br /&gt;
# Run your tools against the Benchmark&lt;br /&gt;
# Run the BenchmarkScore tool on the reports from your tools&lt;br /&gt;
&lt;br /&gt;
That's it!&lt;br /&gt;
&lt;br /&gt;
Full details on how to do this are at the bottom of the page on the Quick_Start tab.&lt;br /&gt;
&lt;br /&gt;
We encourage vendors, open source tool developers, and end users to verify their application security tools against the Benchmark. To ensure that results are fair and useful, we ask that you follow a few simple rules when publishing results; we won't recognize any results that aren't easily reproducible. Published results must include:&lt;br /&gt;
&lt;br /&gt;
# A description of the default “out-of-the-box” installation, version numbers, etc…&lt;br /&gt;
# Any and all configuration, tailoring, onboarding, etc… performed to make the tool run&lt;br /&gt;
# Any and all changes to default security rules, tests, or checks used to achieve the results&lt;br /&gt;
# Easily reproducible steps to run the tool&lt;br /&gt;
&lt;br /&gt;
== Reporting Format==&lt;br /&gt;
&lt;br /&gt;
The Benchmark includes tools to interpret raw tool output, compare it to the expected results, and generate summary charts and graphs. We use the following table format in order to capture all the information generated during the evaluation.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Security Category&lt;br /&gt;
! TP&lt;br /&gt;
! FN&lt;br /&gt;
! TN&lt;br /&gt;
! FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total&lt;br /&gt;
! TPR&lt;br /&gt;
! FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Score&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| General security category for test cases.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positives''': Tests with real vulnerabilities that were correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Negatives''': Tests with real vulnerabilities that were not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Negatives''': Tests with fake vulnerabilities that were correctly not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positives''': Tests with fake vulnerabilities that were incorrectly reported as vulnerable by the tool.&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Total test cases for this category.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positive Rate''': TP / ( TP + FN ) - Also referred to as Recall (sensitivity), as defined at [https://en.wikipedia.org/wiki/Precision_and_recall Wikipedia].&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive Rate''': FP / ( FP + TN ).&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Normalized distance from the “guessing” line: TPR - FPR.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Command Injection&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Etc...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | &lt;br /&gt;
! Total TP&lt;br /&gt;
! Total FN&lt;br /&gt;
! Total TN&lt;br /&gt;
! Total FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total TC&lt;br /&gt;
! Average TPR&lt;br /&gt;
! Average FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Average Score&lt;br /&gt;
|}&lt;br /&gt;
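To make the row calculations concrete, here is a minimal Python sketch that derives the TPR, FPR, and Score columns from the four raw counts (the function and field names are illustrative, not part of the Benchmark tooling):

```python
def category_metrics(tp, fn, tn, fp):
    """Derive the per-category columns of the reporting table from raw counts."""
    tpr = tp / (tp + fn)        # True Positive Rate (recall)
    fpr = fp / (fp + tn)        # False Positive Rate
    return {
        "total": tp + fn + tn + fp,
        "tpr": tpr,
        "fpr": fpr,
        "score": (tpr - fpr) * 100.0,  # normalized distance from the guessing line
    }

# e.g. 40 TP, 10 FN, 45 TN, 5 FP gives TPR = 0.8, FPR = 0.1, Score = 70
print(category_metrics(40, 10, 45, 5))
```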
&lt;br /&gt;
==Code Repo and Build/Run Instructions ==&lt;br /&gt;
&lt;br /&gt;
See the '''Getting Started''' and '''Getting, Building, and Running the Benchmark''' sections on the Quick Start tab.&lt;br /&gt;
&lt;br /&gt;
==Licensing==&lt;br /&gt;
&lt;br /&gt;
The OWASP Benchmark is free to use under the [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0].&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
[https://lists.owasp.org/mailman/listinfo/owasp-benchmark-project OWASP Benchmark Mailing List]&lt;br /&gt;
&lt;br /&gt;
== Project Leaders ==&lt;br /&gt;
&lt;br /&gt;
[https://www.owasp.org/index.php/User:Wichers Dave Wichers] [mailto:dave.wichers@owasp.org @]&lt;br /&gt;
&lt;br /&gt;
== Project References ==&lt;br /&gt;
* [https://www.mir-swamp.org/#packages/public Software Assurance Marketplace (SWAMP) - set of curated packages to test tools against]&lt;br /&gt;
* [http://samate.nist.gov/Other_Test_Collections.html SAMATE List of Test Collections]&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
* [http://samate.nist.gov/SARD/testsuite.php NSA's Juliet for Java]&lt;br /&gt;
* [http://sectoolmarket.com/ The Web Application Vulnerability Scanner Evaluation Project (WAVSEP)]&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;padding-left:25px;width:200px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
All test code and project files can be downloaded from [https://github.com/OWASP/benchmark OWASP GitHub].&lt;br /&gt;
&lt;br /&gt;
== Project Intro Video ==&lt;br /&gt;
&lt;br /&gt;
[[File:BenchmarkPodcastTitlePage.jpg|200px|link=https://www.youtube.com/watch?v=HQP8dwc3jJA&amp;amp;index=5&amp;amp;list=PLGB2s-U5FSWOmEStMt3JqlMFJvRYqeVW5]]&lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
* LOOKING FOR VOLUNTEERS!! - We are looking for individuals and organizations to join and make this a much more community-driven project, including additional co-leaders to help take this project to the next level. Contributors could work on things like new test cases, additional tool scorecard generators, support for languages beyond Java, and a host of other improvements. Please contact [mailto:dave.wichers@owasp.org me] if you are interested in contributing at any level.&lt;br /&gt;
* June 5, 2016 - Benchmark Version 1.2 Released&lt;br /&gt;
* Sep 24, 2015 - Benchmark introduced to broader OWASP community at [https://appsecusa2015.sched.org/event/3r9k/using-the-owasp-benchmark-to-assess-automated-vulnerability-analysis-tools AppSec USA]&lt;br /&gt;
* Aug 27, 2015 - U.S. Dept. of Homeland Security (DHS) is financially supporting the Benchmark project.&lt;br /&gt;
* Aug 15, 2015 - Benchmark Version 1.2beta Released with full DAST Support. Checkmarx and ZAP scorecard generators also released.&lt;br /&gt;
* July 10, 2015 - Benchmark Scorecard generator and open source scorecards released&lt;br /&gt;
* May 23, 2015 - Benchmark Version 1.1 Released&lt;br /&gt;
* April 15, 2015 - Benchmark Version 1.0 Released&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; rowspan=&amp;quot;2&amp;quot;| [[File:Owasp-incubator-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Incubator_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Test Cases =&lt;br /&gt;
&lt;br /&gt;
Version 1.0 of the Benchmark was published on April 15, 2015 and had 20,983 test cases. On May 23, 2015, version 1.1 of the Benchmark was released. The 1.1 release improves on the previous version by making sure that there are both true positives and false positives in every vulnerability area. Version 1.2 was released on June 5, 2016 (and the 1.2beta August 15, 2015).&lt;br /&gt;
&lt;br /&gt;
Version 1.2 and later of the Benchmark is a fully executable web application, which means it can be scanned by any kind of vulnerability detection tool. The 1.2 release is limited to slightly fewer than 3,000 test cases to make it easier for DAST tools to scan (so scans don't take as long, run out of memory, or blow up the size of the tool's database). The 1.2 release covers the same vulnerability areas as 1.1; we added a few Spring database SQL Injection tests, but that's it. The bulk of the work was turning each test case into something that actually runs correctly AND is fully exploitable, and then generating a working UI on top of it to turn the test cases into a real running application.&lt;br /&gt;
&lt;br /&gt;
You can still download Benchmark version 1.1 by cloning the release marked with the Git tag '1.1'.&lt;br /&gt;
&lt;br /&gt;
The test case areas and quantities for the Benchmark releases are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Vulnerability Area&lt;br /&gt;
! # of Tests in v1.1&lt;br /&gt;
! # of Tests in v1.2&lt;br /&gt;
! CWE Number&lt;br /&gt;
|-&lt;br /&gt;
| [[Command Injection]]&lt;br /&gt;
| 2708&lt;br /&gt;
| 251&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/78.html 78]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Cryptography&lt;br /&gt;
| 1440&lt;br /&gt;
| 246&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/327.html 327]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Hashing&lt;br /&gt;
| 1421&lt;br /&gt;
| 236&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/328.html 328]&lt;br /&gt;
|-&lt;br /&gt;
| [[LDAP injection | LDAP Injection]]&lt;br /&gt;
| 736&lt;br /&gt;
| 59&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/90.html 90]&lt;br /&gt;
|-&lt;br /&gt;
| [[Path Traversal]]&lt;br /&gt;
| 2630&lt;br /&gt;
| 268&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/22.html 22]&lt;br /&gt;
|-&lt;br /&gt;
| Secure Cookie Flag&lt;br /&gt;
| 416&lt;br /&gt;
| 67&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/614.html 614]&lt;br /&gt;
|-&lt;br /&gt;
| [[SQL Injection]]&lt;br /&gt;
| 3529&lt;br /&gt;
| 504&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/89.html 89]&lt;br /&gt;
|-&lt;br /&gt;
| [[Trust Boundary Violation]]&lt;br /&gt;
| 725&lt;br /&gt;
| 126&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/501.html 501]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Randomness&lt;br /&gt;
| 3640&lt;br /&gt;
| 493&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/330.html 330]&lt;br /&gt;
|-&lt;br /&gt;
| [[XPATH Injection]]&lt;br /&gt;
| 347&lt;br /&gt;
| 35&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/643.html 643]&lt;br /&gt;
|-&lt;br /&gt;
| [[XSS]] (Cross-Site Scripting)&lt;br /&gt;
| 3449&lt;br /&gt;
| 455&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/79.html 79]&lt;br /&gt;
|-&lt;br /&gt;
| Total Test Cases&lt;br /&gt;
| 21,041&lt;br /&gt;
| 2,740&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Each Benchmark version comes with a spreadsheet that lists every test case, the vulnerability category, the CWE number, and the expected result (true finding/false positive). Look for the file: expectedresults-VERSION#.csv in the project root directory.&lt;br /&gt;
&lt;br /&gt;
Every test case is:&lt;br /&gt;
* a servlet or JSP (currently they are all servlets, but we plan to add JSPs)&lt;br /&gt;
* either a true vulnerability or a false positive for a single issue&lt;br /&gt;
&lt;br /&gt;
The Benchmark is intended to help determine how well analysis tools correctly analyze a broad array of application and framework behavior, including:&lt;br /&gt;
&lt;br /&gt;
* HTTP request and response problems&lt;br /&gt;
* Simple and complex data flow&lt;br /&gt;
* Simple and complex control flow&lt;br /&gt;
* Popular frameworks&lt;br /&gt;
* Inversion of control&lt;br /&gt;
* Reflection&lt;br /&gt;
* Class loading&lt;br /&gt;
* Annotations&lt;br /&gt;
* Popular UI technologies (particularly JavaScript frameworks)&lt;br /&gt;
&lt;br /&gt;
Not all of these are yet tested by the Benchmark, but future enhancements are intended to provide more coverage of these areas.&lt;br /&gt;
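&lt;br /&gt;
To illustrate one of these areas (reflection), here is a hypothetical sketch, not an actual Benchmark test case, of the kind of reflective data flow an analysis tool has to follow; all class and method names here are invented:&lt;br /&gt;
&lt;br /&gt;
```java
import java.lang.reflect.Method;

// Hypothetical sketch of a reflective data flow: a taint-tracking tool
// must follow the value through Method.invoke to reach the "sink" method.
public class ReflectiveFlow {

    // Stand-in "sink": in a real test case this might build an LDAP
    // filter or a SQL query from its argument.
    public static String sink(String value) {
        return "query=" + value;
    }

    // The tainted value reaches the sink only via reflection.
    public static String routeViaReflection(String tainted) {
        try {
            Method m = ReflectiveFlow.class.getMethod("sink", String.class);
            return (String) m.invoke(null, tainted);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(routeViaReflection("attacker-input"));
    }
}
```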
&lt;br /&gt;
Additional future enhancements could cover:&lt;br /&gt;
* All vulnerability types in the [[Top10 | OWASP Top 10]]&lt;br /&gt;
* Does the tool find flaws in libraries?&lt;br /&gt;
* Does the tool find flaws spanning custom code and libraries?&lt;br /&gt;
* Does the tool handle web services? REST, XML, GWT, etc.&lt;br /&gt;
* Does the tool work with different app servers? Java platforms?&lt;br /&gt;
&lt;br /&gt;
== Example Test Case ==&lt;br /&gt;
&lt;br /&gt;
Each test case is a simple Java EE servlet. BenchmarkTest00001 in version 1.0 of the Benchmark was an LDAP Injection test with the following metadata in the accompanying BenchmarkTest00001.xml file:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;test-metadata&amp;gt;&lt;br /&gt;
    &amp;lt;category&amp;gt;ldapi&amp;lt;/category&amp;gt;&lt;br /&gt;
    &amp;lt;test-number&amp;gt;00001&amp;lt;/test-number&amp;gt;&lt;br /&gt;
    &amp;lt;vulnerability&amp;gt;true&amp;lt;/vulnerability&amp;gt;&lt;br /&gt;
    &amp;lt;cwe&amp;gt;90&amp;lt;/cwe&amp;gt;&lt;br /&gt;
  &amp;lt;/test-metadata&amp;gt;&lt;br /&gt;
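&lt;br /&gt;
As an illustration, metadata like the above can be read with nothing more than the JDK's built-in DOM parser; this helper is a sketch and not part of the Benchmark itself:&lt;br /&gt;
&lt;br /&gt;
```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sketch: reading a field from per-test metadata with the JDK's DOM parser.
// The XML is passed as a string here for illustration; in the project each
// test case has its own BenchmarkTestNNNNN.xml file next to the .java file.
public class TestMetadataReader {
    public static String readTag(String xml, String tag) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName(tag).item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String xml = "<test-metadata>"
                + "<category>ldapi</category>"
                + "<test-number>00001</test-number>"
                + "<vulnerability>true</vulnerability>"
                + "<cwe>90</cwe>"
                + "</test-metadata>";
        System.out.println(readTag(xml, "category")
                + " / CWE-" + readTag(xml, "cwe"));
    }
}
```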
&lt;br /&gt;
BenchmarkTest00001.java in the OWASP Benchmark 1.0 simply reads in all the cookie values, looks for a cookie named &amp;quot;foo&amp;quot;, and uses the value of this cookie when performing an LDAP query. Here's the code for BenchmarkTest00001.java:&lt;br /&gt;
&lt;br /&gt;
  package org.owasp.benchmark.testcode;&lt;br /&gt;
  &lt;br /&gt;
  import java.io.IOException;&lt;br /&gt;
  &lt;br /&gt;
  import javax.servlet.ServletException;&lt;br /&gt;
  import javax.servlet.annotation.WebServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
  import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
  &lt;br /&gt;
  @WebServlet(&amp;quot;/BenchmarkTest00001&amp;quot;)&lt;br /&gt;
  public class BenchmarkTest00001 extends HttpServlet {&lt;br /&gt;
  	&lt;br /&gt;
  	private static final long serialVersionUID = 1L;&lt;br /&gt;
  	&lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		doPost(request, response);&lt;br /&gt;
  	}&lt;br /&gt;
  &lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		// some code&lt;br /&gt;
  &lt;br /&gt;
  		javax.servlet.http.Cookie[] cookies = request.getCookies();&lt;br /&gt;
  		&lt;br /&gt;
  		String param = null;&lt;br /&gt;
  		boolean foundit = false;&lt;br /&gt;
  		if (cookies != null) {&lt;br /&gt;
  			for (javax.servlet.http.Cookie cookie : cookies) {&lt;br /&gt;
  				if (cookie.getName().equals(&amp;quot;foo&amp;quot;)) {&lt;br /&gt;
  					param = cookie.getValue();&lt;br /&gt;
  					foundit = true;&lt;br /&gt;
  				}&lt;br /&gt;
  			}&lt;br /&gt;
  			if (!foundit) {&lt;br /&gt;
  				// no cookie found in collection&lt;br /&gt;
  				param = &amp;quot;&amp;quot;;&lt;br /&gt;
  			}&lt;br /&gt;
  		} else {&lt;br /&gt;
  			// no cookies&lt;br /&gt;
  			param = &amp;quot;&amp;quot;;&lt;br /&gt;
  		}&lt;br /&gt;
  		&lt;br /&gt;
  		try {&lt;br /&gt;
  			javax.naming.directory.DirContext dc = org.owasp.benchmark.helpers.Utils.getDirContext();&lt;br /&gt;
  			Object[] filterArgs = {&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;};&lt;br /&gt;
  			dc.search(&amp;quot;name&amp;quot;, param, filterArgs, new javax.naming.directory.SearchControls());&lt;br /&gt;
  		} catch (javax.naming.NamingException e) {&lt;br /&gt;
  			throw new ServletException(e);&lt;br /&gt;
  		}&lt;br /&gt;
  	}&lt;br /&gt;
  }&lt;br /&gt;
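&lt;br /&gt;
As a side note on the vulnerability itself: the cookie value is used directly as the LDAP search filter expression, so an attacker can inject filter metacharacters. Below is a minimal sketch of RFC 4515-style escaping that would neutralize such input; this helper is illustrative and not part of the Benchmark:&lt;br /&gt;
&lt;br /&gt;
```java
// Illustrative helper: RFC 4515-style escaping of user input before it is
// placed inside an LDAP search filter. Not part of the Benchmark itself.
public class LdapFilterEscape {
    public static String escape(String value) {
        StringBuilder sb = new StringBuilder();
        for (char c : value.toCharArray()) {
            switch (c) {
                case '\\': sb.append("\\5c"); break;
                case '*':  sb.append("\\2a"); break;
                case '(':  sb.append("\\28"); break;
                case ')':  sb.append("\\29"); break;
                case '\0': sb.append("\\00"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // An attacker-supplied cookie value that tries to widen the filter:
        String cookie = "*)(uid=*";
        System.out.println("(cn=" + cookie + ")");          // injectable
        System.out.println("(cn=" + escape(cookie) + ")");  // neutralized
    }
}
```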
&lt;br /&gt;
= Test Case Details =&lt;br /&gt;
&lt;br /&gt;
The following describes situations in the Benchmark where the validity/accuracy of the test cases has come up for debate.&lt;br /&gt;
&lt;br /&gt;
== Cookies as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 and early versions of the 1.2beta included test cases that used cookies as a source of data that flowed into XSS vulnerabilities. The Benchmark treated these tests as False Positives because the Benchmark team figured that you'd have to use an XSS vulnerability in the first place to set the cookie value, so it wasn't reasonable to consider an XSS vulnerability whose data source was a cookie value as actually exploitable. However, we got feedback from the makers of some tools, like Fortify, Burp, and Arachni, who disagreed with this analysis and felt that cookies are in fact a valid source of attack for XSS vulnerabilities. Given that there are good arguments on both sides of this safe vs. unsafe question, we decided on Aug 25, 2015 to simply remove those test cases from the Benchmark. If, in the future, we decide who is right, we may add such test cases back in.&lt;br /&gt;
&lt;br /&gt;
== Headers as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Similarly, the Benchmark team believes that the names of headers aren't a valid source of XSS attack, for the same reason we thought cookie values aren't: setting them would require an XSS vulnerability to have been exploited in the first place. In fact, we feel this argument is even stronger for header names than for cookie values. Right now, the Benchmark doesn't include any header names as sources for XSS test cases, but we plan to add them, and mark them as false positives in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
The Benchmark does use header values as sources for some XSS test cases, but only 'referer' is treated as a valid XSS source (i.e., a true positive), because other headers are not viable XSS sources. Other headers are, of course, valid sources for other attack vectors, like SQL Injection or Command Injection.&lt;br /&gt;
&lt;br /&gt;
== False Positive Scenario: Static Values Passed to Unsafe (Weak) Sinks ==&lt;br /&gt;
&lt;br /&gt;
The Benchmark has MANY test cases where unsafe data flows in from the browser, but that data is replaced with static content as it goes through the propagators in that specific test case. This static (safe) data then flows to the sink, which may be a weak/unsafe sink, such as an unsafely constructed SQL statement. The Benchmark treats those test cases as false positives because there is absolutely no way for that weakness to be exploited. The NSA Juliet SAST benchmark treats such test cases exactly the same way, as false positives. We do recognize that there are weaknesses in those test cases, even though they aren't exploitable.&lt;br /&gt;
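&lt;br /&gt;
A hypothetical sketch of this pattern (invented names, not an actual Benchmark test case): tainted input enters, a propagator overwrites it with a static value, and only the static value ever reaches the weak sink:&lt;br /&gt;
&lt;br /&gt;
```java
// Sketch of the "static value to weak sink" false positive pattern: the
// tainted input is discarded by a propagator, so the weak sink below is
// a weakness but not an exploitable vulnerability.
public class StaticValueFalsePositive {

    // "Propagator" that replaces the tainted value with static content.
    public static String propagate(String tainted) {
        String bar = "safe!";  // static replacement; taint does not survive
        return bar;
    }

    // Weak sink: unsafe string concatenation into a SQL statement.
    public static String buildQuery(String value) {
        return "SELECT * FROM users WHERE name='" + value + "'";
    }

    public static void main(String[] args) {
        String tainted = "' OR '1'='1";  // attacker-controlled input
        // The query is the same safe constant no matter what came in.
        System.out.println(buildQuery(propagate(tainted)));
    }
}
```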
&lt;br /&gt;
Some SAST tool vendors feel it is appropriate to point out those weaknesses, and that's fine. However, if a tool points those weaknesses out and does not distinguish them from truly exploitable vulnerabilities, then the Benchmark treats those findings as false positives. If the tool allows a user to differentiate these non-exploitable weaknesses from exploitable vulnerabilities, then the Benchmark scorecard generator can use that information to filter out these extra findings (along with any other similarly marked findings) so they don't count against that tool when calculating its Benchmark score. In the real world, it's important for analysts to be able to filter out such findings if they only have time to deal with the most critical, actually exploitable, vulnerabilities. If a tool doesn't make it easy for an analyst to distinguish the two situations, it is doing the analyst a disservice.&lt;br /&gt;
&lt;br /&gt;
This issue doesn't affect DAST tools, as they only report what appears to them to be exploitable.&lt;br /&gt;
&lt;br /&gt;
If you are a SAST tool vendor or user, you believe the Benchmark scorecard generator is counting such findings against a tool, and there is a way to tell them apart, please let the project team know so the scorecard generator can be adjusted to not count those findings against the tool. The Benchmark project's goal is to generate the fairest and most accurate results it can. If such an adjustment is made to how a scorecard is generated for a tool, we plan to document that this was done, and explain how others could perform the same filtering within that tool in order to get the same focused set of results.&lt;br /&gt;
&lt;br /&gt;
== Dead Code ==&lt;br /&gt;
&lt;br /&gt;
Some SAST tools point out weaknesses in dead code because such code might eventually end up being used, and it can serve as a bad coding example (think cut/paste of code). We think this is fine/appropriate. However, there is no dead code in the OWASP Benchmark (at least not intentionally), so dead code should not be causing any tool to report unnecessary false positives.&lt;br /&gt;
&lt;br /&gt;
= Tool Support/Results =&lt;br /&gt;
&lt;br /&gt;
The results for 5 free tools, PMD, FindBugs, FindBugs with the FindSecBugs plugin, SonarQube, and OWASP ZAP, are available here against version 1.2 of the Benchmark: https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html. We've included multiple versions of FindSecBugs' and ZAP's results so you can see the improvements they are making in finding vulnerabilities in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
We have Benchmark results for all of the following tools, but haven't publicly released the results for any commercial tools. However, we have included a 'Commercial Average' page, which includes a summary of results for 6 commercial SAST tools, along with anonymized versions of each SAST tool's scorecard.&lt;br /&gt;
&lt;br /&gt;
The Benchmark can generate results for the following tools: &lt;br /&gt;
&lt;br /&gt;
'''Free Static Application Security Testing (SAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://pmd.github.io/ PMD] (which really has no security rules) - .xml results file&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - .xml results file (Note: FindBugs' successor, SpotBugs, is now at: https://spotbugs.github.io/)&lt;br /&gt;
* FindBugs with the [http://find-sec-bugs.github.io/ FindSecurityBugs plugin] - .xml results file&lt;br /&gt;
* [https://www.sonarqube.org/downloads/ SonarQube] - .xml results file&lt;br /&gt;
&lt;br /&gt;
Note: We looked into supporting [http://checkstyle.sourceforge.net/ Checkstyle] but it has no security rules, just like PMD. The [http://fb-contrib.sourceforge.net/ fb-contrib] FindBugs plugin doesn't have any security rules either. We did test [http://errorprone.info/bugpattern/InsecureCipherMode Error Prone], and found that it does report some use of insecure ciphers (CWE-327), but that's it.&lt;br /&gt;
&lt;br /&gt;
'''Commercial SAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.castsoftware.com/products/application-intelligence-platform CAST Application Intelligence Platform (AIP)] - .xml results file&lt;br /&gt;
* [https://www.checkmarx.com/products/static-application-security-testing/ Checkmarx CxSAST] - .xml results file&lt;br /&gt;
* [https://www.synopsys.com/content/dam/synopsys/sig-assets/datasheets/SAST-Coverity-datasheet.pdf Synopsys Static Analysis (Formerly Coverity Code Advisor) (On-Demand and stand-alone versions)] - .json results file (You can scan Benchmark w/Coverity for free. See: https://scan.coverity.com/)&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/static-code-analysis-sast/overview Micro Focus (Formerly HPE) Fortify (On-Demand and stand-alone versions)] - .fpr results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source IBM AppScan Source (Standalone and Cloud)] - .ozasmt or .xml results file&lt;br /&gt;
* [https://juliasoft.com/solutions/julia-for-security/ Julia Analyzer] - .xml results file&lt;br /&gt;
* [https://www.parasoft.com/products/jtest/ Parasoft Jtest] - .xml results file&lt;br /&gt;
* [https://www.shiftleft.io/product/ ShiftLeft SAST] - .sl results file (Benchmark specific format. Ask vendor how to generate this)&lt;br /&gt;
* [https://www.sourcemeter.com/features/ SourceMeter] - .txt results file of ALL results from VulnerabilityHunter&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] - .xml results file&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode SAST] - .xml results file&lt;br /&gt;
* [https://www.rigs-it.com/xanitizer/ XANITIZER] - .xml results file ([https://www.rigs-it.com/wp-content/uploads/2018/03/howtosetupxanitizerforowaspbenchmarkproject.pdf Their white paper on how to set up Xanitizer to scan the Benchmark.]) (Free trial available)&lt;br /&gt;
&lt;br /&gt;
We are looking for results from other commercial static analysis tools like: [http://www.grammatech.com/codesonar Grammatech CodeSonar], [http://www.klocwork.com/products-services/klocwork Klocwork], etc. If you have a license for any static analysis tool not already listed above and can run it against the Benchmark, sending us the results file would be very helpful.&lt;br /&gt;
&lt;br /&gt;
The free SAST tools come bundled with the Benchmark so you can run them yourself. If you have a license for any commercial SAST tool, you can also run it against the Benchmark. Just put your results files in the /results folder of the project, and then run the BenchmarkScore script for your platform (.sh / .bat); it will generate a scorecard in the /scorecard directory for every currently supported tool you have results for.&lt;br /&gt;
&lt;br /&gt;
'''Free Dynamic Application Security Testing (DAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
Note: While we support scorecard generators for these Free and Commercial DAST tools, we haven't been able to get a full/clean run against the Benchmark from most of these tools. As such, some of these scorecard generators might still need some work to properly reflect their results. If you notice any problems, let us know.&lt;br /&gt;
&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - .xml results file&lt;br /&gt;
** To generate .xml, run: ./bin/arachni_reporter &amp;quot;Your_AFR_Results_Filename.afr&amp;quot; --reporter=xml:outfile=Benchmark1.2-Arachni.xml&lt;br /&gt;
* [https://www.owasp.org/index.php/ZAP OWASP ZAP] - .xml results file. To generate a complete ZAP XML results file so you can generate a valid scorecard, make sure you:&lt;br /&gt;
** Tools &amp;gt; Options &amp;gt; Alerts - Set the Max alert instances to something like 500.&lt;br /&gt;
** Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
'''Commercial DAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.acunetix.com/vulnerability-scanner/ Acunetix Web Vulnerability Scanner (WVS)] - .xml results file (Generated using [https://www.acunetix.com/resources/wvs7manual.pdf command line interface (see Chapter 10.)] /ExportXML switch)&lt;br /&gt;
* [https://portswigger.net/burp/ Burp Pro] - .xml results file&lt;br /&gt;
**You must use Burp Pro v1.6.30+ to scan the Benchmark due to a limitation fixed in v1.6.30.&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/webinspect-dynamic-analysis-dast/overview Micro Focus (Formerly HPE) WebInspect] - .xml results file&lt;br /&gt;
* [http://www-03.ibm.com/software/products/en/appscan IBM AppScan (since sold off by IBM to HCL Technologies)] - .xml results file&lt;br /&gt;
* [https://www.netsparker.com/web-vulnerability-scanner/ Netsparker] - .xml results file&lt;br /&gt;
* [https://www.rapid7.com/products/appspider/ Rapid7 AppSpider] - .xml results file&lt;br /&gt;
&lt;br /&gt;
* Qualys - We ran Qualys against v1.2 of the Benchmark and, as far as we could tell, it found none of the vulnerabilities we test for, so we haven't implemented a scorecard generator for it. If you get results where you think it does find some real issues, send us the results file and, if confirmed, we'll produce a scorecard generator for it.&lt;br /&gt;
&lt;br /&gt;
If you have access to other DAST Tools, PLEASE RUN THEM FOR US against the Benchmark, and send us the results file so we can build a scorecard generator for that tool.&lt;br /&gt;
&lt;br /&gt;
'''Commercial Interactive Application Security Testing (IAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] - .zip results file (You can scan Benchmark w/Contrast for free. See: https://www.contrastsecurity.com/contrast-community-edition)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection (IAST)] - .hlg results file&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/security-testing/interactive-application-security-testing.html Seeker IAST] - .csv results file&lt;br /&gt;
&lt;br /&gt;
'''Commercial Hybrid Analysis Application Security Testing Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.iappsecure.com/products.html Fusion Lite Insight] - .xml results file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It may be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.'''&lt;br /&gt;
&lt;br /&gt;
The project has automated test harnesses for these vulnerability detection tools, so we can repeatably run the tools against each version of the Benchmark and automatically produce scorecards in our desired format.&lt;br /&gt;
&lt;br /&gt;
We want to test as many tools as possible against the Benchmark. If you are:&lt;br /&gt;
&lt;br /&gt;
* A tool vendor and want to participate in the project&lt;br /&gt;
* Someone who wants to help score a free tool against the project&lt;br /&gt;
* Someone who has a license to a commercial tool and the terms of the license allow you to publish tool results, and you want to participate&lt;br /&gt;
&lt;br /&gt;
please let [mailto:dave.wichers@owasp.org me] know!&lt;br /&gt;
&lt;br /&gt;
= Quick Start =&lt;br /&gt;
&lt;br /&gt;
==What is in the Benchmark?==&lt;br /&gt;
The Benchmark is a Java Maven project. Its primary component is thousands of test cases (e.g., BenchmarkTest00001.java), each of which is a single Java servlet that contains a single vulnerability (either a true positive or a false positive). The vulnerabilities currently span about a dozen different types and are expected to expand significantly in the future.&lt;br /&gt;
&lt;br /&gt;
An expectedresults.csv file is published with each version of the Benchmark (e.g., expectedresults-1.1.csv); it lists the expected result for each test case. Here’s what the first two rows of this file look like for version 1.1 of the Benchmark:&lt;br /&gt;
&lt;br /&gt;
 # test name		category	real vulnerability	CWE	Benchmark version: 1.1	2015-05-22&lt;br /&gt;
 BenchmarkTest00001	crypto		TRUE			327&lt;br /&gt;
&lt;br /&gt;
This simply means that the first test case is a crypto test case (use of weak cryptographic algorithms), that it is a real vulnerability (as opposed to a false positive), and that this issue maps to CWE 327. It also indicates this expected results file is for Benchmark version 1.1 (produced May 22, 2015). There is a row in this file for every test case in that version of the Benchmark. Each time a new version of the Benchmark is published, a corresponding new results file is generated, and each test case can be completely different from one version to the next.&lt;br /&gt;
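&lt;br /&gt;
For illustration, a row like the one above can be parsed with a few lines of Java. This sketch assumes the file is plain comma-separated text, as the .csv extension suggests, and is not the project's actual parser:&lt;br /&gt;
&lt;br /&gt;
```java
// Sketch: parsing one data row of an expectedresults-VERSION.csv file into
// the four fields described above: test name, category, whether it is a
// real vulnerability, and the CWE number. Assumes comma-separated values.
public class ExpectedResultRow {
    public final String testName;
    public final String category;
    public final boolean realVulnerability;
    public final int cwe;

    public ExpectedResultRow(String csvLine) {
        String[] parts = csvLine.split(",");
        this.testName = parts[0].trim();
        this.category = parts[1].trim();
        // Boolean.parseBoolean is case-insensitive, so "TRUE" works.
        this.realVulnerability = Boolean.parseBoolean(parts[2].trim());
        this.cwe = Integer.parseInt(parts[3].trim());
    }

    public static void main(String[] args) {
        ExpectedResultRow row =
            new ExpectedResultRow("BenchmarkTest00001,crypto,TRUE,327");
        System.out.println(row.testName + " -> CWE-" + row.cwe);
    }
}
```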
&lt;br /&gt;
The Benchmark also comes with a number of utilities, commands, and prepackaged open source security analysis tools, all of which can be executed through Maven goals, including:&lt;br /&gt;
&lt;br /&gt;
* Open source vulnerability detection tools to be run against the Benchmark&lt;br /&gt;
* A scorecard generator, which computes a scorecard for each of the tools you have results files for.&lt;br /&gt;
&lt;br /&gt;
==What Can You Do With the Benchmark?==&lt;br /&gt;
* Compile all the software in the Benchmark project (e.g., mvn compile)&lt;br /&gt;
* Run a static vulnerability analysis tool (SAST) against the Benchmark test case code&lt;br /&gt;
&lt;br /&gt;
* Scan a running version of the Benchmark with a dynamic application security testing tool (DAST)&lt;br /&gt;
** Instructions on how to run it are provided below&lt;br /&gt;
&lt;br /&gt;
* Generate scorecards for each of the tools you have results files for&lt;br /&gt;
** See the Tool Support/Results page for the list of tools the Benchmark supports generating scorecards for&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
Before downloading or using the Benchmark, make sure you have the following installed and configured properly:&lt;br /&gt;
&lt;br /&gt;
 GIT: http://git-scm.com/ or https://github.com/&lt;br /&gt;
 Maven: https://maven.apache.org/  (Version: 3.2.3 or newer works.)&lt;br /&gt;
 Java: http://www.oracle.com/technetwork/java/javase/downloads/index.html (Java 7 or 8) (64-bit)&lt;br /&gt;
&lt;br /&gt;
==Getting, Building, and Running the Benchmark==&lt;br /&gt;
&lt;br /&gt;
To download and build everything:&lt;br /&gt;
&lt;br /&gt;
 $ git clone https://github.com/OWASP/benchmark &lt;br /&gt;
 $ cd benchmark&lt;br /&gt;
 $ mvn compile   (This compiles it)&lt;br /&gt;
 $ runBenchmark.sh/.bat - This compiles and runs it.&lt;br /&gt;
&lt;br /&gt;
Then navigate to https://localhost:8443/benchmark/ to reach its home page. The app uses a self-signed SSL certificate, so you'll get a security warning when you hit the home page.&lt;br /&gt;
&lt;br /&gt;
Note: We have configured the Benchmark app to use up to 6 Gig of RAM, which it may need when it is being fully scanned by a DAST scanner. The DAST tool itself probably also requires 3+ Gig of RAM. As such, we recommend a 16 Gig machine if you are going to attempt a full DAST scan, and at least 4, or ideally 8, Gig if you are just going to play around with the running Benchmark app.&lt;br /&gt;
&lt;br /&gt;
== Using a VM instead ==&lt;br /&gt;
We have several preconstructed VMs, plus instructions on how to build one, that you can use instead:&lt;br /&gt;
&lt;br /&gt;
* Docker: A Dockerfile is checked into the project [https://github.com/OWASP/Benchmark/blob/master/VMs/Dockerfile here]. This Dockerfile should automatically produce a Docker VM with the latest Benchmark project files. After you have Docker installed, cd to /VMs and then run: &lt;br /&gt;
 ./buildDockerImage.sh --&amp;gt; This builds the Docker Benchmark VM (This will take a WHILE)&lt;br /&gt;
 docker images  --&amp;gt; You should see the new benchmark:latest image in the list provided&lt;br /&gt;
 # The Benchmark Docker Image only has to be created once. &lt;br /&gt;
&lt;br /&gt;
 To run the Benchmark in your Docker VM, just run:&lt;br /&gt;
   ./runDockerImage.sh  --&amp;gt; This pulls in any updates to Benchmark since the Image was built, builds everything, and starts a remotely accessible Benchmark web app.&lt;br /&gt;
 If successful, you should see this at the end:&lt;br /&gt;
   [INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]&lt;br /&gt;
   [INFO] Press Ctrl-C to stop the container...&lt;br /&gt;
 Then simply navigate to: https://localhost:8443/benchmark from the machine on which you are running Docker&lt;br /&gt;
 &lt;br /&gt;
 Or if you want to access from a different machine:&lt;br /&gt;
  docker-machine ls (in a different terminal) --&amp;gt; To get IP Docker VM is exporting (e.g., tcp://192.168.99.100:2376)&lt;br /&gt;
  Navigate to: https://192.168.99.100:8443/benchmark in your browser (using the above IP as an example)&lt;br /&gt;
&lt;br /&gt;
* Amazon Web Services (AWS) - Here's how you set up the Benchmark on an AWS VM:&lt;br /&gt;
 sudo yum install git&lt;br /&gt;
 sudo yum install maven&lt;br /&gt;
 sudo yum install mvn&lt;br /&gt;
 sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo yum install -y apache-maven&lt;br /&gt;
 git clone https://github.com/OWASP/benchmark&lt;br /&gt;
 cd benchmark&lt;br /&gt;
 chmod 755 *.sh&lt;br /&gt;
 ./runBenchmark.sh -- to run it locally on the VM.&lt;br /&gt;
 ./runRemoteAccessibleBenchmark.sh -- to run it so it can be accessed outside the VM (on port 8443).&lt;br /&gt;
&lt;br /&gt;
==Running Free Static Analysis Tools Against the Benchmark==&lt;br /&gt;
There are scripts for running each of the free SAST vulnerability detection tools included with the Benchmark against the Benchmark test cases. On Linux, you might have to make them executable (e.g., chmod 755 *.sh) before you can run them.&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for PMD:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runPMD.sh (Linux) or .\scripts\runPMD.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindBugs.sh (Linux) or .\scripts\runFindBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs with the FindSecBugs plugin:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindSecBugs.sh (Linux) or .\scripts\runFindSecBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
In each case, the script will generate a results file and put it in the /results directory. For example: &lt;br /&gt;
&lt;br /&gt;
 Benchmark_1.2-findbugs-v3.0.1-1026.xml&lt;br /&gt;
&lt;br /&gt;
This results file name is carefully constructed to convey the following: it's a results file for OWASP Benchmark version 1.2; FindBugs was the analysis tool; it was version 3.0.1 of FindBugs; and the analysis took 1026 seconds to run.&lt;br /&gt;
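&lt;br /&gt;
For illustration only, here is a sketch of how those pieces could be pulled out of such a filename with a regular expression; this is not the project's actual parser:&lt;br /&gt;
&lt;br /&gt;
```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extracting the Benchmark version, tool name, tool version, and
// scan time (in seconds) from a results file name following the naming
// convention described above.
public class ResultsFilename {
    private static final Pattern NAME = Pattern.compile(
        "Benchmark_([\\d.]+)-(\\w+)-v([\\d.]+)-(\\d+)\\.\\w+");

    public static String[] parse(String filename) {
        Matcher m = NAME.matcher(filename);
        if (!m.matches()) {
            throw new IllegalArgumentException("unexpected name: " + filename);
        }
        // { benchmarkVersion, toolName, toolVersion, scanSeconds }
        return new String[] { m.group(1), m.group(2), m.group(3), m.group(4) };
    }

    public static void main(String[] args) {
        String[] parts = parse("Benchmark_1.2-findbugs-v3.0.1-1026.xml");
        System.out.println("Benchmark " + parts[0] + ", " + parts[1]
                + " v" + parts[2] + ", " + parts[3] + " seconds");
    }
}
```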
&lt;br /&gt;
NOTE: If you create a results file yourself, by running a commercial tool for example, you can add the version # and the compute time to the filename just like this and the Benchmark Scorecard generator will pick this information up and include it in the generated scorecard. If you don't, depending on what metadata is included in the tool results, the Scorecard generator might do this automatically anyway.&lt;br /&gt;
&lt;br /&gt;
==Generating Scorecards==&lt;br /&gt;
The scorecard generation application, BenchmarkScore, is included with the Benchmark. It parses the output files generated by any of the supported security tools run against the Benchmark, compares them against the expected results, and produces a set of web pages that detail the accuracy and speed of the tools involved. For the list of currently supported tools, check out the Tool Support/Results tab. If you are using a tool that is not yet supported, simply send us a results file from that tool and we'll write a parser for it and add it to the supported tools list.&lt;br /&gt;
&lt;br /&gt;
The following command will compute a Benchmark scorecard for all the results files in the '''/results''' directory. The generated scorecard is put into the '''/scorecard''' directory.&lt;br /&gt;
&lt;br /&gt;
 createScorecard.sh (Linux) or createScorecard.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
An example of a real scorecard for some open source tools is provided at the top of the Tool Support/Results tab so you can see what one looks like.&lt;br /&gt;
&lt;br /&gt;
We recommend including the Benchmark version number in any results file name, in order to help prevent mismatches between the expected results and the actual results files.  A tool will not score well against the wrong expected results.&lt;br /&gt;
&lt;br /&gt;
===Customizing Your Scorecard Generation===&lt;br /&gt;
&lt;br /&gt;
The createScorecard scripts are very simple; they contain only one line. Here's what the 1.2 version looks like:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.2.csv results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This Maven command simply runs the BenchmarkScore application, passing in two parameters. The first is the Benchmark expected results file to compare the tool results against. The second is the name of the directory that contains all the results from tools run against that version of the Benchmark. If you have tool results older than the current version of the Benchmark, 1.1 results for example, then you would do something like this instead:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To keep things organized, we actually put the expected results file inside the same results folder for that version of the Benchmark, so our command looks like this:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;1.1_results/expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In all cases, the generated scorecard is put in the /scorecard folder.&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It is likely to be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.''' It is for just this reason that the Benchmark project isn't releasing such results itself.&lt;br /&gt;
&lt;br /&gt;
= Tool Scanning Tips =&lt;br /&gt;
&lt;br /&gt;
People frequently have difficulty scanning the Benchmark with various tools for many reasons, including the size of the Benchmark application and its codebase, and the complexity of the tools used. Here is some guidance for some of the tools we have used to scan the Benchmark. If you've learned any tricks for getting better or easier results from a particular tool against the Benchmark, let us know or update this page directly.&lt;br /&gt;
&lt;br /&gt;
== Generic Tips ==&lt;br /&gt;
&lt;br /&gt;
Because of the size of the Benchmark, you may need to give your tool more memory before it starts the scan. If it's a Java-based tool, you may want to pass more memory to it like this:&lt;br /&gt;
&lt;br /&gt;
 -Xmx4G (This gives the Java application 4 GB of memory)&lt;br /&gt;
&lt;br /&gt;
== SAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Checkmarx ===&lt;br /&gt;
&lt;br /&gt;
The Checkmarx SAST tool (CxSAST) is ready to scan the OWASP Benchmark out of the box. &lt;br /&gt;
Please note that the OWASP Benchmark “hides” some vulnerabilities in dead code areas, for example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
if (0&amp;gt;1)&lt;br /&gt;
{&lt;br /&gt;
  //vulnerable code&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
By default, CxSAST will find these vulnerabilities since Checkmarx believes that including dead code in the scan results is a SAST best practice. &lt;br /&gt;
&lt;br /&gt;
Checkmarx's experience shows that security experts expect to find these types of code vulnerabilities and demand that their developers fix them. However, the OWASP Benchmark considers the flagging of these vulnerabilities as False Positives, thereby lowering Checkmarx's overall score. &lt;br /&gt;
&lt;br /&gt;
Therefore, to receive an OWASP score untainted by dead code, reconfigure CxSAST as follows:&lt;br /&gt;
# Open the CxAudit client for editing Java queries.&lt;br /&gt;
# Override the &amp;quot;Find_Dead_Code&amp;quot; query.&lt;br /&gt;
# Add the commented text of the original query to the new override query.&lt;br /&gt;
# Save the queries.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./script/runFindBugs.(sh or bat). If you want to run a different version of FindBugs, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs with FindSecBugs ===&lt;br /&gt;
&lt;br /&gt;
[http://h3xstream.github.io/find-sec-bugs/ FindSecurityBugs] is a great plugin for FindBugs that significantly increases FindBugs' ability to find security issues. We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./script/runFindSecBugs.(sh or bat). If you want to run a different version of FindSecBugs, just change the version number of the findsecbugs-plugin artifact in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Micro Focus (Formerly HP) Fortify ===&lt;br /&gt;
&lt;br /&gt;
If you are using the Audit Workbench, you can give it more memory and make sure you invoke it in 64-bit mode by doing this:&lt;br /&gt;
&lt;br /&gt;
  set AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  export AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  auditworkbench -64&lt;br /&gt;
&lt;br /&gt;
We found it easier to use Fortify's Maven support to scan the Benchmark, and to do it in two phases: translate, then scan. We did something like this:&lt;br /&gt;
&lt;br /&gt;
  Translate Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_17.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx2G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:clean&lt;br /&gt;
  mvn sca:translate&lt;br /&gt;
&lt;br /&gt;
  Scan Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_4.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx10G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:scan&lt;br /&gt;
&lt;br /&gt;
=== PMD ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./script/runPMD.(sh or bat). If you want to run a different version of PMD, just change its version number in the Benchmark pom.xml file. (NOTE: PMD doesn't find any security issues. We include it because it's interesting to know that it doesn't.)&lt;br /&gt;
&lt;br /&gt;
=== SonarQube ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's mostly dialed in, but it's a bit tricky because SonarQube has two parts: a standalone scanner for Java, and a web application that accepts the scanner's results and can in turn produce the results file required by the Benchmark scorecard generator for SonarQube. Running the script runSonarQube.(sh or bat) will generate the results, but if the SonarQube web application isn't running where the runSonarQube script expects it to be, the script will fail.&lt;br /&gt;
&lt;br /&gt;
If you want to run a different version of SonarQube, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Xanitizer ===&lt;br /&gt;
&lt;br /&gt;
The vendor has written their own guide to [http://www.rigs-it.net/opendownloads/whitepapers/HowToSetUpXanitizerForOWASPBenchmarkProject.pdf How to Set Up Xanitizer for OWASP Benchmark].&lt;br /&gt;
&lt;br /&gt;
== DAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Burp Pro ===&lt;br /&gt;
&lt;br /&gt;
You must use Burp Pro v1.6.29 or greater to scan the Benchmark due to a previous limitation in Burp Pro related to ensuring the path attribute for cookies was honored. This issue was fixed in the v1.6.29 release.&lt;br /&gt;
&lt;br /&gt;
To scan, first spider the entire Benchmark, and then select the /Benchmark URL and actively scan that branch. You can skip all the .html pages and any other pages that Burp says have no parameters.&lt;br /&gt;
&lt;br /&gt;
NOTE: We have been unable to simply run Burp Pro against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get Burp Pro to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
=== OWASP ZAP ===&lt;br /&gt;
&lt;br /&gt;
ZAP may require additional memory to be able to scan the Benchmark. To configure the amount of memory:&lt;br /&gt;
* Tools --&amp;gt; Options --&amp;gt; JVM: Recommend setting to: -Xmx2048m (or larger). (Then restart ZAP).&lt;br /&gt;
&lt;br /&gt;
To run ZAP against Benchmark:&lt;br /&gt;
# Because Benchmark uses Cookies and Headers as sources of attack for many test cases: Tools --&amp;gt; Options --&amp;gt; Active Scan Input Vectors: Then check the HTTP Headers, All Requests, and Cookie Data checkboxes and hit OK&lt;br /&gt;
# Click on Show All Tabs button (if spider tab isn't visible)&lt;br /&gt;
# Go to Spider tab (the black spider) and click on New Scan button&lt;br /&gt;
# Enter: https://localhost:8443/benchmark/  into the 'Starting Point' box and hit 'Start Scan'&lt;br /&gt;
#* Do this again. For some reason it takes 2 passes with the Spider before it stops finding more Benchmark endpoints.&lt;br /&gt;
# When Spider completes, click on 'benchmark' folder in Site Map, right click and select: 'Attack --&amp;gt; Active Scan'&lt;br /&gt;
#* It will take several hours, roughly 3 or more, to complete (it's actually likely to simply freeze before completing the scan; see the NOTE below)&lt;br /&gt;
&lt;br /&gt;
For a faster active scan you can:&lt;br /&gt;
* Disable the ZAP DB log (in ZAP 2.5.0+):&lt;br /&gt;
** Disable it via Options / Database / Recover Log&lt;br /&gt;
** Set it on the command line using &amp;quot;-config database.recoverylog=false&amp;quot;&lt;br /&gt;
* Disable unnecessary plugins / Technologies: When you launch the Active Scan&lt;br /&gt;
** On the Policy tab, disable all plugins except: XSS (Reflected), Path Traversal, SQLi, OS Command Injection&lt;br /&gt;
** Go to the Technology tab, disable everything, and enable only: MySQL, YOUR_OS, Tomcat&lt;br /&gt;
** Note: This second performance improvement step is a bit like cheating, as you wouldn't do this for a normal site scan. You'd want to leave all this on in case these other plugins/technologies help find more issues. So a fair performance comparison of ZAP to other tools would leave all this on.&lt;br /&gt;
&lt;br /&gt;
To generate the ZAP XML results file so you can generate its scorecard:&lt;br /&gt;
* Tools &amp;gt; Options &amp;gt; Alerts - and set the Max alert instances to something like 500.&lt;br /&gt;
* Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
NOTE: Similar to Burp, we can't simply run ZAP against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get ZAP to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
Things we tried that didn't improve the score:&lt;br /&gt;
* AJAX Spider - the traditional spider appears to find all (or 99%) of the test cases so the AJAX Spider does not appear to be needed against Benchmark v1.2&lt;br /&gt;
* XSS (Persistent) - There are three of these plugins that run by default. There aren't any stored XSS issues in the Benchmark, so you can disable these plugins for a faster scan.&lt;br /&gt;
* DOM XSS Plugin - This is an optional plugin that didn't seem to find any additional XSS issues. There aren't any DOM-specific XSS issues in Benchmark v1.2, so that's not surprising.&lt;br /&gt;
&lt;br /&gt;
== IAST Tools ==&lt;br /&gt;
&lt;br /&gt;
Interactive Application Security Testing (IAST) tools work differently than scanners.  IAST tools monitor an application as it runs to identify application vulnerabilities using context from inside the running application. Typically these tools run continuously, immediately notifying users of vulnerabilities, but you can also get a full report of an entire application. To do this, we simply run the Benchmark application with an IAST agent and use a crawler to hit all the pages.&lt;br /&gt;
&lt;br /&gt;
=== Contrast Assess ===&lt;br /&gt;
&lt;br /&gt;
To use Contrast Assess, we simply add the Java agent to the Benchmark environment and run the BenchmarkCrawler. The entire process should only take a few minutes. We provide a few scripts, which simply add the -javaagent:contrast.jar flag to the Benchmark launch configuration. We have tested on MacOS, Ubuntu, and Windows. Be sure your VM has at least 4 GB of memory.&lt;br /&gt;
&lt;br /&gt;
* Ensure your environment has Java, Maven, and git installed, then build the Benchmark project&lt;br /&gt;
   '''$ git clone https://github.com/OWASP/Benchmark.git'''&lt;br /&gt;
   '''$ cd Benchmark'''&lt;br /&gt;
   '''$ mvn compile'''&lt;br /&gt;
&lt;br /&gt;
* Download a licensed copy of the Contrast Assess Java Agent (contrast.jar) from your Contrast TeamServer account and put it in the /Benchmark/tools/Contrast directory.&lt;br /&gt;
   '''$ cp ~/Downloads/contrast.jar tools/Contrast'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 1, launch the Benchmark application and wait until it starts&lt;br /&gt;
    '''$ cd tools/Contrast'''&lt;br /&gt;
   '''$ ./runBenchmark_wContrast.sh''' (.bat on Windows)&lt;br /&gt;
    '''[INFO] Scanning for projects...'''&lt;br /&gt;
    '''[INFO]'''&lt;br /&gt;
    '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
    '''[INFO] Building OWASP Benchmark Project 1.2'''&lt;br /&gt;
    '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
    '''[INFO]'''&lt;br /&gt;
    '''...'''&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]'''&lt;br /&gt;
   '''[INFO] Press Ctrl-C to stop the container...'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, launch the crawler and wait a minute or two for the crawl to complete.&lt;br /&gt;
   '''$ ./runCrawler.sh''' (.bat on Windows)&lt;br /&gt;
&lt;br /&gt;
* A Contrast report is generated in /Benchmark/tools/Contrast/working/contrast.log. This report is automatically copied (and renamed with the version number) to the /Benchmark/results directory.&lt;br /&gt;
   '''$ more tools/Contrast/working/contrast.log'''&lt;br /&gt;
    '''2016-04-22 12:29:29,716 [main b] INFO - Contrast Runtime Engine'''&lt;br /&gt;
    '''2016-04-22 12:29:29,717 [main b] INFO - Copyright (C) 2012'''&lt;br /&gt;
    '''2016-04-22 12:29:29,717 [main b] INFO - Pat. 8,458,789 B2'''&lt;br /&gt;
    '''2016-04-22 12:29:29,717 [main b] INFO - Contrast Security, Inc.'''&lt;br /&gt;
    '''2016-04-22 12:29:29,717 [main b] INFO - All Rights Reserved'''&lt;br /&gt;
    '''2016-04-22 12:29:29,717 [main b] INFO - https://www.contrastsecurity.com/'''&lt;br /&gt;
   '''...'''&lt;br /&gt;
&lt;br /&gt;
* Press Ctrl-C to stop the Benchmark in Terminal 1. Note: on Windows, select &amp;quot;N&amp;quot; when asked &amp;quot;Terminate batch job (Y/N)?&amp;quot;&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x is stopped'''&lt;br /&gt;
   '''Copying Contrast report to results directory'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, generate scorecards in /Benchmark/scorecard&lt;br /&gt;
   '''$ ./createScorecards.sh''' (.bat on Windows)&lt;br /&gt;
    '''Analyzing results from Benchmark_1.2-Contrast.log'''&lt;br /&gt;
    '''Actual results file generated: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.csv'''&lt;br /&gt;
    '''Report written to: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
* Open the Benchmark Scorecard in your browser&lt;br /&gt;
   '''/Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
=== Hdiv Detection ===&lt;br /&gt;
&lt;br /&gt;
Hdiv has written their own instructions on how to run the detection component of their product on the Benchmark here: https://hdivsecurity.com/docs/features/benchmark/#how-to-run-hdiv-in-owasp-benchmark-project. You'll see that these instructions involve using the same crawler used to exercise all the test cases in the Benchmark, just like Contrast above.&lt;br /&gt;
&lt;br /&gt;
= RoadMap =&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.0 - Released April 15, 2015 - This initial release included over 20,000 test cases in 11 different vulnerability categories. As this initial version was not a runnable application, it was only suitable for assessing static analysis tools (SAST).&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 - Released May 23, 2015 - This update fixed some inaccurate test cases, and made sure that every vulnerability area included both True Positives and False Positives.&lt;br /&gt;
&lt;br /&gt;
Benchmark Scorecard Generator - Released July 10, 2015 - The ability to automatically and repeatably produce a scorecard of how well tools do against the Benchmark was released for most of the SAST tools supported by the Benchmark. Scorecards present graphical as well as statistical data on how well a tool does against the Benchmark down to the level of detail of how exactly it did against each individual test in the Benchmark. [https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html Here are the latest public scorecards].  Support for producing scorecards for additional tools is being added all the time and the current full set is documented on the '''Tool Support/Results''' and '''Quick Start''' tabs of this wiki.&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.2beta - Released Aug 15, 2015 - The first release of a fully runnable version of the Benchmark, to support assessing all types of vulnerability detection and prevention technologies, including DAST, IAST, RASP, WAFs, etc. This involved creating a user interface for every test case, and enhancing each test case to make sure it's actually exploitable, not merely using something that is theoretically weak. This release has just under 3,000 test cases, to make it practical to scan the entire Benchmark with a DAST tool in a reasonable amount of time on commodity hardware.&lt;br /&gt;
&lt;br /&gt;
Benchmark 1.2 - Released June 5, 2016 -  Based on feedback from a number of DAST tool developers, and other vendors as well, we made the Benchmark more realistic in a number of ways to facilitate external DAST scanning, and also made the Benchmark more resilient against attack so it could properly survive various DAST vulnerability detection and exploit verification techniques.&lt;br /&gt;
&lt;br /&gt;
Plans for Benchmark 1.3:&lt;br /&gt;
&lt;br /&gt;
While we don't have hard and fast rules of exactly what we are going to do next, enhancements in the following areas are planned for the next release:&lt;br /&gt;
&lt;br /&gt;
* Add new vulnerability categories (e.g., XXE, Hibernate Injection)&lt;br /&gt;
* Add support for popular server side Java frameworks (e.g., Spring)&lt;br /&gt;
* Add web services test cases&lt;br /&gt;
&lt;br /&gt;
We are also starting to work on the ability to score WAFs/RASPs and other defensive technology against Benchmark.&lt;br /&gt;
&lt;br /&gt;
= FAQ =&lt;br /&gt;
&lt;br /&gt;
==1. How are the scores computed for the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
Each test case has a single vulnerability of a specific type. It's either a real vulnerability (a True Positive) or not (a False Positive). We document all the test cases for each version of the Benchmark in the expectedresults-VERSION#.csv file (e.g., expectedresults-1.1.csv). This file lists the test case name, the CWE type of the vulnerability, and whether it is a True Positive or not. The Benchmark supports scorecard generators for computing exactly how a tool did when analyzing a version of the Benchmark. The full list of supported tools is on the Tool Support/Results tab. For each tool there is a parser that can parse that tool's native results format (usually XML). For each test case, the parser checks whether the tool reported a vulnerability of the expected type in the test case source code file (for SAST) or at the test case URL (for DAST/IAST). If it did, and the test case is a True Positive, the tool gets credit for finding it. If the test case is a False Positive test and the tool reports that type of finding, it's recorded as a False Positive. If the tool didn't report that type of vulnerability for a test case, it gets either a False Negative or a True Negative, as appropriate. After calculating all of the individual test case results, a scorecard is generated providing a chart and statistics for that tool across all the vulnerability categories, and pages are also created comparing different tools to each other in each vulnerability category (if multiple tools are being scored together).&lt;br /&gt;
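The per-test-case logic described above can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical and not part of the actual BenchmarkScore code.&lt;br /&gt;
&lt;br /&gt;
```java
public class ScoreClassifier {
    // expectedReal: the test case is a True Positive test (a real vulnerability)
    // reported: the tool reported a vulnerability of the expected CWE type for
    // this test case (in its source file for SAST, or at its URL for DAST/IAST)
    public static String classify(boolean expectedReal, boolean reported) {
        if (reported) {
            return expectedReal ? "TP" : "FP"; // credit for a find, or a false alarm
        }
        return expectedReal ? "FN" : "TN"; // missed a real issue, or correctly silent
    }
}
```
The scorecard statistics are then just counts of these four outcomes per vulnerability category.&lt;br /&gt;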
&lt;br /&gt;
A detailed file explaining exactly how that tool did against each individual test case in that version of the Benchmark is produced as part of scorecard generation, and is available via the Actual Results link on each tool's scorecard page. (e.g., Benchmark_v1.1_Scorecard_for_FindBugs.csv).&lt;br /&gt;
&lt;br /&gt;
==2. What if the tool I'm using doesn't have a scorecard generator for it?==&lt;br /&gt;
&lt;br /&gt;
Send us the results file! We'll be happy to create a parser for that tool so it's supported too.&lt;br /&gt;
&lt;br /&gt;
==3. What if a tool finds other unexpected vulnerabilities?==&lt;br /&gt;
&lt;br /&gt;
We are sure there are vulnerabilities we didn't intend to be there and we are eliminating them as we find them. If you find some, let us know and we'll fix them too. We are primarily focused on unintentional vulnerabilities in the categories of vulnerabilities the Benchmark currently supports, since that is what is actually measured.&lt;br /&gt;
&lt;br /&gt;
Right now, two types of vulnerabilities that get reported are ignored by the scorecard generator:&lt;br /&gt;
# Vulnerabilities in categories not yet supported&lt;br /&gt;
# Vulnerabilities of a type that is supported, but reported in test cases not of that type&lt;br /&gt;
&lt;br /&gt;
In the case of #2, false positives reported in unexpected areas are also ignored, which is primarily a DAST problem. Right now those false positives are completely ignored, but we are thinking about including them in the false positive score in some fashion. We just haven't decided how yet.&lt;br /&gt;
&lt;br /&gt;
==4. How should I configure my tool to scan the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
All tools support various levels of configuration in order to improve their results. The Benchmark project, in general, is trying to '''compare the out-of-the-box capabilities of tools'''. However, if a few simple tweaks to a tool can improve that tool's score, that's fine. We'd like to understand what those simple tweaks are and document them here, so others can repeat those tests in exactly the same way. For example, just turning on a 'test cookies and headers' flag that is off by default, or turning on an 'advanced' scan so the tool works harder and finds more vulnerabilities. It's simple things like this we are talking about, not an extensive effort to teach the tool about the app, or 'expert' configuration of the tool.&lt;br /&gt;
&lt;br /&gt;
So, if you know of some simple tweaks to improve a tool's results, let us know what they are and we'll document them here so everyone can benefit and make it easier to do apples to apples comparisons. And we'll link to that guidance once we start documenting it, but we don't have any such guidance right now.&lt;br /&gt;
&lt;br /&gt;
==5. I'm having difficulty scanning the Benchmark with a DAST tool. How can I get it to work?==&lt;br /&gt;
&lt;br /&gt;
We've run into two primary issues that give DAST tools problems.&lt;br /&gt;
&lt;br /&gt;
a) The Benchmark Generates Lots of Cookies&lt;br /&gt;
&lt;br /&gt;
The Burp team pointed out a cookie bug in the 1.2beta Benchmark. Each Weak Randomness test case generates its own cookie, one per test case. This caused the creation of so many cookies that servers would eventually start returning 400 errors because too many cookies were being submitted in each request. This was fixed in the Aug 27, 2015 update to the Benchmark by setting the path attribute of each of these cookies to the path of that individual test case. Now at most one of these cookies should be submitted with each request, eliminating the 'too many cookies' problem. However, if a DAST tool doesn't honor this path attribute, it may continue to send too many cookies, making the Benchmark unscannable for that tool. Burp Pro prior to 1.6.29 had this issue, but it was fixed in the 1.6.29 release.&lt;br /&gt;
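The cookie path mechanism above can be sketched like this. This is a simplified model of the RFC 6265 path-match rule (real clients also apply path-segment boundary rules), and the test case names shown are hypothetical.&lt;br /&gt;
&lt;br /&gt;
```java
public class CookiePathDemo {
    // Simplified cookie path matching: a client submits a cookie only when
    // the request path falls under the cookie's path attribute.
    public static boolean isSent(String cookiePath, String requestPath) {
        return requestPath.startsWith(cookiePath);
    }
}
```
So a cookie scoped to one test case's path is sent only with requests to that test case, not with requests to the thousands of other test cases.&lt;br /&gt;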
&lt;br /&gt;
b) The Benchmark is a BIG Application&lt;br /&gt;
&lt;br /&gt;
Yes. It is, so you might have to give your scanner more memory than it normally uses by default in order to successfully scan the entire Benchmark. Please consult your tool vendor's documentation on how to give it more memory.&lt;br /&gt;
&lt;br /&gt;
Your machine itself might not have enough memory in the first place. For example, we were not able to successfully scan the 1.2beta with OWASP ZAP on a machine with only 8 GB of RAM. So you might need a more powerful machine, or a cloud-provided machine, to successfully scan the Benchmark with certain DAST tools. You may have similar problems with SAST tools against large versions of the Benchmark, like the 1.1 release.&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
&lt;br /&gt;
The following people, organizations, and many others, have contributed to this project and their contributions are much appreciated!&lt;br /&gt;
&lt;br /&gt;
* Lots of Vendors - Many vendors have provided us with either trial licenses we can use, or they have run their tools themselves and either sent us results files, or written and contributed scorecard generators for their tool. Many have also provided valuable feedback so we can make the Benchmark more accurate and more realistic.&lt;br /&gt;
* Juan Gama - Development of initial release and continued support&lt;br /&gt;
* Ken Prole - Assistance with automated scorecard development using CodeDx&lt;br /&gt;
* Nick Sanidas - Development of initial release&lt;br /&gt;
* Denim Group - Contribution of scan results to facilitate scorecard development&lt;br /&gt;
* Tasos Laskos - Significant feedback on the DAST version of the Benchmark&lt;br /&gt;
* Ann Campbell - From SonarSource - for fixing our SonarQube results parser&lt;br /&gt;
* Dhiraj Mishra - OWASP Member - contributed SQLi/XSS fuzz vectors as initial contribution towards adding support for WAF/RASP scoring&lt;br /&gt;
&lt;br /&gt;
[[File:CWE_Logo.jpeg|link=https://cwe.mitre.org/]] - The CWE project for providing a mapping mechanism to easily map test cases to issues found by vulnerability detection tools.&lt;br /&gt;
&lt;br /&gt;
We are looking for volunteers. Please contact [mailto:dave.wichers@owasp.org Dave Wichers] if you are interested in contributing new test cases, tool results run against the benchmark, or anything else.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=XML_External_Entity_(XXE)_Prevention_Cheat_Sheet&amp;diff=245714</id>
		<title>XML External Entity (XXE) Prevention Cheat Sheet</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=XML_External_Entity_(XXE)_Prevention_Cheat_Sheet&amp;diff=245714"/>
				<updated>2018-12-03T17:21:30Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* libxerces-c */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
&amp;lt;br /&amp;gt;&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
= Introduction =&lt;br /&gt;
&lt;br /&gt;
''XML eXternal Entity injection'' (XXE), which is now part of the [[Top_10-2017_A4-XML_External_Entities_(XXE)| OWASP Top 10]], is a type of attack against an application that parses XML input. &lt;br /&gt;
&lt;br /&gt;
The XXE issue is referenced under ID [https://cwe.mitre.org/data/definitions/611.html 611] in the [https://cwe.mitre.org/index.html Common Weakness Enumeration].&lt;br /&gt;
&lt;br /&gt;
This attack occurs when untrusted XML input containing a '''reference to an external entity is processed by a weakly configured XML parser'''. &lt;br /&gt;
&lt;br /&gt;
This attack may lead to the disclosure of confidential data, denial of service, [[Server Side Request Forgery]] (SSRF), port scanning from the perspective of the machine where the parser is located, and other system impacts. The following guide provides concise information to prevent this vulnerability. &lt;br /&gt;
&lt;br /&gt;
For more information on XXE, please visit [[XML External Entity (XXE) Processing]].&lt;br /&gt;
&lt;br /&gt;
==General Guidance==&lt;br /&gt;
The safest way to prevent XXE is always to disable DTDs (External Entities) completely. Depending on the parser, the method should be similar to the following:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
factory.setFeature(&amp;quot;http://apache.org/xml/features/disallow-doctype-decl&amp;quot;, true);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Disabling DTDs also makes the parser secure against denial of service (DoS) attacks such as Billion Laughs. If it is not possible to disable DTDs completely, then external entities and external document type declarations must be disabled in the way that's specific to each parser.&lt;br /&gt;
&lt;br /&gt;
Detailed XXE Prevention guidance for a number of languages and commonly used XML parsers in those languages is provided below.&lt;br /&gt;
&lt;br /&gt;
==C/C++==&lt;br /&gt;
&lt;br /&gt;
===libxml2===&lt;br /&gt;
&lt;br /&gt;
The [http://xmlsoft.org/html/libxml-parser.html#xmlParserOption xmlParserOption] enum values passed to the parser should not include the following options:&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;code&amp;gt;XML_PARSE_NOENT&amp;lt;/code&amp;gt;: Expands entities and substitutes them with replacement text&lt;br /&gt;
* &amp;lt;code&amp;gt;XML_PARSE_DTDLOAD&amp;lt;/code&amp;gt;: Load the external DTD&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Note: &lt;br /&gt;
&lt;br /&gt;
Per: https://mail.gnome.org/archives/xml/2012-October/msg00045.html, starting with libxml2 version 2.9, XXE has been disabled by default as committed by the following patch: http://git.gnome.org/browse/libxml2/commit/?id=4629ee02ac649c27f9c0cf98ba017c6b5526070f.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Search for the usage of the following APIs to ensure there is no &amp;lt;code&amp;gt;XML_PARSE_NOENT&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;XML_PARSE_DTDLOAD&amp;lt;/code&amp;gt; defined in the parameters:&lt;br /&gt;
* xmlCtxtReadDoc&lt;br /&gt;
* xmlCtxtReadFd&lt;br /&gt;
* xmlCtxtReadFile&lt;br /&gt;
* xmlCtxtReadIO&lt;br /&gt;
* xmlCtxtReadMemory&lt;br /&gt;
* xmlCtxtUseOptions&lt;br /&gt;
* xmlParseInNodeContext&lt;br /&gt;
* xmlReadDoc&lt;br /&gt;
* xmlReadFd&lt;br /&gt;
* xmlReadFile&lt;br /&gt;
* xmlReadIO&lt;br /&gt;
* xmlReadMemory&lt;br /&gt;
&lt;br /&gt;
===libxerces-c===&lt;br /&gt;
When using XercesDOMParser, do this to prevent XXE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
XercesDOMParser *parser = new XercesDOMParser;&lt;br /&gt;
parser-&amp;gt;setCreateEntityReferenceNodes(false);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using SAXParser, do this to prevent XXE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
SAXParser* parser = new SAXParser;&lt;br /&gt;
parser-&amp;gt;setDisableDefaultEntityResolution(true);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When using SAX2XMLReader, do this to prevent XXE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c++&amp;quot;&amp;gt;&lt;br /&gt;
SAX2XMLReader* reader = XMLReaderFactory::createXMLReader();&lt;br /&gt;
reader-&amp;gt;setFeature(XMLUni::fgXercesDisableDefaultEntityResolution, true);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Java==&lt;br /&gt;
&lt;br /&gt;
Java applications using XML libraries are particularly vulnerable to XXE because the default settings for most Java XML parsers have XXE enabled. To use these parsers safely, you have to explicitly disable XXE in the parser you use. The following describes how to disable XXE in the most commonly used XML parsers for Java.&lt;br /&gt;
&lt;br /&gt;
===JAXP DocumentBuilderFactory, SAXParserFactory and DOM4J===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;DocumentBuilderFactory, SAXParserFactory&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;DOM4J XML&amp;lt;/code&amp;gt; Parsers can be configured using the same techniques to protect them against XXE. &lt;br /&gt;
&lt;br /&gt;
Only the &amp;lt;code&amp;gt;DocumentBuilderFactory&amp;lt;/code&amp;gt; example is presented here. The JAXP &amp;lt;code&amp;gt;DocumentBuilderFactory&amp;lt;/code&amp;gt; [http://docs.oracle.com/javase/7/docs/api/javax/xml/parsers/DocumentBuilderFactory.html#setFeature(java.lang.String,%20boolean) setFeature] method allows a developer to control which implementation-specific XML processor features are enabled or disabled. &lt;br /&gt;
&lt;br /&gt;
The features can either be set on the factory or the underlying &amp;lt;code&amp;gt;XMLReader&amp;lt;/code&amp;gt; [http://docs.oracle.com/javase/7/docs/api/org/xml/sax/XMLReader.html#setFeature%28java.lang.String,%20boolean%29 setFeature] method. &lt;br /&gt;
&lt;br /&gt;
Each XML processor implementation has its own features that govern how DTDs and external entities are processed.&lt;br /&gt;
&lt;br /&gt;
For a syntax highlighted example code snippet using &amp;lt;code&amp;gt;SAXParserFactory&amp;lt;/code&amp;gt;, look [https://gist.github.com/asudhakar02/45e2e6fd8bcdfb4bc3b2 here].&lt;br /&gt;
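As a hedged sketch of the same hardening applied to &amp;lt;code&amp;gt;SAXParserFactory&amp;lt;/code&amp;gt; (the feature URIs are the same ones used for &amp;lt;code&amp;gt;DocumentBuilderFactory&amp;lt;/code&amp;gt;; the class name and sample XML are illustrative, not from the cheat sheet):

```java
import java.io.StringReader;

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class SafeSaxExample {

    // Returns true if a hardened SAX parser refuses the document.
    // Any failure (configuration or parsing) is treated as a rejection,
    // which is the fail-closed behavior you want for untrusted input.
    public static boolean rejects(String xml) {
        try {
            SAXParserFactory spf = SAXParserFactory.newInstance();
            // Primary defense: disallow DOCTYPEs entirely (an Apache Xerces
            // feature, also supported by the JDK's built-in parser).
            spf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            // Defense in depth, for parsers where DTDs cannot be disabled outright.
            spf.setFeature("http://xml.org/sax/features/external-general-entities", false);
            spf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
            SAXParser parser = spf.newSAXParser();
            parser.parse(new InputSource(new StringReader(xml)), new DefaultHandler());
            return false; // parsed cleanly
        } catch (Exception e) {
            return true;  // the DOCTYPE (or other problem) was rejected
        }
    }

    public static void main(String[] args) {
        String attack = "<?xml version=\"1.0\"?><!DOCTYPE r [<!ENTITY x SYSTEM \"file:///etc/passwd\">]><r>&x;</r>";
        System.out.println(rejects(attack));      // true: the DOCTYPE is refused
        System.out.println(rejects("<r>ok</r>")); // false: plain XML still parses
    }
}
```

Treating any exception as a rejection conflates configuration errors with attack detection, but for untrusted input that fail-closed bias is usually what you want.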
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import javax.xml.parsers.DocumentBuilderFactory;&lt;br /&gt;
import javax.xml.parsers.ParserConfigurationException; // catching unsupported features&lt;br /&gt;
&lt;br /&gt;
...&lt;br /&gt;
&lt;br /&gt;
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();&lt;br /&gt;
String FEATURE = null;&lt;br /&gt;
try {&lt;br /&gt;
    // This is the PRIMARY defense. If DTDs (doctypes) are disallowed, almost all XML entity attacks are prevented&lt;br /&gt;
    // Xerces 2 only - http://xerces.apache.org/xerces2-j/features.html#disallow-doctype-decl&lt;br /&gt;
    FEATURE = &amp;quot;http://apache.org/xml/features/disallow-doctype-decl&amp;quot;;&lt;br /&gt;
    dbf.setFeature(FEATURE, true);&lt;br /&gt;
&lt;br /&gt;
    // If you can't completely disable DTDs, then at least do the following:&lt;br /&gt;
    // Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-general-entities&lt;br /&gt;
    // Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-general-entities&lt;br /&gt;
    // JDK7+ - http://xml.org/sax/features/external-general-entities    &lt;br /&gt;
    FEATURE = &amp;quot;http://xml.org/sax/features/external-general-entities&amp;quot;;&lt;br /&gt;
    dbf.setFeature(FEATURE, false);&lt;br /&gt;
&lt;br /&gt;
    // Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-parameter-entities&lt;br /&gt;
    // Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-parameter-entities&lt;br /&gt;
    // JDK7+ - http://xml.org/sax/features/external-parameter-entities    &lt;br /&gt;
    FEATURE = &amp;quot;http://xml.org/sax/features/external-parameter-entities&amp;quot;;&lt;br /&gt;
    dbf.setFeature(FEATURE, false);&lt;br /&gt;
&lt;br /&gt;
    // Disable external DTDs as well&lt;br /&gt;
    FEATURE = &amp;quot;http://apache.org/xml/features/nonvalidating/load-external-dtd&amp;quot;;&lt;br /&gt;
    dbf.setFeature(FEATURE, false);&lt;br /&gt;
&lt;br /&gt;
    // and these as well, per Timothy Morgan's 2014 paper: &amp;quot;XML Schema, DTD, and Entity Attacks&amp;quot;&lt;br /&gt;
    dbf.setXIncludeAware(false);&lt;br /&gt;
    dbf.setExpandEntityReferences(false);&lt;br /&gt;
&lt;br /&gt;
    // And, per Timothy Morgan: &amp;quot;If for some reason support for inline DOCTYPEs are a requirement, then &lt;br /&gt;
    // ensure the entity settings are disabled (as shown above) and beware that SSRF attacks&lt;br /&gt;
    // (http://cwe.mitre.org/data/definitions/918.html) and denial &lt;br /&gt;
    // of service attacks (such as billion laughs or decompression bombs via &amp;quot;jar:&amp;quot;) are a risk.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
    // remaining parser logic&lt;br /&gt;
    ...&lt;br /&gt;
} catch (ParserConfigurationException e) {&lt;br /&gt;
    // This should catch a failed setFeature feature&lt;br /&gt;
    logger.info(&amp;quot;ParserConfigurationException was thrown. The feature '&amp;quot; + FEATURE + &amp;quot;' is probably not supported by your XML processor.&amp;quot;);&lt;br /&gt;
    ...&lt;br /&gt;
} catch (SAXException e) {&lt;br /&gt;
    // On Apache, this should be thrown when disallowing DOCTYPE&lt;br /&gt;
    logger.warning(&amp;quot;A DOCTYPE was passed into the XML document&amp;quot;);&lt;br /&gt;
    ...&lt;br /&gt;
} catch (IOException e) {&lt;br /&gt;
    // XXE that points to a file that doesn't exist&lt;br /&gt;
    logger.error(&amp;quot;IOException occurred, XXE may still be possible: &amp;quot; + e.getMessage());&lt;br /&gt;
    ...&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
// Load the XML file or stream using the XXE-hardened parser...&lt;br /&gt;
DocumentBuilder safebuilder = dbf.newDocumentBuilder();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[http://xerces.apache.org/xerces-j/ Xerces 1] [http://xerces.apache.org/xerces-j/features.html Features]:&lt;br /&gt;
* Do not include external entities by setting [http://xerces.apache.org/xerces-j/features.html#external-general-entities this feature] to &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Do not include parameter entities by setting [http://xerces.apache.org/xerces-j/features.html#external-parameter-entities this feature] to &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Do not include external DTDs by setting [http://xerces.apache.org/xerces-j/features.html#load-external-dtd this feature] to &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
[http://xerces.apache.org/xerces2-j/ Xerces 2] [http://xerces.apache.org/xerces2-j/features.html Features]:&lt;br /&gt;
* Disallow an inline DTD by setting [http://xerces.apache.org/xerces2-j/features.html#disallow-doctype-decl this feature] to &amp;lt;code&amp;gt;true&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Do not include external entities by setting [http://xerces.apache.org/xerces2-j/features.html#external-general-entities this feature] to &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Do not include parameter entities by setting [http://xerces.apache.org/xerces2-j/features.html#external-parameter-entities this feature] to &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt;.&lt;br /&gt;
* Do not include external DTDs by setting [http://xerces.apache.org/xerces2-j/features.html#load-external-dtd this feature] to &amp;lt;code&amp;gt;false&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
'''Note: The above defenses require Java 7 update 67, Java 8 update 20, or above, because the above countermeasures for DocumentBuilderFactory and SAXParserFactory are broken in earlier Java versions, per: [http://www.cvedetails.com/cve/CVE-2014-6517/ CVE-2014-6517].'''&lt;br /&gt;
&lt;br /&gt;
===XMLInputFactory (a StAX parser)===&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/StAX StAX] parsers such as &amp;lt;code&amp;gt;[http://docs.oracle.com/javase/7/docs/api/javax/xml/stream/XMLInputFactory.html XMLInputFactory]&amp;lt;/code&amp;gt; allow various properties and features to be set.&lt;br /&gt;
&lt;br /&gt;
To protect a Java &amp;lt;code&amp;gt;XMLInputFactory&amp;lt;/code&amp;gt; from XXE, do this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
xmlInputFactory.setProperty(XMLInputFactory.SUPPORT_DTD, false); // This disables DTDs entirely for that factory&lt;br /&gt;
xmlInputFactory.setProperty(&amp;quot;javax.xml.stream.isSupportingExternalEntities&amp;quot;, false); // disable external entities&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
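A runnable sketch of this configuration in use (the class name and sample XML are illustrative): once DTD support is off, an entity declared in an inline DTD can no longer be resolved, so streaming through the attack document fails.

```java
import java.io.StringReader;

import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;

public class SafeStaxExample {

    // Builds an XMLInputFactory hardened as described above.
    public static XMLInputFactory newSafeFactory() {
        XMLInputFactory xif = XMLInputFactory.newInstance();
        xif.setProperty(XMLInputFactory.SUPPORT_DTD, false); // disable DTDs entirely
        xif.setProperty("javax.xml.stream.isSupportingExternalEntities", false); // and external entities
        return xif;
    }

    // Returns true if streaming through the document fails, i.e. the
    // hardened reader rejected it (fail-closed for untrusted input).
    public static boolean rejects(String xml) {
        try {
            XMLStreamReader reader = newSafeFactory().createXMLStreamReader(new StringReader(xml));
            while (reader.hasNext()) {
                reader.next(); // with DTDs disabled, an &x; reference cannot be resolved
            }
            return false;
        } catch (Exception e) {
            return true;
        }
    }

    public static void main(String[] args) {
        String attack = "<?xml version=\"1.0\"?><!DOCTYPE r [<!ENTITY x SYSTEM \"file:///etc/passwd\">]><r>&x;</r>";
        System.out.println(rejects(attack));      // true: the entity cannot be resolved
        System.out.println(rejects("<r>ok</r>")); // false: plain XML is fine
    }
}
```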
&lt;br /&gt;
===TransformerFactory===&lt;br /&gt;
To protect a &amp;lt;code&amp;gt;javax.xml.transform.TransformerFactory&amp;lt;/code&amp;gt; from XXE, do this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
TransformerFactory tf = TransformerFactory.newInstance();&lt;br /&gt;
tf.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, &amp;quot;&amp;quot;);&lt;br /&gt;
tf.setAttribute(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, &amp;quot;&amp;quot;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Validator===&lt;br /&gt;
To protect a &amp;lt;code&amp;gt;javax.xml.validation.Validator&amp;lt;/code&amp;gt; from XXE, do this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
SchemaFactory factory = SchemaFactory.newInstance(&amp;quot;http://www.w3.org/2001/XMLSchema&amp;quot;);&lt;br /&gt;
Schema schema = factory.newSchema();&lt;br /&gt;
Validator validator = schema.newValidator();&lt;br /&gt;
validator.setProperty(XMLConstants.ACCESS_EXTERNAL_DTD, &amp;quot;&amp;quot;);&lt;br /&gt;
validator.setProperty(XMLConstants.ACCESS_EXTERNAL_SCHEMA, &amp;quot;&amp;quot;);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===SchemaFactory===&lt;br /&gt;
To protect a &amp;lt;code&amp;gt;javax.xml.validation.SchemaFactory&amp;lt;/code&amp;gt; from XXE, do this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
SchemaFactory factory = SchemaFactory.newInstance(&amp;quot;http://www.w3.org/2001/XMLSchema&amp;quot;);&lt;br /&gt;
factory.setProperty(XMLConstants.ACCESS_EXTERNAL_DTD, &amp;quot;&amp;quot;);&lt;br /&gt;
factory.setProperty(XMLConstants.ACCESS_EXTERNAL_SCHEMA, &amp;quot;&amp;quot;);&lt;br /&gt;
Schema schema = factory.newSchema(source); // 'source' is your schema Source object&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===SAXTransformerFactory===&lt;br /&gt;
To protect a &amp;lt;code&amp;gt;javax.xml.transform.sax.SAXTransformerFactory&amp;lt;/code&amp;gt; from XXE, do this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
SAXTransformerFactory sf = SAXTransformerFactory.newInstance();&lt;br /&gt;
sf.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, &amp;quot;&amp;quot;);&lt;br /&gt;
sf.setAttribute(XMLConstants.ACCESS_EXTERNAL_STYLESHEET, &amp;quot;&amp;quot;);&lt;br /&gt;
sf.newXMLFilter(source); // 'source' is your transformation Source object&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Note: Use of the following &amp;lt;code&amp;gt;XMLConstants&amp;lt;/code&amp;gt; requires JAXP 1.5, which was added to Java in 7u40 and Java 8:'''&lt;br /&gt;
* javax.xml.XMLConstants.ACCESS_EXTERNAL_DTD&lt;br /&gt;
* javax.xml.XMLConstants.ACCESS_EXTERNAL_SCHEMA&lt;br /&gt;
* javax.xml.XMLConstants.ACCESS_EXTERNAL_STYLESHEET&lt;br /&gt;
&lt;br /&gt;
===XMLReader===&lt;br /&gt;
To protect a Java &amp;lt;code&amp;gt;org.xml.sax.XMLReader&amp;lt;/code&amp;gt; from XXE, do this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
XMLReader reader = XMLReaderFactory.createXMLReader();&lt;br /&gt;
reader.setFeature(&amp;quot;http://apache.org/xml/features/disallow-doctype-decl&amp;quot;, true);&lt;br /&gt;
reader.setFeature(&amp;quot;http://apache.org/xml/features/nonvalidating/load-external-dtd&amp;quot;, false); // This may not be strictly required as DTDs shouldn't be allowed at all, per previous line.&lt;br /&gt;
reader.setFeature(&amp;quot;http://xml.org/sax/features/external-general-entities&amp;quot;, false);&lt;br /&gt;
reader.setFeature(&amp;quot;http://xml.org/sax/features/external-parameter-entities&amp;quot;, false);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
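The configuration above never actually parses anything; as a hedged usage sketch (the class name and sample XML are illustrative), a document containing any DOCTYPE should now fail fast rather than have its entities resolved:

```java
import java.io.StringReader;

import org.xml.sax.InputSource;
import org.xml.sax.XMLReader;
import org.xml.sax.helpers.XMLReaderFactory;

public class SafeXmlReaderExample {

    // Returns true if the hardened XMLReader refuses the document.
    public static boolean rejects(String xml) {
        try {
            XMLReader reader = XMLReaderFactory.createXMLReader();
            // Primary defense: any DOCTYPE aborts the parse.
            reader.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
            // Defense in depth, in case DTDs must remain enabled elsewhere.
            reader.setFeature("http://xml.org/sax/features/external-general-entities", false);
            reader.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
            reader.parse(new InputSource(new StringReader(xml))); // no handlers set; events are discarded
            return false;
        } catch (Exception e) {
            return true; // the DOCTYPE was disallowed
        }
    }

    public static void main(String[] args) {
        System.out.println(rejects("<?xml version=\"1.0\"?><!DOCTYPE r><r/>")); // true
        System.out.println(rejects("<r>ok</r>"));                               // false
    }
}
```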
&lt;br /&gt;
===SAXReader===&lt;br /&gt;
To protect a Java &amp;lt;code&amp;gt;org.dom4j.io.SAXReader&amp;lt;/code&amp;gt; from XXE, do this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
SAXReader saxReader = new SAXReader();&lt;br /&gt;
saxReader.setFeature(&amp;quot;http://apache.org/xml/features/disallow-doctype-decl&amp;quot;, true);&lt;br /&gt;
saxReader.setFeature(&amp;quot;http://xml.org/sax/features/external-general-entities&amp;quot;, false);&lt;br /&gt;
saxReader.setFeature(&amp;quot;http://xml.org/sax/features/external-parameter-entities&amp;quot;, false);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Based on testing, omitting any one of these features can leave you vulnerable to an XXE attack.&lt;br /&gt;
&lt;br /&gt;
===SAXBuilder===&lt;br /&gt;
To protect a Java &amp;lt;code&amp;gt;org.jdom2.input.SAXBuilder&amp;lt;/code&amp;gt; from XXE, do this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
SAXBuilder builder = new SAXBuilder();&lt;br /&gt;
builder.setFeature(&amp;quot;http://apache.org/xml/features/disallow-doctype-decl&amp;quot;,true);&lt;br /&gt;
builder.setFeature(&amp;quot;http://xml.org/sax/features/external-general-entities&amp;quot;, false);&lt;br /&gt;
builder.setFeature(&amp;quot;http://xml.org/sax/features/external-parameter-entities&amp;quot;, false);&lt;br /&gt;
Document doc = builder.build(new File(fileName));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===JAXB Unmarshaller===&lt;br /&gt;
Since a &amp;lt;code&amp;gt;javax.xml.bind.Unmarshaller&amp;lt;/code&amp;gt; parses XML and does not support any flags for disabling XXE, it’s imperative to parse the untrusted XML through a configurable secure parser first, generate a source object as a result, and pass the source object to the Unmarshaller. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
//Disable XXE&lt;br /&gt;
SAXParserFactory spf = SAXParserFactory.newInstance();&lt;br /&gt;
spf.setFeature(&amp;quot;http://xml.org/sax/features/external-general-entities&amp;quot;, false);&lt;br /&gt;
spf.setFeature(&amp;quot;http://xml.org/sax/features/external-parameter-entities&amp;quot;, false);&lt;br /&gt;
spf.setFeature(&amp;quot;http://apache.org/xml/features/nonvalidating/load-external-dtd&amp;quot;, false);&lt;br /&gt;
&lt;br /&gt;
//Do unmarshall operation&lt;br /&gt;
Source xmlSource = new SAXSource(spf.newSAXParser().getXMLReader(), new InputSource(new StringReader(xml)));&lt;br /&gt;
JAXBContext jc = JAXBContext.newInstance(Object.class);&lt;br /&gt;
Unmarshaller um = jc.createUnmarshaller();&lt;br /&gt;
um.unmarshal(xmlSource);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===XPathExpression===&lt;br /&gt;
A &amp;lt;code&amp;gt;javax.xml.xpath.XPathExpression&amp;lt;/code&amp;gt; is similar to an Unmarshaller in that it cannot be configured securely by itself, so the untrusted data must be parsed through another, securable XML parser first. &lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
DocumentBuilderFactory df = DocumentBuilderFactory.newInstance();&lt;br /&gt;
df.setAttribute(XMLConstants.ACCESS_EXTERNAL_DTD, &amp;quot;&amp;quot;);&lt;br /&gt;
df.setAttribute(XMLConstants.ACCESS_EXTERNAL_SCHEMA, &amp;quot;&amp;quot;);&lt;br /&gt;
DocumentBuilder builder = df.newDocumentBuilder();&lt;br /&gt;
XPath xPath = XPathFactory.newInstance().newXPath();&lt;br /&gt;
String result = xPath.evaluate(&amp;quot;/some/xpath&amp;quot;, builder.parse(new ByteArrayInputStream(xml.getBytes()))); // the XPath expression here is illustrative&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===java.beans.XMLDecoder===&lt;br /&gt;
&lt;br /&gt;
The [https://docs.oracle.com/javase/8/docs/api/java/beans/XMLDecoder.html#readObject-- readObject()] method in this class is fundamentally unsafe. &lt;br /&gt;
&lt;br /&gt;
Not only is the XML it parses subject to XXE, but the method can be used to construct any Java object, and [http://stackoverflow.com/questions/14307442/is-it-safe-to-use-xmldecoder-to-read-document-files execute arbitrary code as described here]. &lt;br /&gt;
&lt;br /&gt;
There is no way to make use of this class safely except to trust or properly validate the input being passed into it. &lt;br /&gt;
&lt;br /&gt;
As such, we strongly recommend avoiding this class entirely and replacing it with a safe or properly configured XML parser, as described elsewhere in this cheat sheet.&lt;br /&gt;
&lt;br /&gt;
===Other XML Parsers===&lt;br /&gt;
There are many third-party libraries that parse XML, either directly or through their use of other libraries. Please test and verify that their XML parser is secure against XXE by default. If the parser is not secure by default, look for flags supported by the parser to disable all possible external resource inclusions, like the examples given above. If no such control is exposed, make sure the untrusted content is passed through a secure parser first and only then passed to the insecure third-party parser, similar to how the Unmarshaller is secured.&lt;br /&gt;
&lt;br /&gt;
==== Spring Framework MVC/OXM XXE Vulnerabilities ====&lt;br /&gt;
&lt;br /&gt;
For example, some XXE vulnerabilities were found in [http://pivotal.io/security/cve-2013-4152 Spring OXM] and [http://pivotal.io/security/cve-2013-7315 Spring MVC]. The following versions of the Spring Framework are vulnerable to XXE:&lt;br /&gt;
&lt;br /&gt;
* 3.0.0 to 3.2.3 (Spring OXM &amp;amp; Spring MVC)&lt;br /&gt;
* 4.0.0.M1 (Spring OXM)&lt;br /&gt;
* 4.0.0.M1-4.0.0.M2 (Spring MVC)&lt;br /&gt;
&lt;br /&gt;
There were other issues as well that were fixed later, so to fully address these issues, Spring recommends you upgrade to Spring Framework 3.2.8+ or 4.0.2+.&lt;br /&gt;
&lt;br /&gt;
For Spring OXM, this is referring to the use of org.springframework.oxm.jaxb.Jaxb2Marshaller. Note that the CVE for Spring OXM specifically indicates that two XML parsing situations are up to the developer to get right, and two are the responsibility of Spring and were fixed to address this CVE. Here's what they say:&lt;br /&gt;
&lt;br /&gt;
 '''Two situations developers must handle:'''&lt;br /&gt;
 For a DOMSource, the XML has already been parsed by user code and that code is responsible for protecting against XXE.&lt;br /&gt;
 For a StAXSource, the XMLStreamReader has already been created by user code and that code is responsible for protecting against XXE.&lt;br /&gt;
&lt;br /&gt;
 '''The issue Spring fixed:'''&lt;br /&gt;
 &lt;br /&gt;
 For SAXSource and StreamSource instances, Spring processed external entities by default thereby creating this vulnerability.&lt;br /&gt;
 Here's an example of using a StreamSource that was vulnerable, but is now safe, if you are using a fixed version of Spring OXM or Spring MVC:&lt;br /&gt;
 &lt;br /&gt;
  org.springframework.oxm.jaxb.Jaxb2Marshaller marshaller = new org.springframework.oxm.jaxb.Jaxb2Marshaller();&lt;br /&gt;
  marshaller.unmarshal(new StreamSource(new StringReader(some_string_containing_XML))); // Must cast the returned Object to whatever type you are unmarshalling&lt;br /&gt;
&lt;br /&gt;
So, per the [http://pivotal.io/security/cve-2013-4152 Spring OXM CVE writeup], the above is now safe. But if you were to use a DOMSource or StAXSource instead, it would be up to you to configure those sources to be safe from XXE.&lt;br /&gt;
&lt;br /&gt;
==.NET==&lt;br /&gt;
&lt;br /&gt;
The following information for XXE injection in .NET is directly from this web application of unit tests by Dean Fleming: https://github.com/deanf1/dotnet-security-unit-tests. &lt;br /&gt;
&lt;br /&gt;
This web application covers all currently supported .NET XML parsers, and has test cases for each demonstrating when they are safe from XXE injection and when they are not. &lt;br /&gt;
&lt;br /&gt;
Previously, this information was based on James Jardine's excellent .NET XXE article: https://www.jardinesoftware.net/2016/05/26/xxe-and-net/.&lt;br /&gt;
&lt;br /&gt;
That article originally provided more recent and more detailed information than the older Microsoft article on preventing XXE and XML denial of service in .NET (http://msdn.microsoft.com/en-us/magazine/ee335713.aspx); however, the article has some inaccuracies that the web application's test cases address.&lt;br /&gt;
&lt;br /&gt;
The following table lists all supported .NET XML parsers and their default safety levels:&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;width: 25%; align:center; text-align:left; border: 2px solid #4d953d; background-color:#F2F2F2; padding=2;&amp;quot; &lt;br /&gt;
|- style=&amp;quot;background-color: #4d953d; color: #FFFFFF;&amp;quot;&lt;br /&gt;
! XML Parser !! Safe by Default?&lt;br /&gt;
|- style=&amp;quot;background-color: #FFFFFF;&amp;quot; &lt;br /&gt;
|'''LINQ to XML'''&lt;br /&gt;
|Yes&lt;br /&gt;
|- style=&amp;quot;background-color: #FFFFFF;&amp;quot; &lt;br /&gt;
|'''XmlDictionaryReader'''&lt;br /&gt;
|Yes&lt;br /&gt;
|- style=&amp;quot;background-color: #FFFFFF;&amp;quot; &lt;br /&gt;
|'''XmlDocument'''&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|...prior to 4.5.2&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|...in versions 4.5.2 +&lt;br /&gt;
|Yes&lt;br /&gt;
|- style=&amp;quot;background-color: #FFFFFF;&amp;quot; &lt;br /&gt;
| '''XmlNodeReader'''&lt;br /&gt;
| Yes&lt;br /&gt;
|- style=&amp;quot;background-color: #FFFFFF;&amp;quot; &lt;br /&gt;
| '''XmlReader'''&lt;br /&gt;
| Yes&lt;br /&gt;
|- style=&amp;quot;background-color: #FFFFFF;&amp;quot; &lt;br /&gt;
| '''XmlTextReader'''&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
| ...prior to 4.5.2&lt;br /&gt;
| No&lt;br /&gt;
|-&lt;br /&gt;
|...in versions 4.5.2 +&lt;br /&gt;
|Yes&lt;br /&gt;
|- style=&amp;quot;background-color: #FFFFFF;&amp;quot; &lt;br /&gt;
| '''XPathNavigator'''&lt;br /&gt;
| &lt;br /&gt;
|-&lt;br /&gt;
|...prior to 4.5.2&lt;br /&gt;
|No&lt;br /&gt;
|-&lt;br /&gt;
|...in versions 4.5.2 +&lt;br /&gt;
|Yes&lt;br /&gt;
|- style=&amp;quot;background-color: #FFFFFF;&amp;quot; &lt;br /&gt;
| '''XslCompiledTransform'''&lt;br /&gt;
| Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== LINQ to XML ===&lt;br /&gt;
Both the &amp;lt;code&amp;gt;XElement&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;XDocument&amp;lt;/code&amp;gt; objects in the &amp;lt;code&amp;gt;System.Xml.Linq&amp;lt;/code&amp;gt; library are safe from XXE injection by default. &amp;lt;code&amp;gt;XElement&amp;lt;/code&amp;gt; parses only the elements within the XML file, so DTDs are ignored altogether. &amp;lt;code&amp;gt;XDocument&amp;lt;/code&amp;gt; has DTDs [https://github.com/dotnet/docs/blob/master/docs/visual-basic/programming-guide/concepts/linq/linq-to-xml-security.md disabled by default], and is only unsafe if constructed with a different unsafe XML parser.&lt;br /&gt;
&lt;br /&gt;
=== XmlDictionaryReader ===&lt;br /&gt;
&amp;lt;code&amp;gt;System.Xml.XmlDictionaryReader&amp;lt;/code&amp;gt; is safe by default: when it attempts to parse the DTD, the parser throws an exception stating that &amp;quot;CData elements not valid at top level of an XML document&amp;quot;. It becomes unsafe if constructed with a different, unsafe XML parser.&lt;br /&gt;
&lt;br /&gt;
=== XmlDocument ===&lt;br /&gt;
Prior to .NET Framework version 4.5.2, &amp;lt;code&amp;gt;System.Xml.XmlDocument&amp;lt;/code&amp;gt; is '''unsafe''' by default. The &amp;lt;code&amp;gt;XmlDocument&amp;lt;/code&amp;gt; object has an &amp;lt;code&amp;gt;XmlResolver&amp;lt;/code&amp;gt; object within it that needs to be set to null in versions prior to 4.5.2. In versions 4.5.2 and up, this &amp;lt;code&amp;gt;XmlResolver&amp;lt;/code&amp;gt; is set to null by default. &lt;br /&gt;
&lt;br /&gt;
The following example shows how it is made safe:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c#&amp;quot;&amp;gt;&lt;br /&gt;
 static void LoadXML()&lt;br /&gt;
 {&lt;br /&gt;
   string xml = &amp;quot;&amp;lt;?xml version=\&amp;quot;1.0\&amp;quot; ?&amp;gt;&amp;lt;!DOCTYPE doc &amp;quot; +&lt;br /&gt;
 	&amp;quot;[&amp;lt;!ENTITY win SYSTEM \&amp;quot;file:///C:/Users/user/Documents/testdata2.txt\&amp;quot;&amp;gt;]&amp;quot; +&lt;br /&gt;
 	&amp;quot;&amp;gt;&amp;lt;doc&amp;gt;&amp;amp;win;&amp;lt;/doc&amp;gt;&amp;quot;;&lt;br /&gt;
 &lt;br /&gt;
   XmlDocument xmlDoc = new XmlDocument();&lt;br /&gt;
   xmlDoc.XmlResolver = null;   // Setting this to NULL disables DTDs - it is NOT null by default in versions prior to 4.5.2.&lt;br /&gt;
   xmlDoc.LoadXml(xml);&lt;br /&gt;
   Console.WriteLine(xmlDoc.InnerText);&lt;br /&gt;
   Console.ReadLine();&lt;br /&gt;
 }&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;XmlDocument&amp;lt;/code&amp;gt; can become unsafe if you create your own nonnull &amp;lt;code&amp;gt;XmlResolver&amp;lt;/code&amp;gt; with default or unsafe settings. If you need to enable DTD processing, instructions on how to do so safely are described in detail in the [http://msdn.microsoft.com/en-us/magazine/ee335713.aspx referenced MSDN article].&lt;br /&gt;
&lt;br /&gt;
=== XmlNodeReader ===&lt;br /&gt;
&amp;lt;code&amp;gt;System.Xml.XmlNodeReader&amp;lt;/code&amp;gt; objects are safe by default and will ignore DTDs even when constructed with an unsafe parser or wrapped in another unsafe parser.&lt;br /&gt;
&lt;br /&gt;
=== XmlReader ===&lt;br /&gt;
&amp;lt;code&amp;gt;System.Xml.XmlReader&amp;lt;/code&amp;gt; objects are safe by default. &lt;br /&gt;
&lt;br /&gt;
They are set by default to have their &amp;lt;code&amp;gt;ProhibitDtd&amp;lt;/code&amp;gt; property set to true in .NET Framework versions prior to 4.0, or their &amp;lt;code&amp;gt;DtdProcessing&amp;lt;/code&amp;gt; property set to &amp;lt;code&amp;gt;Prohibit&amp;lt;/code&amp;gt; in .NET versions 4.0 and later. &lt;br /&gt;
&lt;br /&gt;
Additionally, in .NET versions 4.5.2 and later, the &amp;lt;code&amp;gt;XmlReaderSettings&amp;lt;/code&amp;gt; belonging to the &amp;lt;code&amp;gt;XmlReader&amp;lt;/code&amp;gt; has its &amp;lt;code&amp;gt;XmlResolver&amp;lt;/code&amp;gt; set to null by default, which provides an additional layer of safety. &lt;br /&gt;
&lt;br /&gt;
Therefore, &amp;lt;code&amp;gt;XmlReader&amp;lt;/code&amp;gt; objects in version 4.5.2 and up only become unsafe if both the &amp;lt;code&amp;gt;DtdProcessing&amp;lt;/code&amp;gt; property is set to &amp;lt;code&amp;gt;Parse&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;XmlReaderSettings&amp;lt;/code&amp;gt;' &amp;lt;code&amp;gt;XmlResolver&amp;lt;/code&amp;gt; is set to a nonnull &amp;lt;code&amp;gt;XmlResolver&amp;lt;/code&amp;gt; with default or unsafe settings. If you need to enable DTD processing, instructions on how to do so safely are described in detail in the [http://msdn.microsoft.com/en-us/magazine/ee335713.aspx referenced MSDN article].&lt;br /&gt;
&lt;br /&gt;
=== XmlTextReader ===&lt;br /&gt;
&amp;lt;code&amp;gt;System.Xml.XmlTextReader&amp;lt;/code&amp;gt; is '''unsafe''' by default in .NET Framework versions prior to 4.5.2. Here is how to make it safe in various .NET versions:&lt;br /&gt;
&lt;br /&gt;
==== Prior to .NET 4.0 ====&lt;br /&gt;
In .NET Framework versions prior to 4.0, DTD parsing behavior for &amp;lt;code&amp;gt;XmlReader&amp;lt;/code&amp;gt; objects like &amp;lt;code&amp;gt;XmlTextReader&amp;lt;/code&amp;gt; is controlled by the Boolean &amp;lt;code&amp;gt;ProhibitDtd&amp;lt;/code&amp;gt; property found in the &amp;lt;code&amp;gt;System.Xml.XmlReaderSettings&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;System.Xml.XmlTextReader&amp;lt;/code&amp;gt; classes. &lt;br /&gt;
&lt;br /&gt;
Set this property to true on both classes to disable inline DTDs completely.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c#&amp;quot;&amp;gt;&lt;br /&gt;
XmlTextReader reader = new XmlTextReader(stream);&lt;br /&gt;
reader.ProhibitDtd = true;  // NEEDED because the default is FALSE!!&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== .NET 4.0 - .NET 4.5.2 ====&lt;br /&gt;
&lt;br /&gt;
In .NET Framework version 4.0, DTD parsing behavior was changed. The &amp;lt;code&amp;gt;ProhibitDtd&amp;lt;/code&amp;gt; property was deprecated in favor of the new &amp;lt;code&amp;gt;DtdProcessing&amp;lt;/code&amp;gt; property. &lt;br /&gt;
&lt;br /&gt;
However, the default settings were not changed, so &amp;lt;code&amp;gt;XmlTextReader&amp;lt;/code&amp;gt; is still vulnerable to XXE by default. &lt;br /&gt;
&lt;br /&gt;
Setting &amp;lt;code&amp;gt;DtdProcessing&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;Prohibit&amp;lt;/code&amp;gt; causes the runtime to throw an exception if a &amp;lt;code&amp;gt;&amp;lt;!DOCTYPE&amp;gt;&amp;lt;/code&amp;gt; element is present in the XML. &lt;br /&gt;
&lt;br /&gt;
To set this value yourself, it looks like this:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c#&amp;quot;&amp;gt;&lt;br /&gt;
XmlTextReader reader = new XmlTextReader(stream);&lt;br /&gt;
reader.DtdProcessing = DtdProcessing.Prohibit;  // NEEDED because the default is Parse!!&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Alternatively, you can set the &amp;lt;code&amp;gt;DtdProcessing&amp;lt;/code&amp;gt; property to &amp;lt;code&amp;gt;Ignore&amp;lt;/code&amp;gt;, which will not throw an exception on encountering a &amp;lt;code&amp;gt;&amp;lt;!DOCTYPE&amp;gt;&amp;lt;/code&amp;gt; element but will simply skip over it and not process it. Finally, you can set &amp;lt;code&amp;gt;DtdProcessing&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;Parse&amp;lt;/code&amp;gt; if you do want to allow and process inline DTDs.&lt;br /&gt;
&lt;br /&gt;
==== .NET 4.5.2 and later ====&lt;br /&gt;
&lt;br /&gt;
In .NET Framework versions 4.5.2 and up, &amp;lt;code&amp;gt;XmlTextReader&amp;lt;/code&amp;gt;'s internal &amp;lt;code&amp;gt;XmlResolver&amp;lt;/code&amp;gt; is set to null by default, making the &amp;lt;code&amp;gt;XmlTextReader&amp;lt;/code&amp;gt; ignore DTDs by default. The &amp;lt;code&amp;gt;XmlTextReader&amp;lt;/code&amp;gt; can become unsafe if you create your own nonnull &amp;lt;code&amp;gt;XmlResolver&amp;lt;/code&amp;gt; with default or unsafe settings.&lt;br /&gt;
&lt;br /&gt;
=== XPathNavigator ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;System.Xml.XPath.XPathNavigator&amp;lt;/code&amp;gt; is '''unsafe''' by default in .NET Framework versions prior to 4.5.2. &lt;br /&gt;
&lt;br /&gt;
This is because it is created from &amp;lt;code&amp;gt;IXPathNavigable&amp;lt;/code&amp;gt; objects like &amp;lt;code&amp;gt;XmlDocument&amp;lt;/code&amp;gt;, which are also unsafe by default in versions prior to 4.5.2. &lt;br /&gt;
&lt;br /&gt;
You can make &amp;lt;code&amp;gt;XPathNavigator&amp;lt;/code&amp;gt; safe by giving it a safe parser like &amp;lt;code&amp;gt;XmlReader&amp;lt;/code&amp;gt; (which is safe by default) in the &amp;lt;code&amp;gt;XPathDocument&amp;lt;/code&amp;gt;'s constructor. &lt;br /&gt;
&lt;br /&gt;
Here is an example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;c#&amp;quot;&amp;gt;&lt;br /&gt;
XmlReader reader = XmlReader.Create(&amp;quot;example.xml&amp;quot;);&lt;br /&gt;
XPathDocument doc = new XPathDocument(reader);&lt;br /&gt;
XPathNavigator nav = doc.CreateNavigator(); &lt;br /&gt;
string xml = nav.InnerXml.ToString();&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== XslCompiledTransform ===&lt;br /&gt;
&amp;lt;code&amp;gt;System.Xml.Xsl.XslCompiledTransform&amp;lt;/code&amp;gt; (an XML transformer) is safe by default as long as the parser it’s given is safe. &lt;br /&gt;
&lt;br /&gt;
It is safe by default because the default parser of the &amp;lt;code&amp;gt;Transform()&amp;lt;/code&amp;gt; methods is an &amp;lt;code&amp;gt;XmlReader&amp;lt;/code&amp;gt;, which is safe by default (per above). &lt;br /&gt;
&lt;br /&gt;
[http://www.dotnetframework.org/default.aspx/4@0/4@0/DEVDIV_TFS/Dev10/Releases/RTMRel/ndp/fx/src/Xml/System/Xml/Xslt/XslCompiledTransform@cs/1305376/XslCompiledTransform@cs The source code for this method is here.] &lt;br /&gt;
&lt;br /&gt;
Some of the &amp;lt;code&amp;gt;Transform()&amp;lt;/code&amp;gt; methods accept an &amp;lt;code&amp;gt;XmlReader&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;IXPathNavigable&amp;lt;/code&amp;gt; (e.g., &amp;lt;code&amp;gt;XmlDocument&amp;lt;/code&amp;gt;) as an input, and if you pass in an unsafe XML Parser then the &amp;lt;code&amp;gt;Transform&amp;lt;/code&amp;gt; will also be unsafe.&lt;br /&gt;
&lt;br /&gt;
==iOS==&lt;br /&gt;
&lt;br /&gt;
===libxml2===&lt;br /&gt;
&lt;br /&gt;
iOS includes the C/C++ libxml2 library described above, so that guidance applies if you are using libxml2 directly. &lt;br /&gt;
&lt;br /&gt;
However, the version of libxml2 provided up through iOS6 predates version 2.9 of libxml2, which is the first version that protects against XXE by default.&lt;br /&gt;
&lt;br /&gt;
===NSXMLDocument===&lt;br /&gt;
&lt;br /&gt;
iOS also provides an &amp;lt;code&amp;gt;NSXMLDocument&amp;lt;/code&amp;gt; type, which is built on top of libxml2. &lt;br /&gt;
&lt;br /&gt;
However, &amp;lt;code&amp;gt;NSXMLDocument&amp;lt;/code&amp;gt; provides some additional protections against XXE that aren't available in libxml2 directly. &lt;br /&gt;
&lt;br /&gt;
Per the 'NSXMLDocument External Entity Restriction API' section of: http://developer.apple.com/library/ios/#releasenotes/Foundation/RN-Foundation-iOS/Foundation_iOS5.html:&lt;br /&gt;
&lt;br /&gt;
* iOS4 and earlier: All external entities are loaded by default.&lt;br /&gt;
&lt;br /&gt;
* iOS5 and later: Only entities that don't require network access are loaded, which is safer.&lt;br /&gt;
&lt;br /&gt;
However, to completely disable XXE in an &amp;lt;code&amp;gt;NSXMLDocument&amp;lt;/code&amp;gt; in any version of iOS you simply specify &amp;lt;code&amp;gt;NSXMLNodeLoadExternalEntitiesNever&amp;lt;/code&amp;gt; when creating the &amp;lt;code&amp;gt;NSXMLDocument&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==PHP==&lt;br /&gt;
&lt;br /&gt;
Per [http://php.net/manual/en/function.libxml-disable-entity-loader.php the PHP documentation], the following should be set when using the default PHP XML parser in order to prevent XXE:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;php&amp;quot;&amp;gt;&lt;br /&gt;
libxml_disable_entity_loader(true);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A good [https://www.sensepost.com/blog/2014/revisting-xxe-and-abusing-protocols/ SensePost article] describes how to abuse this in PHP, covering a PHP-based XXE vulnerability that was fixed in Facebook.&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
* [https://resources.infosecinstitute.com/identify-mitigate-xxe-vulnerabilities/ XXE by InfoSecInstitute]&lt;br /&gt;
* [[Top_10-2017_A4-XML_External_Entities_(XXE)| OWASP Top 10-2017 A4: XML External Entities (XXE)]]&lt;br /&gt;
* [https://vsecurity.com//download/papers/XMLDTDEntityAttacks.pdf Timothy Morgan's 2014 paper: &amp;quot;XML Schema, DTD, and Entity Attacks&amp;quot;]&lt;br /&gt;
* [https://find-sec-bugs.github.io/bugs.htm#XXE_SAXPARSER FindSecBugs XXE Detection]&lt;br /&gt;
* [https://github.com/ssexxe/XXEBugFind XXEbugFind Tool]&lt;br /&gt;
* [[Testing for XML Injection (OTG-INPVAL-008)]]&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
[[User:wichers|Dave Wichers]] - dave.wichers[at]owasp.org&amp;lt;br /&amp;gt;&lt;br /&gt;
[[User:Xiaoran_Wang|Xiaoran Wang]] - xiaoran[at]attacker-domain.com&amp;lt;br /&amp;gt;&lt;br /&gt;
James Jardine - james[at]jardinesoftware.com&amp;lt;br /&amp;gt;&lt;br /&gt;
Tony Hsu (Hsiang-Chih)&amp;lt;br /&amp;gt;&lt;br /&gt;
[[User:Dfleming|Dean Fleming]]&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets =&lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
[[Category:Cheatsheets]]&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Injection_Prevention_Cheat_Sheet_in_Java&amp;diff=245543</id>
		<title>Injection Prevention Cheat Sheet in Java</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Injection_Prevention_Cheat_Sheet_in_Java&amp;diff=245543"/>
				<updated>2018-11-26T17:41:39Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Log Injection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
= Introduction  =&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
The objective of this document is to provide tips for handling ''Injection'' in Java application code.&lt;br /&gt;
&lt;br /&gt;
The sample code used in these tips is located [https://github.com/righettod/injection-cheat-sheets here].&lt;br /&gt;
&lt;br /&gt;
= What is Injection? =&lt;br /&gt;
&lt;br /&gt;
[[Top_10_2013-A1-Injection|Injection]] in OWASP Top 10 is defined as follows:&lt;br /&gt;
&lt;br /&gt;
''Consider anyone who can send untrusted data to the system, including external users, internal users, and administrators.''&lt;br /&gt;
&lt;br /&gt;
= General advice to prevent Injection =&lt;br /&gt;
&lt;br /&gt;
The following points can be applied, in a general way, to prevent ''Injection'' issues:&lt;br /&gt;
&lt;br /&gt;
#  Apply '''Input Validation''' (using a whitelist approach) combined with '''Output Sanitizing+Escaping''' on user input/output.&lt;br /&gt;
#  If you need to interact with the system, try to use the API features provided by your technology stack (Java / .Net / PHP...) instead of building a command.&lt;br /&gt;
&lt;br /&gt;
Additional advice is provided in this [[Input_Validation_Cheat_Sheet|cheatsheet]].&lt;br /&gt;
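The first point above can be sketched as follows (a minimal, self-contained illustration; the class name, regular expression, and size limit are examples only and are not taken from the associated Maven project):

```java
import java.util.regex.Pattern;

public class InputValidationExample {

    // Example whitelist: letters, digits, whitespace and hyphen, 1 to 50 characters.
    // The allowed character set must be adapted to each business requirement.
    private static final Pattern ALLOWED = Pattern.compile("[a-zA-Z0-9\\s\\-]{1,50}");

    public static boolean isValid(String userInput) {
        // matches() checks the whole input against the whitelist pattern
        return userInput != null && ALLOWED.matcher(userInput).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("owasp-user01"));  // true
        System.out.println(isValid("x' or '1'='1"));  // false: quote characters are not whitelisted
    }
}
```

Output escaping still has to be applied when the validated value is written into HTML, SQL, or any other interpreter, as shown in the per-technology sections below.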
&lt;br /&gt;
= Specific Injection types =&lt;br /&gt;
&lt;br /&gt;
''Examples in this section are provided in Java (see the associated Maven project), but the advice is applicable to other technologies like .Net / PHP / Ruby / Python...''&lt;br /&gt;
&lt;br /&gt;
== SQL ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build a SQL query as a String and executes it.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use ''Query Parameterization'' in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*No DB framework is used here in order to show the real use of Prepared Statements from the Java API*/&lt;br /&gt;
/*Open connection with H2 database and use it*/&lt;br /&gt;
Class.forName(&amp;quot;org.h2.Driver&amp;quot;);&lt;br /&gt;
String jdbcUrl = &amp;quot;jdbc:h2:file:&amp;quot; + new File(&amp;quot;.&amp;quot;).getAbsolutePath() + &amp;quot;/target/db&amp;quot;;&lt;br /&gt;
try (Connection con = DriverManager.getConnection(jdbcUrl)) {&lt;br /&gt;
&lt;br /&gt;
    /* Sample A: Select data using Prepared Statement*/&lt;br /&gt;
    String query = &amp;quot;select * from color where friendly_name = ?&amp;quot;;&lt;br /&gt;
    List&amp;lt;String&amp;gt; colors = new ArrayList&amp;lt;&amp;gt;();&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setString(1, &amp;quot;yellow&amp;quot;);&lt;br /&gt;
        try (ResultSet rSet = pStatement.executeQuery()) {&lt;br /&gt;
            while (rSet.next()) {&lt;br /&gt;
                colors.add(rSet.getString(1));&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, colors.size());&lt;br /&gt;
    Assert.assertTrue(colors.contains(&amp;quot;yellow&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
    /* Sample B: Insert data using Prepared Statement*/&lt;br /&gt;
    query = &amp;quot;insert into color(friendly_name, red, green, blue) values(?, ?, ?, ?)&amp;quot;;&lt;br /&gt;
    int insertedRecordCount;&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setString(1, &amp;quot;orange&amp;quot;);&lt;br /&gt;
        pStatement.setInt(2, 239);&lt;br /&gt;
        pStatement.setInt(3, 125);&lt;br /&gt;
        pStatement.setInt(4, 11);&lt;br /&gt;
        insertedRecordCount = pStatement.executeUpdate();&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, insertedRecordCount);&lt;br /&gt;
&lt;br /&gt;
   /* Sample C: Update data using Prepared Statement*/&lt;br /&gt;
    query = &amp;quot;update color set blue = ? where friendly_name = ?&amp;quot;;&lt;br /&gt;
    int updatedRecordCount;&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setInt(1, 10);&lt;br /&gt;
        pStatement.setString(2, &amp;quot;orange&amp;quot;);&lt;br /&gt;
        updatedRecordCount = pStatement.executeUpdate();&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, updatedRecordCount);&lt;br /&gt;
&lt;br /&gt;
   /* Sample D: Delete data using Prepared Statement*/&lt;br /&gt;
    query = &amp;quot;delete from color where friendly_name = ?&amp;quot;;&lt;br /&gt;
    int deletedRecordCount;&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setString(1, &amp;quot;orange&amp;quot;);&lt;br /&gt;
        deletedRecordCount = pStatement.executeUpdate();&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, deletedRecordCount);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[SQL_Injection_Prevention_Cheat_Sheet|SQL Injection Prevention Cheat Sheet]]&lt;br /&gt;
&lt;br /&gt;
== JPA ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build a JPA query as a String and executes it. It's quite similar to SQL injection, but here the altered language is not SQL but JPQL.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use Java Persistence Query Language '''Query Parameterization''' in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
EntityManager entityManager = null;&lt;br /&gt;
try {&lt;br /&gt;
    /* Get a ref on EntityManager to access DB */&lt;br /&gt;
    entityManager = Persistence.createEntityManagerFactory(&amp;quot;testJPA&amp;quot;).createEntityManager();&lt;br /&gt;
&lt;br /&gt;
    /* Define the parameterized query prototype using a named parameter to enhance readability */&lt;br /&gt;
    String queryPrototype = &amp;quot;select c from Color c where c.friendlyName = :colorName&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /* Create the query, set the named parameter and execute the query */&lt;br /&gt;
    Query queryObject = entityManager.createQuery(queryPrototype);&lt;br /&gt;
    Color c = (Color) queryObject.setParameter(&amp;quot;colorName&amp;quot;, &amp;quot;yellow&amp;quot;).getSingleResult();&lt;br /&gt;
&lt;br /&gt;
    /* Ensure that the object obtained is the right one */&lt;br /&gt;
    Assert.assertNotNull(c);&lt;br /&gt;
    Assert.assertEquals(c.getFriendlyName(), &amp;quot;yellow&amp;quot;);&lt;br /&gt;
    Assert.assertEquals(c.getRed(), 213);&lt;br /&gt;
    Assert.assertEquals(c.getGreen(), 242);&lt;br /&gt;
    Assert.assertEquals(c.getBlue(), 26);&lt;br /&gt;
} finally {&lt;br /&gt;
    if (entityManager != null &amp;amp;&amp;amp; entityManager.isOpen()) {&lt;br /&gt;
        entityManager.close();&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* https://software-security.sans.org/developer-how-to/fix-sql-injection-in-java-persistence-api-jpa&lt;br /&gt;
&lt;br /&gt;
== Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an Operating System command as a String and executes it.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use the '''API''' provided by your technology stack in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/* The example context is to perform a PING against a computer.&lt;br /&gt;
* The prevention is to use the feature provided by the Java API instead of building&lt;br /&gt;
* a system command as a String and executing it */&lt;br /&gt;
InetAddress host = InetAddress.getByName(&amp;quot;localhost&amp;quot;);&lt;br /&gt;
Assert.assertTrue(host.isReachable(5000));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[Command_Injection|Command Injection]]&lt;br /&gt;
&lt;br /&gt;
==  XML: External Entity attack ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application loads a received XML stream using an XML parser instance in which the resolution of External Entities is not disabled.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Disable the resolution of External Entities in the parser instance to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*Create an XML document builder factory*/&lt;br /&gt;
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();&lt;br /&gt;
&lt;br /&gt;
/*Disable External Entity resolution for different cases*/&lt;br /&gt;
// This is the PRIMARY defense. If DTDs (doctypes) are disallowed,&lt;br /&gt;
// almost all XML entity attacks are prevented&lt;br /&gt;
// Xerces 2 only - http://xerces.apache.org/xerces2-j/features.html#disallow-doctype-decl&lt;br /&gt;
String feature = &amp;quot;http://apache.org/xml/features/disallow-doctype-decl&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, true);&lt;br /&gt;
&lt;br /&gt;
// If you can't completely disable DTDs, then at least do the following:&lt;br /&gt;
// Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-general-entities&lt;br /&gt;
// Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-general-entities&lt;br /&gt;
// JDK7+ - http://xml.org/sax/features/external-general-entities&lt;br /&gt;
feature = &amp;quot;http://xml.org/sax/features/external-general-entities&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, false);&lt;br /&gt;
&lt;br /&gt;
// Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-parameter-entities&lt;br /&gt;
// Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-parameter-entities&lt;br /&gt;
// JDK7+ - http://xml.org/sax/features/external-parameter-entities&lt;br /&gt;
feature = &amp;quot;http://xml.org/sax/features/external-parameter-entities&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, false);&lt;br /&gt;
&lt;br /&gt;
// Disable external DTDs as well&lt;br /&gt;
feature = &amp;quot;http://apache.org/xml/features/nonvalidating/load-external-dtd&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, false);&lt;br /&gt;
&lt;br /&gt;
// and these as well, per Timothy Morgan's 2014 paper: &amp;quot;XML Schema, DTD, and Entity Attacks&amp;quot;&lt;br /&gt;
dbf.setXIncludeAware(false);&lt;br /&gt;
dbf.setExpandEntityReferences(false);&lt;br /&gt;
&lt;br /&gt;
/*Load XML file*/&lt;br /&gt;
DocumentBuilder builder = dbf.newDocumentBuilder();&lt;br /&gt;
//Here an org.xml.sax.SAXParseException will be thrown because the XML contains an External Entity.&lt;br /&gt;
builder.parse(new File(&amp;quot;src/test/resources/SampleXXE.xml&amp;quot;));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== XML: XPath Injection ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an XPath query as a String and executes it.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use '''XPath Variable Resolver''' in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
'''Variable Resolver''' implementation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * Resolver used to define parameters for an XPATH expression.&lt;br /&gt;
 *&lt;br /&gt;
 */&lt;br /&gt;
public class SimpleVariableResolver implements XPathVariableResolver {&lt;br /&gt;
&lt;br /&gt;
    private final Map&amp;lt;QName, Object&amp;gt; vars = new HashMap&amp;lt;QName, Object&amp;gt;();&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * External method to add a parameter&lt;br /&gt;
     *&lt;br /&gt;
     * @param name Parameter name&lt;br /&gt;
     * @param value Parameter value&lt;br /&gt;
     */&lt;br /&gt;
    public void addVariable(QName name, Object value) {&lt;br /&gt;
        vars.put(name, value);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     *&lt;br /&gt;
     * @see javax.xml.xpath.XPathVariableResolver#resolveVariable(javax.xml.namespace.QName)&lt;br /&gt;
     */&lt;br /&gt;
    public Object resolveVariable(QName variableName) {&lt;br /&gt;
        return vars.get(variableName);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Code using it to perform an XPath query:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*Create an XML document builder factory*/&lt;br /&gt;
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();&lt;br /&gt;
&lt;br /&gt;
/*Disable External Entity resolution for different cases*/&lt;br /&gt;
//Not performed here in order to focus on the variable resolver code,&lt;br /&gt;
//but do it in production code!&lt;br /&gt;
&lt;br /&gt;
/*Load XML file*/&lt;br /&gt;
DocumentBuilder builder = dbf.newDocumentBuilder();&lt;br /&gt;
Document doc = builder.parse(new File(&amp;quot;src/test/resources/SampleXPath.xml&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
/* Create and configure parameter resolver */&lt;br /&gt;
String bid = &amp;quot;bk102&amp;quot;;&lt;br /&gt;
SimpleVariableResolver variableResolver = new SimpleVariableResolver();&lt;br /&gt;
variableResolver.addVariable(new QName(&amp;quot;bookId&amp;quot;), bid);&lt;br /&gt;
&lt;br /&gt;
/*Create and configure XPATH expression*/&lt;br /&gt;
XPath xpath = XPathFactory.newInstance().newXPath();&lt;br /&gt;
xpath.setXPathVariableResolver(variableResolver);&lt;br /&gt;
XPathExpression xPathExpression = xpath.compile(&amp;quot;//book[@id=$bookId]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* Apply expression on XML document */&lt;br /&gt;
Object nodes = xPathExpression.evaluate(doc, XPathConstants.NODESET);&lt;br /&gt;
NodeList nodesList = (NodeList) nodes;&lt;br /&gt;
Assert.assertNotNull(nodesList);&lt;br /&gt;
Assert.assertEquals(1, nodesList.getLength());&lt;br /&gt;
Element book = (Element)nodesList.item(0);&lt;br /&gt;
Assert.assertTrue(book.getTextContent().contains(&amp;quot;Ralls, Kim&amp;quot;));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[XPATH_Injection|XPATH Injection]]&lt;br /&gt;
&lt;br /&gt;
== HTML/JavaScript/CSS ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an HTTP response that is sent to the browser.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Either apply strict input validation (whitelist approach) or use output sanitizing+escaping if input validation is not possible (combine both whenever possible).&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*&lt;br /&gt;
INPUT WAY: Receive data from user&lt;br /&gt;
Here it's recommended to use strict input validation using a whitelist approach:&lt;br /&gt;
you ensure that only allowed characters are part of the input received.&lt;br /&gt;
*/&lt;br /&gt;
&lt;br /&gt;
String userInput = &amp;quot;Your user login is owasp-user01&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
/* First we check that the value contains only expected characters*/&lt;br /&gt;
Assert.assertTrue(Pattern.matches(&amp;quot;[a-zA-Z0-9\\s\\-]{1,50}&amp;quot;, userInput));&lt;br /&gt;
&lt;br /&gt;
/* If the first check passes, then ensure that the potentially dangerous characters we have allowed&lt;br /&gt;
for business requirements are not used in a dangerous way.&lt;br /&gt;
For example, here we have allowed the character '-', which can be used in SQL injection, so we&lt;br /&gt;
ensure that this character is not used in a consecutive form.&lt;br /&gt;
Use the Apache Commons Lang v3 API to help with String analysis...&lt;br /&gt;
*/&lt;br /&gt;
Assert.assertEquals(0, StringUtils.countMatches(userInput.replace(&amp;quot; &amp;quot;, &amp;quot;&amp;quot;), &amp;quot;--&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
/*&lt;br /&gt;
OUTPUT WAY: Send data to user&lt;br /&gt;
Here we escape + sanitize any data sent to the user.&lt;br /&gt;
Use the OWASP Java HTML Sanitizer API to handle sanitizing&lt;br /&gt;
Use the OWASP Java Encoder API to handle HTML tag encoding (escaping)&lt;br /&gt;
*/&lt;br /&gt;
&lt;br /&gt;
String outputToUser = &amp;quot;Your &amp;lt;p&amp;gt;user login&amp;lt;/p&amp;gt; is &amp;lt;strong&amp;gt;owasp-user01&amp;lt;/strong&amp;gt;&amp;quot;;&lt;br /&gt;
outputToUser += &amp;quot;&amp;lt;script&amp;gt;alert(22);&amp;lt;/script&amp;gt;&amp;lt;img src='#' onload='javascript:alert(23);'&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
/* Create a sanitizing policy that only allows the tags '&amp;lt;p&amp;gt;' and '&amp;lt;strong&amp;gt;'*/&lt;br /&gt;
PolicyFactory policy = new HtmlPolicyBuilder().allowElements(&amp;quot;p&amp;quot;, &amp;quot;strong&amp;quot;).toFactory();&lt;br /&gt;
&lt;br /&gt;
/* Sanitize the output that will be sent to the user*/&lt;br /&gt;
String safeOutput = policy.sanitize(outputToUser);&lt;br /&gt;
&lt;br /&gt;
/* Encode HTML Tags*/&lt;br /&gt;
safeOutput = Encode.forHtml(safeOutput);&lt;br /&gt;
String finalSafeOutputExpected = &amp;quot;Your &amp;amp;lt;p&amp;amp;gt;user login&amp;amp;lt;/p&amp;amp;gt; is &amp;amp;lt;strong&amp;amp;gt;owasp-user01&amp;amp;lt;/strong&amp;amp;gt;&amp;quot;;&lt;br /&gt;
Assert.assertEquals(finalSafeOutputExpected, safeOutput);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[Cross-site_Scripting_(XSS)| XSS]]&lt;br /&gt;
&lt;br /&gt;
* https://github.com/owasp/java-html-sanitizer&lt;br /&gt;
&lt;br /&gt;
* https://github.com/owasp/owasp-java-encoder/&lt;br /&gt;
&lt;br /&gt;
* https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html&lt;br /&gt;
&lt;br /&gt;
* https://commons.apache.org/proper/commons-lang/javadocs/api-3.4/org/apache/commons/lang3/StringEscapeUtils.html&lt;br /&gt;
&lt;br /&gt;
== LDAP ==&lt;br /&gt;
&lt;br /&gt;
A dedicated [[LDAP_Injection_Prevention_Cheat_Sheet|cheatsheet]] has been created.&lt;br /&gt;
&lt;br /&gt;
== NoSQL ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build a NoSQL API call expression.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
As there are many NoSQL database systems, each with its own call API, it's important to ensure that user input received&lt;br /&gt;
and used to build the API call expression does not contain any character that has a special meaning in the target API syntax.&lt;br /&gt;
This prevents the input from being used to escape the initial call expression and create another one based on crafted user input.&lt;br /&gt;
It's also important not to use string concatenation to build the API call expression, but to use the API itself to create the expression.&lt;br /&gt;
&lt;br /&gt;
=== Example - MongoDB ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
 /* Here use MongoDB as target NoSQL DB */&lt;br /&gt;
String userInput = &amp;quot;Brooklyn&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
/* First ensure that the input does not contain any special characters for the current NoSQL DB call API,&lt;br /&gt;
here they are: ' &amp;quot; \ ; { } $&lt;br /&gt;
*/&lt;br /&gt;
//Avoid a regexp this time in order to make the validation code easier to read and understand...&lt;br /&gt;
ArrayList&amp;lt;String&amp;gt; specialCharsList = new ArrayList&amp;lt;String&amp;gt;() {{&lt;br /&gt;
    add(&amp;quot;'&amp;quot;);&lt;br /&gt;
    add(&amp;quot;\&amp;quot;&amp;quot;);&lt;br /&gt;
    add(&amp;quot;\\&amp;quot;);&lt;br /&gt;
    add(&amp;quot;;&amp;quot;);&lt;br /&gt;
    add(&amp;quot;{&amp;quot;);&lt;br /&gt;
    add(&amp;quot;}&amp;quot;);&lt;br /&gt;
    add(&amp;quot;$&amp;quot;);&lt;br /&gt;
}};&lt;br /&gt;
specialCharsList.forEach(specChar -&amp;gt; Assert.assertFalse(userInput.contains(specChar)));&lt;br /&gt;
//Also add a check on the input's max size&lt;br /&gt;
Assert.assertTrue(userInput.length() &amp;lt;= 50);&lt;br /&gt;
&lt;br /&gt;
/* Then perform query on database using API to build expression */&lt;br /&gt;
//Connect to the local MongoDB instance&lt;br /&gt;
try(MongoClient mongoClient = new MongoClient()){&lt;br /&gt;
    MongoDatabase db = mongoClient.getDatabase(&amp;quot;test&amp;quot;);&lt;br /&gt;
    //Use API query builder to create call expression&lt;br /&gt;
    //Create expression&lt;br /&gt;
    Bson expression = eq(&amp;quot;borough&amp;quot;, userInput);&lt;br /&gt;
    //Perform call&lt;br /&gt;
    FindIterable&amp;lt;org.bson.Document&amp;gt; restaurants = db.getCollection(&amp;quot;restaurants&amp;quot;).find(expression);&lt;br /&gt;
    //Verify result consistency&lt;br /&gt;
    restaurants.forEach(new Block&amp;lt;org.bson.Document&amp;gt;() {&lt;br /&gt;
        @Override&lt;br /&gt;
        public void apply(final org.bson.Document doc) {&lt;br /&gt;
            String restBorough = (String)doc.get(&amp;quot;borough&amp;quot;);&lt;br /&gt;
            Assert.assertTrue(&amp;quot;Brooklyn&amp;quot;.equals(restBorough));&lt;br /&gt;
        }&lt;br /&gt;
    });&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_NoSQL_injection|Testing for NoSQL injection]]&lt;br /&gt;
&lt;br /&gt;
https://ckarande.gitbooks.io/owasp-nodegoat-tutorial/content/tutorial/a1_-_sql_and_nosql_injection.html&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/ftp/arxiv/papers/1506/1506.04082.pdf&lt;br /&gt;
&lt;br /&gt;
== Log Injection ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
[[Log_Injection|Log Injection]] occurs when an application includes untrusted data in an application log message (e.g., an attacker can cause an additional log entry that looks like it came from a completely different user, if they can inject CRLF characters in the untrusted data). More information about this attack is available on the OWASP [[Log Injection]] page.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
To prevent an attacker from writing malicious content into the application log, apply defenses such as:&lt;br /&gt;
&lt;br /&gt;
* Filter the user input to prevent injection of '''C'''arriage '''R'''eturn (CR) or '''L'''ine '''F'''eed (LF) characters.&lt;br /&gt;
* Limit the size of the user input value used to create the log message.&lt;br /&gt;
* Make sure [[XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet|all XSS defenses]] are applied when viewing log files in a web browser.&lt;br /&gt;
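The first two defenses can also be applied directly at code level, before the message reaches the logger (a minimal sketch; the class and method names are illustrative and not part of any logging API, and the 500-character limit is an example chosen to match the policies shown in this section):

```java
public final class LogSanitizer {

    // Example size limit; adapt to your logging policy
    private static final int MAX_LOG_MESSAGE_LENGTH = 500;

    private LogSanitizer() {
        // Utility class, not instantiable
    }

    public static String sanitizeForLog(String userInput) {
        if (userInput == null) {
            return "";
        }
        // Defense 1: remove CR and LF characters so an attacker cannot forge extra log entries
        String sanitized = userInput.replace("\r", "").replace("\n", "");
        // Defense 2: limit the size of the value used to create the log message
        if (sanitized.length() > MAX_LOG_MESSAGE_LENGTH) {
            sanitized = sanitized.substring(0, MAX_LOG_MESSAGE_LENGTH);
        }
        return sanitized;
    }
}
```

Typical usage would be `logger.info("User login: {}", LogSanitizer.sanitizeForLog(userInput));`, keeping the policy-level defenses below as a second layer.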
&lt;br /&gt;
=== Example using Log4j2 ===&lt;br /&gt;
&lt;br /&gt;
Configuration of a logging policy to roll over 10 files of 5MB each, encoding and limiting the log message using the [https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout Log4j2 Pattern ''encode{}{CRLF}''], introduced in [https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-api Log4j2 v2.10.0], and the ''%.-500m'' message size limit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot; highlight=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;UTF-8&amp;quot;?&amp;gt;&lt;br /&gt;
&amp;lt;Configuration status=&amp;quot;error&amp;quot; name=&amp;quot;SecureLoggingPolicy&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;Appenders&amp;gt;&lt;br /&gt;
        &amp;lt;RollingFile name=&amp;quot;RollingFile&amp;quot; fileName=&amp;quot;App.log&amp;quot; filePattern=&amp;quot;App-%i.log&amp;quot; ignoreExceptions=&amp;quot;false&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;PatternLayout&amp;gt;&lt;br /&gt;
                &amp;lt;!-- Encode any CRLF chars in the message and limit its maximum size to 500 characters --&amp;gt;&lt;br /&gt;
                &amp;lt;Pattern&amp;gt;%d{ISO8601} %-5p - %encode{%.-500m}{CRLF}%n&amp;lt;/Pattern&amp;gt;&lt;br /&gt;
            &amp;lt;/PatternLayout&amp;gt;&lt;br /&gt;
            &amp;lt;Policies&amp;gt;&lt;br /&gt;
                &amp;lt;SizeBasedTriggeringPolicy size=&amp;quot;5MB&amp;quot;/&amp;gt;&lt;br /&gt;
            &amp;lt;/Policies&amp;gt;&lt;br /&gt;
            &amp;lt;DefaultRolloverStrategy max=&amp;quot;10&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/RollingFile&amp;gt;&lt;br /&gt;
    &amp;lt;/Appenders&amp;gt;&lt;br /&gt;
    &amp;lt;Loggers&amp;gt;&lt;br /&gt;
        &amp;lt;Root level=&amp;quot;debug&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;AppenderRef ref=&amp;quot;RollingFile&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/Root&amp;gt;&lt;br /&gt;
    &amp;lt;/Loggers&amp;gt;&lt;br /&gt;
&amp;lt;/Configuration&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usage of the logger at code level:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.apache.logging.log4j.LogManager;&lt;br /&gt;
import org.apache.logging.log4j.Logger;&lt;br /&gt;
...&lt;br /&gt;
// No special action needed because security actions are performed at the logging policy level&lt;br /&gt;
Logger logger = LogManager.getLogger(MyClass.class);&lt;br /&gt;
logger.info(logMessage);&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Example using Logback with the OWASP Security Logging library ===&lt;br /&gt;
&lt;br /&gt;
Configuration of a logging policy to roll over 10 files of 5MB each, encoding and limiting the log message using the [https://github.com/javabeanz/owasp-security-logging/wiki/Log-Forging CRLFConverter], provided by the [[OWASP Security Logging Project]], and the ''%.-500msg'' message size limit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot; highlight=&amp;quot;4,17&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;UTF-8&amp;quot;?&amp;gt;&lt;br /&gt;
&amp;lt;configuration&amp;gt;&lt;br /&gt;
    &amp;lt;!-- Define the CRLFConverter --&amp;gt;&lt;br /&gt;
    &amp;lt;conversionRule conversionWord=&amp;quot;crlf&amp;quot; converterClass=&amp;quot;org.owasp.security.logging.mask.CRLFConverter&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;appender name=&amp;quot;RollingFile&amp;quot; class=&amp;quot;ch.qos.logback.core.rolling.RollingFileAppender&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;file&amp;gt;App.log&amp;lt;/file&amp;gt;&lt;br /&gt;
        &amp;lt;rollingPolicy class=&amp;quot;ch.qos.logback.core.rolling.FixedWindowRollingPolicy&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;fileNamePattern&amp;gt;App-%i.log&amp;lt;/fileNamePattern&amp;gt;&lt;br /&gt;
            &amp;lt;minIndex&amp;gt;1&amp;lt;/minIndex&amp;gt;&lt;br /&gt;
            &amp;lt;maxIndex&amp;gt;10&amp;lt;/maxIndex&amp;gt;&lt;br /&gt;
        &amp;lt;/rollingPolicy&amp;gt;&lt;br /&gt;
        &amp;lt;triggeringPolicy class=&amp;quot;ch.qos.logback.core.rolling.SizeBasedTriggeringPolicy&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;maxFileSize&amp;gt;5MB&amp;lt;/maxFileSize&amp;gt;&lt;br /&gt;
        &amp;lt;/triggeringPolicy&amp;gt;&lt;br /&gt;
        &amp;lt;encoder&amp;gt;&lt;br /&gt;
            &amp;lt;!-- Encode any CRLF chars in the message and limit its maximum size to 500 characters --&amp;gt;&lt;br /&gt;
            &amp;lt;pattern&amp;gt;%relative [%thread] %-5level %logger{35} - %crlf(%.-500msg) %n&amp;lt;/pattern&amp;gt;&lt;br /&gt;
        &amp;lt;/encoder&amp;gt;&lt;br /&gt;
    &amp;lt;/appender&amp;gt;&lt;br /&gt;
    &amp;lt;root level=&amp;quot;debug&amp;quot;&amp;gt;&lt;br /&gt;
        &amp;lt;appender-ref ref=&amp;quot;RollingFile&amp;quot; /&amp;gt;&lt;br /&gt;
    &amp;lt;/root&amp;gt;&lt;br /&gt;
&amp;lt;/configuration&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You also have to add the [https://github.com/javabeanz/owasp-security-logging/wiki/Usage-with-Logback OWASP Security Logging] dependency to your project.&lt;br /&gt;
&lt;br /&gt;
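A minimal Maven dependency declaration might look as follows (the coordinates are taken from the project's documentation; the version shown is illustrative, so verify the latest release before use):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;!-- Version is illustrative; verify the latest release on Maven Central --&amp;gt;&lt;br /&gt;
&amp;lt;dependency&amp;gt;&lt;br /&gt;
    &amp;lt;groupId&amp;gt;org.owasp&amp;lt;/groupId&amp;gt;&lt;br /&gt;
    &amp;lt;artifactId&amp;gt;security-logging-logback&amp;lt;/artifactId&amp;gt;&lt;br /&gt;
    &amp;lt;version&amp;gt;1.1.6&amp;lt;/version&amp;gt;&lt;br /&gt;
&amp;lt;/dependency&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;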
Usage of the logger at code level:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.slf4j.Logger;&lt;br /&gt;
import org.slf4j.LoggerFactory;&lt;br /&gt;
...&lt;br /&gt;
// No special action needed because security actions are performed at the logging policy level&lt;br /&gt;
Logger logger = LoggerFactory.getLogger(MyClass.class);&lt;br /&gt;
logger.info(logMessage);&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout (See the encode{}{CRLF} function)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Note that the default Log4j2 encode{} encoder is HTML, which does NOT prevent log injection. It prevents XSS attacks against viewing logs using a browser. OWASP recommends defending against XSS attacks in such situations in the log viewer application itself, not by preencoding all the log messages with HTML encoding as such log entries may be used/viewed in many other log viewing/analysis tools that don't expect the log data to be pre-HTML encoded.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
https://logging.apache.org/log4j/2.x/manual/configuration.html&lt;br /&gt;
&lt;br /&gt;
https://logging.apache.org/log4j/2.x/manual/appenders.html&lt;br /&gt;
&lt;br /&gt;
https://github.com/javabeanz/owasp-security-logging/wiki/Log-Forging - See the Logback section about the CRLFConverter this library provides.&lt;br /&gt;
&lt;br /&gt;
https://github.com/javabeanz/owasp-security-logging/wiki/Usage-with-Logback&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
[[User:Dominique_RIGHETTO|Dominique Righetto]] - dominique.righetto@owasp.org&lt;br /&gt;
&lt;br /&gt;
[[User:wichers|Dave Wichers]] - dave.wichers@owasp.org (For just the Log Injection section)&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets = &lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Injection_Prevention_Cheat_Sheet_in_Java&amp;diff=245535</id>
		<title>Injection Prevention Cheat Sheet in Java</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Injection_Prevention_Cheat_Sheet_in_Java&amp;diff=245535"/>
				<updated>2018-11-26T16:41:28Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Authors and Primary Editors */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
= Introduction  =&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
This document aims to provide some tips for handling ''Injection'' in Java application code.&lt;br /&gt;
&lt;br /&gt;
The sample code used in these tips is located [https://github.com/righettod/injection-cheat-sheets here].&lt;br /&gt;
&lt;br /&gt;
= What is Injection ? =&lt;br /&gt;
&lt;br /&gt;
[[Top_10_2013-A1-Injection|Injection]] in the OWASP Top 10 is defined as follows:&lt;br /&gt;
&lt;br /&gt;
''Consider anyone who can send untrusted data to the system, including external users, internal users, and administrators.''&lt;br /&gt;
&lt;br /&gt;
= General advice to prevent Injection =&lt;br /&gt;
&lt;br /&gt;
The following points can be applied, in a general way, to prevent ''Injection'' issues:&lt;br /&gt;
&lt;br /&gt;
#  Apply '''Input Validation''' (using whitelist approach) combined with '''Output Sanitizing+Escaping''' on user input/output.&lt;br /&gt;
#  If you need to interact with the system, try to use the API features provided by your technology stack (Java / .Net / PHP...) instead of building a command.&lt;br /&gt;
&lt;br /&gt;
Additional advice is provided in this [[Input_Validation_Cheat_Sheet|cheatsheet]].&lt;br /&gt;
&lt;br /&gt;
= Specific Injection types =&lt;br /&gt;
&lt;br /&gt;
''Examples in this section are provided in Java (see the associated Maven project), but the advice is applicable to other technologies like .Net / PHP / Ruby / Python...''&lt;br /&gt;
&lt;br /&gt;
== SQL ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an SQL query as a String and executes it.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use ''Query Parameterization'' in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*No DB framework used here in order to show the real use of Prepared Statement from Java API*/&lt;br /&gt;
/*Open connection with H2 database and use it*/&lt;br /&gt;
Class.forName(&amp;quot;org.h2.Driver&amp;quot;);&lt;br /&gt;
String jdbcUrl = &amp;quot;jdbc:h2:file:&amp;quot; + new File(&amp;quot;.&amp;quot;).getAbsolutePath() + &amp;quot;/target/db&amp;quot;;&lt;br /&gt;
try (Connection con = DriverManager.getConnection(jdbcUrl)) {&lt;br /&gt;
&lt;br /&gt;
    /* Sample A: Select data using Prepared Statement*/&lt;br /&gt;
    String query = &amp;quot;select * from color where friendly_name = ?&amp;quot;;&lt;br /&gt;
    List&amp;lt;String&amp;gt; colors = new ArrayList&amp;lt;&amp;gt;();&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setString(1, &amp;quot;yellow&amp;quot;);&lt;br /&gt;
        try (ResultSet rSet = pStatement.executeQuery()) {&lt;br /&gt;
            while (rSet.next()) {&lt;br /&gt;
                colors.add(rSet.getString(1));&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, colors.size());&lt;br /&gt;
    Assert.assertTrue(colors.contains(&amp;quot;yellow&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
    /* Sample B: Insert data using Prepared Statement*/&lt;br /&gt;
    query = &amp;quot;insert into color(friendly_name, red, green, blue) values(?, ?, ?, ?)&amp;quot;;&lt;br /&gt;
    int insertedRecordCount;&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setString(1, &amp;quot;orange&amp;quot;);&lt;br /&gt;
        pStatement.setInt(2, 239);&lt;br /&gt;
        pStatement.setInt(3, 125);&lt;br /&gt;
        pStatement.setInt(4, 11);&lt;br /&gt;
        insertedRecordCount = pStatement.executeUpdate();&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, insertedRecordCount);&lt;br /&gt;
&lt;br /&gt;
   /* Sample C: Update data using Prepared Statement*/&lt;br /&gt;
    query = &amp;quot;update color set blue = ? where friendly_name = ?&amp;quot;;&lt;br /&gt;
    int updatedRecordCount;&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setInt(1, 10);&lt;br /&gt;
        pStatement.setString(2, &amp;quot;orange&amp;quot;);&lt;br /&gt;
        updatedRecordCount = pStatement.executeUpdate();&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, updatedRecordCount);&lt;br /&gt;
&lt;br /&gt;
   /* Sample D: Delete data using Prepared Statement*/&lt;br /&gt;
    query = &amp;quot;delete from color where friendly_name = ?&amp;quot;;&lt;br /&gt;
    int deletedRecordCount;&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setString(1, &amp;quot;orange&amp;quot;);&lt;br /&gt;
        deletedRecordCount = pStatement.executeUpdate();&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, deletedRecordCount);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[SQL_Injection_Prevention_Cheat_Sheet|SQL Injection Prevention Cheat Sheet]]&lt;br /&gt;
&lt;br /&gt;
== JPA ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build a JPA query as a String and executes it. It is quite similar to SQL injection, but here the altered language is not SQL but JPQL.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use Java Persistence Query Language '''Query Parameterization''' in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
EntityManager entityManager = null;&lt;br /&gt;
try {&lt;br /&gt;
    /* Get a ref on EntityManager to access DB */&lt;br /&gt;
    entityManager = Persistence.createEntityManagerFactory(&amp;quot;testJPA&amp;quot;).createEntityManager();&lt;br /&gt;
&lt;br /&gt;
    /* Define a parameterized query prototype using a named parameter to enhance readability */&lt;br /&gt;
    String queryPrototype = &amp;quot;select c from Color c where c.friendlyName = :colorName&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /* Create the query, set the named parameter and execute the query */&lt;br /&gt;
    Query queryObject = entityManager.createQuery(queryPrototype);&lt;br /&gt;
    Color c = (Color) queryObject.setParameter(&amp;quot;colorName&amp;quot;, &amp;quot;yellow&amp;quot;).getSingleResult();&lt;br /&gt;
&lt;br /&gt;
    /* Ensure that the object obtained is the right one */&lt;br /&gt;
    Assert.assertNotNull(c);&lt;br /&gt;
    Assert.assertEquals(c.getFriendlyName(), &amp;quot;yellow&amp;quot;);&lt;br /&gt;
    Assert.assertEquals(c.getRed(), 213);&lt;br /&gt;
    Assert.assertEquals(c.getGreen(), 242);&lt;br /&gt;
    Assert.assertEquals(c.getBlue(), 26);&lt;br /&gt;
} finally {&lt;br /&gt;
    if (entityManager != null &amp;amp;&amp;amp; entityManager.isOpen()) {&lt;br /&gt;
        entityManager.close();&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* https://software-security.sans.org/developer-how-to/fix-sql-injection-in-java-persistence-api-jpa&lt;br /&gt;
&lt;br /&gt;
== Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an Operating System command as a String and executes it.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use the '''API''' provided by your technology stack, instead of building a system command, in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/* The context here is, for example, performing a PING against a computer.&lt;br /&gt;
* The prevention is to use the feature provided by the Java API instead of building&lt;br /&gt;
* a system command as a String and executing it */&lt;br /&gt;
InetAddress host = InetAddress.getByName(&amp;quot;localhost&amp;quot;);&lt;br /&gt;
Assert.assertTrue(host.isReachable(5000));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[Command_Injection|Command Injection]]&lt;br /&gt;
&lt;br /&gt;
==  XML: External Entity attack ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application loads a received XML stream using an XML parser instance in which the resolution of External Entities is not disabled.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Disable the resolution of External Entities in the parser instance to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*Create a XML document builder factory*/&lt;br /&gt;
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();&lt;br /&gt;
&lt;br /&gt;
/*Disable External Entity resolution for different cases*/&lt;br /&gt;
// This is the PRIMARY defense. If DTDs (doctypes) are disallowed,&lt;br /&gt;
// almost all XML entity attacks are prevented&lt;br /&gt;
// Xerces 2 only - http://xerces.apache.org/xerces2-j/features.html#disallow-doctype-decl&lt;br /&gt;
String feature = &amp;quot;http://apache.org/xml/features/disallow-doctype-decl&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, true);&lt;br /&gt;
&lt;br /&gt;
// If you can't completely disable DTDs, then at least do the following:&lt;br /&gt;
// Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-general-entities&lt;br /&gt;
// Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-general-entities&lt;br /&gt;
// JDK7+ - http://xml.org/sax/features/external-general-entities&lt;br /&gt;
feature = &amp;quot;http://xml.org/sax/features/external-general-entities&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, false);&lt;br /&gt;
&lt;br /&gt;
// Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-parameter-entities&lt;br /&gt;
// Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-parameter-entities&lt;br /&gt;
// JDK7+ - http://xml.org/sax/features/external-parameter-entities&lt;br /&gt;
feature = &amp;quot;http://xml.org/sax/features/external-parameter-entities&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, false);&lt;br /&gt;
&lt;br /&gt;
// Disable external DTDs as well&lt;br /&gt;
feature = &amp;quot;http://apache.org/xml/features/nonvalidating/load-external-dtd&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, false);&lt;br /&gt;
&lt;br /&gt;
// and these as well, per Timothy Morgan's 2014 paper: &amp;quot;XML Schema, DTD, and Entity Attacks&amp;quot;&lt;br /&gt;
dbf.setXIncludeAware(false);&lt;br /&gt;
dbf.setExpandEntityReferences(false);&lt;br /&gt;
&lt;br /&gt;
/*Load XML file*/&lt;br /&gt;
DocumentBuilder builder = dbf.newDocumentBuilder();&lt;br /&gt;
//Here an org.xml.sax.SAXParseException will be thrown because the XML contains an External Entity.&lt;br /&gt;
builder.parse(new File(&amp;quot;src/test/resources/SampleXXE.xml&amp;quot;));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== XML: XPath Injection ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an XPath query as a String and executes it.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use '''XPath Variable Resolver''' in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
'''Variable Resolver''' implementation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * Resolver used to define parameters for an XPATH expression.&lt;br /&gt;
 *&lt;br /&gt;
 */&lt;br /&gt;
public class SimpleVariableResolver implements XPathVariableResolver {&lt;br /&gt;
&lt;br /&gt;
    private final Map&amp;lt;QName, Object&amp;gt; vars = new HashMap&amp;lt;QName, Object&amp;gt;();&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * External method to add a parameter&lt;br /&gt;
     *&lt;br /&gt;
     * @param name Parameter name&lt;br /&gt;
     * @param value Parameter value&lt;br /&gt;
     */&lt;br /&gt;
    public void addVariable(QName name, Object value) {&lt;br /&gt;
        vars.put(name, value);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     *&lt;br /&gt;
     * @see javax.xml.xpath.XPathVariableResolver#resolveVariable(javax.xml.namespace.QName)&lt;br /&gt;
     */&lt;br /&gt;
    public Object resolveVariable(QName variableName) {&lt;br /&gt;
        return vars.get(variableName);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Code using it to perform an XPath query:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*Create a XML document builder factory*/&lt;br /&gt;
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();&lt;br /&gt;
&lt;br /&gt;
/*Disable External Entity resolution for different cases*/&lt;br /&gt;
//Not done here in order to focus on the variable resolver code,&lt;br /&gt;
//but do it in production code!&lt;br /&gt;
&lt;br /&gt;
/*Load XML file*/&lt;br /&gt;
DocumentBuilder builder = dbf.newDocumentBuilder();&lt;br /&gt;
Document doc = builder.parse(new File(&amp;quot;src/test/resources/SampleXPath.xml&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
/* Create and configure parameter resolver */&lt;br /&gt;
String bid = &amp;quot;bk102&amp;quot;;&lt;br /&gt;
SimpleVariableResolver variableResolver = new SimpleVariableResolver();&lt;br /&gt;
variableResolver.addVariable(new QName(&amp;quot;bookId&amp;quot;), bid);&lt;br /&gt;
&lt;br /&gt;
/*Create and configure XPATH expression*/&lt;br /&gt;
XPath xpath = XPathFactory.newInstance().newXPath();&lt;br /&gt;
xpath.setXPathVariableResolver(variableResolver);&lt;br /&gt;
XPathExpression xPathExpression = xpath.compile(&amp;quot;//book[@id=$bookId]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* Apply expression on XML document */&lt;br /&gt;
Object nodes = xPathExpression.evaluate(doc, XPathConstants.NODESET);&lt;br /&gt;
NodeList nodesList = (NodeList) nodes;&lt;br /&gt;
Assert.assertNotNull(nodesList);&lt;br /&gt;
Assert.assertEquals(1, nodesList.getLength());&lt;br /&gt;
Element book = (Element)nodesList.item(0);&lt;br /&gt;
Assert.assertTrue(book.getTextContent().contains(&amp;quot;Ralls, Kim&amp;quot;));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[XPATH_Injection|XPATH Injection]]&lt;br /&gt;
&lt;br /&gt;
== HTML/JavaScript/CSS ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an HTTP response and sends it to the browser.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Either apply strict input validation (whitelist approach) or use output sanitizing+escaping if input validation is not possible (combine both whenever possible).&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*&lt;br /&gt;
INPUT WAY: Receive data from the user.&lt;br /&gt;
Here it is recommended to use strict input validation using a whitelist approach:&lt;br /&gt;
you ensure that only allowed characters are part of the input received.&lt;br /&gt;
*/&lt;br /&gt;
&lt;br /&gt;
String userInput = &amp;quot;Your user login is owasp-user01&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
/* First we check that the value contains only expected characters*/&lt;br /&gt;
Assert.assertTrue(Pattern.matches(&amp;quot;[a-zA-Z0-9\\s\\-]{1,50}&amp;quot;, userInput));&lt;br /&gt;
&lt;br /&gt;
/* If the first check passes, then ensure that the potentially dangerous characters we have allowed&lt;br /&gt;
for business requirements are not used in a dangerous way.&lt;br /&gt;
For example, here we have allowed the character '-', and this can be used in SQL injection, so we&lt;br /&gt;
ensure that this character is not used in a continuous form.&lt;br /&gt;
Use the API COMMONS LANG v3 to help with String analysis...&lt;br /&gt;
*/&lt;br /&gt;
Assert.assertEquals(0, StringUtils.countMatches(userInput.replace(&amp;quot; &amp;quot;, &amp;quot;&amp;quot;), &amp;quot;--&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
/*&lt;br /&gt;
OUTPUT WAY: Send data to the user.&lt;br /&gt;
Here we escape + sanitize any data sent to the user.&lt;br /&gt;
Use the OWASP Java HTML Sanitizer API to handle sanitizing.&lt;br /&gt;
Use the OWASP Java Encoder API to handle HTML tag encoding (escaping).&lt;br /&gt;
*/&lt;br /&gt;
&lt;br /&gt;
String outputToUser = &amp;quot;Your &amp;lt;p&amp;gt;user login&amp;lt;/p&amp;gt; is &amp;lt;strong&amp;gt;owasp-user01&amp;lt;/strong&amp;gt;&amp;quot;;&lt;br /&gt;
outputToUser += &amp;quot;&amp;lt;script&amp;gt;alert(22);&amp;lt;/script&amp;gt;&amp;lt;img src='#' onload='javascript:alert(23);'&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
/* Create a sanitizing policy that only allows the tags '&amp;lt;p&amp;gt;' and '&amp;lt;strong&amp;gt;'*/&lt;br /&gt;
PolicyFactory policy = new HtmlPolicyBuilder().allowElements(&amp;quot;p&amp;quot;, &amp;quot;strong&amp;quot;).toFactory();&lt;br /&gt;
&lt;br /&gt;
/* Sanitize the output that will be sent to the user*/&lt;br /&gt;
String safeOutput = policy.sanitize(outputToUser);&lt;br /&gt;
&lt;br /&gt;
/* Encode HTML tags*/&lt;br /&gt;
safeOutput = Encode.forHtml(safeOutput);&lt;br /&gt;
String finalSafeOutputExpected = &amp;quot;Your &amp;amp;lt;p&amp;amp;gt;user login&amp;amp;lt;/p&amp;amp;gt; is &amp;amp;lt;strong&amp;amp;gt;owasp-user01&amp;amp;lt;/strong&amp;amp;gt;&amp;quot;;&lt;br /&gt;
Assert.assertEquals(finalSafeOutputExpected, safeOutput);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[Cross-site_Scripting_(XSS)| XSS]]&lt;br /&gt;
&lt;br /&gt;
* https://github.com/owasp/java-html-sanitizer&lt;br /&gt;
&lt;br /&gt;
* https://github.com/owasp/owasp-java-encoder/&lt;br /&gt;
&lt;br /&gt;
* https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html&lt;br /&gt;
&lt;br /&gt;
* https://commons.apache.org/proper/commons-lang/javadocs/api-3.4/org/apache/commons/lang3/StringEscapeUtils.html&lt;br /&gt;
&lt;br /&gt;
== LDAP ==&lt;br /&gt;
&lt;br /&gt;
A dedicated [[LDAP_Injection_Prevention_Cheat_Sheet|cheatsheet]] has been created.&lt;br /&gt;
&lt;br /&gt;
== NoSQL ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build a NoSQL API call expression.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
As there are many NoSQL database systems, and each one uses its own API for calls, it is important to ensure that user input received&lt;br /&gt;
and used to build the API call expression does not contain any character that has a special meaning in the target API syntax.&lt;br /&gt;
This prevents the input from being used to escape the initial call expression and create another one based on crafted user input.&lt;br /&gt;
It is also important not to use string concatenation to build the API call expression, but to use the API itself to create the expression.&lt;br /&gt;
&lt;br /&gt;
=== Example - MongoDB ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
 /* Here use MongoDB as target NoSQL DB */&lt;br /&gt;
String userInput = &amp;quot;Brooklyn&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
/* First ensure that the input does not contain any special characters for the current NoSQL DB call API;&lt;br /&gt;
here they are: ' &amp;quot; \ ; { } $&lt;br /&gt;
*/&lt;br /&gt;
//Avoid a regexp this time in order to make the validation code easier to read and understand...&lt;br /&gt;
ArrayList&amp;lt;String&amp;gt; specialCharsList = new ArrayList&amp;lt;String&amp;gt;() {{&lt;br /&gt;
    add(&amp;quot;'&amp;quot;);&lt;br /&gt;
    add(&amp;quot;\&amp;quot;&amp;quot;);&lt;br /&gt;
    add(&amp;quot;\\&amp;quot;);&lt;br /&gt;
    add(&amp;quot;;&amp;quot;);&lt;br /&gt;
    add(&amp;quot;{&amp;quot;);&lt;br /&gt;
    add(&amp;quot;}&amp;quot;);&lt;br /&gt;
    add(&amp;quot;$&amp;quot;);&lt;br /&gt;
}};&lt;br /&gt;
specialCharsList.forEach(specChar -&amp;gt; Assert.assertFalse(userInput.contains(specChar)));&lt;br /&gt;
//Also add a check on the input's maximum size&lt;br /&gt;
Assert.assertTrue(userInput.length() &amp;lt;= 50);&lt;br /&gt;
&lt;br /&gt;
/* Then perform query on database using API to build expression */&lt;br /&gt;
//Connect to the local MongoDB instance&lt;br /&gt;
try(MongoClient mongoClient = new MongoClient()){&lt;br /&gt;
    MongoDatabase db = mongoClient.getDatabase(&amp;quot;test&amp;quot;);&lt;br /&gt;
    //Use API query builder to create call expression&lt;br /&gt;
    //Create expression&lt;br /&gt;
    Bson expression = eq(&amp;quot;borough&amp;quot;, userInput);&lt;br /&gt;
    //Perform call&lt;br /&gt;
    FindIterable&amp;lt;org.bson.Document&amp;gt; restaurants = db.getCollection(&amp;quot;restaurants&amp;quot;).find(expression);&lt;br /&gt;
    //Verify result consistency&lt;br /&gt;
    restaurants.forEach(new Block&amp;lt;org.bson.Document&amp;gt;() {&lt;br /&gt;
        @Override&lt;br /&gt;
        public void apply(final org.bson.Document doc) {&lt;br /&gt;
            String restBorough = (String)doc.get(&amp;quot;borough&amp;quot;);&lt;br /&gt;
            Assert.assertTrue(&amp;quot;Brooklyn&amp;quot;.equals(restBorough));&lt;br /&gt;
        }&lt;br /&gt;
    });&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_NoSQL_injection|Testing for NoSQL injection]]&lt;br /&gt;
&lt;br /&gt;
https://ckarande.gitbooks.io/owasp-nodegoat-tutorial/content/tutorial/a1_-_sql_and_nosql_injection.html&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/ftp/arxiv/papers/1506/1506.04082.pdf&lt;br /&gt;
&lt;br /&gt;
== Log Injection ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;This section is a work in progress, so it is not yet ready for production usage!&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
[[Log_Injection|Log Injection]] occurs when an application includes untrusted data in an application log message (e.g., an attacker can cause an additional log entry that looks like it came from a completely different user, if they can inject CRLF characters in the untrusted data).&lt;br /&gt;
&lt;br /&gt;
More information about this attack is available on the OWASP [[Log Injection]] page.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
To prevent an attacker from writing malicious content into the application log, apply defenses such as:&lt;br /&gt;
&lt;br /&gt;
* Filter the user input to prevent injection of '''C'''arriage '''R'''eturn (CR) or '''L'''ine '''F'''eed (LF) characters.&lt;br /&gt;
* Limit the size of the user input value used to create the log message.&lt;br /&gt;
* Make sure [[XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet|all XSS defenses]] are applied when viewing log files in a web browser.&lt;br /&gt;
&lt;br /&gt;
=== Example using Log4j2 ===&lt;br /&gt;
&lt;br /&gt;
Configuration of a logging policy to roll over 10 files of 5MB each, encoding/limiting the log message using the [https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout Log4j2 Pattern ''encode{}{CRLF}''], introduced in [https://mvnrepository.com/artifact/org.apache.logging.log4j/log4j-api Log4j2 v2.10.0], combined with the ''%.-500m'' message-size limit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot; highlight=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;UTF-8&amp;quot;?&amp;gt;&lt;br /&gt;
&amp;lt;Configuration status=&amp;quot;error&amp;quot; name=&amp;quot;SecureLoggingPolicy&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;Appenders&amp;gt;&lt;br /&gt;
        &amp;lt;RollingFile name=&amp;quot;RollingFile&amp;quot; fileName=&amp;quot;App.log&amp;quot; filePattern=&amp;quot;App-%i.log&amp;quot; ignoreExceptions=&amp;quot;false&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;PatternLayout&amp;gt;&lt;br /&gt;
                &amp;lt;!-- Encode any CRLF chars in the message and limit its maximum size to 500 characters --&amp;gt;&lt;br /&gt;
                &amp;lt;Pattern&amp;gt;%d{ISO8601} %-5p - %encode{%.-500m}{CRLF}%n&amp;lt;/Pattern&amp;gt;&lt;br /&gt;
            &amp;lt;/PatternLayout&amp;gt;&lt;br /&gt;
            &amp;lt;Policies&amp;gt;&lt;br /&gt;
                &amp;lt;SizeBasedTriggeringPolicy size=&amp;quot;5MB&amp;quot;/&amp;gt;&lt;br /&gt;
            &amp;lt;/Policies&amp;gt;&lt;br /&gt;
            &amp;lt;DefaultRolloverStrategy max=&amp;quot;10&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/RollingFile&amp;gt;&lt;br /&gt;
    &amp;lt;/Appenders&amp;gt;&lt;br /&gt;
    &amp;lt;Loggers&amp;gt;&lt;br /&gt;
        &amp;lt;Root level=&amp;quot;debug&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;AppenderRef ref=&amp;quot;RollingFile&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/Root&amp;gt;&lt;br /&gt;
    &amp;lt;/Loggers&amp;gt;&lt;br /&gt;
&amp;lt;/Configuration&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usage of the logger at code level:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.apache.logging.log4j.LogManager;&lt;br /&gt;
import org.apache.logging.log4j.Logger;&lt;br /&gt;
...&lt;br /&gt;
// No special action needed because security actions are performed at the logging policy level&lt;br /&gt;
Logger logger = LogManager.getLogger(MyClass.class);&lt;br /&gt;
logger.info(logMessage);&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout (See the encode{}{CRLF} function)&lt;br /&gt;
 Note that the default Log4j2 encode{} encoder is HTML, which does NOT prevent log injection. It prevents XSS attacks against viewing logs using a browser. OWASP recommends defending against XSS attacks in such situations in the log viewer application itself, not by preencoding all the log messages with HTML encoding as such log entries may be used/viewed in many other log viewing/analysis tools that don't expect the log data to be pre-HTML encoded.&lt;br /&gt;
&lt;br /&gt;
https://logging.apache.org/log4j/2.x/manual/configuration.html&lt;br /&gt;
&lt;br /&gt;
https://logging.apache.org/log4j/2.x/manual/appenders.html&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
[[User:Dominique_RIGHETTO|Dominique Righetto]] - dominique.righetto@owasp.org&lt;br /&gt;
&lt;br /&gt;
[[User:wichers|Dave Wichers]] - dave.wichers@owasp.org (For just the Log Injection section)&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets = &lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Injection_Prevention_Cheat_Sheet_in_Java&amp;diff=245528</id>
		<title>Injection Prevention Cheat Sheet in Java</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Injection_Prevention_Cheat_Sheet_in_Java&amp;diff=245528"/>
				<updated>2018-11-26T15:12:49Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Log Injection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt; __NOTOC__&lt;br /&gt;
&amp;lt;div style=&amp;quot;width:100%;height:160px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Cheatsheets-header.jpg|link=]]&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
Last revision (mm/dd/yy): '''{{REVISIONMONTH}}/{{REVISIONDAY}}/{{REVISIONYEAR}}''' &lt;br /&gt;
= Introduction  =&lt;br /&gt;
 __TOC__{{TOC hidden}}&lt;br /&gt;
&lt;br /&gt;
The objective of this document is to provide tips for handling ''Injection'' in Java application code.&lt;br /&gt;
&lt;br /&gt;
The sample code used in these tips is located [https://github.com/righettod/injection-cheat-sheets here].&lt;br /&gt;
&lt;br /&gt;
= What is Injection ? =&lt;br /&gt;
&lt;br /&gt;
[[Top_10_2013-A1-Injection|Injection]] in the OWASP Top 10 is defined as follows:&lt;br /&gt;
&lt;br /&gt;
''Consider anyone who can send untrusted data to the system, including external users, internal users, and administrators.''&lt;br /&gt;
&lt;br /&gt;
= General advice to prevent Injection =&lt;br /&gt;
&lt;br /&gt;
The following points can be applied, in a general way, to prevent ''Injection'' issues:&lt;br /&gt;
&lt;br /&gt;
#  Apply '''Input Validation''' (using a whitelist approach) combined with '''Output Sanitizing+Escaping''' on user input/output.&lt;br /&gt;
#  If you need to interact with the system, try to use the API features provided by your technology stack (Java / .Net / PHP...) instead of building commands.&lt;br /&gt;
&lt;br /&gt;
Additional advice is provided in this [[Input_Validation_Cheat_Sheet|cheatsheet]].&lt;br /&gt;
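As a minimal illustration of the first point, a whitelist check can be applied before the input is used anywhere. This is only a sketch: the allowed pattern and the 50-character limit are assumptions to be adapted to your own business rules.

```java
import java.util.regex.Pattern;

public class InputWhitelist {

    // Hypothetical business rule: 1 to 50 letters, digits, spaces or dashes
    private static final Pattern ALLOWED = Pattern.compile("[a-zA-Z0-9\\s\\-]{1,50}");

    public static boolean isValid(String userInput) {
        if (userInput == null) {
            return false;
        }
        // Reject the whole input if any character falls outside the whitelist
        return ALLOWED.matcher(userInput).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("owasp-user01"));    // true
        System.out.println(isValid("'; drop table--")); // false: ' and ; are not whitelisted
    }
}
```

Rejecting the whole request when validation fails is generally safer than trying to repair the input.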
&lt;br /&gt;
= Specific Injection types =&lt;br /&gt;
&lt;br /&gt;
''Examples in this section are provided in Java (see the associated Maven project), but the advice is applicable to other technologies like .Net / PHP / Ruby / Python...''&lt;br /&gt;
&lt;br /&gt;
== SQL ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an SQL query as a String and executes it.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use ''Query Parameterization'' in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*No DB framework used here in order to show the real use of Prepared Statement from Java API*/&lt;br /&gt;
/*Open connection with H2 database and use it*/&lt;br /&gt;
Class.forName(&amp;quot;org.h2.Driver&amp;quot;);&lt;br /&gt;
String jdbcUrl = &amp;quot;jdbc:h2:file:&amp;quot; + new File(&amp;quot;.&amp;quot;).getAbsolutePath() + &amp;quot;/target/db&amp;quot;;&lt;br /&gt;
try (Connection con = DriverManager.getConnection(jdbcUrl)) {&lt;br /&gt;
&lt;br /&gt;
    /* Sample A: Select data using Prepared Statement*/&lt;br /&gt;
    String query = &amp;quot;select * from color where friendly_name = ?&amp;quot;;&lt;br /&gt;
    List&amp;lt;String&amp;gt; colors = new ArrayList&amp;lt;&amp;gt;();&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setString(1, &amp;quot;yellow&amp;quot;);&lt;br /&gt;
        try (ResultSet rSet = pStatement.executeQuery()) {&lt;br /&gt;
            while (rSet.next()) {&lt;br /&gt;
                colors.add(rSet.getString(1));&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, colors.size());&lt;br /&gt;
    Assert.assertTrue(colors.contains(&amp;quot;yellow&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
    /* Sample B: Insert data using Prepared Statement*/&lt;br /&gt;
    query = &amp;quot;insert into color(friendly_name, red, green, blue) values(?, ?, ?, ?)&amp;quot;;&lt;br /&gt;
    int insertedRecordCount;&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setString(1, &amp;quot;orange&amp;quot;);&lt;br /&gt;
        pStatement.setInt(2, 239);&lt;br /&gt;
        pStatement.setInt(3, 125);&lt;br /&gt;
        pStatement.setInt(4, 11);&lt;br /&gt;
        insertedRecordCount = pStatement.executeUpdate();&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, insertedRecordCount);&lt;br /&gt;
&lt;br /&gt;
   /* Sample C: Update data using Prepared Statement*/&lt;br /&gt;
    query = &amp;quot;update color set blue = ? where friendly_name = ?&amp;quot;;&lt;br /&gt;
    int updatedRecordCount;&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setInt(1, 10);&lt;br /&gt;
        pStatement.setString(2, &amp;quot;orange&amp;quot;);&lt;br /&gt;
        updatedRecordCount = pStatement.executeUpdate();&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, updatedRecordCount);&lt;br /&gt;
&lt;br /&gt;
   /* Sample D: Delete data using Prepared Statement*/&lt;br /&gt;
    query = &amp;quot;delete from color where friendly_name = ?&amp;quot;;&lt;br /&gt;
    int deletedRecordCount;&lt;br /&gt;
    try (PreparedStatement pStatement = con.prepareStatement(query)) {&lt;br /&gt;
        pStatement.setString(1, &amp;quot;orange&amp;quot;);&lt;br /&gt;
        deletedRecordCount = pStatement.executeUpdate();&lt;br /&gt;
    }&lt;br /&gt;
    Assert.assertEquals(1, deletedRecordCount);&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[SQL_Injection_Prevention_Cheat_Sheet|SQL Injection Prevention Cheat Sheet]]&lt;br /&gt;
&lt;br /&gt;
== JPA ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build a JPA query as a String and executes it. It is quite similar to SQL injection, but here the altered language is not SQL but JPA QL.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use Java Persistence Query Language '''Query Parameterization''' in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
EntityManager entityManager = null;&lt;br /&gt;
try {&lt;br /&gt;
    /* Get a ref on EntityManager to access DB */&lt;br /&gt;
    entityManager = Persistence.createEntityManagerFactory(&amp;quot;testJPA&amp;quot;).createEntityManager();&lt;br /&gt;
&lt;br /&gt;
    /* Define parametrized query prototype using named parameter to enhance readability */&lt;br /&gt;
    String queryPrototype = &amp;quot;select c from Color c where c.friendlyName = :colorName&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
    /* Create the query, set the named parameter and execute the query */&lt;br /&gt;
    Query queryObject = entityManager.createQuery(queryPrototype);&lt;br /&gt;
    Color c = (Color) queryObject.setParameter(&amp;quot;colorName&amp;quot;, &amp;quot;yellow&amp;quot;).getSingleResult();&lt;br /&gt;
&lt;br /&gt;
    /* Ensure that the object obtained is the right one */&lt;br /&gt;
    Assert.assertNotNull(c);&lt;br /&gt;
    Assert.assertEquals(c.getFriendlyName(), &amp;quot;yellow&amp;quot;);&lt;br /&gt;
    Assert.assertEquals(c.getRed(), 213);&lt;br /&gt;
    Assert.assertEquals(c.getGreen(), 242);&lt;br /&gt;
    Assert.assertEquals(c.getBlue(), 26);&lt;br /&gt;
} finally {&lt;br /&gt;
    if (entityManager != null &amp;amp;&amp;amp; entityManager.isOpen()) {&lt;br /&gt;
        entityManager.close();&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* https://software-security.sans.org/developer-how-to/fix-sql-injection-in-java-persistence-api-jpa&lt;br /&gt;
&lt;br /&gt;
== Operating System ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an operating system command as a String and executes it.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use your technology stack's '''API''' features in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/* The context taken is, for example, to perform a PING against a computer.&lt;br /&gt;
* The prevention is to use the feature provided by the Java API instead of building&lt;br /&gt;
* a system command as String and execute it */&lt;br /&gt;
InetAddress host = InetAddress.getByName(&amp;quot;localhost&amp;quot;);&lt;br /&gt;
Assert.assertTrue(host.isReachable(5000));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[Command_Injection|Command Injection]]&lt;br /&gt;
&lt;br /&gt;
==  XML: External Entity attack ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application loads a received XML stream using an XML parser instance in which the resolution of External Entities is not disabled.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Disable the resolution of External Entities in the parser instance to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*Create a XML document builder factory*/&lt;br /&gt;
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();&lt;br /&gt;
&lt;br /&gt;
/*Disable External Entity resolution for different cases*/&lt;br /&gt;
// This is the PRIMARY defense. If DTDs (doctypes) are disallowed,&lt;br /&gt;
// almost all XML entity attacks are prevented&lt;br /&gt;
// Xerces 2 only - http://xerces.apache.org/xerces2-j/features.html#disallow-doctype-decl&lt;br /&gt;
String feature = &amp;quot;http://apache.org/xml/features/disallow-doctype-decl&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, true);&lt;br /&gt;
&lt;br /&gt;
// If you can't completely disable DTDs, then at least do the following:&lt;br /&gt;
// Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-general-entities&lt;br /&gt;
// Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-general-entities&lt;br /&gt;
// JDK7+ - http://xml.org/sax/features/external-general-entities&lt;br /&gt;
feature = &amp;quot;http://xml.org/sax/features/external-general-entities&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, false);&lt;br /&gt;
&lt;br /&gt;
// Xerces 1 - http://xerces.apache.org/xerces-j/features.html#external-parameter-entities&lt;br /&gt;
// Xerces 2 - http://xerces.apache.org/xerces2-j/features.html#external-parameter-entities&lt;br /&gt;
// JDK7+ - http://xml.org/sax/features/external-parameter-entities&lt;br /&gt;
feature = &amp;quot;http://xml.org/sax/features/external-parameter-entities&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, false);&lt;br /&gt;
&lt;br /&gt;
// Disable external DTDs as well&lt;br /&gt;
feature = &amp;quot;http://apache.org/xml/features/nonvalidating/load-external-dtd&amp;quot;;&lt;br /&gt;
dbf.setFeature(feature, false);&lt;br /&gt;
&lt;br /&gt;
// and these as well, per Timothy Morgan's 2014 paper: &amp;quot;XML Schema, DTD, and Entity Attacks&amp;quot;&lt;br /&gt;
dbf.setXIncludeAware(false);&lt;br /&gt;
dbf.setExpandEntityReferences(false);&lt;br /&gt;
&lt;br /&gt;
/*Load XML file*/&lt;br /&gt;
DocumentBuilder builder = dbf.newDocumentBuilder();&lt;br /&gt;
//Here an org.xml.sax.SAXParseException will be thrown because the XML contains an External Entity.&lt;br /&gt;
builder.parse(new File(&amp;quot;src/test/resources/SampleXXE.xml&amp;quot;));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== XML: XPath Injection ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an XPath query as a String and executes it.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Use an '''XPath Variable Resolver''' in order to prevent injection.&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
'''Variable Resolver''' implementation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/**&lt;br /&gt;
 * Resolver in order to define parameter for XPATH expression.&lt;br /&gt;
 *&lt;br /&gt;
 */&lt;br /&gt;
public class SimpleVariableResolver implements XPathVariableResolver {&lt;br /&gt;
&lt;br /&gt;
    private final Map&amp;lt;QName, Object&amp;gt; vars = new HashMap&amp;lt;QName, Object&amp;gt;();&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * External methods to add parameter&lt;br /&gt;
     *&lt;br /&gt;
     * @param name Parameter name&lt;br /&gt;
     * @param value Parameter value&lt;br /&gt;
     */&lt;br /&gt;
    public void addVariable(QName name, Object value) {&lt;br /&gt;
        vars.put(name, value);&lt;br /&gt;
    }&lt;br /&gt;
&lt;br /&gt;
    /**&lt;br /&gt;
     * {@inheritDoc}&lt;br /&gt;
     *&lt;br /&gt;
     * @see javax.xml.xpath.XPathVariableResolver#resolveVariable(javax.xml.namespace.QName)&lt;br /&gt;
     */&lt;br /&gt;
    public Object resolveVariable(QName variableName) {&lt;br /&gt;
        return vars.get(variableName);&lt;br /&gt;
    }&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Code using it to perform XPath query.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*Create a XML document builder factory*/&lt;br /&gt;
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();&lt;br /&gt;
&lt;br /&gt;
/*Disable External Entity resolution for different cases*/&lt;br /&gt;
//Not performed here in order to focus on the variable resolver code,&lt;br /&gt;
//but do it in production code!&lt;br /&gt;
&lt;br /&gt;
/*Load XML file*/&lt;br /&gt;
DocumentBuilder builder = dbf.newDocumentBuilder();&lt;br /&gt;
Document doc = builder.parse(new File(&amp;quot;src/test/resources/SampleXPath.xml&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
/* Create and configure parameter resolver */&lt;br /&gt;
String bid = &amp;quot;bk102&amp;quot;;&lt;br /&gt;
SimpleVariableResolver variableResolver = new SimpleVariableResolver();&lt;br /&gt;
variableResolver.addVariable(new QName(&amp;quot;bookId&amp;quot;), bid);&lt;br /&gt;
&lt;br /&gt;
/*Create and configure XPATH expression*/&lt;br /&gt;
XPath xpath = XPathFactory.newInstance().newXPath();&lt;br /&gt;
xpath.setXPathVariableResolver(variableResolver);&lt;br /&gt;
XPathExpression xPathExpression = xpath.compile(&amp;quot;//book[@id=$bookId]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
/* Apply expression on XML document */&lt;br /&gt;
Object nodes = xPathExpression.evaluate(doc, XPathConstants.NODESET);&lt;br /&gt;
NodeList nodesList = (NodeList) nodes;&lt;br /&gt;
Assert.assertNotNull(nodesList);&lt;br /&gt;
Assert.assertEquals(1, nodesList.getLength());&lt;br /&gt;
Element book = (Element)nodesList.item(0);&lt;br /&gt;
Assert.assertTrue(book.getTextContent().contains(&amp;quot;Ralls, Kim&amp;quot;));&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[XPATH_Injection|XPATH Injection]]&lt;br /&gt;
&lt;br /&gt;
== HTML/JavaScript/CSS ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build an HTTP response and sends it to the browser.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
Either apply strict input validation (whitelist approach) or use output sanitizing+escaping if input validation is not possible (combine both whenever possible).&lt;br /&gt;
&lt;br /&gt;
=== Example ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
/*&lt;br /&gt;
INPUT WAY: Receive data from user&lt;br /&gt;
Here it's recommended to use strict input validation using a whitelist approach.&lt;br /&gt;
In fact, you ensure that only allowed characters are part of the input received.&lt;br /&gt;
*/&lt;br /&gt;
&lt;br /&gt;
String userInput = &amp;quot;Your user login is owasp-user01&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
/* First we check that the value contains only expected character*/&lt;br /&gt;
Assert.assertTrue(Pattern.matches(&amp;quot;[a-zA-Z0-9\\s\\-]{1,50}&amp;quot;, userInput));&lt;br /&gt;
&lt;br /&gt;
/* If the first check passes, then ensure that potentially dangerous characters&lt;br /&gt;
that we have allowed for business reasons are not used in a dangerous way.&lt;br /&gt;
For example, here we have allowed the character '-', which can be used in SQL injection,&lt;br /&gt;
so we ensure that this character is not used in a consecutive form.&lt;br /&gt;
Use the Apache Commons Lang v3 API to help with the String analysis...&lt;br /&gt;
*/&lt;br /&gt;
Assert.assertEquals(0, StringUtils.countMatches(userInput.replace(&amp;quot; &amp;quot;, &amp;quot;&amp;quot;), &amp;quot;--&amp;quot;));&lt;br /&gt;
&lt;br /&gt;
/*&lt;br /&gt;
OUTPUT WAY: Send data to user&lt;br /&gt;
Here we escape + sanitize any data sent to user&lt;br /&gt;
Use the OWASP Java HTML Sanitizer API to handle sanitizing&lt;br /&gt;
Use the OWASP Java Encoder API to handle HTML tag encoding (escaping)&lt;br /&gt;
*/&lt;br /&gt;
&lt;br /&gt;
String outputToUser = &amp;quot;Your &amp;lt;p&amp;gt;user login&amp;lt;/p&amp;gt; is &amp;lt;strong&amp;gt;owasp-user01&amp;lt;/strong&amp;gt;&amp;quot;;&lt;br /&gt;
outputToUser += &amp;quot;&amp;lt;script&amp;gt;alert(22);&amp;lt;/script&amp;gt;&amp;lt;img src='#' onload='javascript:alert(23);'&amp;gt;&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
/* Create a sanitizing policy that only allows the tags '&amp;lt;p&amp;gt;' and '&amp;lt;strong&amp;gt;' */&lt;br /&gt;
PolicyFactory policy = new HtmlPolicyBuilder().allowElements(&amp;quot;p&amp;quot;, &amp;quot;strong&amp;quot;).toFactory();&lt;br /&gt;
&lt;br /&gt;
/* Sanitize the output that will be sent to the user*/&lt;br /&gt;
String safeOutput = policy.sanitize(outputToUser);&lt;br /&gt;
&lt;br /&gt;
/* Encode HTML tags */&lt;br /&gt;
safeOutput = Encode.forHtml(safeOutput);&lt;br /&gt;
String finalSafeOutputExpected = &amp;quot;Your &amp;amp;lt;p&amp;amp;gt;user login&amp;amp;lt;/p&amp;amp;gt; is &amp;amp;lt;strong&amp;amp;gt;owasp-user01&amp;amp;lt;/strong&amp;amp;gt;&amp;quot;;&lt;br /&gt;
Assert.assertEquals(finalSafeOutputExpected, safeOutput);&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
* [[Cross-site_Scripting_(XSS)| XSS]]&lt;br /&gt;
&lt;br /&gt;
* https://github.com/owasp/java-html-sanitizer&lt;br /&gt;
&lt;br /&gt;
* https://github.com/owasp/owasp-java-encoder/&lt;br /&gt;
&lt;br /&gt;
* https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html&lt;br /&gt;
&lt;br /&gt;
* https://commons.apache.org/proper/commons-lang/javadocs/api-3.4/org/apache/commons/lang3/StringEscapeUtils.html&lt;br /&gt;
&lt;br /&gt;
== LDAP ==&lt;br /&gt;
&lt;br /&gt;
A dedicated [[LDAP_Injection_Prevention_Cheat_Sheet|cheatsheet]] has been created.&lt;br /&gt;
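As a quick taste of what the dedicated cheatsheet covers, the core idea is to escape the LDAP search filter metacharacters defined by RFC 4515 before user input reaches a filter string. The helper below is a simplified, hypothetical sketch; prefer the escaping utilities of your LDAP library when they are available.

```java
public class LdapFilterEscape {

    // Escape the special characters of LDAP search filters (RFC 4515)
    public static String escapeFilterValue(String value) {
        StringBuilder sb = new StringBuilder();
        for (char c : value.toCharArray()) {
            switch (c) {
                case '\\':     sb.append("\\5c"); break;
                case '*':      sb.append("\\2a"); break;
                case '(':      sb.append("\\28"); break;
                case ')':      sb.append("\\29"); break;
                case '\u0000': sb.append("\\00"); break;
                default:       sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // A wildcard injected by a user is neutralized before reaching the filter
        System.out.println(escapeFilterValue("admin*)(uid=*"));
    }
}
```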
&lt;br /&gt;
== NoSQL ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
Injection of this type occurs when the application uses untrusted user input to build a NoSQL API call expression.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
As there are many NoSQL database systems and each one uses its own API for calls, it is important to ensure that the user input received&lt;br /&gt;
and used to build the API call expression does not contain any character that has a special meaning in the target API syntax.&lt;br /&gt;
This prevents the input from being used to escape the initial call expression and create another one based on crafted user input.&lt;br /&gt;
It is also important not to use string concatenation to build the API call expression, but to use the API itself to create the expression.&lt;br /&gt;
&lt;br /&gt;
=== Example - MongoDB ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
 /* Here use MongoDB as target NoSQL DB */&lt;br /&gt;
String userInput = &amp;quot;Brooklyn&amp;quot;;&lt;br /&gt;
&lt;br /&gt;
/* First ensure that the input does not contain any special characters for the current NoSQL DB call API,&lt;br /&gt;
here they are: ' &amp;quot; \ ; { } $&lt;br /&gt;
*/&lt;br /&gt;
//Avoid a regexp this time in order to make the validation code easier to read and understand...&lt;br /&gt;
ArrayList&amp;lt;String&amp;gt; specialCharsList = new ArrayList&amp;lt;String&amp;gt;() {{&lt;br /&gt;
    add(&amp;quot;'&amp;quot;);&lt;br /&gt;
    add(&amp;quot;\&amp;quot;&amp;quot;);&lt;br /&gt;
    add(&amp;quot;\\&amp;quot;);&lt;br /&gt;
    add(&amp;quot;;&amp;quot;);&lt;br /&gt;
    add(&amp;quot;{&amp;quot;);&lt;br /&gt;
    add(&amp;quot;}&amp;quot;);&lt;br /&gt;
    add(&amp;quot;$&amp;quot;);&lt;br /&gt;
}};&lt;br /&gt;
specialCharsList.forEach(specChar -&amp;gt; Assert.assertFalse(userInput.contains(specChar)));&lt;br /&gt;
//Add also a check on input max size&lt;br /&gt;
Assert.assertTrue(userInput.length() &amp;lt;= 50);&lt;br /&gt;
&lt;br /&gt;
/* Then perform query on database using API to build expression */&lt;br /&gt;
//Connect to the local MongoDB instance&lt;br /&gt;
try(MongoClient mongoClient = new MongoClient()){&lt;br /&gt;
    MongoDatabase db = mongoClient.getDatabase(&amp;quot;test&amp;quot;);&lt;br /&gt;
    //Use API query builder to create call expression&lt;br /&gt;
    //Create expression&lt;br /&gt;
    Bson expression = eq(&amp;quot;borough&amp;quot;, userInput);&lt;br /&gt;
    //Perform call&lt;br /&gt;
    FindIterable&amp;lt;org.bson.Document&amp;gt; restaurants = db.getCollection(&amp;quot;restaurants&amp;quot;).find(expression);&lt;br /&gt;
    //Verify result consistency&lt;br /&gt;
    restaurants.forEach(new Block&amp;lt;org.bson.Document&amp;gt;() {&lt;br /&gt;
        @Override&lt;br /&gt;
        public void apply(final org.bson.Document doc) {&lt;br /&gt;
            String restBorough = (String)doc.get(&amp;quot;borough&amp;quot;);&lt;br /&gt;
            Assert.assertTrue(&amp;quot;Brooklyn&amp;quot;.equals(restBorough));&lt;br /&gt;
        }&lt;br /&gt;
    });&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
[[Testing_for_NoSQL_injection|Testing for NoSQL injection]]&lt;br /&gt;
&lt;br /&gt;
https://ckarande.gitbooks.io/owasp-nodegoat-tutorial/content/tutorial/a1_-_sql_and_nosql_injection.html&lt;br /&gt;
&lt;br /&gt;
https://arxiv.org/ftp/arxiv/papers/1506/1506.04082.pdf&lt;br /&gt;
&lt;br /&gt;
== Log Injection ==&lt;br /&gt;
&lt;br /&gt;
=== Symptom ===&lt;br /&gt;
&lt;br /&gt;
[[Log_Injection|Log Injection]] occurs when an application includes untrusted data in an application log message (e.g., an attacker can cause an additional log entry that looks like it came from a completely different user, if they can inject CRLF characters in the untrusted data).&lt;br /&gt;
&lt;br /&gt;
More information about this attack is available on the OWASP [[Log Injection]] page.&lt;br /&gt;
&lt;br /&gt;
=== How to prevent ===&lt;br /&gt;
&lt;br /&gt;
To prevent an attacker from writing malicious content into the application log, apply defenses such as:&lt;br /&gt;
&lt;br /&gt;
* Filter the user input to prevent the injection of '''C'''arriage '''R'''eturn (CR) or '''L'''ine '''F'''eed (LF) characters.&lt;br /&gt;
* Limit the size of the user input value used to create the log message.&lt;br /&gt;
* Make sure [[XSS_(Cross_Site_Scripting)_Prevention_Cheat_Sheet|all XSS defenses]] are applied when viewing log files in a web browser.&lt;br /&gt;
&lt;br /&gt;
The [[OWASP Security Logging Project]] can be used to protect the application log against ''Log Injection'' attacks.&lt;br /&gt;
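The first two defenses listed above can also be applied directly in code, before the message ever reaches the logger. The helper below is a hypothetical sketch: the replacement character and the 500-character cap are assumptions, chosen here to mirror the ''%.-500m'' limit used in the Log4j2 configuration.

```java
public class LogSanitizer {

    // Neutralize CR/LF to prevent forged log entries, then cap the message size
    public static String forLog(String userInput, int maxLength) {
        if (userInput == null) {
            return "";
        }
        String clean = userInput.replace('\n', '_').replace('\r', '_');
        return clean.length() > maxLength ? clean.substring(0, maxLength) : clean;
    }

    public static void main(String[] args) {
        String attack = "user01\r\nINFO - Fake entry that looks like another user";
        // The injected line break is neutralized: the log stays on a single line
        System.out.println(forLog(attack, 500));
    }
}
```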
&lt;br /&gt;
=== Example using Log4j2 ===&lt;br /&gt;
&lt;br /&gt;
Configuration of a logging policy to roll on 10 files of 5MB each, and to encode/limit the log message using the [https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout Log4j2 Pattern ''encode{}{CRLF}''], introduced in Log4j2 v2.10.0, and the ''%.-500m'' message size limit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;xml&amp;quot; highlight=&amp;quot;7&amp;quot;&amp;gt;&lt;br /&gt;
&amp;lt;?xml version=&amp;quot;1.0&amp;quot; encoding=&amp;quot;UTF-8&amp;quot;?&amp;gt;&lt;br /&gt;
&amp;lt;Configuration status=&amp;quot;error&amp;quot; name=&amp;quot;SecureLoggingPolicy&amp;quot;&amp;gt;&lt;br /&gt;
    &amp;lt;Appenders&amp;gt;&lt;br /&gt;
        &amp;lt;RollingFile name=&amp;quot;RollingFile&amp;quot; fileName=&amp;quot;App.log&amp;quot; filePattern=&amp;quot;App-%i.log&amp;quot; ignoreExceptions=&amp;quot;false&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;PatternLayout&amp;gt;&lt;br /&gt;
                &amp;lt;!-- Encode any CRLF chars in the message and limit its maximum size to 500 characters --&amp;gt;&lt;br /&gt;
                &amp;lt;Pattern&amp;gt;%d{ISO8601} %-5p - %encode{%.-500m}{CRLF}%n&amp;lt;/Pattern&amp;gt;&lt;br /&gt;
            &amp;lt;/PatternLayout&amp;gt;&lt;br /&gt;
            &amp;lt;Policies&amp;gt;&lt;br /&gt;
                &amp;lt;SizeBasedTriggeringPolicy size=&amp;quot;5MB&amp;quot;/&amp;gt;&lt;br /&gt;
            &amp;lt;/Policies&amp;gt;&lt;br /&gt;
            &amp;lt;DefaultRolloverStrategy max=&amp;quot;10&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/RollingFile&amp;gt;&lt;br /&gt;
    &amp;lt;/Appenders&amp;gt;&lt;br /&gt;
    &amp;lt;Loggers&amp;gt;&lt;br /&gt;
        &amp;lt;Root level=&amp;quot;debug&amp;quot;&amp;gt;&lt;br /&gt;
            &amp;lt;AppenderRef ref=&amp;quot;RollingFile&amp;quot;/&amp;gt;&lt;br /&gt;
        &amp;lt;/Root&amp;gt;&lt;br /&gt;
    &amp;lt;/Loggers&amp;gt;&lt;br /&gt;
&amp;lt;/Configuration&amp;gt;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Usage of the logger at code level:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
import org.apache.logging.log4j.LogManager;&lt;br /&gt;
import org.apache.logging.log4j.Logger;&lt;br /&gt;
...&lt;br /&gt;
// No special action needed because security actions are performed at the logging policy level&lt;br /&gt;
Logger logger = LogManager.getLogger(MyClass.class);&lt;br /&gt;
logger.info(logMessage);&lt;br /&gt;
...&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
https://logging.apache.org/log4j/2.x/manual/layouts.html#PatternLayout (See the encode{}{CRLF} function)&lt;br /&gt;
 Note that the default Log4j2 encode{} encoder is HTML, which does NOT prevent log injection; it prevents XSS attacks when logs are viewed in a browser. OWASP recommends defending against XSS in such situations in the log viewer application itself, rather than pre-encoding all log messages with HTML encoding, because log entries may be used/viewed in many other log viewing/analysis tools that do not expect the log data to be pre-HTML encoded.&lt;br /&gt;
&lt;br /&gt;
https://logging.apache.org/log4j/2.x/manual/configuration.html&lt;br /&gt;
&lt;br /&gt;
https://logging.apache.org/log4j/2.x/manual/appenders.html&lt;br /&gt;
&lt;br /&gt;
https://github.com/javabeanz/owasp-security-logging&lt;br /&gt;
&lt;br /&gt;
= Authors and Primary Editors  =&lt;br /&gt;
&lt;br /&gt;
[[User:Dominique_RIGHETTO|Dominique Righetto]] - dominique.righetto@owasp.org&lt;br /&gt;
&lt;br /&gt;
= Other Cheatsheets = &lt;br /&gt;
&lt;br /&gt;
{{Cheatsheet_Navigation_Body}}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
[[Category:Cheatsheets]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244978</id>
		<title>Free for Open Source Application Security Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244978"/>
				<updated>2018-11-07T21:12:12Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Free for Open Source Tools */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Introduction ==&lt;br /&gt;
OWASP's mission is to help the world improve the security of its software. One of the best ways OWASP can do that is to help Open Source developers improve the software they are producing that everyone else relies on. As such, the following lists of '''automated vulnerability detection tools''' that are '''free for open source''' projects have been gathered together here to raise awareness of their availability.&lt;br /&gt;
&lt;br /&gt;
We would encourage open source projects to use the following types of tools to improve the security and quality of their code:&lt;br /&gt;
* Static Application Security Testing ([[SAST]]) Tools &lt;br /&gt;
* Dynamic Application Security Testing ([[DAST]]) Tools - (Primarily for web apps)&lt;br /&gt;
* Interactive Application Security Testing (IAST) Tools - (Primarily for web apps and web APIs)&lt;br /&gt;
* Keeping Open Source libraries up-to-date (to avoid [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]])&lt;br /&gt;
* Static Code Quality Tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' &amp;lt;b&amp;gt;OWASP does not endorse any of the Vendors or Scanning Tools by listing them below. They are simply listed if we believe they are free for use by open source projects. We have made every effort to provide this information as accurately as possible. If you are the vendor of a free for open source tool and think this information is incomplete or incorrect, please send an e-mail to dave.wichers (at) owasp.org and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free for Open Source Tools ==&lt;br /&gt;
Tools that are free for open source projects in each of the above categories are listed below.&lt;br /&gt;
&lt;br /&gt;
=== SAST Tools ===&lt;br /&gt;
OWASP already maintains a page of known SAST tools: [[Source Code Analysis Tools]], which includes a list of those that are &amp;quot;Open Source or Free Tools Of This Type&amp;quot;. Any such tools could certainly be used. One such cloud service that looks promising is:&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM.com] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
In addition, we are aware of the following commercial SAST tools that are free for Open Source projects:&lt;br /&gt;
* [https://scan.coverity.com/ Coverity Scan Static Analysis] - Can be hooked into Travis-CI so it's run automatically with online resources. Supports over a dozen programming languages as documented here in the section [https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html Comprehensive support for these programming languages and frameworks].&lt;br /&gt;
&lt;br /&gt;
=== DAST Tools ===&lt;br /&gt;
If your project has a web application component, we recommend running automated scans against it to look for vulnerabilities. OWASP maintains a page of known DAST Tools: [[:Category:Vulnerability Scanning Tools|Vulnerability Scanning Tools]], and the '''Licence''' column on this page indicates which of those tools have free capabilities. Our primary recommendation is to use one of these:&lt;br /&gt;
* [[OWASP Zed Attack Proxy Project|OWASP ZAP]] - A full featured free and open source DAST tool that includes both automated scanning for vulnerabilities and tools to assist expert manual web app pen testing.&lt;br /&gt;
** The ZAP team has also been working hard to make it easier to integrate ZAP into your CI/CD pipeline. (e.g., here's a [https://www.we45.com/blog/how-to-integrate-zap-into-jenkins-ci-pipeline-we45-blog blog post on how to integrate ZAP with Jenkins]).&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - Arachni is a commercially supported scanner, but it's free for most use cases, including scanning open source projects.&lt;br /&gt;
We are not aware of any other commercial grade tools that offer their full featured DAST product free for open source projects.&lt;br /&gt;
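As a minimal sketch of running ZAP in a CI/CD pipeline, the ZAP team's baseline scan can be run from the official Docker image (the target URL below is a placeholder, and the image name assumes the zap2docker-stable image current at the time of writing):

```shell
# Run a passive ZAP "baseline" scan against a target web application.
# zap-baseline.py spiders the target briefly and reports passive-scan
# findings; it exits non-zero on warnings/failures, which makes it
# easy to fail a CI build automatically.
docker run -t owasp/zap2docker-stable zap-baseline.py -t https://www.example.com
```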
&lt;br /&gt;
=== IAST Tools ===&lt;br /&gt;
IAST tools are typically geared toward analyzing Web Applications and Web APIs, but coverage is vendor specific. There may be IAST products that can perform good security analysis on non-web applications as well.&lt;br /&gt;
&lt;br /&gt;
We are aware of only one IAST Tool that is free for open source at this time:&lt;br /&gt;
* [https://www.contrastsecurity.com/contrast-community-edition Contrast Community Edition (CE)] - Fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java only.&lt;br /&gt;
&lt;br /&gt;
=== Open Source Software (OSS) Security Tools ===&lt;br /&gt;
OSS refers to the open source libraries or components that application developers leverage to quickly develop new applications and add features to existing apps. Gartner refers to the analysis of the security of these components as software composition analysis (SCA). So OSS Analysis and SCA are the same thing.&lt;br /&gt;
&lt;br /&gt;
OWASP recommends that all software projects generally try to keep the libraries they use as up-to-date as possible to reduce the likelihood of [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]]. There are two recommended approaches for this:&lt;br /&gt;
&lt;br /&gt;
==== Keeping Your Libraries Updated ====&lt;br /&gt;
Using the latest version of each library is recommended because security issues are frequently fixed 'silently' by the component maintainer. By silently, we mean without publishing a [https://cve.mitre.org/ CVE] for the security fix.&lt;br /&gt;
* [https://www.mojohaus.org/versions-maven-plugin/ Maven Versions plugin]&lt;br /&gt;
** For Maven projects, can be used to generate a report of all dependencies used and when upgrades are available for them. Either a direct report, or part of the overall project documentation using: mvn site.&lt;br /&gt;
* Dependabot - https://dependabot.com/&lt;br /&gt;
** A GitHub-only service that creates pull requests to keep your dependencies up-to-date. It automatically generates a pull request for each dependency you can upgrade, which you can then accept or ignore as you like. It supports many languages.&lt;br /&gt;
** Recommended for all open source projects maintained on GitHub!&lt;br /&gt;
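As a sketch of the Maven Versions plugin usage mentioned above (assuming a standard Maven project with a pom.xml):

```shell
# List dependencies for which newer versions are available
mvn versions:display-dependency-updates

# Do the same for build plugins
mvn versions:display-plugin-updates

# Or generate the dependency report as part of the project documentation
mvn site
```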
&lt;br /&gt;
==== Detecting Known Vulnerable Components ====&lt;br /&gt;
As an alternative to, or in addition to, trying to keep all your components up-to-date, a project can specifically monitor whether any of the components it uses have known vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
Free tools of this type:&lt;br /&gt;
* OWASP has its own open source tool, [[OWASP Dependency Check]], which is free for anyone to use.&lt;br /&gt;
* GitHub: Security alerts for vulnerable dependencies - https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/&lt;br /&gt;
** A native GitHub feature that reports known vulnerable dependencies in your GitHub projects. Supports: Java, .NET, JavaScript, Ruby, and Python. Your GitHub projects are automatically signed up for this service.&lt;br /&gt;
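As a sketch, OWASP Dependency Check can also be run from its command line interface (the project name and scan path below are placeholders):

```shell
# Scan a directory of application libraries for components with
# published CVEs and write an HTML report to the current directory
dependency-check.sh --project MyApp --scan ./lib --out .
```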
Commercial tools of this type that are free for open source:&lt;br /&gt;
* Contrast Community Edition (CE) (mentioned earlier) also has both Known Vulnerable Component detection and Available Updates reporting for OSS. CE supports Java only.&lt;br /&gt;
* Snyk - https://www.snyk.io - Supports Node.js, Ruby, Java, Python, Scala, Golang, .NET, PHP - Latest list here: https://snyk.io/docs&lt;br /&gt;
** A Commercial tool that identifies vulnerable components and integrates with numerous CI/CD pipelines. It is free for open source: https://snyk.io/plans&lt;br /&gt;
** If you don't want to grant Snyk write access to your repo (since it can auto-create pull requests), you can use the Command Line Interface (CLI) instead. See: https://snyk.io/docs/using-snyk. If you do this and want it to be free, you have to configure Snyk so it knows the project is open source: https://support.snyk.io/snyk-cli/how-can-i-set-a-snyk-cli-project-as-open-source&lt;br /&gt;
*** Another benefit of using the Snyk CLI is that it won't auto-create pull requests for you (which can make these 'issues' more public than you might prefer)&lt;br /&gt;
** They also provide detailed information and remediation guidance for known vulnerabilities here: https://snyk.io/vuln&lt;br /&gt;
* SourceClear - https://www.sourceclear.com/ - Supports: Java, Ruby, JavaScript, Python, Objective C, GO, PHP&lt;br /&gt;
** They have a free trial right from their [https://www.sourceclear.com/ home page]. When the 30 day trial expires, it converts into a free &amp;quot;Personal Account&amp;quot; per: &amp;quot;Upgrade at any time to get the features that matter most to you, or choose the Personal plan when your trial ends.&amp;quot; Personal Account described here: https://www.sourceclear.com/pricing/&lt;br /&gt;
** They also make their component vulnerability data (for publicly known vulns) free to search: https://www.sourceclear.com/vulnerability-database/search#_ (Very useful when trying to research a particular library)&lt;br /&gt;
* WhiteSource Bolt - Supports 200+ programming languages. https://www.whitesourcesoftware.com/&lt;br /&gt;
** Azure version: https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt&lt;br /&gt;
** GitHub version: https://github.com/apps/whitesource-bolt-for-github Available starting in Nov. 2018.&lt;br /&gt;
&lt;br /&gt;
=== Code Quality tools ===&lt;br /&gt;
Quality has a significant correlation to security. As such, we recommend open source projects also consider using good code quality tools. A few that we are aware of are:&lt;br /&gt;
* SpotBugs (https://github.com/spotbugs/spotbugs) - Open source code quality tool for Java&lt;br /&gt;
** This is the active fork for FindBugs, so if you use Findbugs, you should switch to this.&lt;br /&gt;
** SpotBugs users should add the FindSecBugs plugin (http://find-sec-bugs.github.io/) to their SpotBugs setup, as it significantly improves on the very basic security checking native to SpotBugs.&lt;br /&gt;
&lt;br /&gt;
* SonarQube (https://www.sonarqube.org/)&lt;br /&gt;
** This is a very popular, commercially supported code quality tool that is free (with commercial editions). It includes most if not all of the FindSecBugs security rules, plus many more quality rules, and offers a free online CI service to run it against your open source projects. SonarQube supports numerous languages: https://www.sonarqube.org/features/multi-languages/&lt;br /&gt;
&lt;br /&gt;
Please let us know if you are aware of any other high quality application security tools that are free for open source (or simply add them to this page). We are particularly interested in identifying and listing commercial tools that are free for open source, as they tend to be more capable and easier to use than free open source tools. If you know of any missing from this list, please add them, or let us know (dave.wichers (at) owasp.org) and we'll confirm they are free and add them for you. Please encourage your favorite commercial tool vendor to make their tool free for open source projects as well!&lt;br /&gt;
&lt;br /&gt;
Finally, please forward this page to the open source projects you rely on and encourage them to use these free tools!&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=244977</id>
		<title>Source Code Analysis Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Source_Code_Analysis_Tools&amp;diff=244977"/>
				<updated>2018-11-07T21:07:11Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Add LGTM SAST Tool to list.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Source code analysis tools, also referred to as Static Application Security Testing (SAST) Tools, are designed to analyze source code and/or compiled versions of code to help find security flaws. &lt;br /&gt;
&lt;br /&gt;
Some tools are starting to move into the IDE. For the types of problems that can be detected during development itself, this is a powerful point in the life cycle to employ such tools, as they give the developer immediate feedback on issues they may be introducing as they write the code. This immediate feedback is far more useful than finding vulnerabilities much later in the development cycle.&lt;br /&gt;
&lt;br /&gt;
== Strengths and Weaknesses ==&lt;br /&gt;
&lt;br /&gt;
=== Strengths ===&lt;br /&gt;
&lt;br /&gt;
* Scales well -- can be run on lots of software, and can be run repeatedly (as with nightly builds or continuous integration)&lt;br /&gt;
* Useful for things that such tools can automatically find with high confidence, such as buffer overflows, SQL Injection Flaws, and so forth&lt;br /&gt;
* Output is good for developers -- highlights the precise source files, line numbers, and even subsections of lines that are affected&lt;br /&gt;
&lt;br /&gt;
=== Weaknesses ===&lt;br /&gt;
&lt;br /&gt;
* Many types of security vulnerabilities are very difficult to find automatically, such as authentication problems, access control issues, insecure use of cryptography, etc. The current state of the art only allows such tools to automatically find a relatively small percentage of application security flaws. However, tools of this type are getting better.&lt;br /&gt;
* High numbers of false positives.&lt;br /&gt;
* Frequently can't find configuration issues, since they are not represented in the code.&lt;br /&gt;
* Difficult to 'prove' that an identified security issue is an actual vulnerability.&lt;br /&gt;
* Many of these tools have difficulty analyzing code that can't be compiled. Analysts frequently can't compile code because they don't have the right libraries, all the compilation instructions, all the code, etc.&lt;br /&gt;
&lt;br /&gt;
==Important Selection Criteria==&lt;br /&gt;
&lt;br /&gt;
* Requirement: It must support your programming language; beyond that, language coverage is not usually a key differentiator.&lt;br /&gt;
* Types of vulnerabilities it can detect (out of the [[OWASP Top Ten]]?) (plus more?)&lt;br /&gt;
* How accurate is it? False Positive/False Negative rates?&lt;br /&gt;
** Does the tool have an OWASP [[Benchmark]] score?&lt;br /&gt;
* Does it understand the libraries/frameworks you use?&lt;br /&gt;
* Does it require a fully buildable set of source?&lt;br /&gt;
* Can it run against binaries instead of source?&lt;br /&gt;
* Can it be integrated into the developer's IDE?&lt;br /&gt;
* How hard is it to setup/use?&lt;br /&gt;
* Can it be run continuously and automatically?&lt;br /&gt;
* License cost for the tool. (Some are sold per user, per org, per app, per line of code analyzed. Consulting licenses are frequently different than end user licenses.)&lt;br /&gt;
&lt;br /&gt;
==OWASP Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [[OWASP SonarQube Project]]&lt;br /&gt;
* [http://www.owasp.org/index.php/Category:OWASP_Orizon_Project OWASP Orizon Project]&lt;br /&gt;
* [[OWASP_LAPSE_Project | OWASP LAPSE Project]]&lt;br /&gt;
* [[OWASP O2 Platform]]&lt;br /&gt;
* [[OWASP WAP-Web Application Protection]]&lt;br /&gt;
&lt;br /&gt;
==Disclaimer==&lt;br /&gt;
&lt;br /&gt;
Disclaimer: &amp;lt;b&amp;gt;The tools listed in the tables below are presented in alphabetical order. &amp;lt;i&amp;gt;OWASP does not endorse any of the vendors or tools by listing them in the table below.&amp;lt;/i&amp;gt; We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think that this information is incomplete or incorrect, please send an e-mail to our mailing list and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Open Source or Free Tools Of This Type==&lt;br /&gt;
&lt;br /&gt;
* [https://wiki.openstack.org/wiki/Security/Projects/Bandit Bandit] - Bandit is a comprehensive source code vulnerability scanner for Python&lt;br /&gt;
* [http://brakemanscanner.org/ Brakeman] - Brakeman is an open source vulnerability scanner specifically designed for Ruby on Rails applications&lt;br /&gt;
* [http://rubygems.org/gems/codesake-dawn Codesake Dawn] - Codesake Dawn is an open source security source code analyzer designed for Sinatra, Padrino, and Ruby on Rails applications. It also works on non-web applications written in Ruby&lt;br /&gt;
* [http://findbugs.sourceforge.net/ FindBugs] - (Legacy - NOT Maintained - Use SpotBugs (see below) instead) - Find bugs (including a few security flaws) in Java programs&lt;br /&gt;
* [https://find-sec-bugs.github.io/ FindSecBugs] - A security specific plugin for SpotBugs that significantly improves SpotBugs's ability to find security vulnerabilities in Java programs. Works with the old FindBugs too.&lt;br /&gt;
* [http://www.dwheeler.com/flawfinder/ Flawfinder] - Scans C and C++ code&lt;br /&gt;
* [https://www.bishopfox.com/resources/tools/google-hacking-diggity/attack-tools/ Google CodeSearchDiggity] - Uses Google Code Search to identify vulnerabilities in open source code projects hosted by Google Code, MS CodePlex, SourceForge, GitHub, and more. The tool comes with over 130 default searches that identify SQL injection, cross-site scripting (XSS), insecure remote and local file includes, hard-coded passwords, and much more.  ''Essentially, Google CodeSearchDiggity provides a source code security analysis of nearly every single open source code project in existence – simultaneously.''&lt;br /&gt;
* [https://github.com/wireghoul/graudit/ Graudit] - Scans multiple languages for various security flaws.&lt;br /&gt;
* [https://lgtm.com/help/lgtm/about-lgtm LGTM] - A free for open source static analysis service that automatically monitors commits to publicly accessible code in: Bitbucket Cloud, GitHub, or GitLab. Supports C/C++, C#, COBOL (in beta), Java, JavaScript/TypeScript, Python&lt;br /&gt;
* [http://pmd.sourceforge.net/ PMD] - PMD scans Java source code and looks for potential code problems (this is a code quality tool that does not focus on security issues)&lt;br /&gt;
* [https://github.com/designsecurity/progpilot Progpilot] - Progpilot is a static analyzer tool for PHP that detects security vulnerabilities such as XSS and SQL Injection.&lt;br /&gt;
* [http://msdn.microsoft.com/en-us/library/ms933794.aspx PREfast] (Microsoft) - PREfast is a static analysis tool that identifies defects in C/C++ programs. Last updated in 2006.&lt;br /&gt;
* [https://pumascan.com/ Puma Scan] - Puma Scan is a .NET C# open source static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://dotnet-security-guard.github.io/ .NET Security Guard] - Roslyn analyzers that aim to help security audits on .NET applications. It will find SQL injections, LDAP injections, XXE, cryptography weakness, XSS and more.&lt;br /&gt;
* [http://rips-scanner.sourceforge.net/ RIPS] - RIPS is a static source code analyzer for vulnerabilities in PHP web applications. Please see notes on the sourceforge.net site.&lt;br /&gt;
* [https://github.com/FloeDesignTechnologies/phpcs-security-audit phpcs-security-audit] - phpcs-security-audit is a set of PHP_CodeSniffer rules that finds flaws or weaknesses related to security in PHP and its popular CMS or frameworks.  It currently has core PHP rules as well as Drupal 7 specific rules.&lt;br /&gt;
* [http://www.sonarqube.org/ SonarQube] - Scans source code for more than 20 languages for Bugs, Vulnerabilities, and Code Smells. SonarQube IDE plugins for Eclipse, Visual Studio, and IntelliJ provided by [http://www.sonarlint.org/ SonarLint].&lt;br /&gt;
* [https://spotbugs.github.io/ SpotBugs] - This is the active fork replacement for FindBugs, which is not maintained anymore.&lt;br /&gt;
* [http://sourceforge.net/projects/visualcodegrepp/ VisualCodeGrepper (VCG)] - Scans C/C++, C#, VB, PHP, Java, and PL/SQL for security issues and for comments which may indicate defective code. The config files can be used to carry out additional checks for banned functions or functions which commonly cause security issues.&lt;br /&gt;
&lt;br /&gt;
==Commercial Tools Of This Type==&lt;br /&gt;
* [https://www.ptsecurity.com/ww-en/products/ai/ Application Inspector] (Positive Technologies) - combines SAST, DAST, IAST, SCA, configuration analysis and other technologies, incl. unique abstract interpretation; has capability to generate test queries (exploits) to verify detected vulnerabilities during SAST analysis; Supported languages include: Java, C#, PHP, JavaScript, Objective C, VB.Net, PL/SQL, T-SQL, and others. &lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] (IBM) - Provides SAST, DAST and mobile security testing as well as OpenSource library known vulnerability detection as a cloud service. &lt;br /&gt;
* [http://www-01.ibm.com/software/rational/products/appscan/source/ AppScan Source] (IBM)&lt;br /&gt;
* [http://www.blueclosure.com BlueClosure BC Detect] (BlueClosure) - Analyzes client-side JavaScript.&lt;br /&gt;
* [https://buguroo.com/products/bugblast-next-gen-appsec-platform/bugscout-sca bugScout] (Buguroo Offensive Security)&lt;br /&gt;
* [http://www.castsoftware.com/solutions/application-security/cwe#SupportedSecurityStandards CAST AIP] (CAST) Performs static and architectural analysis to identify numerous types of security issues. Supports over 30 languages.&lt;br /&gt;
* [https://www.codacy.com/ Codacy] Offers security patterns for languages such as Python, Ruby, Scala, Java, JavaScript and more. Integrates with tools such as Brakeman, Bandit, FindBugs, and others. (free for open source projects)&lt;br /&gt;
* [http://www.contrastsecurity.com/ Contrast] (Contrast Security) - Contrast performs code security without actually doing static analysis. Contrast does Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis. It provides code level results without actually relying on static analysis.&lt;br /&gt;
* [http://www.coverity.com/products/code-advisor/ Coverity Code Advisor] (Synopsys)&lt;br /&gt;
* [https://www.checkmarx.com/technology/static-code-analysis-sca/ CxSAST] (Checkmarx)&lt;br /&gt;
* [http://www8.hp.com/us/en/software-solutions/static-code-analysis-sast/ Fortify] (Micro Focus, Formerly HP)&lt;br /&gt;
* [http://www.juliasoft.com/solutions Julia] (JuliaSoft) - SaaS Java static analysis&lt;br /&gt;
* [http://www.klocwork.com/capabilities/static-code-analysis KlocWork] (KlocWork)&lt;br /&gt;
* [https://www.kiuwan.com/code-analysis/ Kiuwan] (an [http://www.optimyth.com Optimyth] company) - SaaS Software Quality &amp;amp; Security Analysis&lt;br /&gt;
* [http://www.parasoft.com/jsp/capabilities/static_analysis.jsp?itemId=547 Parasoft Test] (Parasoft)&lt;br /&gt;
* [https://pitss.com/products/pitss-con/ PITSS.CON] (PITSS)&lt;br /&gt;
* [http://www.viva64.com/en/ PVS-Studio] (PVS-Studio) - For C/C++, C#&lt;br /&gt;
* [https://pumascanpro.com/ Puma Scan Professional] - A .NET C# static source code analyzer that runs as an IDE plugin for Visual Studio and via MSBuild in CI pipelines.&lt;br /&gt;
* [https://www.ripstech.com/ RIPS Code Analysis] (RIPS Technologies) - A SAST solution specialized for PHP that detects unknown security vulnerabilities and code quality issues.&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/resources/datasheets/secureassist.html SecureAssist] (Synopsys) - Scans code for insecure coding and configurations automatically as an IDE plugin for Eclipse, IntelliJ, and Visual Studio etc. Supports (Java, .NET, PHP, and JavaScript)&lt;br /&gt;
* [https://www.whitehatsec.com/products/static-application-security-testing/ Sentinel Source] (Whitehat)&lt;br /&gt;
* [https://www.synopsys.com/software-integrity/products/interactive-application-security-testing.html Seeker] (Synopsys) Seeker performs code security without actually doing static analysis. Seeker does Interactive Application Security Testing (IAST), correlating runtime code &amp;amp; data analysis with simulated attacks. It provides code level results without actually relying on static analysis.&lt;br /&gt;
* [http://www.sourcepatrol.co.uk/ Source Patrol] (Pentest)&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] (DefenseCode)&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode Static Analysis] (Veracode)&lt;br /&gt;
* [http://www.xanitizer.net Xanitizer] - Scans Java for security vulnerabilities, mainly via taint analysis. Free for academic and open source projects (see [https://www.rigs-it.com/xanitizer-pricing/]).&lt;br /&gt;
&lt;br /&gt;
==More info==&lt;br /&gt;
&lt;br /&gt;
* [[Appendix_A:_Testing_Tools | Appendix A: Testing Tools]]&lt;br /&gt;
* [http://samate.nist.gov/index.php/Source_Code_Security_Analyzers.html NIST's list of Source Code Security Analysis Tools]&lt;br /&gt;
* [[:Category:Vulnerability_Scanning_Tools | DAST Tools]] - Similar info on Dynamic Application Security Testing (DAST) Tools&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP .NET Project]]&lt;br /&gt;
[[Category:SAMM-CR-2]]&lt;br /&gt;
__NOTOC__&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=244841</id>
		<title>Benchmark</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=244841"/>
				<updated>2018-11-04T21:03:56Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Using a VM instead */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Main = &lt;br /&gt;
 &amp;lt;div style=&amp;quot;width:100%;height:100px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Lab_big.jpg|link=OWASP_Project_Stages#tab.3DLab_Projects]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== OWASP Benchmark Project  ==&lt;br /&gt;
The OWASP Benchmark for Security Automation (OWASP Benchmark) is a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services (henceforth simply referred to as 'tools'). Without the ability to measure these tools, it is difficult to understand their strengths and weaknesses, and compare them to each other. Each version of the OWASP Benchmark contains thousands of test cases that are fully runnable and exploitable, each of which maps to the appropriate CWE number for that vulnerability.&lt;br /&gt;
&lt;br /&gt;
You can use the OWASP Benchmark with [[Source_Code_Analysis_Tools | Static Application Security Testing (SAST)]] tools, [[:Category:Vulnerability_Scanning_Tools | Dynamic Application Security Testing (DAST)]] tools like OWASP [[ZAP]] and Interactive Application Security Testing (IAST) tools. The current version of the Benchmark is implemented in Java.  Future versions may expand to include other languages.&lt;br /&gt;
&lt;br /&gt;
==Benchmark Project Scoring Philosophy==&lt;br /&gt;
&lt;br /&gt;
Security tools (SAST, DAST, and IAST) are amazing when they find a complex vulnerability in your code.  But with widespread misunderstanding of the specific vulnerabilities automated tools cover, end users are often left with a false sense of security.&lt;br /&gt;
&lt;br /&gt;
We are on a quest to measure just how good these tools are at discovering and properly diagnosing security problems in applications. We rely on the [http://en.wikipedia.org/wiki/Receiver_operating_characteristic long history] of military and medical evaluation of detection technology as a foundation for our research. Therefore, the test suite tests both real and fake vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
There are four possible test outcomes in the Benchmark:&lt;br /&gt;
&lt;br /&gt;
# Tool correctly identifies a real vulnerability (True Positive - TP)&lt;br /&gt;
# Tool fails to identify a real vulnerability (False Negative - FN)&lt;br /&gt;
# Tool correctly ignores a false alarm (True Negative - TN)&lt;br /&gt;
# Tool fails to ignore a false alarm (False Positive - FP)&lt;br /&gt;
&lt;br /&gt;
We can learn a lot about a tool from these four metrics. Consider a tool that simply flags every line of code as vulnerable. This tool will perfectly identify all vulnerabilities!  But it will also have 100% false positives and thus adds no value.  Similarly, consider a tool that reports absolutely nothing. This tool will have zero false positives, but will also identify zero real vulnerabilities and is also worthless. You can even imagine a tool that flips a coin to decide whether to report whether each test case contains a vulnerability. The result would be 50% true positives and 50% false positives.  We need a way to distinguish valuable security tools from these trivial ones.&lt;br /&gt;
&lt;br /&gt;
The line that connects all these points, from 0,0 to 100,100, roughly translates to &amp;quot;random guessing.&amp;quot; The ultimate measure of a security tool is how much better it can do than this line.  The diagram below shows how we will evaluate security tools against the Benchmark.&lt;br /&gt;
&lt;br /&gt;
[[File:Wbe guide.png]]&lt;br /&gt;
&lt;br /&gt;
A point plotted on this chart provides a visual indication of how well a tool did considering both the True Positives the tool reported, as well as the False Positives it reported. We also want to compute an individual score for that point in the range 0 - 100, which we call the Benchmark Accuracy Score.&lt;br /&gt;
&lt;br /&gt;
The Benchmark Accuracy Score is essentially a [https://en.wikipedia.org/wiki/Youden%27s_J_statistic Youden Index], which is a standard way of summarizing the accuracy of a set of tests.  Youden's index is one of the oldest measures for diagnostic accuracy. It is also a global measure of a test performance, used for the evaluation of overall discriminative power of a diagnostic procedure and for comparison of this test with other tests. Youden's index is calculated by deducting 1 from the sum of a test’s sensitivity and specificity expressed not as percentage but as a part of a whole number: (sensitivity + specificity) – 1. For a test with poor diagnostic accuracy, Youden's index equals 0, and in a perfect test Youden's index equals 1.&lt;br /&gt;
&lt;br /&gt;
  So for example, if a tool has a True Positive Rate (TPR) of .98 (i.e., 98%) &lt;br /&gt;
    and False Positive Rate (FPR) of .05 (i.e., 5%)&lt;br /&gt;
  Sensitivity = TPR (.98)&lt;br /&gt;
  Specificity = 1-FPR (.95)&lt;br /&gt;
  So the Youden Index is (.98+.95) - 1 = .93&lt;br /&gt;
  &lt;br /&gt;
  And this would equate to a Benchmark score of 93 (since we normalize this to the range 0 - 100)&lt;br /&gt;
&lt;br /&gt;
On the graph, the Benchmark Score is the length of the line from the point down to the diagonal “guessing” line. Note that a Benchmark score can actually be negative if the point is below the line. This occurs when the False Positive Rate is higher than the True Positive Rate.&lt;br /&gt;
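The score calculation above can be sketched in a few lines of Python (the function and variable names are illustrative, not part of the Benchmark tooling):

```python
def benchmark_score(tp, fn, tn, fp):
    """Benchmark Accuracy Score: a Youden index scaled to the range -100..100."""
    tpr = tp / (tp + fn)   # True Positive Rate (sensitivity)
    fpr = fp / (fp + tn)   # False Positive Rate (1 - specificity)
    return round((tpr - fpr) * 100)

# Worked example from above: TPR = .98, FPR = .05
print(benchmark_score(98, 2, 95, 5))    # prints 93

# A tool whose point falls below the guessing line scores negative
print(benchmark_score(10, 90, 50, 50))  # prints -40
```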
&lt;br /&gt;
==Benchmark Validity==&lt;br /&gt;
&lt;br /&gt;
The Benchmark tests are not exactly like real applications. The tests are derived from coding patterns observed in real applications, but the majority of them are considerably '''simpler''' than real applications. That is, most real world applications will be considerably harder to successfully analyze than the OWASP Benchmark Test Suite. Although the tests are based on real code, it is possible that some tests may have coding patterns that don't occur frequently in real code.&lt;br /&gt;
&lt;br /&gt;
Remember, we are trying to test the capabilities of the tools and make them explicit, so that users can make informed decisions about what tools to use, how to use them, and what results to expect.  This is exactly aligned with the OWASP mission to make application security visible.&lt;br /&gt;
&lt;br /&gt;
==Generating Benchmark Scores==&lt;br /&gt;
&lt;br /&gt;
Anyone can use this Benchmark to evaluate vulnerability detection tools. The basic steps are:&lt;br /&gt;
# Download the Benchmark from github&lt;br /&gt;
# Run your tools against the Benchmark&lt;br /&gt;
# Run the BenchmarkScore tool on the reports from your tools&lt;br /&gt;
&lt;br /&gt;
That's it!&lt;br /&gt;
&lt;br /&gt;
Full details on how to do this are at the bottom of the page on the Quick_Start tab.&lt;br /&gt;
&lt;br /&gt;
We encourage vendors, open source tool developers, and end users to verify their application security tools against the Benchmark. In order to ensure that the results are fair and useful, we ask that you follow a few simple rules when publishing results. We won't recognize any results that aren't easily reproducible, so please include:&lt;br /&gt;
&lt;br /&gt;
# A description of the default “out-of-the-box” installation, version numbers, etc…&lt;br /&gt;
# Any and all configuration, tailoring, onboarding, etc… performed to make the tool run&lt;br /&gt;
# Any and all changes to default security rules, tests, or checks used to achieve the results&lt;br /&gt;
# Easily reproducible steps to run the tool&lt;br /&gt;
&lt;br /&gt;
== Reporting Format==&lt;br /&gt;
&lt;br /&gt;
The Benchmark includes tools to interpret raw tool output, compare it to the expected results, and generate summary charts and graphs. We use the following table format in order to capture all the information generated during the evaluation.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Security Category&lt;br /&gt;
! TP&lt;br /&gt;
! FN&lt;br /&gt;
! TN&lt;br /&gt;
! FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total&lt;br /&gt;
! TPR&lt;br /&gt;
! FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Score&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| General security category for test cases.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positives''': Tests with real vulnerabilities that were correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Negative''': Tests with real vulnerabilities that were not correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Negative''': Tests with fake vulnerabilities that were correctly not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive''':Tests with fake vulnerabilities that were incorrectly reported as vulnerable by the tool.&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Total test cases for this category.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positive Rate''': TP / ( TP + FN ) - Also referred to as Recall or Sensitivity, as defined at [https://en.wikipedia.org/wiki/Precision_and_recall Wikipedia].&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive Rate''': FP / ( FP + TN ).&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Normalized distance from the “guessing line”: TPR - FPR.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Command Injection&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Etc...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | &lt;br /&gt;
! Total TP&lt;br /&gt;
! Total FN&lt;br /&gt;
! Total TN&lt;br /&gt;
! Total FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total TC&lt;br /&gt;
! Average TPR&lt;br /&gt;
! Average FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Average Score&lt;br /&gt;
|}&lt;br /&gt;
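The TPR, FPR, and Score columns can be computed directly from the four counts in each row. A minimal sketch, using made-up counts rather than real tool results:&lt;br /&gt;

```java
// Minimal sketch: compute TPR, FPR, and Score for one vulnerability
// category from hypothetical TP/FN/TN/FP counts (made-up values, not
// actual Benchmark results for any tool).
public class CategoryScore {
    public static void main(String[] args) {
        int tp = 45, fn = 15, tn = 50, fp = 10;
        double tpr = (double) tp / (tp + fn);  // True Positive Rate (recall)
        double fpr = (double) fp / (fp + tn);  // False Positive Rate
        double score = tpr - fpr;              // distance from the "guess line"
        System.out.printf("TPR=%.2f FPR=%.2f Score=%.2f%n", tpr, fpr, score);
    }
}
```

With these assumed counts, a tool that reports most real vulnerabilities (TPR 0.75) while flagging relatively few fake ones (FPR about 0.17) earns a score of roughly 0.58.&lt;br /&gt;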
&lt;br /&gt;
==Code Repo and Build/Run Instructions ==&lt;br /&gt;
&lt;br /&gt;
See the '''Getting Started''' and '''Getting, Building, and Running the Benchmark''' sections on the Quick Start tab.&lt;br /&gt;
&lt;br /&gt;
==Licensing==&lt;br /&gt;
&lt;br /&gt;
The OWASP Benchmark is free to use under the [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0].&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
[https://lists.owasp.org/mailman/listinfo/owasp-benchmark-project OWASP Benchmark Mailing List]&lt;br /&gt;
&lt;br /&gt;
== Project Leaders ==&lt;br /&gt;
&lt;br /&gt;
[https://www.owasp.org/index.php/User:Wichers Dave Wichers] [mailto:dave.wichers@owasp.org @]&lt;br /&gt;
&lt;br /&gt;
== Project References ==&lt;br /&gt;
* [https://www.mir-swamp.org/#packages/public Software Assurance Marketplace (SWAMP) - set of curated packages to test tools against]&lt;br /&gt;
* [http://samate.nist.gov/Other_Test_Collections.html SAMATE List of Test Collections]&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
* [http://samate.nist.gov/SARD/testsuite.php NSA's Juliet for Java]&lt;br /&gt;
* [http://sectoolmarket.com/ The Web Application Vulnerability Scanner Evaluation Project (WAVSEP)]&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;padding-left:25px;width:200px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
All test code and project files can be downloaded from [https://github.com/OWASP/benchmark OWASP GitHub].&lt;br /&gt;
&lt;br /&gt;
== Project Intro Video ==&lt;br /&gt;
&lt;br /&gt;
[[File:BenchmarkPodcastTitlePage.jpg|200px|link=https://www.youtube.com/watch?v=HQP8dwc3jJA&amp;amp;index=5&amp;amp;list=PLGB2s-U5FSWOmEStMt3JqlMFJvRYqeVW5]]&lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
* LOOKING FOR VOLUNTEERS!! - We are looking for individuals and organizations to join and make this a much more community driven project, including additional coleaders to help take this project to the next level. Contributors could work on things like new test cases, additional tool scorecard generators, adding support for languages beyond Java, and a host of other improvements. Please contact [mailto:dave.wichers@owasp.org me] if you are interested in contributing at any level.&lt;br /&gt;
* June 5, 2016 - Benchmark Version 1.2 Released&lt;br /&gt;
* Sep 24, 2015 - Benchmark introduced to broader OWASP community at [https://appsecusa2015.sched.org/event/3r9k/using-the-owasp-benchmark-to-assess-automated-vulnerability-analysis-tools AppSec USA]&lt;br /&gt;
* Aug 27, 2015 - U.S. Dept. of Homeland Security (DHS) is financially supporting the Benchmark project.&lt;br /&gt;
* Aug 15, 2015 - Benchmark Version 1.2beta Released with full DAST Support. Checkmarx and ZAP scorecard generators also released.&lt;br /&gt;
* July 10, 2015 - Benchmark Scorecard generator and open source scorecards released&lt;br /&gt;
* May 23, 2015 - Benchmark Version 1.1 Released&lt;br /&gt;
* April 15, 2015 - Benchmark Version 1.0 Released&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; rowspan=&amp;quot;2&amp;quot;| [[File:Owasp-incubator-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Incubator_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Test Cases =&lt;br /&gt;
&lt;br /&gt;
Version 1.0 of the Benchmark was published on April 15, 2015 and had 20,983 test cases. On May 23, 2015, version 1.1 of the Benchmark was released. The 1.1 release improves on the previous version by making sure that there are both true positives and false positives in every vulnerability area. Version 1.2 was released on June 5, 2016 (and the 1.2beta August 15, 2015).&lt;br /&gt;
&lt;br /&gt;
Version 1.2 and forward of the Benchmark is a fully executable web application, which means it can be scanned by any kind of vulnerability detection tool. The 1.2 release is limited to slightly fewer than 3,000 test cases, to make it easier for DAST tools to scan (so scans don't take as long, and the tools don't run out of memory or blow up the size of their databases). The 1.2 release covers the same vulnerability areas that 1.1 covers; we added a few Spring database SQL Injection tests, but that's it. The bulk of the work was turning each test case into something that actually runs correctly AND is fully exploitable, and then generating a working UI on top, in order to turn the test cases into a real running application.&lt;br /&gt;
&lt;br /&gt;
You can still download Benchmark version 1.1 by cloning the repository and checking out the release marked with the Git tag '1.1'.&lt;br /&gt;
&lt;br /&gt;
The test case areas and quantities for the Benchmark releases are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Vulnerability Area&lt;br /&gt;
! # of Tests in v1.1&lt;br /&gt;
! # of Tests in v1.2&lt;br /&gt;
! CWE Number&lt;br /&gt;
|-&lt;br /&gt;
| [[Command Injection]]&lt;br /&gt;
| 2708&lt;br /&gt;
| 251&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/78.html 78]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Cryptography&lt;br /&gt;
| 1440&lt;br /&gt;
| 246&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/327.html 327]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Hashing&lt;br /&gt;
| 1421&lt;br /&gt;
| 236&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/328.html 328]&lt;br /&gt;
|-&lt;br /&gt;
| [[LDAP injection | LDAP Injection]]&lt;br /&gt;
| 736&lt;br /&gt;
| 59&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/90.html 90]&lt;br /&gt;
|-&lt;br /&gt;
| [[Path Traversal]]&lt;br /&gt;
| 2630&lt;br /&gt;
| 268&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/22.html 22]&lt;br /&gt;
|-&lt;br /&gt;
| Secure Cookie Flag&lt;br /&gt;
| 416&lt;br /&gt;
| 67&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/614.html 614]&lt;br /&gt;
|-&lt;br /&gt;
| [[SQL Injection]]&lt;br /&gt;
| 3529&lt;br /&gt;
| 504&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/89.html 89]&lt;br /&gt;
|-&lt;br /&gt;
| [[Trust Boundary Violation]]&lt;br /&gt;
| 725&lt;br /&gt;
| 126&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/501.html 501]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Randomness&lt;br /&gt;
| 3640&lt;br /&gt;
| 493&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/330.html 330]&lt;br /&gt;
|-&lt;br /&gt;
| [[XPATH Injection]]&lt;br /&gt;
| 347&lt;br /&gt;
| 35&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/643.html 643]&lt;br /&gt;
|-&lt;br /&gt;
| [[XSS]] (Cross-Site Scripting)&lt;br /&gt;
| 3449&lt;br /&gt;
| 455&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/79.html 79]&lt;br /&gt;
|-&lt;br /&gt;
| Total Test Cases&lt;br /&gt;
| 21,041&lt;br /&gt;
| 2,740&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Each Benchmark version comes with a spreadsheet that lists every test case, the vulnerability category, the CWE number, and the expected result (true finding/false positive). Look for the file: expectedresults-VERSION#.csv in the project root directory.&lt;br /&gt;
&lt;br /&gt;
Every test case is:&lt;br /&gt;
* a servlet or JSP (currently they are all servlets, but we plan to add JSPs)&lt;br /&gt;
* either a true vulnerability or a false positive for a single issue&lt;br /&gt;
&lt;br /&gt;
The Benchmark is intended to help determine how well analysis tools correctly analyze a broad array of application and framework behavior, including:&lt;br /&gt;
&lt;br /&gt;
* HTTP request and response problems&lt;br /&gt;
* Simple and complex data flow&lt;br /&gt;
* Simple and complex control flow&lt;br /&gt;
* Popular frameworks&lt;br /&gt;
* Inversion of control&lt;br /&gt;
* Reflection&lt;br /&gt;
* Class loading&lt;br /&gt;
* Annotations&lt;br /&gt;
* Popular UI technologies (particularly JavaScript frameworks)&lt;br /&gt;
&lt;br /&gt;
Not all of these are yet tested by the Benchmark, but future enhancements are intended to provide more coverage of these areas.&lt;br /&gt;
&lt;br /&gt;
Additional future enhancements could cover:&lt;br /&gt;
* All vulnerability types in the [[Top10 | OWASP Top 10]]&lt;br /&gt;
* Does the tool find flaws in libraries?&lt;br /&gt;
* Does the tool find flaws spanning custom code and libraries?&lt;br /&gt;
* Does the tool handle web services? REST, XML, GWT, etc…&lt;br /&gt;
* Does the tool work with different app servers? Different Java platforms?&lt;br /&gt;
&lt;br /&gt;
== Example Test Case ==&lt;br /&gt;
&lt;br /&gt;
Each test case is a simple Java EE servlet. BenchmarkTest00001 in version 1.0 of the Benchmark was an LDAP Injection test with the following metadata in the accompanying BenchmarkTest00001.xml file:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;test-metadata&amp;gt;&lt;br /&gt;
    &amp;lt;category&amp;gt;ldapi&amp;lt;/category&amp;gt;&lt;br /&gt;
    &amp;lt;test-number&amp;gt;00001&amp;lt;/test-number&amp;gt;&lt;br /&gt;
    &amp;lt;vulnerability&amp;gt;true&amp;lt;/vulnerability&amp;gt;&lt;br /&gt;
    &amp;lt;cwe&amp;gt;90&amp;lt;/cwe&amp;gt;&lt;br /&gt;
  &amp;lt;/test-metadata&amp;gt;&lt;br /&gt;
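Such a metadata file can be read with the JDK's built-in DOM parser. A minimal sketch (the XML string below mirrors the metadata above, inlined so the example is self-contained):&lt;br /&gt;

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Sketch: parse the per-test metadata shown above and pull out the
// fields a scorecard generator would care about.
public class TestMetadataReader {
    public static void main(String[] args) throws Exception {
        String xml = "<test-metadata><category>ldapi</category>"
                + "<test-number>00001</test-number>"
                + "<vulnerability>true</vulnerability>"
                + "<cwe>90</cwe></test-metadata>";
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        String category = doc.getElementsByTagName("category").item(0).getTextContent();
        boolean vulnerable = Boolean.parseBoolean(
                doc.getElementsByTagName("vulnerability").item(0).getTextContent());
        int cwe = Integer.parseInt(
                doc.getElementsByTagName("cwe").item(0).getTextContent());
        System.out.println(category + " vulnerable=" + vulnerable + " CWE-" + cwe);
    }
}
```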
&lt;br /&gt;
BenchmarkTest00001.java in the OWASP Benchmark 1.0 simply reads in all the cookie values, looks for a cookie named &amp;quot;foo&amp;quot;, and uses the value of this cookie when performing an LDAP query. Here's the code for BenchmarkTest00001.java:&lt;br /&gt;
&lt;br /&gt;
  package org.owasp.benchmark.testcode;&lt;br /&gt;
  &lt;br /&gt;
  import java.io.IOException;&lt;br /&gt;
  &lt;br /&gt;
  import javax.servlet.ServletException;&lt;br /&gt;
  import javax.servlet.annotation.WebServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
  import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
  &lt;br /&gt;
  @WebServlet(&amp;quot;/BenchmarkTest00001&amp;quot;)&lt;br /&gt;
  public class BenchmarkTest00001 extends HttpServlet {&lt;br /&gt;
  	&lt;br /&gt;
  	private static final long serialVersionUID = 1L;&lt;br /&gt;
  	&lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		doPost(request, response);&lt;br /&gt;
  	}&lt;br /&gt;
  &lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		// some code&lt;br /&gt;
  &lt;br /&gt;
  		javax.servlet.http.Cookie[] cookies = request.getCookies();&lt;br /&gt;
  		&lt;br /&gt;
  		String param = null;&lt;br /&gt;
  		boolean foundit = false;&lt;br /&gt;
  		if (cookies != null) {&lt;br /&gt;
  			for (javax.servlet.http.Cookie cookie : cookies) {&lt;br /&gt;
  				if (cookie.getName().equals(&amp;quot;foo&amp;quot;)) {&lt;br /&gt;
  					param = cookie.getValue();&lt;br /&gt;
  					foundit = true;&lt;br /&gt;
  				}&lt;br /&gt;
  			}&lt;br /&gt;
  			if (!foundit) {&lt;br /&gt;
  				// no cookie found in collection&lt;br /&gt;
  				param = &amp;quot;&amp;quot;;&lt;br /&gt;
  			}&lt;br /&gt;
  		} else {&lt;br /&gt;
  			// no cookies&lt;br /&gt;
  			param = &amp;quot;&amp;quot;;&lt;br /&gt;
  		}&lt;br /&gt;
  		&lt;br /&gt;
  		try {&lt;br /&gt;
  			javax.naming.directory.DirContext dc = org.owasp.benchmark.helpers.Utils.getDirContext();&lt;br /&gt;
  			Object[] filterArgs = {&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;};&lt;br /&gt;
  			dc.search(&amp;quot;name&amp;quot;, param, filterArgs, new javax.naming.directory.SearchControls());&lt;br /&gt;
  		} catch (javax.naming.NamingException e) {&lt;br /&gt;
  			throw new ServletException(e);&lt;br /&gt;
  		}&lt;br /&gt;
  	}&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
= Test Case Details =&lt;br /&gt;
&lt;br /&gt;
The following describes situations in the Benchmark that have come up for debate as to the validity/accuracy of the test cases in these scenarios. &lt;br /&gt;
&lt;br /&gt;
== Cookies as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 and early versions of the 1.2beta included test cases that used cookies as a source of data that flowed into XSS vulnerabilities. The Benchmark treated these tests as False Positives because the Benchmark team figured that you'd have to use an XSS vulnerability in the first place to set the cookie value, and so it wasn't fair/reasonable to consider an XSS vulnerability whose data source was a cookie value as actually exploitable. However, we got feedback from some tool vendors, like Fortify, Burp, and Arachni, that they disagreed with this analysis and felt that, in fact, cookies were a valid source of attack against XSS vulnerabilities. Given that there are good arguments on both sides of this safe vs. unsafe question, we decided on Aug 25, 2015 to simply remove those test cases from the Benchmark. If, in the future, we decide who is right, we may add such test cases back in.&lt;br /&gt;
&lt;br /&gt;
== Headers as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Similarly, the Benchmark team believes that the names of headers aren't a valid source of XSS attack, for the same reason we thought cookie values aren't: it would require an XSS vulnerability to already have been exploited in order to set them. In fact, we feel this argument is much stronger for header names than for cookie values. Right now, the Benchmark doesn't include any header names as sources for XSS test cases, but we plan to add them and mark them as false positives in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
The Benchmark does use header values as sources for some XSS test cases, but only the 'Referer' header is treated as a valid XSS source (i.e., those tests are true positives), because other header values are not viable XSS sources. Other headers are, of course, valid sources for other attack vectors, like SQL Injection or Command Injection.&lt;br /&gt;
&lt;br /&gt;
== False Positive Scenario: Static Values Passed to Unsafe (Weak) Sinks ==&lt;br /&gt;
&lt;br /&gt;
The Benchmark has MANY test cases where unsafe data flows in from the browser, but that data is replaced with static content as it goes through the propagators in that specific test case. This static (safe) data then flows to the sink, which may be a weak/unsafe sink, such as an unsafely constructed SQL statement. The Benchmark treats those test cases as false positives because there is absolutely no way for that weakness to be exploited. The NSA Juliet SAST benchmark treats such test cases exactly the same way, as false positives. We do recognize that there are weaknesses in those test cases, even though they aren't exploitable.&lt;br /&gt;
&lt;br /&gt;
Some SAST tool vendors feel it is appropriate to point out those weaknesses, and that's fine. However, if the tool points those weaknesses out and does not distinguish them from truly exploitable vulnerabilities, then the Benchmark treats those findings as false positives. If the tool allows a user to differentiate these non-exploitable weaknesses from exploitable vulnerabilities, then the Benchmark scorecard generator can use that information to filter out these extra findings (along with any other similarly marked findings) so they don't count against that tool when calculating its Benchmark score. In the real world, it's important for analysts to be able to filter out such findings if they only have time to deal with the most critical, actually exploitable, vulnerabilities. If a tool doesn't make it easy for an analyst to distinguish the two situations, it is doing the analyst a disservice.&lt;br /&gt;
&lt;br /&gt;
This issue doesn't affect DAST tools, since they only report what appears to them to be exploitable.&lt;br /&gt;
&lt;br /&gt;
If you are a SAST tool vendor or user, and you believe the Benchmark scorecard generator is counting such findings against a tool, and there is a way to tell them apart, please let the project team know so the scorecard generator can be adjusted to not count those findings against the tool. The Benchmark project's goal is to generate the fairest and most accurate results it can. If such an adjustment is made to how a scorecard is generated for a tool, we plan to document that this was done, and explain how others could perform the same filtering within that tool in order to get the same focused set of results.&lt;br /&gt;
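The filtering described above can be sketched as follows. The Finding class and its markedNotExploitable field are purely illustrative names, not the scorecard generator's actual data model:&lt;br /&gt;

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: findings a user has marked as non-exploitable
// weaknesses are dropped before scoring, so only exploitable findings
// count toward a tool's Benchmark score.
public class FindingFilter {
    static class Finding {
        final int testNumber;
        final boolean markedNotExploitable;
        Finding(int testNumber, boolean markedNotExploitable) {
            this.testNumber = testNumber;
            this.markedNotExploitable = markedNotExploitable;
        }
    }

    static List<Finding> scoreable(List<Finding> all) {
        List<Finding> kept = new ArrayList<>();
        for (Finding f : all) {
            if (!f.markedNotExploitable) kept.add(f);
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Finding> all = new ArrayList<>();
        all.add(new Finding(1, false)); // exploitable vulnerability: counts
        all.add(new Finding(2, true));  // suppressed weakness: filtered out
        System.out.println(scoreable(all).size()); // prints 1
    }
}
```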
&lt;br /&gt;
== Dead Code ==&lt;br /&gt;
&lt;br /&gt;
Some SAST tools point out weaknesses in dead code because they might eventually end up being used, and serve as bad coding examples (think cut/paste of code). We think this is fine/appropriate.  However, there is no dead code in the OWASP Benchmark (at least not intentionally). So dead code should not be causing any tool to report unnecessary false positives.&lt;br /&gt;
&lt;br /&gt;
= Tool Support/Results =&lt;br /&gt;
&lt;br /&gt;
The results for 5 free tools, PMD, FindBugs, FindBugs with the FindSecBugs plugin, SonarQube, and OWASP ZAP, are available against version 1.2 of the Benchmark here: https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html. We've included multiple versions of FindSecBugs' and ZAP's results so you can see the improvements they are making in finding vulnerabilities in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
We have Benchmark results for all the following tools, but haven't publicly released the results for any commercial tools. However, we included a 'Commercial Average' page, which includes a summary of results for 6 commercial SAST tools along with anonymous versions of each SAST tool's scorecard.&lt;br /&gt;
&lt;br /&gt;
The Benchmark can generate results for the following tools: &lt;br /&gt;
&lt;br /&gt;
'''Free Static Application Security Testing (SAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://pmd.github.io/ PMD] (which really has no security rules) - .xml results file&lt;br /&gt;
* [http://findbugs.sourceforge.net/ Findbugs] - .xml results file (Note: The 'new' Findbugs is now at: https://spotbugs.github.io/)&lt;br /&gt;
* FindBugs with the [http://find-sec-bugs.github.io/ FindSecurityBugs plugin] - .xml results file&lt;br /&gt;
* [https://www.sonarqube.org/downloads/ SonarQube] - .xml results file&lt;br /&gt;
&lt;br /&gt;
Note: We looked into supporting [http://checkstyle.sourceforge.net/ Checkstyle] but it has no security rules, just like PMD. The [http://fb-contrib.sourceforge.net/ fb-contrib] FindBugs plugin doesn't have any security rules either. We did test [http://errorprone.info/ Error Prone], and found that it does report some use of [http://errorprone.info/bugpattern/InsecureCipherMode insecure ciphers (CWE-327)], but that's it.&lt;br /&gt;
&lt;br /&gt;
'''Commercial SAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.castsoftware.com/products/application-intelligence-platform CAST Application Intelligence Platform (AIP)] - .xml results file&lt;br /&gt;
* [https://www.checkmarx.com/products/static-application-security-testing/ Checkmarx CxSAST] - .xml results file&lt;br /&gt;
* [https://www.synopsys.com/content/dam/synopsys/sig-assets/datasheets/SAST-Coverity-datasheet.pdf Synopsys Static Analysis (Formerly Coverity Code Advisor) (On-Demand and stand-alone versions)] - .json results file (You can scan Benchmark w/Coverity for free. See: https://scan.coverity.com/)&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/static-code-analysis-sast/overview Micro Focus (Formally HPE) Fortify (On-Demand and stand-alone versions)] - .fpr results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source IBM AppScan Source (Standalone and Cloud)] - .ozasmt or .xml results file&lt;br /&gt;
* [https://juliasoft.com/solutions/julia-for-security/ Julia Analyzer] - .xml results file&lt;br /&gt;
* [https://www.parasoft.com/products/jtest/ Parasoft Jtest] - .xml results file&lt;br /&gt;
* [https://www.shiftleft.io/product/ ShiftLeft SAST] - .sl results file (Benchmark specific format. Ask vendor how to generate this)&lt;br /&gt;
* [https://www.sourcemeter.com/features/ SourceMeter] - .txt results file of ALL results from VulnerabilityHunter&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] - .xml results file&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode SAST] - .xml results file&lt;br /&gt;
* [https://www.rigs-it.com/xanitizer/ XANITIZER] - xml results file ([https://www.rigs-it.com/wp-content/uploads/2018/03/howtosetupxanitizerforowaspbenchmarkproject.pdf Their white paper on how to setup Xanitizer to scan Benchmark.]) (Free trial available)&lt;br /&gt;
&lt;br /&gt;
We are looking for results for other commercial static analysis tools like: [http://www.grammatech.com/codesonar Grammatech CodeSonar], [http://www.klocwork.com/products-services/klocwork Klocwork], etc. If you have a license for any static analysis tool not already listed above and can run it on the Benchmark and send us the results file that would be very helpful. &lt;br /&gt;
&lt;br /&gt;
The free SAST tools come bundled with the Benchmark so you can run them yourselves. If you have a license for any commercial SAST tool, you can also run them against the Benchmark. Just put your results files in the /results folder of the project, and then run the BenchmarkScore script for your platform (.sh / .bat) and it will generate a scorecard in the /scorecard directory for all the tools you have results for that are currently supported.&lt;br /&gt;
&lt;br /&gt;
'''Free Dynamic Application Security Testing (DAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
Note: While we support scorecard generators for these Free and Commercial DAST tools, we haven't been able to get a full/clean run against the Benchmark from most of these tools. As such, some of these scorecard generators might still need some work to properly reflect their results. If you notice any problems, let us know.&lt;br /&gt;
&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - .xml results file&lt;br /&gt;
** To generate .xml, run: ./bin/arachni_reporter &amp;quot;Your_AFR_Results_Filename.afr&amp;quot; --reporter=xml:outfile=Benchmark1.2-Arachni.xml&lt;br /&gt;
* [https://www.owasp.org/index.php/ZAP OWASP ZAP] - .xml results file. To generate a complete ZAP XML results file so you can generate a valid scorecard, make sure you:&lt;br /&gt;
** Tools &amp;gt; Options &amp;gt; Alerts - and set the Max alert instances to something like 500.&lt;br /&gt;
** Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
'''Commercial DAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.acunetix.com/vulnerability-scanner/ Acunetix Web Vulnerability Scanner (WVS)] - .xml results file (Generated using [https://www.acunetix.com/resources/wvs7manual.pdf command line interface (see Chapter 10.)] /ExportXML switch)&lt;br /&gt;
* [https://portswigger.net/burp/ Burp Pro] - .xml results file&lt;br /&gt;
**You must use Burp Pro v1.6.30+ to scan the Benchmark due to a limitation fixed in v1.6.30.&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/webinspect-dynamic-analysis-dast/overview Micro Focus (Formally HPE) WebInspect] - .xml results file&lt;br /&gt;
* [http://www-03.ibm.com/software/products/en/appscan IBM AppScan (since sold by IBM to HCL Technologies)] - .xml results file&lt;br /&gt;
* [https://www.netsparker.com/web-vulnerability-scanner/ Netsparker] - .xml results file&lt;br /&gt;
* [https://www.rapid7.com/products/appspider/ Rapid7 AppSpider] - .xml results file&lt;br /&gt;
&lt;br /&gt;
* Qualys - We ran Qualys against v1.2 of the Benchmark and it found none of the vulnerabilities we test for as far as we could tell. So we haven't implemented a scorecard generator for it. If you get results where you think it does find some real issues, send us the results file and, if confirmed, we'll produce a scorecard generator for it.&lt;br /&gt;
&lt;br /&gt;
If you have access to other DAST Tools, PLEASE RUN THEM FOR US against the Benchmark, and send us the results file so we can build a scorecard generator for that tool.&lt;br /&gt;
&lt;br /&gt;
'''Commercial Interactive Application Security Testing (IAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] - .zip results file (You can scan Benchmark w/Contrast for free. See: https://www.contrastsecurity.com/contrast-community-edition)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection (IAST)] - .hlg results file&lt;br /&gt;
&lt;br /&gt;
'''Commercial Hybrid Analysis Application Security Testing Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.iappsecure.com/products.html Fusion Lite Insight] - .xml results file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It may be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.'''&lt;br /&gt;
&lt;br /&gt;
The project has automated test harnesses for these vulnerability detection tools, so we can repeatably run the tools against each version of the Benchmark and automatically produce scorecards in our desired format.&lt;br /&gt;
&lt;br /&gt;
We want to test as many tools as possible against the Benchmark. If you are:&lt;br /&gt;
&lt;br /&gt;
* A tool vendor and want to participate in the project&lt;br /&gt;
* Someone who wants to help score a free tool against the project&lt;br /&gt;
* Someone who has a license to a commercial tool and the terms of the license allow you to publish tool results, and you want to participate&lt;br /&gt;
&lt;br /&gt;
please let [mailto:dave.wichers@owasp.org me] know!&lt;br /&gt;
&lt;br /&gt;
= Quick Start =&lt;br /&gt;
&lt;br /&gt;
==What is in the Benchmark?==&lt;br /&gt;
The Benchmark is a Java Maven project. Its primary component is thousands of test cases (e.g., BenchmarkTest00001.java), each of which is a single Java servlet that contains a single vulnerability (either a true positive or false positive). The vulnerabilities currently span about a dozen different types and are expected to expand significantly in the future.&lt;br /&gt;
&lt;br /&gt;
An expectedresults.csv file is published with each version of the Benchmark (e.g., expectedresults-1.1.csv), and it lists the expected result for each test case. Here’s what the first two rows in this file look like for version 1.1 of the Benchmark:&lt;br /&gt;
&lt;br /&gt;
 # test name		category	real vulnerability	CWE	Benchmark version: 1.1	2015-05-22&lt;br /&gt;
 BenchmarkTest00001	crypto		TRUE			327&lt;br /&gt;
&lt;br /&gt;
This simply means that the first test case is a crypto test case (use of weak cryptographic algorithms), this is a real vulnerability (as opposed to a false positive), and this issue maps to CWE 327. It also indicates this expected results file is for Benchmark version 1.1 (produced May 22, 2015). There is a row in this file for each of the tens of thousands of test cases in the Benchmark.  Each time a new version of the Benchmark is published, a new corresponding results file is generated and each test case can be completely different from one version to the next.&lt;br /&gt;
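Parsing one such row is straightforward. A minimal sketch, assuming comma-separated fields as the .csv extension suggests (the excerpt above shows the columns whitespace-aligned for readability):&lt;br /&gt;

```java
// Minimal sketch: split one expectedresults row into its four fields.
// Comma separators are an assumption based on the .csv extension; the
// excerpt in the text shows the columns aligned with whitespace.
public class ExpectedResultRow {
    public static void main(String[] args) {
        String row = "BenchmarkTest00001,crypto,TRUE,327";
        String[] f = row.split(",");
        String testName = f[0];                         // test case name
        String category = f[1];                         // vulnerability category
        boolean realVulnerability = Boolean.parseBoolean(f[2]); // TRUE = true positive
        int cwe = Integer.parseInt(f[3]);               // CWE number
        System.out.println(testName + ": " + category
                + " real=" + realVulnerability + " CWE-" + cwe);
    }
}
```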
&lt;br /&gt;
The Benchmark also comes with a bunch of different utilities, commands, and prepackaged open source security analysis tools, all of which can be executed through Maven goals, including:&lt;br /&gt;
&lt;br /&gt;
* Open source vulnerability detection tools to be run against the Benchmark&lt;br /&gt;
* A scorecard generator, which computes a scorecard for each of the tools you have results files for.&lt;br /&gt;
&lt;br /&gt;
==What Can You Do With the Benchmark?==&lt;br /&gt;
* Compile all the software in the Benchmark project (e.g., mvn compile)&lt;br /&gt;
* Run a static vulnerability analysis tool (SAST) against the Benchmark test case code&lt;br /&gt;
&lt;br /&gt;
* Scan a running version of the Benchmark with a dynamic application security testing tool (DAST)&lt;br /&gt;
** Instructions on how to run it are provided below&lt;br /&gt;
&lt;br /&gt;
* Generate scorecards for each of the tools you have results files for&lt;br /&gt;
** See the Tool Support/Results page for the list of tools the Benchmark supports generating scorecards for&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
Before downloading or using the Benchmark make sure you have the following installed and configured properly:&lt;br /&gt;
&lt;br /&gt;
 GIT: http://git-scm.com/ or https://github.com/&lt;br /&gt;
 Maven: https://maven.apache.org/  (Version: 3.2.3 or newer works. We heard that 3.0.5 throws an error.)&lt;br /&gt;
 Java: http://www.oracle.com/technetwork/java/javase/downloads/index.html (Java 7 or 8) (64-bit) - It takes a LOT of memory to compile the Benchmark.&lt;br /&gt;
&lt;br /&gt;
==Getting, Building, and Running the Benchmark==&lt;br /&gt;
&lt;br /&gt;
To download and build everything:&lt;br /&gt;
&lt;br /&gt;
 $ git clone https://github.com/OWASP/benchmark &lt;br /&gt;
 $ cd benchmark&lt;br /&gt;
 $ mvn compile   (This compiles it)&lt;br /&gt;
 $ runBenchmark.sh (or runBenchmark.bat on Windows) - This compiles and runs it.&lt;br /&gt;
&lt;br /&gt;
Then navigate to: https://localhost:8443/benchmark/ to go to its home page. It uses a self-signed SSL certificate, so you'll get a security warning when you hit the home page.&lt;br /&gt;
&lt;br /&gt;
Note: We have set the Benchmark app to use up to 6 GB of RAM, which it may need when it is fully scanned by a DAST scanner. The DAST tool itself probably also requires 3+ GB of RAM. As such, we recommend a 16 GB machine if you are going to run a full DAST scan, and at least 4 GB, or ideally 8 GB, if you are just going to play around with the running Benchmark app.&lt;br /&gt;
&lt;br /&gt;
== Using a VM instead ==&lt;br /&gt;
We have several preconstructed VMs, as well as instructions on how to build one yourself, that you can use instead:&lt;br /&gt;
&lt;br /&gt;
* Docker: A Dockerfile is checked into the project [https://github.com/OWASP/Benchmark/blob/master/VMs/Dockerfile here]. This Dockerfile automatically produces a Docker image with the latest Benchmark project files. After you have Docker installed, cd to /VMs, then run: &lt;br /&gt;
 ./buildDockerImage.sh --&amp;gt; This builds the Docker Benchmark VM (This will take a WHILE)&lt;br /&gt;
 docker images  --&amp;gt; You should see the new benchmark:latest image in the list provided&lt;br /&gt;
 # The Benchmark Docker Image only has to be created once. &lt;br /&gt;
&lt;br /&gt;
 To run the Benchmark in your Docker VM, just run:&lt;br /&gt;
   ./runDockerImage.sh  --&amp;gt; This pulls in any updates to Benchmark since the Image was built, builds everything, and starts a remotely accessible Benchmark web app.&lt;br /&gt;
 If successful, you should see this at the end:&lt;br /&gt;
   [INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]&lt;br /&gt;
   [INFO] Press Ctrl-C to stop the container...&lt;br /&gt;
 Then simply navigate to: https://localhost:8443/benchmark from the machine you are running Docker on&lt;br /&gt;
 &lt;br /&gt;
 Or if you want to access from a different machine:&lt;br /&gt;
  docker-machine ls (in a different terminal) --&amp;gt; To get IP Docker VM is exporting (e.g., tcp://192.168.99.100:2376)&lt;br /&gt;
  Navigate to: https://192.168.99.100:8443/benchmark in your browser (using the above IP as an example)&lt;br /&gt;
&lt;br /&gt;
* Amazon Web Services (AWS) - Here's how you set up the Benchmark on an AWS VM:&lt;br /&gt;
 sudo yum install git&lt;br /&gt;
 sudo yum install maven&lt;br /&gt;
 sudo yum install mvn&lt;br /&gt;
 sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo yum install -y apache-maven&lt;br /&gt;
 git clone https://github.com/OWASP/benchmark&lt;br /&gt;
 cd benchmark&lt;br /&gt;
 chmod 755 *.sh&lt;br /&gt;
 ./runBenchmark.sh -- to run it locally on the VM.&lt;br /&gt;
 ./runRemoteAccessibleBenchmark.sh -- to run it so it can be accessed outside the VM (on port 8443).&lt;br /&gt;
&lt;br /&gt;
==Running Free Static Analysis Tools Against the Benchmark==&lt;br /&gt;
There are scripts for running each of the free SAST vulnerability detection tools included with the Benchmark against the Benchmark test cases. On Linux, you might have to make them executable (e.g., chmod 755 *.sh) before you can run them.&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for PMD:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runPMD.sh (Linux) or .\scripts\runPMD.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindBugs.sh (Linux) or .\scripts\runFindBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs with the FindSecBugs plugin:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindSecBugs.sh (Linux) or .\scripts\runFindSecBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
In each case, the script will generate a results file and put it in the /results directory. For example: &lt;br /&gt;
&lt;br /&gt;
 Benchmark_1.2-findbugs-v3.0.1-1026.xml&lt;br /&gt;
&lt;br /&gt;
This results file name is carefully constructed to mean the following: It's a results file against the OWASP Benchmark version 1.2, FindBugs was the analysis tool, it was version 3.0.1 of FindBugs, and it took 1026 seconds to run the analysis.&lt;br /&gt;
&lt;br /&gt;
NOTE: If you create a results file yourself, by running a commercial tool for example, you can add the version # and the compute time to the filename just like this and the Benchmark Scorecard generator will pick this information up and include it in the generated scorecard. If you don't, depending on what metadata is included in the tool results, the Scorecard generator might do this automatically anyway.&lt;br /&gt;
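&lt;br /&gt;
For illustration only, here is a made-up sketch (this is NOT the actual BenchmarkScore parsing code, and the class name is invented) showing how the metadata encoded in such a filename could be pulled back out:&lt;br /&gt;

```java
// Illustration of the results file naming convention described above:
//   Benchmark_VERSION-TOOL-vTOOLVERSION-SECONDS.xml
// This is a made-up sketch, NOT the actual BenchmarkScore parsing code.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResultsFileName {
    public static final Pattern NAME =
            Pattern.compile("Benchmark_([\\d.]+)-(\\w+)-v([\\d.]+)-(\\d+)\\.xml");

    public static void main(String[] args) {
        Matcher m = NAME.matcher("Benchmark_1.2-findbugs-v3.0.1-1026.xml");
        if (m.matches()) {
            System.out.println("Benchmark version: " + m.group(1));   // 1.2
            System.out.println("Tool: " + m.group(2));                // findbugs
            System.out.println("Tool version: " + m.group(3));        // 3.0.1
            System.out.println("Scan time (seconds): " + m.group(4)); // 1026
        }
    }
}
```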
&lt;br /&gt;
==Generating Scorecards==&lt;br /&gt;
The scorecard generation application BenchmarkScore is included with the Benchmark. It parses the output files generated by any of the supported security tools run against the Benchmark, compares them against the expected results, and produces a set of web pages that detail the accuracy and speed of the tools involved. For the list of currently supported tools, check out the Tool Support/Results tab. If you are using a tool that is not yet supported, simply send us a results file from that tool and we'll write a parser for it and add it to the supported tools list.&lt;br /&gt;
&lt;br /&gt;
The following command will compute a Benchmark scorecard for all the results files in the '''/results''' directory. The generated scorecard is put into the '''/scorecard''' directory.&lt;br /&gt;
&lt;br /&gt;
 createScorecard.sh (Linux) or createScorecard.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
An example of a real scorecard for some open source tools is provided at the top of the Tool Support/Results tab so you can see what one looks like.&lt;br /&gt;
&lt;br /&gt;
We recommend including the Benchmark version number in any results file name, in order to help prevent mismatches between the expected results and the actual results files.  A tool will not score well against the wrong expected results.&lt;br /&gt;
&lt;br /&gt;
===Customizing Your Scorecard Generation===&lt;br /&gt;
&lt;br /&gt;
The createScorecard scripts are very simple. They only have one line. Here's what the 1.2 version looks like:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.2.csv results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This Maven command simply runs the BenchmarkScore application, passing in two parameters. The first is the Benchmark expected results file to compare the tool results against. The second is the name of the directory that contains all the results from tools run against that version of the Benchmark. If you have tool results older than the current version of the Benchmark, such as 1.1 results, then you would do something like this instead:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To keep things organized, we actually put the expected results file inside the same results folder for that version of the Benchmark, so our command looks like this:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;1.1_results/expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In all cases, the generated scorecard is put in the /scorecard folder.&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It is likely to be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.''' It is for just this reason that the Benchmark project isn't releasing such results itself.&lt;br /&gt;
&lt;br /&gt;
= Tool Scanning Tips =&lt;br /&gt;
&lt;br /&gt;
People frequently have difficulty scanning the Benchmark with various tools for many reasons, including the size of the Benchmark app and its codebase, and the complexity of the tools used. Here is some guidance for some of the tools we have used to scan the Benchmark. If you've learned any tricks for getting better or easier results with a particular tool against the Benchmark, let us know or update this page directly.&lt;br /&gt;
&lt;br /&gt;
== Generic Tips ==&lt;br /&gt;
&lt;br /&gt;
Because of the size of the Benchmark, you may need to give your tool more memory before it starts the scan. If it's a Java-based tool, you may want to pass more memory to it like this:&lt;br /&gt;
&lt;br /&gt;
 -Xmx4G (This gives the Java application 4 GB of memory)&lt;br /&gt;
&lt;br /&gt;
== SAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Checkmarx ===&lt;br /&gt;
&lt;br /&gt;
The Checkmarx SAST Tool (CxSAST) is ready to scan the OWASP Benchmark out of the box. &lt;br /&gt;
Note that the OWASP Benchmark “hides” some vulnerabilities in dead code areas, for example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
if (0&amp;gt;1)&lt;br /&gt;
{&lt;br /&gt;
  //vulnerable code&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
By default, CxSAST will find these vulnerabilities since Checkmarx believes that including dead code in the scan results is a SAST best practice. &lt;br /&gt;
&lt;br /&gt;
Checkmarx's experience shows that security experts expect to find these types of code vulnerabilities, and demand that their developers fix them. However, OWASP Benchmark considers the flagging of these vulnerabilities as False Positives, as a result lowering Checkmarx's overall score. &lt;br /&gt;
&lt;br /&gt;
Therefore, in order to receive an OWASP score untainted by dead code, re-configure CxSAST as follows:&lt;br /&gt;
# Open the CxAudit client for editing Java queries.&lt;br /&gt;
# Override the &amp;quot;Find_Dead_Code&amp;quot; query.&lt;br /&gt;
# Add the commented text of the original query to the new override query.&lt;br /&gt;
# Save the queries.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./scripts/runFindBugs.(sh or bat). If you want to run a different version of FindBugs, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs with FindSecBugs ===&lt;br /&gt;
&lt;br /&gt;
[http://h3xstream.github.io/find-sec-bugs/ FindSecurityBugs] is a great plugin for FindBugs that significantly increases FindBugs' ability to find security issues. We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./scripts/runFindSecBugs.(sh or bat). If you want to run a different version of FindSecBugs, just change the version number of the findsecbugs-plugin artifact in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Micro Focus (Formerly HP) Fortify ===&lt;br /&gt;
&lt;br /&gt;
If you are using the Audit Workbench, you can give it more memory and make sure you invoke it in 64-bit mode by doing this:&lt;br /&gt;
&lt;br /&gt;
  set AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  export AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  auditworkbench -64&lt;br /&gt;
&lt;br /&gt;
We found it was easier to use the Maven support in Fortify to scan the Benchmark, and to do it in two phases: translate, then scan. We did something like this:&lt;br /&gt;
&lt;br /&gt;
  Translate Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_17.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx2G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:clean&lt;br /&gt;
  mvn sca:translate&lt;br /&gt;
&lt;br /&gt;
  Scan Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_4.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx10G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:scan&lt;br /&gt;
&lt;br /&gt;
=== PMD ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./scripts/runPMD.(sh or bat). If you want to run a different version of PMD, just change its version number in the Benchmark pom.xml file. (NOTE: PMD doesn't find any security issues. We include it because it's interesting to know that it doesn't.)&lt;br /&gt;
&lt;br /&gt;
=== SonarQube ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's mostly dialed in. But it's a bit tricky, because SonarQube requires two parts: a standalone scanner for Java, and a web application that accepts the results and in turn can produce the results file required by the Benchmark scorecard generator for SonarQube. Running the script runSonarQube.(sh or bat) will generate the results, but if the SonarQube web application isn't running where the runSonarQube script expects it to be, then the script will fail.&lt;br /&gt;
&lt;br /&gt;
If you want to run a different version of SonarQube, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Xanitizer ===&lt;br /&gt;
&lt;br /&gt;
The vendor has written their own guide to [http://www.rigs-it.net/opendownloads/whitepapers/HowToSetUpXanitizerForOWASPBenchmarkProject.pdf How to Set Up Xanitizer for OWASP Benchmark].&lt;br /&gt;
&lt;br /&gt;
== DAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Burp Pro ===&lt;br /&gt;
&lt;br /&gt;
You must use Burp Pro v1.6.29 or greater to scan the Benchmark, because earlier versions of Burp Pro did not honor the path attribute for cookies. This issue was fixed in the v1.6.29 release.&lt;br /&gt;
&lt;br /&gt;
To scan, first spider the entire Benchmark, and then select the /Benchmark URL and actively scan that branch. You can skip all the .html pages and any other pages that Burp says have no parameters.&lt;br /&gt;
&lt;br /&gt;
NOTE: We have been unable to simply run Burp Pro against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get Burp Pro to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
=== OWASP ZAP ===&lt;br /&gt;
&lt;br /&gt;
ZAP may require additional memory to be able to scan the Benchmark. To configure the amount of memory:&lt;br /&gt;
* Tools --&amp;gt; Options --&amp;gt; JVM: Recommend setting to: -Xmx2048m (or larger). (Then restart ZAP).&lt;br /&gt;
&lt;br /&gt;
To run ZAP against Benchmark:&lt;br /&gt;
# Because Benchmark uses Cookies and Headers as sources of attack for many test cases: Tools --&amp;gt; Options --&amp;gt; Active Scan Input Vectors: Then check the HTTP Headers, All Requests, and Cookie Data checkboxes and hit OK&lt;br /&gt;
# Click on Show All Tabs button (if spider tab isn't visible)&lt;br /&gt;
# Go to Spider tab (the black spider) and click on New Scan button&lt;br /&gt;
# Enter: https://localhost:8443/benchmark/  into the 'Starting Point' box and hit 'Start Scan'&lt;br /&gt;
#* Do this again. For some reason it takes 2 passes with the Spider before it stops finding more Benchmark endpoints.&lt;br /&gt;
# When Spider completes, click on 'benchmark' folder in Site Map, right click and select: 'Attack --&amp;gt; Active Scan'&lt;br /&gt;
#* It will take several hours (3+) to complete (it's actually likely to simply freeze before completing the scan - see the NOTE below)&lt;br /&gt;
&lt;br /&gt;
For a faster active scan you can:&lt;br /&gt;
* Disable the ZAP DB log (in ZAP 2.5.0+):&lt;br /&gt;
** Disable it via Options / Database / Recover Log&lt;br /&gt;
** Set it on the command line using &amp;quot;-config database.recoverylog=false&amp;quot;&lt;br /&gt;
* Disable unnecessary plugins / Technologies: When you launch the Active Scan&lt;br /&gt;
** On the Policy tab, disable all plugins except: XSS (Reflected), Path Traversal, SQLi, OS Command Injection&lt;br /&gt;
** Go to the Technology tab, disable everything, and only enable: MySQL, YOUR_OS, Tomcat&lt;br /&gt;
** Note: This second performance improvement step is a bit like cheating, as you wouldn't do this for a normal site scan. You'd want to leave all this on in case these other plugins/technologies are helpful in finding more issues. So a fair performance comparison of ZAP to other tools would leave all this on.&lt;br /&gt;
&lt;br /&gt;
To generate the ZAP XML results file so you can generate its scorecard:&lt;br /&gt;
* Tools &amp;gt; Options &amp;gt; Alerts - and set the Max alert instances to something like 500.&lt;br /&gt;
* Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
NOTE: Similar to Burp, we can't simply run ZAP against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get ZAP to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
Things we tried that didn't improve the score:&lt;br /&gt;
* AJAX Spider - the traditional spider appears to find all (or 99%) of the test cases so the AJAX Spider does not appear to be needed against Benchmark v1.2&lt;br /&gt;
* XSS (Persistent) - There are 3 of these plugins that run by default. There aren't any stored XSS issues in Benchmark, so you can disable these plugins for a faster scan.&lt;br /&gt;
* DOM XSS Plugin - This is an optional plugin that didn't seem to find any additional XSS issues. There aren't any DOM-specific XSS issues in Benchmark v1.2, so that's not surprising.&lt;br /&gt;
&lt;br /&gt;
== IAST Tools ==&lt;br /&gt;
&lt;br /&gt;
Interactive Application Security Testing (IAST) tools work differently than scanners.  IAST tools monitor an application as it runs to identify application vulnerabilities using context from inside the running application. Typically these tools run continuously, immediately notifying users of vulnerabilities, but you can also get a full report of an entire application. To do this, we simply run the Benchmark application with an IAST agent and use a crawler to hit all the pages.&lt;br /&gt;
&lt;br /&gt;
=== Contrast Assess ===&lt;br /&gt;
&lt;br /&gt;
To use Contrast Assess, we simply add the Java agent to the Benchmark environment and run the BenchmarkCrawler. The entire process should only take a few minutes. We provide a few scripts, which simply add the -javaagent:contrast.jar flag to the Benchmark launch configuration. We have tested on MacOS, Ubuntu, and Windows. Be sure your VM has at least 4 GB of memory.&lt;br /&gt;
&lt;br /&gt;
* Ensure your environment has Java, Maven, and git installed, then build the Benchmark project&lt;br /&gt;
   '''$ git clone https://github.com/OWASP/Benchmark.git'''&lt;br /&gt;
   '''$ cd Benchmark'''&lt;br /&gt;
   '''$ mvn compile'''&lt;br /&gt;
&lt;br /&gt;
* Download a licensed copy of the Contrast Assess Java Agent (contrast.jar) from your Contrast TeamServer account and put it in the /Benchmark/tools/Contrast directory.&lt;br /&gt;
   '''$ cp ~/Downloads/contrast.jar tools/Contrast'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 1, launch the Benchmark application and wait until it starts&lt;br /&gt;
   '''$ cd tools/Contrast'''&lt;br /&gt;
   '''$ ./runBenchmark_wContrast.sh''' (.bat on Windows)&lt;br /&gt;
   '''[INFO] Scanning for projects...'''&lt;br /&gt;
   '''[INFO]'''&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
   '''[INFO] Building OWASP Benchmark Project 1.2'''&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
   '''[INFO]'''&lt;br /&gt;
   '''...'''&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]'''&lt;br /&gt;
   '''[INFO] Press Ctrl-C to stop the container...'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, launch the crawler and wait a minute or two for the crawl to complete.&lt;br /&gt;
   '''$ ./runCrawler.sh''' (.bat on Windows)&lt;br /&gt;
&lt;br /&gt;
* A Contrast report is generated in /Benchmark/tools/Contrast/working/contrast.log. This report will be automatically copied (and renamed with the version number) to the /Benchmark/results directory.&lt;br /&gt;
   '''$ more tools/Contrast/working/contrast.log'''&lt;br /&gt;
   '''2016-04-22 12:29:29,716 [main b] INFO - Contrast Runtime Engine'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Copyright (C) 2012'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Pat. 8,458,789 B2'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Contrast Security, Inc.'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - All Rights Reserved'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - https://www.contrastsecurity.com/'''&lt;br /&gt;
   '''...'''&lt;br /&gt;
&lt;br /&gt;
* Press Ctrl-C to stop the Benchmark in Terminal 1. Note: on Windows, select &amp;quot;N&amp;quot; when asked &amp;quot;Terminate batch job (Y/N)?&amp;quot;&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x is stopped'''&lt;br /&gt;
   '''Copying Contrast report to results directory'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, generate scorecards in /Benchmark/scorecard&lt;br /&gt;
   '''$ ./createScorecards.sh''' (.bat on Windows)&lt;br /&gt;
   '''Analyzing results from Benchmark_1.2-Contrast.log'''&lt;br /&gt;
   '''Actual results file generated: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.csv'''&lt;br /&gt;
   '''Report written to: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
* Open the Benchmark Scorecard in your browser&lt;br /&gt;
   '''/Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
=== Hdiv Detection ===&lt;br /&gt;
&lt;br /&gt;
Hdiv has written their own instructions on how to run the detection component of their product on the Benchmark here: https://hdivsecurity.com/docs/features/benchmark/#how-to-run-hdiv-in-owasp-benchmark-project. You'll see that these instructions involve using the same crawler used to exercise all the test cases in the Benchmark, just like Contrast above.&lt;br /&gt;
&lt;br /&gt;
= RoadMap =&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.0 - Released April 15, 2015 - This initial release included over 20,000 test cases in 11 different vulnerability categories. As this initial version was not a runnable application, it was only suitable for assessing static analysis tools (SAST).&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 - Released May 23, 2015 - This update fixed some inaccurate test cases, and made sure that every vulnerability area included both True Positives and False Positives.&lt;br /&gt;
&lt;br /&gt;
Benchmark Scorecard Generator - Released July 10, 2015 - The ability to automatically and repeatably produce a scorecard of how well tools do against the Benchmark was released for most of the SAST tools supported by the Benchmark. Scorecards present graphical as well as statistical data on how well a tool does against the Benchmark down to the level of detail of how exactly it did against each individual test in the Benchmark. [https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html Here are the latest public scorecards].  Support for producing scorecards for additional tools is being added all the time and the current full set is documented on the '''Tool Support/Results''' and '''Quick Start''' tabs of this wiki.&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.2beta - Released Aug 15, 2015 - The first release of a fully runnable version of the Benchmark, to support assessing all types of vulnerability detection and prevention technologies, including DAST, IAST, RASP, WAFs, etc. This involved creating a user interface for every test case, and enhancing each test case to make sure it's actually exploitable, not just using something that is theoretically weak. This release was reduced to under 3,000 test cases, to make it practical to scan the entire Benchmark with a DAST tool in a reasonable amount of time on commodity hardware.&lt;br /&gt;
&lt;br /&gt;
Benchmark 1.2 - Released June 5, 2016 -  Based on feedback from a number of DAST tool developers, and other vendors as well, we made the Benchmark more realistic in a number of ways to facilitate external DAST scanning, and also made the Benchmark more resilient against attack so it could properly survive various DAST vulnerability detection and exploit verification techniques.&lt;br /&gt;
&lt;br /&gt;
Plans for Benchmark 1.3:&lt;br /&gt;
&lt;br /&gt;
While we don't have hard and fast rules of exactly what we are going to do next, enhancements in the following areas are planned for the next release:&lt;br /&gt;
&lt;br /&gt;
* Add new vulnerability categories (e.g., XXE, Hibernate Injection)&lt;br /&gt;
* Add support for popular server side Java frameworks (e.g., Spring)&lt;br /&gt;
* Add web services test cases&lt;br /&gt;
&lt;br /&gt;
We are also starting to work on the ability to score WAFs/RASPs and other defensive technology against Benchmark.&lt;br /&gt;
&lt;br /&gt;
= FAQ =&lt;br /&gt;
&lt;br /&gt;
==1. How are the scores computed for the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
Each test case has a single vulnerability of a specific type. It's either a real vulnerability (True Positive) or not (a False Positive). We document all the test cases for each version of the Benchmark in the expectedresults-VERSION#.csv file (e.g., expectedresults-1.1.csv). This file lists the test case name, the CWE type of the vulnerability, and whether it is a True Positive or not. The Benchmark supports scorecard generators for computing exactly how a tool did when analyzing a version of the Benchmark. The full list of supported tools is on the Tool Support/Results tab. For each tool there is a parser that can parse the native results format for that tool (usually XML). For each test case, this parser simply looks to see if the tool reported a vulnerability of the expected type in the test case source code file (for SAST) or at the test case URL (for DAST/IAST). If it did, and the test case was a True Positive, the tool gets credit for finding it. If it is a False Positive test, and the tool reports that type of finding, then it's recorded as a False Positive. If the tool didn't report that type of vulnerability for a test case, then it gets either a False Negative or a True Negative, as appropriate. After calculating all of the individual test case results, a scorecard is generated providing a chart and statistics for that tool across all the vulnerability categories, and pages are also created comparing different tools to each other in each vulnerability category (if multiple tools are being scored together).&lt;br /&gt;
&lt;br /&gt;
A detailed file explaining exactly how that tool did against each individual test case in that version of the Benchmark is produced as part of scorecard generation, and is available via the Actual Results link on each tool's scorecard page. (e.g., Benchmark_v1.1_Scorecard_for_FindBugs.csv).&lt;br /&gt;
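&lt;br /&gt;
To make the classification rules above concrete, here is a minimal, hypothetical sketch. This is NOT the actual BenchmarkScore code (the class and method names are invented), and presenting the category score as TPR minus FPR is an assumption about the scorecard's presentation, not something stated on this page:&lt;br /&gt;

```java
// Hypothetical sketch of the per-test-case classification described above.
// NOT the actual BenchmarkScore code; class and method names are made up.
public class ScoringSketch {

    enum Outcome { TRUE_POSITIVE, FALSE_NEGATIVE, FALSE_POSITIVE, TRUE_NEGATIVE }

    // expectedVuln: the expected results file marks this test case as a real
    // vulnerability. toolReported: the tool flagged the expected CWE type for
    // this test case's source file (SAST) or URL (DAST/IAST).
    public static Outcome classify(boolean expectedVuln, boolean toolReported) {
        if (expectedVuln) {
            return toolReported ? Outcome.TRUE_POSITIVE : Outcome.FALSE_NEGATIVE;
        }
        return toolReported ? Outcome.FALSE_POSITIVE : Outcome.TRUE_NEGATIVE;
    }

    // Per-category stats: true positive rate (TPR), false positive rate (FPR),
    // and a score of TPR - FPR (assumed presentation): 1.0 is perfect,
    // 0.0 is no better than random guessing.
    public static double score(int tp, int fn, int fp, int tn) {
        double tpr = tp / (double) (tp + fn);
        double fpr = fp / (double) (fp + tn);
        return tpr - fpr;
    }

    public static void main(String[] args) {
        System.out.println(classify(true, true));   // TRUE_POSITIVE
        System.out.println(classify(false, true));  // FALSE_POSITIVE
        System.out.printf("score = %.2f%n", score(80, 20, 30, 70)); // ~0.50
    }
}
```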
&lt;br /&gt;
==2. What if the tool I'm using doesn't have a scorecard generator for it?==&lt;br /&gt;
&lt;br /&gt;
Send us the results file! We'll be happy to create a parser for that tool so it becomes supported.&lt;br /&gt;
&lt;br /&gt;
==3. What if a tool finds other unexpected vulnerabilities?==&lt;br /&gt;
&lt;br /&gt;
We are sure there are vulnerabilities we didn't intend to be there and we are eliminating them as we find them. If you find some, let us know and we'll fix them too. We are primarily focused on unintentional vulnerabilities in the categories of vulnerabilities the Benchmark currently supports, since that is what is actually measured.&lt;br /&gt;
&lt;br /&gt;
Right now, two types of vulnerabilities that get reported are ignored by the scorecard generator:&lt;br /&gt;
# Vulnerabilities in categories not yet supported&lt;br /&gt;
# Vulnerabilities of a type that is supported, but reported in test cases not of that type&lt;br /&gt;
&lt;br /&gt;
In case #2, false positives reported in unexpected areas are also ignored; this is primarily a DAST problem. Right now those false positives are completely ignored, but we are thinking about including them in the false positive score in some fashion. We just haven't decided how yet.&lt;br /&gt;
&lt;br /&gt;
==4. How should I configure my tool to scan the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
All tools support various levels of configuration in order to improve their results. The Benchmark project, in general, is trying to '''compare out-of-the-box capabilities of tools'''. However, if a few simple tweaks to a tool can improve that tool's score, that's fine. We'd like to understand what those simple tweaks are, and document them here, so others can repeat those tests in exactly the same way. For example, just turn on the 'test cookies and headers' flag, which is off by default. Or turn on the 'advanced' scan, so it works harder and finds more vulnerabilities. It's simple things like this we are talking about, not an extensive effort to teach the tool about the app, or to perform 'expert' configuration of the tool.&lt;br /&gt;
&lt;br /&gt;
So, if you know of some simple tweaks to improve a tool's results, let us know what they are and we'll document them here so everyone can benefit and make it easier to do apples to apples comparisons. And we'll link to that guidance once we start documenting it, but we don't have any such guidance right now.&lt;br /&gt;
&lt;br /&gt;
==5. I'm having difficulty scanning the Benchmark with a DAST tool. How can I get it to work?==&lt;br /&gt;
&lt;br /&gt;
We've run into two primary issues that give DAST tools problems.&lt;br /&gt;
&lt;br /&gt;
a) The Benchmark Generates Lots of Cookies&lt;br /&gt;
&lt;br /&gt;
The Burp team pointed out a cookies bug in the 1.2beta Benchmark. Each Weak Randomness test case generates its own cookie, 1 per test case. This caused the creation of so many cookies that servers would eventually start returning 400 errors because there were simply too many cookies being submitted in a request. This was fixed in the Aug 27, 2015 update to the Benchmark by setting the path attribute for each of these cookies to be the path to that individual test case. Now, only at most one of these cookies should be submitted with each request, eliminating this 'too many cookies' problem. However, if a DAST tool doesn't honor this path attribute, it may continue to send too many cookies, making the Benchmark unscannable for that tool. Burp Pro prior to 1.6.29 had this issue, but it was fixed in the 1.6.29 release.&lt;br /&gt;
&lt;br /&gt;
b) The Benchmark is a BIG Application&lt;br /&gt;
&lt;br /&gt;
Yes, it is. So you might have to give your scanner more memory than it uses by default in order to successfully scan the entire Benchmark. Please consult your tool vendor's documentation on how to give it more memory.&lt;br /&gt;
&lt;br /&gt;
Your machine itself might not have enough memory in the first place. For example, we were not able to successfully scan the 1.2beta with OWASP ZAP with only 8 GB of RAM. So, you might need a more powerful machine, or a cloud-provided machine, to successfully scan the Benchmark with certain DAST tools. You may have similar problems with SAST tools against large versions of the Benchmark, like the 1.1 release.&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
&lt;br /&gt;
The following people, organizations, and many others, have contributed to this project and their contributions are much appreciated!&lt;br /&gt;
&lt;br /&gt;
* Lots of Vendors - Many vendors have provided us with either trial licenses we can use, or they have run their tools themselves and either sent us results files, or written and contributed scorecard generators for their tool. Many have also provided valuable feedback so we can make the Benchmark more accurate and more realistic.&lt;br /&gt;
* Juan Gama - Development of initial release and continued support&lt;br /&gt;
* Ken Prole - Assistance with automated scorecard development using CodeDx&lt;br /&gt;
* Nick Sanidas - Development of initial release&lt;br /&gt;
* Denim Group - Contribution of scan results to facilitate scorecard development&lt;br /&gt;
* Tasos Laskos - Significant feedback on the DAST version of the Benchmark&lt;br /&gt;
* Ann Campbell - From SonarSource - for fixing our SonarQube results parser&lt;br /&gt;
* Dhiraj Mishra - OWASP Member - contributed SQLi/XSS fuzz vectors as initial contribution towards adding support for WAF/RASP scoring&lt;br /&gt;
&lt;br /&gt;
[[File:CWE_Logo.jpeg|link=https://cwe.mitre.org/]] - The CWE project for providing a mapping mechanism to easily map test cases to issues found by vulnerability detection tools.&lt;br /&gt;
&lt;br /&gt;
We are looking for volunteers. Please contact [mailto:dave.wichers@owasp.org Dave Wichers] if you are interested in contributing new test cases, tool results run against the benchmark, or anything else.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=244840</id>
		<title>Benchmark</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=244840"/>
				<updated>2018-11-04T21:01:22Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Main = &lt;br /&gt;
 &amp;lt;div style=&amp;quot;width:100%;height:100px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Lab_big.jpg|link=OWASP_Project_Stages#tab.3DLab_Projects]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== OWASP Benchmark Project  ==&lt;br /&gt;
The OWASP Benchmark for Security Automation (OWASP Benchmark) is a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services (henceforth simply referred to as 'tools'). Without the ability to measure these tools, it is difficult to understand their strengths and weaknesses, and compare them to each other. Each version of the OWASP Benchmark contains thousands of test cases that are fully runnable and exploitable, each of which maps to the appropriate CWE number for that vulnerability.&lt;br /&gt;
&lt;br /&gt;
You can use the OWASP Benchmark with [[Source_Code_Analysis_Tools | Static Application Security Testing (SAST)]] tools, [[:Category:Vulnerability_Scanning_Tools | Dynamic Application Security Testing (DAST)]] tools like OWASP [[ZAP]] and Interactive Application Security Testing (IAST) tools. The current version of the Benchmark is implemented in Java.  Future versions may expand to include other languages.&lt;br /&gt;
&lt;br /&gt;
==Benchmark Project Scoring Philosophy==&lt;br /&gt;
&lt;br /&gt;
Security tools (SAST, DAST, and IAST) are amazing when they find a complex vulnerability in your code.  But with widespread misunderstanding of the specific vulnerabilities automated tools cover, end users are often left with a false sense of security.&lt;br /&gt;
&lt;br /&gt;
We are on a quest to measure just how good these tools are at discovering and properly diagnosing security problems in applications. We rely on the [http://en.wikipedia.org/wiki/Receiver_operating_characteristic long history] of military and medical evaluation of detection technology as a foundation for our research. Therefore, the test suite tests both real and fake vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
There are four possible test outcomes in the Benchmark:&lt;br /&gt;
&lt;br /&gt;
# Tool correctly identifies a real vulnerability (True Positive - TP)&lt;br /&gt;
# Tool fails to identify a real vulnerability (False Negative - FN)&lt;br /&gt;
# Tool correctly ignores a false alarm (True Negative - TN)&lt;br /&gt;
# Tool fails to ignore a false alarm (False Positive - FP)&lt;br /&gt;
&lt;br /&gt;
We can learn a lot about a tool from these four metrics. Consider a tool that simply flags every line of code as vulnerable. This tool will perfectly identify all vulnerabilities! But it will also have 100% false positives and thus adds no value. Similarly, consider a tool that reports absolutely nothing. This tool will have zero false positives, but will also identify zero real vulnerabilities and is also worthless. You can even imagine a tool that flips a coin to decide whether each test case contains a vulnerability. The result would be 50% true positives and 50% false positives. We need a way to distinguish valuable security tools from these trivial ones.&lt;br /&gt;
&lt;br /&gt;
The line connecting all these points, from (0,0) to (100,100), roughly translates to &amp;quot;random guessing.&amp;quot; The ultimate measure of a security tool is how much better it can do than this line. The diagram below shows how we evaluate security tools against the Benchmark.&lt;br /&gt;
&lt;br /&gt;
[[File:Wbe guide.png]]&lt;br /&gt;
&lt;br /&gt;
A point plotted on this chart provides a visual indication of how well a tool did, considering both the True Positives and the False Positives it reported. We also want to compute an individual score for that point in the range 0 - 100, which we call the Benchmark Accuracy Score.&lt;br /&gt;
&lt;br /&gt;
The Benchmark Accuracy Score is essentially a [https://en.wikipedia.org/wiki/Youden%27s_J_statistic Youden Index], a standard way of summarizing the accuracy of a set of tests. Youden's index is one of the oldest measures of diagnostic accuracy: a global measure of test performance, used to evaluate the overall discriminative power of a diagnostic procedure and to compare it with other tests. It is calculated by adding a test's sensitivity and specificity, each expressed as a fraction rather than a percentage, and deducting 1: (sensitivity + specificity) - 1. For a test with poor diagnostic accuracy, Youden's index equals 0; for a perfect test it equals 1.&lt;br /&gt;
&lt;br /&gt;
  So for example, if a tool has a True Positive Rate (TPR) of .98 (i.e., 98%) &lt;br /&gt;
    and False Positive Rate (FPR) of .05 (i.e., 5%)&lt;br /&gt;
  Sensitivity = TPR (.98)&lt;br /&gt;
  Specificity = 1-FPR (.95)&lt;br /&gt;
  So the Youden Index is (.98+.95) - 1 = .93&lt;br /&gt;
  &lt;br /&gt;
  And this would equate to a Benchmark score of 93 (since we normalize this to the range 0 - 100)&lt;br /&gt;
&lt;br /&gt;
On the graph, the Benchmark Score is the length of the line from the point down to the diagonal “guessing” line. Note that a Benchmark score can actually be negative if the point is below the line, which occurs when the False Positive Rate is higher than the True Positive Rate.&lt;br /&gt;
&lt;br /&gt;
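The worked example above can be checked with a few lines of code. Here is a minimal sketch (illustrative only, not the Benchmark's actual scoring code) that computes the Benchmark Accuracy Score from raw counts:

```java
public class BenchmarkScoreExample {

    // Score = (sensitivity + specificity) - 1, normalized to the range 0..100.
    // Equivalent to (TPR - FPR) * 100: the distance from the diagonal
    // "guessing" line on the chart.
    static double score(int tp, int fn, int tn, int fp) {
        double tpr = (double) tp / (tp + fn);   // sensitivity
        double fpr = (double) fp / (fp + tn);
        double specificity = 1.0 - fpr;
        return ((tpr + specificity) - 1.0) * 100.0;
    }

    public static void main(String[] args) {
        // The worked example from the text: TPR = .98, FPR = .05 -> score 93.
        double s = score(98, 2, 95, 5);
        System.out.printf("Benchmark score: %.0f%n", s);  // Benchmark score: 93
    }
}
```

Note that a tool reporting everything (TPR and FPR both 1.0) and a coin-flipping tool (both 0.5) each score 0, matching the discussion of trivial tools above.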
==Benchmark Validity==&lt;br /&gt;
&lt;br /&gt;
The Benchmark tests are not exactly like real applications. The tests are derived from coding patterns observed in real applications, but the majority of them are considerably '''simpler''' than real applications. That is, most real world applications will be considerably harder to successfully analyze than the OWASP Benchmark Test Suite. Although the tests are based on real code, it is possible that some tests may have coding patterns that don't occur frequently in real code.&lt;br /&gt;
&lt;br /&gt;
Remember, we are trying to test the capabilities of the tools and make them explicit, so that users can make informed decisions about what tools to use, how to use them, and what results to expect.  This is exactly aligned with the OWASP mission to make application security visible.&lt;br /&gt;
&lt;br /&gt;
==Generating Benchmark Scores==&lt;br /&gt;
&lt;br /&gt;
Anyone can use this Benchmark to evaluate vulnerability detection tools. The basic steps are:&lt;br /&gt;
# Download the Benchmark from github&lt;br /&gt;
# Run your tools against the Benchmark&lt;br /&gt;
# Run the BenchmarkScore tool on the reports from your tools&lt;br /&gt;
&lt;br /&gt;
That's it!&lt;br /&gt;
&lt;br /&gt;
Full details on how to do this are at the bottom of the page on the Quick_Start tab.&lt;br /&gt;
&lt;br /&gt;
We encourage vendors, open source tool developers, and end users alike to verify their application security tools against the Benchmark. In order to ensure that the results are fair and useful, we ask that you follow a few simple rules when publishing results; we won't recognize any results that aren't easily reproducible. Please include:&lt;br /&gt;
&lt;br /&gt;
# A description of the default “out-of-the-box” installation, version numbers, etc…&lt;br /&gt;
# Any and all configuration, tailoring, onboarding, etc… performed to make the tool run&lt;br /&gt;
# Any and all changes to default security rules, tests, or checks used to achieve the results&lt;br /&gt;
# Easily reproducible steps to run the tool&lt;br /&gt;
&lt;br /&gt;
== Reporting Format==&lt;br /&gt;
&lt;br /&gt;
The Benchmark includes tools to interpret raw tool output, compare it to the expected results, and generate summary charts and graphs. We use the following table format in order to capture all the information generated during the evaluation.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Security Category&lt;br /&gt;
! TP&lt;br /&gt;
! FN&lt;br /&gt;
! TN&lt;br /&gt;
! FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total&lt;br /&gt;
! TPR&lt;br /&gt;
! FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Score&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| General security category for test cases.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positives''': Tests with real vulnerabilities that were correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Negatives''': Tests with real vulnerabilities that were not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Negatives''': Tests with fake vulnerabilities that were correctly not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positives''': Tests with fake vulnerabilities that were incorrectly reported as vulnerable by the tool.&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Total test cases for this category.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positive Rate''': TP / ( TP + FN ) - Also referred to as Recall or Sensitivity, as defined at [https://en.wikipedia.org/wiki/Precision_and_recall Wikipedia].&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive Rate''': FP / ( FP + TN ).&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Normalized distance from the “guessing” line: TPR - FPR.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Command Injection&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Etc...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | &lt;br /&gt;
! Total TP&lt;br /&gt;
! Total FN&lt;br /&gt;
! Total TN&lt;br /&gt;
! Total FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total TC&lt;br /&gt;
! Average TPR&lt;br /&gt;
! Average FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Average Score&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Code Repo and Build/Run Instructions ==&lt;br /&gt;
&lt;br /&gt;
See the '''Getting Started''' and '''Getting, Building, and Running the Benchmark''' sections on the Quick Start tab.&lt;br /&gt;
&lt;br /&gt;
==Licensing==&lt;br /&gt;
&lt;br /&gt;
The OWASP Benchmark is free to use under the [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0].&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
[https://lists.owasp.org/mailman/listinfo/owasp-benchmark-project OWASP Benchmark Mailing List]&lt;br /&gt;
&lt;br /&gt;
== Project Leaders ==&lt;br /&gt;
&lt;br /&gt;
[https://www.owasp.org/index.php/User:Wichers Dave Wichers] [mailto:dave.wichers@owasp.org @]&lt;br /&gt;
&lt;br /&gt;
== Project References ==&lt;br /&gt;
* [https://www.mir-swamp.org/#packages/public Software Assurance Marketplace (SWAMP) - set of curated packages to test tools against]&lt;br /&gt;
* [http://samate.nist.gov/Other_Test_Collections.html SAMATE List of Test Collections]&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
* [http://samate.nist.gov/SARD/testsuite.php NSA's Juliet for Java]&lt;br /&gt;
* [http://sectoolmarket.com/ The Web Application Vulnerability Scanner Evaluation Project (WAVSEP)]&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;padding-left:25px;width:200px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
All test code and project files can be downloaded from [https://github.com/OWASP/benchmark OWASP GitHub].&lt;br /&gt;
&lt;br /&gt;
== Project Intro Video ==&lt;br /&gt;
&lt;br /&gt;
[[File:BenchmarkPodcastTitlePage.jpg|200px|link=https://www.youtube.com/watch?v=HQP8dwc3jJA&amp;amp;index=5&amp;amp;list=PLGB2s-U5FSWOmEStMt3JqlMFJvRYqeVW5]]&lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
* LOOKING FOR VOLUNTEERS!! - We are looking for individuals and organizations to join and make this a much more community-driven project, including additional co-leaders to help take this project to the next level. Contributors could work on things like new test cases, additional tool scorecard generators, adding support for languages beyond Java, and a host of other improvements. Please contact [mailto:dave.wichers@owasp.org me] if you are interested in contributing at any level.&lt;br /&gt;
* June 5, 2016 - Benchmark Version 1.2 Released&lt;br /&gt;
* Sep 24, 2015 - Benchmark introduced to broader OWASP community at [https://appsecusa2015.sched.org/event/3r9k/using-the-owasp-benchmark-to-assess-automated-vulnerability-analysis-tools AppSec USA]&lt;br /&gt;
* Aug 27, 2015 - U.S. Dept. of Homeland Security (DHS) is financially supporting the Benchmark project.&lt;br /&gt;
* Aug 15, 2015 - Benchmark Version 1.2beta Released with full DAST Support. Checkmarx and ZAP scorecard generators also released.&lt;br /&gt;
* July 10, 2015 - Benchmark Scorecard generator and open source scorecards released&lt;br /&gt;
* May 23, 2015 - Benchmark Version 1.1 Released&lt;br /&gt;
* April 15, 2015 - Benchmark Version 1.0 Released&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; rowspan=&amp;quot;2&amp;quot;| [[File:Owasp-incubator-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Incubator_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Test Cases =&lt;br /&gt;
&lt;br /&gt;
Version 1.0 of the Benchmark was published on April 15, 2015 and had 20,983 test cases. On May 23, 2015, version 1.1 of the Benchmark was released. The 1.1 release improves on the previous version by making sure that there are both true positives and false positives in every vulnerability area. Version 1.2 was released on June 5, 2016 (and the 1.2beta August 15, 2015).&lt;br /&gt;
&lt;br /&gt;
Version 1.2 and later of the Benchmark are fully executable web applications, which means they are scannable by any kind of vulnerability detection tool. The 1.2 release is limited to slightly fewer than 3,000 test cases, to make it easier for DAST tools to scan (so scans don't take as long, run out of memory, or blow up the size of the tool's database). The 1.2 release covers the same vulnerability areas that 1.1 covers; we added a few Spring database SQL Injection tests, but that's it. The bulk of the work was turning each test case into something that actually runs correctly AND is fully exploitable, and then generating a working UI on top of it in order to turn the test cases into a real running application.&lt;br /&gt;
&lt;br /&gt;
You can still download Benchmark version 1.1 by checking out the release marked with the Git tag '1.1'.&lt;br /&gt;
&lt;br /&gt;
The test case areas and quantities for the Benchmark releases are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Vulnerability Area&lt;br /&gt;
! # of Tests in v1.1&lt;br /&gt;
! # of Tests in v1.2&lt;br /&gt;
! CWE Number&lt;br /&gt;
|-&lt;br /&gt;
| [[Command Injection]]&lt;br /&gt;
| 2708&lt;br /&gt;
| 251&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/78.html 78]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Cryptography&lt;br /&gt;
| 1440&lt;br /&gt;
| 246&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/327.html 327]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Hashing&lt;br /&gt;
| 1421&lt;br /&gt;
| 236&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/328.html 328]&lt;br /&gt;
|-&lt;br /&gt;
| [[LDAP injection | LDAP Injection]]&lt;br /&gt;
| 736&lt;br /&gt;
| 59&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/90.html 90]&lt;br /&gt;
|-&lt;br /&gt;
| [[Path Traversal]]&lt;br /&gt;
| 2630&lt;br /&gt;
| 268&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/22.html 22]&lt;br /&gt;
|-&lt;br /&gt;
| Secure Cookie Flag&lt;br /&gt;
| 416&lt;br /&gt;
| 67&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/614.html 614]&lt;br /&gt;
|-&lt;br /&gt;
| [[SQL Injection]]&lt;br /&gt;
| 3529&lt;br /&gt;
| 504&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/89.html 89]&lt;br /&gt;
|-&lt;br /&gt;
| [[Trust Boundary Violation]]&lt;br /&gt;
| 725&lt;br /&gt;
| 126&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/501.html 501]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Randomness&lt;br /&gt;
| 3640&lt;br /&gt;
| 493&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/330.html 330]&lt;br /&gt;
|-&lt;br /&gt;
| [[XPATH Injection]]&lt;br /&gt;
| 347&lt;br /&gt;
| 35&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/643.html 643]&lt;br /&gt;
|-&lt;br /&gt;
| [[XSS]] (Cross-Site Scripting)&lt;br /&gt;
| 3449&lt;br /&gt;
| 455&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/79.html 79]&lt;br /&gt;
|-&lt;br /&gt;
| Total Test Cases&lt;br /&gt;
| 21,041&lt;br /&gt;
| 2,740&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Each Benchmark version comes with a spreadsheet that lists every test case, the vulnerability category, the CWE number, and the expected result (true finding/false positive). Look for the file: expectedresults-VERSION#.csv in the project root directory.&lt;br /&gt;
&lt;br /&gt;
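The expected-results file can be consumed like any other CSV. Below is a minimal sketch; the column order shown (test name, category, real-vulnerability flag, CWE number) is an assumption for illustration, so verify it against the header row of the actual expectedresults-VERSION#.csv for your release.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ExpectedResultsExample {

    // Parse one expected-results CSV line into named fields.
    // NOTE: this column order is an assumption for illustration;
    // check it against the real file's header row.
    static Map<String, String> parseLine(String line) {
        String[] cols = line.split(",");
        Map<String, String> row = new LinkedHashMap<>();
        row.put("test", cols[0].trim());
        row.put("category", cols[1].trim());
        row.put("real", cols[2].trim()); // "true" = real vulnerability, "false" = false positive
        row.put("cwe", cols[3].trim());
        return row;
    }

    public static void main(String[] args) {
        // Hypothetical line, with values taken from the BenchmarkTest00001
        // metadata example on this page (ldapi, true, CWE-90).
        Map<String, String> row = parseLine("BenchmarkTest00001, ldapi, true, 90");
        System.out.println(row.get("category") + " CWE-" + row.get("cwe"));
        // prints: ldapi CWE-90
    }
}
```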
Every test case is:&lt;br /&gt;
* a servlet or JSP (currently they are all servlets, but we plan to add JSPs)&lt;br /&gt;
* either a true vulnerability or a false positive for a single issue&lt;br /&gt;
&lt;br /&gt;
The Benchmark is intended to help determine how well analysis tools correctly analyze a broad array of application and framework behavior, including:&lt;br /&gt;
&lt;br /&gt;
* HTTP request and response problems&lt;br /&gt;
* Simple and complex data flow&lt;br /&gt;
* Simple and complex control flow&lt;br /&gt;
* Popular frameworks&lt;br /&gt;
* Inversion of control&lt;br /&gt;
* Reflection&lt;br /&gt;
* Class loading&lt;br /&gt;
* Annotations&lt;br /&gt;
* Popular UI technologies (particularly JavaScript frameworks)&lt;br /&gt;
&lt;br /&gt;
Not all of these are yet tested by the Benchmark, but future enhancements are intended to provide more coverage of these areas.&lt;br /&gt;
&lt;br /&gt;
Additional future enhancements could cover:&lt;br /&gt;
* All vulnerability types in the [[Top10 | OWASP Top 10]]&lt;br /&gt;
* Does the tool find flaws in libraries?&lt;br /&gt;
* Does the tool find flaws spanning custom code and libraries?&lt;br /&gt;
* Does the tool handle web services? REST, XML, GWT, etc…&lt;br /&gt;
* Does the tool work with different app servers? Different Java platforms?&lt;br /&gt;
&lt;br /&gt;
== Example Test Case ==&lt;br /&gt;
&lt;br /&gt;
Each test case is a simple Java EE servlet. BenchmarkTest00001 in version 1.0 of the Benchmark was an LDAP Injection test with the following metadata in the accompanying BenchmarkTest00001.xml file:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;test-metadata&amp;gt;&lt;br /&gt;
    &amp;lt;category&amp;gt;ldapi&amp;lt;/category&amp;gt;&lt;br /&gt;
    &amp;lt;test-number&amp;gt;00001&amp;lt;/test-number&amp;gt;&lt;br /&gt;
    &amp;lt;vulnerability&amp;gt;true&amp;lt;/vulnerability&amp;gt;&lt;br /&gt;
    &amp;lt;cwe&amp;gt;90&amp;lt;/cwe&amp;gt;&lt;br /&gt;
  &amp;lt;/test-metadata&amp;gt;&lt;br /&gt;
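The metadata above is plain XML, so it can also be read programmatically. Here is a minimal sketch using the JDK's built-in DOM parser; it is illustrative only and is not the Benchmark's own tooling:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class TestMetadataExample {

    // Pull one element's text out of a test-metadata snippet.
    static String field(String xml, String tag) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getElementsByTagName(tag).item(0).getTextContent().trim();
    }

    public static void main(String[] args) throws Exception {
        // The BenchmarkTest00001 metadata shown above, inlined as a string.
        String xml = "<test-metadata>"
                + "<category>ldapi</category>"
                + "<test-number>00001</test-number>"
                + "<vulnerability>true</vulnerability>"
                + "<cwe>90</cwe>"
                + "</test-metadata>";
        System.out.println(field(xml, "category") + " / CWE-" + field(xml, "cwe"));
        // prints: ldapi / CWE-90
    }
}
```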
&lt;br /&gt;
BenchmarkTest00001.java in the OWASP Benchmark 1.0 simply reads in all the cookie values, looks for a cookie named &amp;quot;foo&amp;quot;, and uses the value of this cookie when performing an LDAP query. Here's the code for BenchmarkTest00001.java:&lt;br /&gt;
&lt;br /&gt;
  package org.owasp.benchmark.testcode;&lt;br /&gt;
  &lt;br /&gt;
  import java.io.IOException;&lt;br /&gt;
  &lt;br /&gt;
  import javax.servlet.ServletException;&lt;br /&gt;
  import javax.servlet.annotation.WebServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
  import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
  &lt;br /&gt;
  @WebServlet(&amp;quot;/BenchmarkTest00001&amp;quot;)&lt;br /&gt;
  public class BenchmarkTest00001 extends HttpServlet {&lt;br /&gt;
  	&lt;br /&gt;
  	private static final long serialVersionUID = 1L;&lt;br /&gt;
  	&lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		doPost(request, response);&lt;br /&gt;
  	}&lt;br /&gt;
  &lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		// some code&lt;br /&gt;
  &lt;br /&gt;
  		javax.servlet.http.Cookie[] cookies = request.getCookies();&lt;br /&gt;
  		&lt;br /&gt;
  		String param = null;&lt;br /&gt;
  		boolean foundit = false;&lt;br /&gt;
  		if (cookies != null) {&lt;br /&gt;
  			for (javax.servlet.http.Cookie cookie : cookies) {&lt;br /&gt;
  				if (cookie.getName().equals(&amp;quot;foo&amp;quot;)) {&lt;br /&gt;
  					param = cookie.getValue();&lt;br /&gt;
  					foundit = true;&lt;br /&gt;
  				}&lt;br /&gt;
  			}&lt;br /&gt;
  			if (!foundit) {&lt;br /&gt;
  				// no cookie found in collection&lt;br /&gt;
  				param = &amp;quot;&amp;quot;;&lt;br /&gt;
  			}&lt;br /&gt;
  		} else {&lt;br /&gt;
  			// no cookies&lt;br /&gt;
  			param = &amp;quot;&amp;quot;;&lt;br /&gt;
  		}&lt;br /&gt;
  		&lt;br /&gt;
  		try {&lt;br /&gt;
  			javax.naming.directory.DirContext dc = org.owasp.benchmark.helpers.Utils.getDirContext();&lt;br /&gt;
  			Object[] filterArgs = {&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;};&lt;br /&gt;
  			dc.search(&amp;quot;name&amp;quot;, param, filterArgs, new javax.naming.directory.SearchControls());&lt;br /&gt;
  		} catch (javax.naming.NamingException e) {&lt;br /&gt;
  			throw new ServletException(e);&lt;br /&gt;
  		}&lt;br /&gt;
  	}&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
= Test Case Details =&lt;br /&gt;
&lt;br /&gt;
The following describes situations in the Benchmark where the validity/accuracy of the test cases has come up for debate.&lt;br /&gt;
&lt;br /&gt;
== Cookies as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 and early versions of the 1.2beta included test cases that used cookies as a source of data that flowed into XSS vulnerabilities. The Benchmark treated these tests as False Positives because the Benchmark team figured that you'd have to use an XSS vulnerability in the first place to set the cookie value, and so it wasn't fair/reasonable to consider an XSS vulnerability whose data source was a cookie value as actually exploitable. However, we got feedback from some tool vendors, like Fortify, Burp, and Arachni, that they disagreed with this analysis and felt that, in fact, cookies were a valid source of attack against XSS vulnerabilities. Given that there are good arguments on both sides of this safe vs. unsafe question, we decided on Aug 25, 2015 to simply remove those test cases from the Benchmark. If, in the future, we decide who is right, we may add such test cases back in.&lt;br /&gt;
&lt;br /&gt;
== Headers as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Similarly, the Benchmark team believes that the names of headers aren't a valid source of XSS attack, for the same reason we thought cookie values aren't: it would require an XSS vulnerability to have been exploited in the first place to set them. In fact, we feel this argument is much stronger for header names than for cookie values. Right now, the Benchmark doesn't include any header names as sources for XSS test cases, but we plan to add them and mark them as false positives in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
The Benchmark does include header values as sources for some XSS test cases, but only the 'referer' header is treated as a valid XSS source (i.e., a true positive), because other headers are not viable XSS sources. Other headers are, of course, valid sources for other attack vectors, like SQL Injection or Command Injection.&lt;br /&gt;
&lt;br /&gt;
== False Positive Scenario: Static Values Passed to Unsafe (Weak) Sinks ==&lt;br /&gt;
&lt;br /&gt;
The Benchmark has MANY test cases where unsafe data flows in from the browser, but that data is replaced with static content as it goes through the propagators in that specific test case. This static (safe) data then flows to the sink, which may be a weak/unsafe sink, for example an unsafely constructed SQL statement. The Benchmark treats those test cases as false positives because there is absolutely no way for that weakness to be exploited. The NSA Juliet SAST benchmark treats such test cases exactly the same way, as false positives. We do recognize that there are weaknesses in those test cases, even though they aren't exploitable.&lt;br /&gt;
&lt;br /&gt;
Some SAST tool vendors feel it is appropriate to point out those weaknesses, and that's fine. However, if the tool points those weaknesses out and does not distinguish them from truly exploitable vulnerabilities, then the Benchmark treats those findings as false positives. If the tool allows a user to differentiate these non-exploitable weaknesses from exploitable vulnerabilities, then the Benchmark scorecard generator can use that information to filter out these extra findings (along with any other similarly marked findings) so they don't count against that tool when calculating its Benchmark score. In the real world, it's important for analysts to be able to filter out such findings if they only have time to deal with the most critical, actually exploitable, vulnerabilities. A tool that doesn't make it easy for an analyst to distinguish the two situations is doing the analyst a disservice.&lt;br /&gt;
&lt;br /&gt;
This issue doesn't affect DAST tools, since they only report what appears to them to be exploitable.&lt;br /&gt;
&lt;br /&gt;
If you are a SAST tool vendor or user, you believe the Benchmark scorecard generator is counting such findings against a tool, and there is a way to tell them apart, please let the project team know so the scorecard generator can be adjusted not to count those findings against the tool. The Benchmark project's goal is to generate the fairest and most accurate results it can. If such an adjustment is made to how a scorecard is generated for a tool, we plan to document that this was done, and explain how others could perform the same filtering within that tool in order to get the same focused set of results.&lt;br /&gt;
&lt;br /&gt;
== Dead Code ==&lt;br /&gt;
&lt;br /&gt;
Some SAST tools point out weaknesses in dead code, because dead code might eventually end up being used and can serve as a bad coding example (think cut/paste of code). We think this is fine/appropriate. However, there is no dead code in the OWASP Benchmark (at least not intentionally), so dead code should not be causing any tool to report unnecessary false positives.&lt;br /&gt;
&lt;br /&gt;
= Tool Support/Results =&lt;br /&gt;
&lt;br /&gt;
The results for five free tools (PMD, FindBugs, FindBugs with the FindSecBugs plugin, SonarQube, and OWASP ZAP) against version 1.2 of the Benchmark are available here: https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html. We've included multiple versions of the FindSecBugs and ZAP results so you can see the improvements they are making in finding vulnerabilities in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
We have Benchmark results for all the following tools, but haven't publicly released the results for any commercial tools. However, we have included a 'Commercial Average' page, which includes a summary of results for six commercial SAST tools along with anonymized versions of each SAST tool's scorecard.&lt;br /&gt;
&lt;br /&gt;
The Benchmark can generate results for the following tools: &lt;br /&gt;
&lt;br /&gt;
'''Free Static Application Security Testing (SAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://pmd.github.io/ PMD] (which really has no security rules) - .xml results file&lt;br /&gt;
* [http://findbugs.sourceforge.net/ Findbugs] - .xml results file (Note: The 'new' Findbugs is now at: https://spotbugs.github.io/)&lt;br /&gt;
* FindBugs with the [http://find-sec-bugs.github.io/ FindSecurityBugs plugin] - .xml results file&lt;br /&gt;
* [https://www.sonarqube.org/downloads/ SonarQube] - .xml results file&lt;br /&gt;
&lt;br /&gt;
Note: We looked into supporting [http://checkstyle.sourceforge.net/ Checkstyle] but it has no security rules, just like PMD. The [http://fb-contrib.sourceforge.net/ fb-contrib] FindBugs plugin doesn't have any security rules either. We did test [http://errorprone.info/ Error Prone] and found that it does report some use of [http://errorprone.info/bugpattern/InsecureCipherMode insecure ciphers (CWE-327)], but that's it.&lt;br /&gt;
&lt;br /&gt;
'''Commercial SAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.castsoftware.com/products/application-intelligence-platform CAST Application Intelligence Platform (AIP)] - .xml results file&lt;br /&gt;
* [https://www.checkmarx.com/products/static-application-security-testing/ Checkmarx CxSAST] - .xml results file&lt;br /&gt;
* [https://www.synopsys.com/content/dam/synopsys/sig-assets/datasheets/SAST-Coverity-datasheet.pdf Synopsys Static Analysis (Formerly Coverity Code Advisor) (On-Demand and stand-alone versions)] - .json results file (You can scan Benchmark w/Coverity for free. See: https://scan.coverity.com/)&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/static-code-analysis-sast/overview Micro Focus (Formerly HPE) Fortify (On-Demand and stand-alone versions)] - .fpr results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source IBM AppScan Source (Standalone and Cloud)] - .ozasmt or .xml results file&lt;br /&gt;
* [https://juliasoft.com/solutions/julia-for-security/ Julia Analyzer] - .xml results file&lt;br /&gt;
* [https://www.parasoft.com/products/jtest/ Parasoft Jtest] - .xml results file&lt;br /&gt;
* [https://www.shiftleft.io/product/ ShiftLeft SAST] - .sl results file (Benchmark specific format. Ask vendor how to generate this)&lt;br /&gt;
* [https://www.sourcemeter.com/features/ SourceMeter] - .txt results file of ALL results from VulnerabilityHunter&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] - .xml results file&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode SAST] - .xml results file&lt;br /&gt;
* [https://www.rigs-it.com/xanitizer/ XANITIZER] - xml results file ([https://www.rigs-it.com/wp-content/uploads/2018/03/howtosetupxanitizerforowaspbenchmarkproject.pdf Their white paper on how to setup Xanitizer to scan Benchmark.]) (Free trial available)&lt;br /&gt;
&lt;br /&gt;
We are looking for results for other commercial static analysis tools like: [http://www.grammatech.com/codesonar Grammatech CodeSonar], [http://www.klocwork.com/products-services/klocwork Klocwork], etc. If you have a license for any static analysis tool not already listed above and can run it against the Benchmark, sending us the results file would be very helpful. &lt;br /&gt;
&lt;br /&gt;
The free SAST tools come bundled with the Benchmark so you can run them yourselves. If you have a license for any commercial SAST tool, you can also run them against the Benchmark. Just put your results files in the /results folder of the project, and then run the BenchmarkScore script for your platform (.sh / .bat) and it will generate a scorecard in the /scorecard directory for all the tools you have results for that are currently supported.&lt;br /&gt;
&lt;br /&gt;
'''Free Dynamic Application Security Testing (DAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
Note: While we support scorecard generators for these Free and Commercial DAST tools, we haven't been able to get a full/clean run against the Benchmark from most of these tools. As such, some of these scorecard generators might still need some work to properly reflect their results. If you notice any problems, let us know.&lt;br /&gt;
&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - .xml results file&lt;br /&gt;
** To generate .xml, run: ./bin/arachni_reporter &amp;quot;Your_AFR_Results_Filename.afr&amp;quot; --reporter=xml:outfile=Benchmark1.2-Arachni.xml&lt;br /&gt;
* [https://www.owasp.org/index.php/ZAP OWASP ZAP] - .xml results file. To generate a complete ZAP XML results file so you can generate a valid scorecard, make sure you:&lt;br /&gt;
** Tools &amp;gt; Options &amp;gt; Alerts - And set the Max alert instances to like 500.&lt;br /&gt;
** Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
'''Commercial DAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.acunetix.com/vulnerability-scanner/ Acunetix Web Vulnerability Scanner (WVS)] - .xml results file (Generated using [https://www.acunetix.com/resources/wvs7manual.pdf command line interface (see Chapter 10.)] /ExportXML switch)&lt;br /&gt;
* [https://portswigger.net/burp/ Burp Pro] - .xml results file&lt;br /&gt;
**You must use Burp Pro v1.6.30+ to scan the Benchmark due to a limitation fixed in v1.6.30.&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/webinspect-dynamic-analysis-dast/overview Micro Focus (Formerly HPE) WebInspect] - .xml results file&lt;br /&gt;
* [http://www-03.ibm.com/software/products/en/appscan IBM AppScan] (IBM has since sold this product line to HCL) - .xml results file&lt;br /&gt;
* [https://www.netsparker.com/web-vulnerability-scanner/ Netsparker] - .xml results file&lt;br /&gt;
* [https://www.rapid7.com/products/appspider/ Rapid7 AppSpider] - .xml results file&lt;br /&gt;
&lt;br /&gt;
* Qualys - We ran Qualys against v1.2 of the Benchmark and it found none of the vulnerabilities we test for as far as we could tell. So we haven't implemented a scorecard generator for it. If you get results where you think it does find some real issues, send us the results file and, if confirmed, we'll produce a scorecard generator for it.&lt;br /&gt;
&lt;br /&gt;
If you have access to other DAST Tools, PLEASE RUN THEM FOR US against the Benchmark, and send us the results file so we can build a scorecard generator for that tool.&lt;br /&gt;
&lt;br /&gt;
'''Commercial Interactive Application Security Testing (IAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] - .zip results file (You can scan Benchmark w/Contrast for free. See: https://www.contrastsecurity.com/contrast-community-edition)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection (IAST)] - .hlg results file&lt;br /&gt;
&lt;br /&gt;
'''Commercial Hybrid Analysis Application Security Testing Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.iappsecure.com/products.html Fusion Lite Insight] - .xml results file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It may be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.'''&lt;br /&gt;
&lt;br /&gt;
The project has automated test harnesses for these vulnerability detection tools, so we can repeatably run the tools against each version of the Benchmark and automatically produce scorecards in our desired format.&lt;br /&gt;
&lt;br /&gt;
We want to test as many tools as possible against the Benchmark. If you are:&lt;br /&gt;
&lt;br /&gt;
* A tool vendor and want to participate in the project&lt;br /&gt;
* Someone who wants to help score a free tool against the project&lt;br /&gt;
* Someone who has a license to a commercial tool and the terms of the license allow you to publish tool results, and you want to participate&lt;br /&gt;
&lt;br /&gt;
please let [mailto:dave.wichers@owasp.org me] know!&lt;br /&gt;
&lt;br /&gt;
= Quick Start =&lt;br /&gt;
&lt;br /&gt;
==What is in the Benchmark?==&lt;br /&gt;
The Benchmark is a Java Maven project. Its primary component is thousands of test cases (e.g., BenchmarkTest00001.java), each of which is a single Java servlet that contains a single vulnerability (either a true positive or a false positive). The vulnerabilities currently span about a dozen different types and are expected to expand significantly in the future.&lt;br /&gt;
&lt;br /&gt;
An expectedresults.csv file is published with each version of the Benchmark (e.g., expectedresults-1.1.csv), and it lists the expected results for each test case. Here’s what the first two rows of this file look like for version 1.1 of the Benchmark:&lt;br /&gt;
&lt;br /&gt;
 # test name		category	real vulnerability	CWE	Benchmark version: 1.1	2015-05-22&lt;br /&gt;
 BenchmarkTest00001	crypto		TRUE			327&lt;br /&gt;
&lt;br /&gt;
This simply means that the first test case is a crypto test case (use of weak cryptographic algorithms), this is a real vulnerability (as opposed to a false positive), and this issue maps to CWE 327. It also indicates this expected results file is for Benchmark version 1.1 (produced May 22, 2015). There is a row in this file for each of the tens of thousands of test cases in the Benchmark.  Each time a new version of the Benchmark is published, a new corresponding results file is generated and each test case can be completely different from one version to the next.&lt;br /&gt;
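As a sketch of how the expected results file can be consumed, assuming it is plain comma-separated (name,category,real-vulnerability,CWE) with '#' header lines as described above (this is an illustration, not part of the Benchmark tooling):&lt;br /&gt;

```shell
# Summarize an expectedresults CSV: count test cases and how many are
# real vulnerabilities (TRUE) vs. false positives (FALSE).
# Assumes comma-separated fields: name,category,real,CWE; '#' lines are headers.
summarize_expected() {
  awk -F',' '!/^#/ && NF>=4 {
               n++
               if ($3 == "TRUE") tp++; else fp++
             }
             END { printf "tests=%d true=%d false=%d\n", n+0, tp+0, fp+0 }' "$1"
}
```

For example, `summarize_expected expectedresults-1.1.csv` would print one line with the total test count and the TRUE/FALSE split.&lt;br /&gt;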
&lt;br /&gt;
The Benchmark also comes with a bunch of different utilities, commands, and prepackaged open source security analysis tools, all of which can be executed through Maven goals, including:&lt;br /&gt;
&lt;br /&gt;
* Open source vulnerability detection tools to be run against the Benchmark&lt;br /&gt;
* A scorecard generator, which computes a scorecard for each of the tools you have results files for.&lt;br /&gt;
&lt;br /&gt;
==What Can You Do With the Benchmark?==&lt;br /&gt;
* Compile all the software in the Benchmark project (e.g., mvn compile)&lt;br /&gt;
* Run a static vulnerability analysis tool (SAST) against the Benchmark test case code&lt;br /&gt;
&lt;br /&gt;
* Scan a running version of the Benchmark with a dynamic application security testing tool (DAST)&lt;br /&gt;
** Instructions on how to run it are provided below&lt;br /&gt;
&lt;br /&gt;
* Generate scorecards for each of the tools you have results files for&lt;br /&gt;
** See the Tool Support/Results page for the list of tools the Benchmark supports generating scorecards for&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
Before downloading or using the Benchmark make sure you have the following installed and configured properly:&lt;br /&gt;
&lt;br /&gt;
 GIT: http://git-scm.com/ or https://github.com/&lt;br /&gt;
 Maven: https://maven.apache.org/  (Version: 3.2.3 or newer works. We heard that 3.0.5 throws an error.)&lt;br /&gt;
 Java: http://www.oracle.com/technetwork/java/javase/downloads/index.html (Java 7 or 8) (64-bit) - It takes a LOT of memory to compile the Benchmark.&lt;br /&gt;
&lt;br /&gt;
==Getting, Building, and Running the Benchmark==&lt;br /&gt;
&lt;br /&gt;
To download and build everything:&lt;br /&gt;
&lt;br /&gt;
 $ git clone https://github.com/OWASP/benchmark &lt;br /&gt;
 $ cd benchmark&lt;br /&gt;
 $ mvn compile   (This compiles it)&lt;br /&gt;
 $ runBenchmark.sh/.bat - This compiles and runs it.&lt;br /&gt;
&lt;br /&gt;
Then navigate to: https://localhost:8443/benchmark/ to go to its home page. It uses a self-signed SSL certificate, so you'll get a security warning when you hit the home page.&lt;br /&gt;
&lt;br /&gt;
Note: We have set the Benchmark app to use up to 6 Gig of RAM, which it may need when it is fully scanned by a DAST scanner. The DAST tool probably also requires 3+ Gig of RAM. As such, we recommend having a 16 Gig machine if you are going to try to run a full DAST scan. And at least 4 or ideally 8 Gig if you are going to play around with the running Benchmark app.&lt;br /&gt;
&lt;br /&gt;
== Using a VM instead ==&lt;br /&gt;
We provide several preconstructed VMs, or instructions for building one, that you can use instead:&lt;br /&gt;
&lt;br /&gt;
* Docker: A Dockerfile is checked into the project [https://github.com/OWASP/Benchmark/blob/master/VMs/Dockerfile here]. This Docker file should automatically produce a Docker VM with the latest Benchmark project files. After you have Docker installed, cd to /VMs then run: &lt;br /&gt;
 buildDockerImage.sh --&amp;gt; This builds the Docker Benchmark VM (This will take a WHILE)&lt;br /&gt;
 docker images  --&amp;gt; You should see the new benchmark:latest image in the list provided&lt;br /&gt;
 # The Benchmark Docker Image only has to be created once. &lt;br /&gt;
&lt;br /&gt;
 To run the Benchmark in your Docker VM, just run:&lt;br /&gt;
   runDockerImage.sh  --&amp;gt; This pulls in any updates to Benchmark since the Image was built, builds everything, and starts a remotely accessible Benchmark web app.&lt;br /&gt;
 If successful, you should see this at the end:&lt;br /&gt;
   [INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]&lt;br /&gt;
   [INFO] Press Ctrl-C to stop the container...&lt;br /&gt;
 Then simply navigate to: https://localhost:8443/benchmark from the machine you are running Docker&lt;br /&gt;
 &lt;br /&gt;
 Or if you want to access from a different machine:&lt;br /&gt;
  run: docker-machine ls (in a different window) --&amp;gt; To get IP Docker VM is exporting (e.g., tcp://192.168.99.100:2376)&lt;br /&gt;
  Navigate to: https://192.168.99.100:8443/benchmark (using the above IP as an example)&lt;br /&gt;
&lt;br /&gt;
* Amazon Web Services (AWS) - Here's how you set up the Benchmark on an AWS VM:&lt;br /&gt;
 sudo yum install git&lt;br /&gt;
 sudo yum install maven&lt;br /&gt;
 sudo yum install mvn&lt;br /&gt;
 sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo yum install -y apache-maven&lt;br /&gt;
 git clone https://github.com/OWASP/benchmark&lt;br /&gt;
 cd benchmark&lt;br /&gt;
 chmod 755 *.sh&lt;br /&gt;
 ./runBenchmark.sh -- to run it locally on the VM.&lt;br /&gt;
 ./runRemoteAccessibleBenchmark.sh -- to run it so it can be accessed outside the VM (on port 8443).&lt;br /&gt;
&lt;br /&gt;
==Running Free Static Analysis Tools Against the Benchmark==&lt;br /&gt;
There are scripts for running each of the free SAST vulnerability detection tools included with the Benchmark against the Benchmark test cases. On Linux, you might have to make them executable (e.g., chmod 755 *.sh) before you can run them.&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for PMD:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runPMD.sh (Linux) or .\scripts\runPMD.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindBugs.sh (Linux) or .\scripts\runFindBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs with the FindSecBugs plugin:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindSecBugs.sh (Linux) or .\scripts\runFindSecBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
In each case, the script will generate a results file and put it in the /results directory. For example: &lt;br /&gt;
&lt;br /&gt;
 Benchmark_1.2-findbugs-v3.0.1-1026.xml&lt;br /&gt;
&lt;br /&gt;
This results file name is carefully constructed to mean the following: It's a results file against the OWASP Benchmark version 1.2, FindBugs was the analysis tool, it was version 3.0.1 of FindBugs, and it took 1026 seconds to run the analysis.&lt;br /&gt;
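The naming convention can be unpacked mechanically. Here's a small shell sketch that mirrors the scheme described above (an illustration, not the scorecard generator's actual parser):&lt;br /&gt;

```shell
# Decompose a Benchmark results filename of the form:
#   Benchmark_<bm-version>-<tool>-v<tool-version>-<seconds>.<ext>
parse_results_name() {
  base=${1%.*}              # drop the extension, e.g. .xml
  secs=${base##*-}          # last '-' field: scan time in seconds
  rest=${base%-*}
  toolver=${rest##*-v}      # tool version follows the last '-v'
  tool=${rest%-v*}; tool=${tool##*-}
  echo "tool=$tool version=$toolver seconds=$secs"
}
```

Running `parse_results_name Benchmark_1.2-findbugs-v3.0.1-1026.xml` prints `tool=findbugs version=3.0.1 seconds=1026`.&lt;br /&gt;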
&lt;br /&gt;
NOTE: If you create a results file yourself, by running a commercial tool for example, you can add the version # and the compute time to the filename just like this and the Benchmark Scorecard generator will pick this information up and include it in the generated scorecard. If you don't, depending on what metadata is included in the tool results, the Scorecard generator might do this automatically anyway.&lt;br /&gt;
&lt;br /&gt;
==Generating Scorecards==&lt;br /&gt;
The scorecard generation application, BenchmarkScore, is included with the Benchmark. It parses the output files generated by any of the supported security tools run against the Benchmark, compares them against the expected results, and produces a set of web pages that detail the accuracy and speed of the tools involved. For the list of currently supported tools, check out the Tool Support/Results tab. If you are using a tool that is not yet supported, simply send us a results file from that tool and we'll write a parser for it and add it to the supported tools list.&lt;br /&gt;
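Conceptually, the scoring boils down to matching each tool finding against the expected-results row for the same test case. Here is a toy sketch of that idea (simplified, assumed file formats; the real BenchmarkScore application is a Java program with per-tool parsers):&lt;br /&gt;

```shell
# Toy scorer: a finding (test-name,CWE) counts as a true positive when the
# expected row for that test says TRUE with the same CWE, and as a false
# positive when the row says FALSE. Input formats are simplified assumptions.
score() {
  # $1 = expected results CSV (name,category,real,CWE)
  # $2 = tool findings CSV (name,CWE)
  awk -F',' '/^#/ { next }
             NR==FNR { real[$1]=$3; cwe[$1]=$4; next }
             cwe[$1]==$2 { if (real[$1]=="TRUE") tp++; else fp++ }
             END { printf "TP=%d FP=%d\n", tp+0, fp+0 }' "$1" "$2"
}
```

Test cases the tool never flags are false negatives; the real scorecards account for those as well, which this sketch omits.&lt;br /&gt;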
&lt;br /&gt;
The following command will compute a Benchmark scorecard for all the results files in the '''/results''' directory. The generated scorecard is put into the '''/scorecard''' directory.&lt;br /&gt;
&lt;br /&gt;
 createScorecard.sh (Linux) or createScorecard.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
An example of a real scorecard for some open source tools is provided at the top of the Tool Support/Results tab so you can see what one looks like.&lt;br /&gt;
&lt;br /&gt;
We recommend including the Benchmark version number in any results file name, in order to help prevent mismatches between the expected results and the actual results files.  A tool will not score well against the wrong expected results.&lt;br /&gt;
&lt;br /&gt;
===Customizing Your Scorecard Generation===&lt;br /&gt;
&lt;br /&gt;
The createScorecard scripts are very simple. They only have one line. Here's what the 1.2 version looks like:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.2.csv results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This Maven command simply says to run the BenchmarkScore application, passing in two parameters. The 1st is the Benchmark expected results file to compare the tool results against. And the 2nd is the name of the directory that contains all the results from tools run against that version of the Benchmark. If you have tool results older than the current version of the Benchmark, like 1.1 results for example, then you would do something like this instead:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To keep things organized, we actually put the expected results file inside the same results folder for that version of the Benchmark, so our command looks like this:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;1.1_results/expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In all cases, the generated scorecard is put in the /scorecard folder.&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It is likely to be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.''' It is for just this reason that the Benchmark project isn't releasing such results itself.&lt;br /&gt;
&lt;br /&gt;
= Tool Scanning Tips =&lt;br /&gt;
&lt;br /&gt;
People frequently have difficulty scanning the Benchmark with various tools for many reasons, including the size of the Benchmark app and its codebase, and the complexity of the tools used. Here is some guidance for some of the tools we have used to scan the Benchmark. If you've learned any tricks for getting better or easier results from a particular tool against the Benchmark, let us know or update this page directly.&lt;br /&gt;
&lt;br /&gt;
== Generic Tips ==&lt;br /&gt;
&lt;br /&gt;
Because of the size of the Benchmark, you may need to give your tool more memory before it starts the scan. If it's a Java-based tool, you may want to pass it more memory like this:&lt;br /&gt;
&lt;br /&gt;
 -Xmx4G (This gives the Java application 4 Gig of memory)&lt;br /&gt;
&lt;br /&gt;
== SAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Checkmarx ===&lt;br /&gt;
&lt;br /&gt;
The Checkmarx SAST Tool (CxSAST) is ready to scan the OWASP Benchmark out-of-the-box. &lt;br /&gt;
Please note that the OWASP Benchmark “hides” some vulnerabilities in dead code areas, for example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
if (0&amp;gt;1)&lt;br /&gt;
{&lt;br /&gt;
  //vulnerable code&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
By default, CxSAST will find these vulnerabilities since Checkmarx believes that including dead code in the scan results is a SAST best practice. &lt;br /&gt;
&lt;br /&gt;
Checkmarx's experience shows that security experts expect to find these types of code vulnerabilities and demand that their developers fix them. However, the OWASP Benchmark counts flagging these vulnerabilities as False Positives, which lowers Checkmarx's overall score. &lt;br /&gt;
&lt;br /&gt;
Therefore, in order to receive an OWASP score untainted by dead code, re-configure CxSAST as follows:&lt;br /&gt;
# Open the CxAudit client for editing Java queries.&lt;br /&gt;
# Override the &amp;quot;Find_Dead_Code&amp;quot; query.&lt;br /&gt;
# Add the commented text of the original query to the new override query.&lt;br /&gt;
# Save the queries.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./script/runFindBugs.(sh or bat). If you want to run a different version of FindBugs, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs with FindSecBugs ===&lt;br /&gt;
&lt;br /&gt;
[http://h3xstream.github.io/find-sec-bugs/ FindSecurityBugs] is a great plugin for FindBugs that significantly increases FindBugs' ability to find security issues. We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./script/runFindSecBugs.(sh or bat). If you want to run a different version of FindSecBugs, just change the version number of the findsecbugs-plugin artifact in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Micro Focus (Formerly HP) Fortify ===&lt;br /&gt;
&lt;br /&gt;
If you are using the Audit Workbench, you can give it more memory and make sure you invoke it in 64-bit mode by doing this:&lt;br /&gt;
&lt;br /&gt;
  set AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  export AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  auditworkbench -64&lt;br /&gt;
&lt;br /&gt;
We found it was easier to use the Maven support in Fortify to scan the Benchmark, and to do it in 2 phases: translate, then scan. We did something like this:&lt;br /&gt;
&lt;br /&gt;
  Translate Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_17.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx2G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:clean&lt;br /&gt;
  mvn sca:translate&lt;br /&gt;
&lt;br /&gt;
  Scan Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_4.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx10G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:scan&lt;br /&gt;
&lt;br /&gt;
=== PMD ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./script/runPMD.(sh or bat). If you want to run a different version of PMD, just change its version number in the Benchmark pom.xml file. (NOTE: PMD doesn't find any security issues. We include it because it's interesting to know that it doesn't.)&lt;br /&gt;
&lt;br /&gt;
=== SonarQube ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's mostly dialed in. But it's a bit tricky because SonarQube requires two parts. There is a standalone scanner for Java. And then there is a web application that accepts the results, and in turn can produce the results file required by the Benchmark scorecard generator for SonarQube. Running the script runSonarQube.(sh or bat) will generate the results, but if the SonarQube Web Application isn't running where the runSonarQube script expects it to be, then the script will fail.&lt;br /&gt;
&lt;br /&gt;
If you want to run a different version of SonarQube, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Xanitizer ===&lt;br /&gt;
&lt;br /&gt;
The vendor has written their own guide to [http://www.rigs-it.net/opendownloads/whitepapers/HowToSetUpXanitizerForOWASPBenchmarkProject.pdf How to Set Up Xanitizer for OWASP Benchmark].&lt;br /&gt;
&lt;br /&gt;
== DAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Burp Pro ===&lt;br /&gt;
&lt;br /&gt;
You must use Burp Pro v1.6.29 or greater to scan the Benchmark due to a previous limitation in Burp Pro related to ensuring the path attribute for cookies was honored. This issue was fixed in the v1.6.29 release.&lt;br /&gt;
&lt;br /&gt;
To scan, first spider the entire Benchmark, and then select the /Benchmark URL and actively scan that branch. You can skip all the .html pages and any other pages that Burp says have no parameters.&lt;br /&gt;
&lt;br /&gt;
NOTE: We have been unable to simply run Burp Pro against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get Burp Pro to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
=== OWASP ZAP ===&lt;br /&gt;
&lt;br /&gt;
ZAP may require additional memory to be able to scan the Benchmark. To configure the amount of memory:&lt;br /&gt;
* Tools --&amp;gt; Options --&amp;gt; JVM: Recommend setting to: -Xmx2048m (or larger). (Then restart ZAP).&lt;br /&gt;
&lt;br /&gt;
To run ZAP against Benchmark:&lt;br /&gt;
# Because Benchmark uses Cookies and Headers as sources of attack for many test cases: Tools --&amp;gt; Options --&amp;gt; Active Scan Input Vectors: Then check the HTTP Headers, All Requests, and Cookie Data checkboxes and hit OK&lt;br /&gt;
# Click on Show All Tabs button (if spider tab isn't visible)&lt;br /&gt;
# Go to Spider tab (the black spider) and click on New Scan button&lt;br /&gt;
# Enter: https://localhost:8443/benchmark/  into the 'Starting Point' box and hit 'Start Scan'&lt;br /&gt;
#* Do this again. For some reason it takes 2 passes with the Spider before it stops finding more Benchmark endpoints.&lt;br /&gt;
# When Spider completes, click on 'benchmark' folder in Site Map, right click and select: 'Attack --&amp;gt; Active Scan'&lt;br /&gt;
#* It will take several hours, like 3+ to complete (it's actually likely to simply freeze before completing the scan - see NOTE: below)&lt;br /&gt;
&lt;br /&gt;
For a faster active scan you can:&lt;br /&gt;
* Disable the ZAP DB log (in ZAP 2.5.0+):&lt;br /&gt;
** Disable it via Options / Database / Recover Log&lt;br /&gt;
** Set it on the command line using &amp;quot;-config database.recoverylog=false&amp;quot;&lt;br /&gt;
* Disable unnecessary plugins / Technologies: When you launch the Active Scan&lt;br /&gt;
** On the Policy tab, disable all plugins except: XSS (Reflected), Path Traversal, SQLi, OS Command Injection&lt;br /&gt;
** Go to the Technology tab, disable everything, and only enable: MySQL, YOUR_OS, Tomcat&lt;br /&gt;
** Note: This 2nd performance improvement step is a bit like cheating as you wouldn't do this for a normal site scan. You'd want to leave all this on in case these other plugins/technologies are helpful in finding more issues. So a fair performance comparison of ZAP to other tools would leave all this on.&lt;br /&gt;
&lt;br /&gt;
To generate the ZAP XML results file so you can generate its scorecard:&lt;br /&gt;
* Tools &amp;gt; Options &amp;gt; Alerts - And set the Max alert instances to like 500.&lt;br /&gt;
* Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
NOTE: Similar to Burp, we can't simply run ZAP against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get ZAP to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
Things we tried that didn't improve the score:&lt;br /&gt;
* AJAX Spider - the traditional spider appears to find all (or 99%) of the test cases so the AJAX Spider does not appear to be needed against Benchmark v1.2&lt;br /&gt;
* XSS (Persistent) - There are 3 of these plugins that run by default. There aren't any stored XSS in Benchmark, so you can disable these plugins for a faster scan.&lt;br /&gt;
* DOM XSS Plugin - This is an optional plugin that didn't seem to find any additional XSS issues. There aren't any DOM-specific XSS issues in Benchmark v1.2, so that's not surprising.&lt;br /&gt;
&lt;br /&gt;
== IAST Tools ==&lt;br /&gt;
&lt;br /&gt;
Interactive Application Security Testing (IAST) tools work differently than scanners.  IAST tools monitor an application as it runs to identify application vulnerabilities using context from inside the running application. Typically these tools run continuously, immediately notifying users of vulnerabilities, but you can also get a full report of an entire application. To do this, we simply run the Benchmark application with an IAST agent and use a crawler to hit all the pages.&lt;br /&gt;
&lt;br /&gt;
=== Contrast Assess ===&lt;br /&gt;
&lt;br /&gt;
To use Contrast Assess, we simply add the Java agent to the Benchmark environment and run the BenchmarkCrawler. The entire process should only take a few minutes. We provide a few scripts, which simply add the -javaagent:contrast.jar flag to the Benchmark launch configuration. We have tested on MacOS, Ubuntu, and Windows. Be sure your VM has at least 4 Gig of memory.&lt;br /&gt;
&lt;br /&gt;
* Ensure your environment has Java, Maven, and git installed, then build the Benchmark project&lt;br /&gt;
   '''$ git clone https://github.com/OWASP/Benchmark.git'''&lt;br /&gt;
   '''$ cd Benchmark'''&lt;br /&gt;
   '''$ mvn compile'''&lt;br /&gt;
&lt;br /&gt;
* Download a licensed copy of the Contrast Assess Java Agent (contrast.jar) from your Contrast TeamServer account and put it in the /Benchmark/tools/Contrast directory.&lt;br /&gt;
   '''$ cp ~/Downloads/contrast.jar tools/Contrast'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 1, launch the Benchmark application and wait until it starts&lt;br /&gt;
   '''$ cd tools/Contrast'''&lt;br /&gt;
   '''$ ./runBenchmark_wContrast.sh''' (.bat on Windows)&lt;br /&gt;
   '''[INFO] Scanning for projects...&lt;br /&gt;
   '''[INFO]                                                                         &lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------&lt;br /&gt;
   '''[INFO] Building OWASP Benchmark Project 1.2&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------&lt;br /&gt;
   '''[INFO] &lt;br /&gt;
   '''...&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]'''&lt;br /&gt;
   '''[INFO] Press Ctrl-C to stop the container...'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, launch the crawler and wait a minute or two for the crawl to complete.&lt;br /&gt;
   '''$ ./runCrawler.sh''' (.bat on Windows)&lt;br /&gt;
&lt;br /&gt;
* A Contrast report is generated in /Benchmark/tools/Contrast/working/contrast.log. This report will automatically be copied (and renamed with the version number) to the /Benchmark/results directory.&lt;br /&gt;
   '''$ more tools/Contrast/working/contrast.log'''&lt;br /&gt;
   '''2016-04-22 12:29:29,716 [main b] INFO - Contrast Runtime Engine&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Copyright (C) 2012&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Pat. 8,458,789 B2&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Contrast Security, Inc.&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - All Rights Reserved&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - https://www.contrastsecurity.com/&lt;br /&gt;
   '''...'''&lt;br /&gt;
&lt;br /&gt;
* Press Ctrl-C to stop the Benchmark in Terminal 1. Note: on Windows, select &amp;quot;N&amp;quot; when asked: Terminate batch job (Y/N)&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x is stopped'''&lt;br /&gt;
   '''Copying Contrast report to results directory'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, generate scorecards in /Benchmark/scorecard&lt;br /&gt;
   '''$ ./createScorecards.sh''' (.bat on Windows)&lt;br /&gt;
   '''Analyzing results from Benchmark_1.2-Contrast.log&lt;br /&gt;
   '''Actual results file generated: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.csv&lt;br /&gt;
   '''Report written to: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html&lt;br /&gt;
&lt;br /&gt;
* Open the Benchmark Scorecard in your browser&lt;br /&gt;
   '''/Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
=== Hdiv Detection ===&lt;br /&gt;
&lt;br /&gt;
Hdiv has written their own instructions on how to run the detection component of their product on the Benchmark here: https://hdivsecurity.com/docs/features/benchmark/#how-to-run-hdiv-in-owasp-benchmark-project. You'll see that these instructions involve using the same crawler used to exercise all the test cases in the Benchmark, just like Contrast above.&lt;br /&gt;
&lt;br /&gt;
= RoadMap =&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.0 - Released April 15, 2015 - This initial release included over 20,000 test cases in 11 different vulnerability categories. As this initial version was not a runnable application, it was only suitable for assessing static analysis tools (SAST).&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 - Released May 23, 2015 - This update fixed some inaccurate test cases, and made sure that every vulnerability area included both True Positives and False Positives.&lt;br /&gt;
&lt;br /&gt;
Benchmark Scorecard Generator - Released July 10, 2015 - The ability to automatically and repeatably produce a scorecard of how well tools do against the Benchmark was released for most of the SAST tools supported by the Benchmark. Scorecards present graphical and statistical data on how well a tool does against the Benchmark, down to its result on each individual test. [https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html Here are the latest public scorecards]. Support for producing scorecards for additional tools is being added all the time, and the current full set is documented on the '''Tool Support/Results''' and '''Quick Start''' tabs of this wiki.&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.2beta - Released Aug 15, 2015 - The first release of a fully runnable version of the Benchmark, to support assessing all types of vulnerability detection and prevention technologies, including DAST, IAST, RASP, WAFs, etc. This involved creating a user interface for every test case, and enhancing each test case to make sure it's actually exploitable, rather than simply using something that is theoretically weak. This release contains just under 3,000 test cases, to make it practical to scan the entire Benchmark with a DAST tool in a reasonable amount of time on commodity hardware.&lt;br /&gt;
&lt;br /&gt;
Benchmark 1.2 - Released June 5, 2016 -  Based on feedback from a number of DAST tool developers, and other vendors as well, we made the Benchmark more realistic in a number of ways to facilitate external DAST scanning, and also made the Benchmark more resilient against attack so it could properly survive various DAST vulnerability detection and exploit verification techniques.&lt;br /&gt;
&lt;br /&gt;
Plans for Benchmark 1.3:&lt;br /&gt;
&lt;br /&gt;
While we don't have hard and fast rules of exactly what we are going to do next, enhancements in the following areas are planned for the next release:&lt;br /&gt;
&lt;br /&gt;
* Add new vulnerability categories (e.g., XXE, Hibernate Injection)&lt;br /&gt;
* Add support for popular server side Java frameworks (e.g., Spring)&lt;br /&gt;
* Add web services test cases&lt;br /&gt;
&lt;br /&gt;
We are also starting to work on the ability to score WAFs, RASPs, and other defensive technology against the Benchmark.&lt;br /&gt;
&lt;br /&gt;
= FAQ =&lt;br /&gt;
&lt;br /&gt;
==1. How are the scores computed for the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
Each test case has a single vulnerability of a specific type. It's either a real vulnerability (True Positive) or not (a False Positive). We document all the test cases for each version of the Benchmark in the expectedresults-VERSION#.csv file (e.g., expectedresults-1.1.csv). This file lists the test case name, the CWE type of the vulnerability, and whether it is a True Positive or not. The Benchmark supports scorecard generators for computing exactly how a tool did when analyzing a version of the Benchmark. The full list of supported tools is on the '''Tool Support/Results''' tab. For each tool there is a parser that can parse the native results format for that tool (usually XML). For each test case, this parser simply looks to see if the tool reported a vulnerability of the expected type in the test case source code file (for SAST) or at the test case URL (for DAST/IAST). If it did, and the test case is a True Positive, the tool gets credit for finding it. If it is a False Positive test and the tool reports that type of finding, it's recorded as a False Positive. If the tool didn't report that type of vulnerability for a test case, it gets either a False Negative or a True Negative, as appropriate. After calculating all of the individual test case results, a scorecard is generated providing a chart and statistics for that tool across all the vulnerability categories, and pages are also created comparing different tools to each other in each vulnerability category (if multiple tools are being scored together).&lt;br /&gt;
&lt;br /&gt;
A detailed file explaining exactly how that tool did against each individual test case in that version of the Benchmark is produced as part of scorecard generation, and is available via the Actual Results link on each tool's scorecard page. (e.g., Benchmark_v1.1_Scorecard_for_FindBugs.csv).&lt;br /&gt;
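The per-test-case decision described above can be sketched in Java. This is a simplified, hypothetical illustration (the class and method names are not the actual BenchmarkScore code):&lt;br /&gt;

```java
// Simplified sketch of the per-test-case scoring decision described above.
// Hypothetical helper; not the actual BenchmarkScore code.
public class TestCaseScorer {

    public enum Outcome { TRUE_POSITIVE, FALSE_NEGATIVE, TRUE_NEGATIVE, FALSE_POSITIVE }

    /**
     * isRealVulnerability:  true if the expected-results file marks this test as a True Positive test.
     * toolReportedThatType: true if the tool reported a finding of the expected CWE type for this test.
     */
    public static Outcome score(boolean isRealVulnerability, boolean toolReportedThatType) {
        if (isRealVulnerability) {
            return toolReportedThatType ? Outcome.TRUE_POSITIVE : Outcome.FALSE_NEGATIVE;
        }
        return toolReportedThatType ? Outcome.FALSE_POSITIVE : Outcome.TRUE_NEGATIVE;
    }
}
```

For example, a tool that reports an LDAP Injection finding for a True Positive LDAP Injection test earns a True Positive, while the same report against a False Positive test of that type is recorded as a False Positive.&lt;br /&gt;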
&lt;br /&gt;
==2. What if the tool I'm using doesn't have a scorecard generator for it?==&lt;br /&gt;
&lt;br /&gt;
Send us the results file! We'll be happy to create a parser for that tool so that it's supported.&lt;br /&gt;
&lt;br /&gt;
==3. What if a tool finds other unexpected vulnerabilities?==&lt;br /&gt;
&lt;br /&gt;
We are sure there are vulnerabilities we didn't intend to be there and we are eliminating them as we find them. If you find some, let us know and we'll fix them too. We are primarily focused on unintentional vulnerabilities in the categories of vulnerabilities the Benchmark currently supports, since that is what is actually measured.&lt;br /&gt;
&lt;br /&gt;
Right now, two types of vulnerabilities that get reported are ignored by the scorecard generator:&lt;br /&gt;
# Vulnerabilities in categories not yet supported&lt;br /&gt;
# Vulnerabilities of a type that is supported, but reported in test cases not of that type&lt;br /&gt;
&lt;br /&gt;
In the case of #2, false positives reported in unexpected areas are also ignored, which is primarily a DAST problem. Right now those false positives are completely ignored, but we are thinking about including them in the false positive score in some fashion. We just haven't decided how yet.&lt;br /&gt;
&lt;br /&gt;
==4. How should I configure my tool to scan the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
All tools support various levels of configuration to improve their results. The Benchmark project, in general, is trying to '''compare the out-of-the-box capabilities of tools'''. However, if a few simple tweaks to a tool can improve that tool's score, that's fine. We'd like to understand what those simple tweaks are, and document them here, so others can repeat those tests in exactly the same way. For example: turn on the 'test cookies and headers' flag, which is off by default, or turn on the 'advanced' scan so the tool works harder and finds more vulnerabilities. It's simple things like this we are talking about, not an extensive effort to teach the tool about the app, or 'expert' configuration of the tool.&lt;br /&gt;
&lt;br /&gt;
So, if you know of some simple tweaks that improve a tool's results, let us know what they are and we'll document them here so everyone can benefit, making it easier to do apples-to-apples comparisons. We'll link to that guidance once we start documenting it, but we don't have any such guidance right now.&lt;br /&gt;
&lt;br /&gt;
==5. I'm having difficulty scanning the Benchmark with a DAST tool. How can I get it to work?==&lt;br /&gt;
&lt;br /&gt;
We've run into two primary issues that give DAST tools problems.&lt;br /&gt;
&lt;br /&gt;
a) The Benchmark Generates Lots of Cookies&lt;br /&gt;
&lt;br /&gt;
The Burp team pointed out a cookie bug in the 1.2beta Benchmark. Each Weak Randomness test case generates its own cookie. This caused the creation of so many cookies that servers would eventually start returning 400 errors because there were simply too many cookies being submitted in a request. This was fixed in the Aug 27, 2015 update to the Benchmark by setting the path attribute for each of these cookies to the path of its individual test case. Now at most one of these cookies should be submitted with each request, eliminating this 'too many cookies' problem. However, if a DAST tool doesn't honor this path attribute, it may continue to send too many cookies, making the Benchmark unscannable for that tool. Burp Pro prior to 1.6.29 had this issue, but it was fixed in the 1.6.29 release.&lt;br /&gt;
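The fix works because clients only send a cookie when its path attribute matches the request path, per the RFC 6265 path-match rule. A minimal sketch of that rule, as a hypothetical helper (not Benchmark code, and the example paths are illustrative only):&lt;br /&gt;

```java
// Sketch of the RFC 6265 cookie path-match rule, illustrating why scoping
// each Weak Randomness cookie to its own test case path means at most one
// such cookie accompanies any request. Hypothetical helper, not Benchmark code.
public class CookiePathMatch {

    /** Returns true if a cookie with cookiePath would be sent for requestPath. */
    public static boolean pathMatches(String requestPath, String cookiePath) {
        if (requestPath.equals(cookiePath)) {
            return true; // identical paths always match
        }
        if (!requestPath.startsWith(cookiePath)) {
            return false; // cookie path must be a prefix of the request path
        }
        // prefix match counts only on a "/" boundary
        return cookiePath.endsWith("/") || requestPath.charAt(cookiePath.length()) == '/';
    }
}
```

So a cookie scoped to one test case's path (say, /benchmark/BenchmarkTest00001) is not submitted with requests for any other test case.&lt;br /&gt;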
&lt;br /&gt;
b) The Benchmark is a BIG Application&lt;br /&gt;
&lt;br /&gt;
Yes. It is, so you might have to give your scanner more memory than it normally uses by default in order to successfully scan the entire Benchmark. Please consult your tool vendor's documentation on how to give it more memory.&lt;br /&gt;
&lt;br /&gt;
Your machine itself might not have enough memory in the first place. For example, we were not able to successfully scan the 1.2beta with OWASP ZAP with only 8 GB of RAM. So, you might need a more powerful machine or a cloud-provided machine to successfully scan the Benchmark with certain DAST tools. You may have similar problems with SAST tools against large versions of the Benchmark, like the 1.1 release.&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
&lt;br /&gt;
The following people, organizations, and many others, have contributed to this project and their contributions are much appreciated!&lt;br /&gt;
&lt;br /&gt;
* Lots of Vendors - Many vendors have provided us with either trial licenses we can use, or they have run their tools themselves and either sent us results files, or written and contributed scorecard generators for their tool. Many have also provided valuable feedback so we can make the Benchmark more accurate and more realistic.&lt;br /&gt;
* Juan Gama - Development of initial release and continued support&lt;br /&gt;
* Ken Prole - Assistance with automated scorecard development using CodeDx&lt;br /&gt;
* Nick Sanidas - Development of initial release&lt;br /&gt;
* Denim Group - Contribution of scan results to facilitate scorecard development&lt;br /&gt;
* Tasos Laskos - Significant feedback on the DAST version of the Benchmark&lt;br /&gt;
* Ann Campbell - From SonarSource - for fixing our SonarQube results parser&lt;br /&gt;
* Dhiraj Mishra - OWASP Member - contributed SQLi/XSS fuzz vectors as initial contribution towards adding support for WAF/RASP scoring&lt;br /&gt;
&lt;br /&gt;
[[File:CWE_Logo.jpeg|link=https://cwe.mitre.org/]] - The CWE project for providing a mapping mechanism to easily map test cases to issues found by vulnerability detection tools.&lt;br /&gt;
&lt;br /&gt;
We are looking for volunteers. Please contact [mailto:dave.wichers@owasp.org Dave Wichers] if you are interested in contributing new test cases, tool results run against the benchmark, or anything else.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=244838</id>
		<title>Benchmark</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Benchmark&amp;diff=244838"/>
				<updated>2018-11-04T18:24:36Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Update Docker VM creation to match scripts created to make it easier.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Main = &lt;br /&gt;
 &amp;lt;div style=&amp;quot;width:100%;height:100px;border:0,margin:0;overflow: hidden;&amp;quot;&amp;gt;[[File:Lab_big.jpg|link=OWASP_Project_Stages#tab.3DLab_Projects]]&amp;lt;/div&amp;gt;&lt;br /&gt;
{| style=&amp;quot;padding: 0;margin:0;margin-top:10px;text-align:left;&amp;quot; |-&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;border-right: 1px dotted gray;padding-right:25px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== OWASP Benchmark Project  ==&lt;br /&gt;
The OWASP Benchmark for Security Automation (OWASP Benchmark) is a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services (henceforth simply referred to as 'tools'). Without the ability to measure these tools, it is difficult to understand their strengths and weaknesses, and compare them to each other. Each version of the OWASP Benchmark contains thousands of test cases that are fully runnable and exploitable, each of which maps to the appropriate CWE number for that vulnerability.&lt;br /&gt;
&lt;br /&gt;
You can use the OWASP Benchmark with [[Source_Code_Analysis_Tools | Static Application Security Testing (SAST)]] tools, [[:Category:Vulnerability_Scanning_Tools | Dynamic Application Security Testing (DAST)]] tools like OWASP [[ZAP]] and Interactive Application Security Testing (IAST) tools. The current version of the Benchmark is implemented in Java.  Future versions may expand to include other languages.&lt;br /&gt;
&lt;br /&gt;
==Benchmark Project Scoring Philosophy==&lt;br /&gt;
&lt;br /&gt;
Security tools (SAST, DAST, and IAST) are amazing when they find a complex vulnerability in your code.  But with widespread misunderstanding of the specific vulnerabilities automated tools cover, end users are often left with a false sense of security.&lt;br /&gt;
&lt;br /&gt;
We are on a quest to measure just how good these tools are at discovering and properly diagnosing security problems in applications. We rely on the [http://en.wikipedia.org/wiki/Receiver_operating_characteristic long history] of military and medical evaluation of detection technology as a foundation for our research. Therefore, the test suite tests both real and fake vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
There are four possible test outcomes in the Benchmark:&lt;br /&gt;
&lt;br /&gt;
# Tool correctly identifies a real vulnerability (True Positive - TP)&lt;br /&gt;
# Tool fails to identify a real vulnerability (False Negative - FN)&lt;br /&gt;
# Tool correctly ignores a false alarm (True Negative - TN)&lt;br /&gt;
# Tool fails to ignore a false alarm (False Positive - FP)&lt;br /&gt;
&lt;br /&gt;
We can learn a lot about a tool from these four metrics. Consider a tool that simply flags every line of code as vulnerable. This tool will perfectly identify all vulnerabilities!  But it will also have 100% false positives and thus adds no value.  Similarly, consider a tool that reports absolutely nothing. This tool will have zero false positives, but will also identify zero real vulnerabilities and is also worthless. You can even imagine a tool that flips a coin to decide whether to report whether each test case contains a vulnerability. The result would be 50% true positives and 50% false positives.  We need a way to distinguish valuable security tools from these trivial ones.&lt;br /&gt;
&lt;br /&gt;
If you imagine the line that connects all these points, from (0,0) to (100,100), it roughly translates to &amp;quot;random guessing.&amp;quot; The ultimate measure of a security tool is how much better it can do than this line. The diagram below shows how we evaluate security tools against the Benchmark.&lt;br /&gt;
&lt;br /&gt;
[[File:Wbe guide.png]]&lt;br /&gt;
&lt;br /&gt;
A point plotted on this chart provides a visual indication of how well a tool did considering both the True Positives the tool reported, as well as the False Positives it reported. We also want to compute an individual score for that point in the range 0 - 100, which we call the Benchmark Accuracy Score.&lt;br /&gt;
&lt;br /&gt;
The Benchmark Accuracy Score is essentially a [https://en.wikipedia.org/wiki/Youden%27s_J_statistic Youden Index], which is a standard way of summarizing the accuracy of a set of tests. Youden's index is one of the oldest measures of diagnostic accuracy. It is a global measure of test performance, used to evaluate the overall discriminative power of a diagnostic procedure and to compare it with other tests. Youden's index is calculated by deducting 1 from the sum of a test's sensitivity and specificity, each expressed as a fraction rather than a percentage: (sensitivity + specificity) – 1. For a test with poor diagnostic accuracy, Youden's index equals 0; for a perfect test, Youden's index equals 1.&lt;br /&gt;
&lt;br /&gt;
  So for example, if a tool has a True Positive Rate (TPR) of .98 (i.e., 98%) &lt;br /&gt;
    and False Positive Rate (FPR) of .05 (i.e., 5%)&lt;br /&gt;
  Sensitivity = TPR (.98)&lt;br /&gt;
  Specificity = 1-FPR (.95)&lt;br /&gt;
  So the Youden Index is (.98+.95) - 1 = .93&lt;br /&gt;
  &lt;br /&gt;
  And this would equate to a Benchmark score of 93 (since we normalize this to the range 0 - 100)&lt;br /&gt;
&lt;br /&gt;
On the graph, the Benchmark Score is the length of the line from the point down to the diagonal “guessing” line. Note that a Benchmark score can actually be negative if the point is below the line. This is caused when the False Positive Rate is actually higher than the True Positive Rate.&lt;br /&gt;
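Putting the formulas above together, the score computation can be sketched in a few lines of Java (a hypothetical helper for illustration, not the actual scorecard generator code):&lt;br /&gt;

```java
// Computing the Benchmark Accuracy Score (a Youden index normalized to the
// range 0-100) from raw counts, per the formulas above.
// Hypothetical helper; not the actual scorecard generator code.
public class ScoreCalculator {

    public static double truePositiveRate(int tp, int fn) {
        return (double) tp / (tp + fn);
    }

    public static double falsePositiveRate(int fp, int tn) {
        return (double) fp / (fp + tn);
    }

    /** Score = (sensitivity + specificity - 1) * 100, i.e. (TPR - FPR) * 100. */
    public static double score(double tpr, double fpr) {
        double sensitivity = tpr;
        double specificity = 1.0 - fpr;
        return (sensitivity + specificity - 1.0) * 100.0;
    }
}
```

With the worked example above, score(0.98, 0.05) yields a Benchmark score of 93 (modulo floating-point rounding), and a tool with a higher FPR than TPR gets a negative score.&lt;br /&gt;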
&lt;br /&gt;
==Benchmark Validity==&lt;br /&gt;
&lt;br /&gt;
The Benchmark tests are not exactly like real applications. The tests are derived from coding patterns observed in real applications, but the majority of them are considerably '''simpler''' than real applications. That is, most real world applications will be considerably harder to successfully analyze than the OWASP Benchmark Test Suite. Although the tests are based on real code, it is possible that some tests may have coding patterns that don't occur frequently in real code.&lt;br /&gt;
&lt;br /&gt;
Remember, we are trying to test the capabilities of the tools and make them explicit, so that users can make informed decisions about what tools to use, how to use them, and what results to expect.  This is exactly aligned with the OWASP mission to make application security visible.&lt;br /&gt;
&lt;br /&gt;
==Generating Benchmark Scores==&lt;br /&gt;
&lt;br /&gt;
Anyone can use this Benchmark to evaluate vulnerability detection tools. The basic steps are:&lt;br /&gt;
# Download the Benchmark from github&lt;br /&gt;
# Run your tools against the Benchmark&lt;br /&gt;
# Run the BenchmarkScore tool on the reports from your tools&lt;br /&gt;
&lt;br /&gt;
That's it!&lt;br /&gt;
&lt;br /&gt;
Full details on how to do this are at the bottom of the page on the Quick_Start tab.&lt;br /&gt;
&lt;br /&gt;
We encourage vendors, open source tool developers, and end users to verify their application security tools against the Benchmark. To ensure that results are fair and useful, we ask that you follow a few simple rules when publishing them. We won't recognize any results that aren't easily reproducible, so please include:&lt;br /&gt;
&lt;br /&gt;
# A description of the default “out-of-the-box” installation, version numbers, etc…&lt;br /&gt;
# Any and all configuration, tailoring, onboarding, etc… performed to make the tool run&lt;br /&gt;
# Any and all changes to default security rules, tests, or checks used to achieve the results&lt;br /&gt;
# Easily reproducible steps to run the tool&lt;br /&gt;
&lt;br /&gt;
== Reporting Format==&lt;br /&gt;
&lt;br /&gt;
The Benchmark includes tools to interpret raw tool output, compare it to the expected results, and generate summary charts and graphs. We use the following table format in order to capture all the information generated during the evaluation.&lt;br /&gt;
 &lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Security Category&lt;br /&gt;
! TP&lt;br /&gt;
! FN&lt;br /&gt;
! TN&lt;br /&gt;
! FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total&lt;br /&gt;
! TPR&lt;br /&gt;
! FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Score&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| General security category for test cases.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positives''': Tests with real vulnerabilities that were correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Negative''': Tests with real vulnerabilities that were not correctly reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Negative''': Tests with fake vulnerabilities that were correctly not reported as vulnerable by the tool.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive''': Tests with fake vulnerabilities that were incorrectly reported as vulnerable by the tool.&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Total test cases for this category.&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''True Positive Rate''': TP / ( TP + FN ) - Also referred to as Recall or Sensitivity, as defined at [https://en.wikipedia.org/wiki/Precision_and_recall Wikipedia].&lt;br /&gt;
| valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| '''False Positive Rate''': FP / ( FP + TN ).&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;11%&amp;quot;| Normalized distance from the “guessing” line: TPR - FPR.&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Command Injection&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Etc...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
| ...&lt;br /&gt;
| ...&lt;br /&gt;
| style=&amp;quot;background:#DDDDDD&amp;quot; | ...&lt;br /&gt;
|-&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | &lt;br /&gt;
! Total TP&lt;br /&gt;
! Total FN&lt;br /&gt;
! Total TN&lt;br /&gt;
! Total FP&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Total TC&lt;br /&gt;
! Average TPR&lt;br /&gt;
! Average FPR&lt;br /&gt;
! style=&amp;quot;background:#DDDDDD&amp;quot; | Average Score&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Code Repo and Build/Run Instructions ==&lt;br /&gt;
&lt;br /&gt;
See the '''Getting Started''' and '''Getting, Building, and Running the Benchmark''' sections on the Quick Start tab.&lt;br /&gt;
&lt;br /&gt;
==Licensing==&lt;br /&gt;
&lt;br /&gt;
The OWASP Benchmark is free to use under the [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0].&lt;br /&gt;
&lt;br /&gt;
== Mailing List ==&lt;br /&gt;
&lt;br /&gt;
[https://lists.owasp.org/mailman/listinfo/owasp-benchmark-project OWASP Benchmark Mailing List]&lt;br /&gt;
&lt;br /&gt;
== Project Leaders ==&lt;br /&gt;
&lt;br /&gt;
[https://www.owasp.org/index.php/User:Wichers Dave Wichers] [mailto:dave.wichers@owasp.org @]&lt;br /&gt;
&lt;br /&gt;
== Project References ==&lt;br /&gt;
* [https://www.mir-swamp.org/#packages/public Software Assurance Marketplace (SWAMP) - set of curated packages to test tools against]&lt;br /&gt;
* [http://samate.nist.gov/Other_Test_Collections.html SAMATE List of Test Collections]&lt;br /&gt;
&lt;br /&gt;
== Related Projects ==&lt;br /&gt;
&lt;br /&gt;
* [http://samate.nist.gov/SARD/testsuite.php NSA's Juliet for Java]&lt;br /&gt;
* [http://sectoolmarket.com/ The Web Application Vulnerability Scanner Evaluation Project (WAVSEP)]&lt;br /&gt;
&lt;br /&gt;
| valign=&amp;quot;top&amp;quot;  style=&amp;quot;padding-left:25px;width:200px;&amp;quot; |&lt;br /&gt;
&lt;br /&gt;
== Quick Download ==&lt;br /&gt;
&lt;br /&gt;
All test code and project files can be downloaded from [https://github.com/OWASP/benchmark OWASP GitHub].&lt;br /&gt;
&lt;br /&gt;
== Project Intro Video ==&lt;br /&gt;
&lt;br /&gt;
[[File:BenchmarkPodcastTitlePage.jpg|200px|link=https://www.youtube.com/watch?v=HQP8dwc3jJA&amp;amp;index=5&amp;amp;list=PLGB2s-U5FSWOmEStMt3JqlMFJvRYqeVW5]]&lt;br /&gt;
&lt;br /&gt;
== News and Events ==&lt;br /&gt;
* LOOKING FOR VOLUNTEERS!! - We are looking for individuals and organizations to join and make this a much more community-driven project, including additional co-leaders to help take this project to the next level. Contributors could work on things like new test cases, additional tool scorecard generators, adding support for languages beyond Java, and a host of other improvements. Please contact [mailto:dave.wichers@owasp.org me] if you are interested in contributing at any level.&lt;br /&gt;
* June 5, 2016 - Benchmark Version 1.2 Released&lt;br /&gt;
* Sep 24, 2015 - Benchmark introduced to broader OWASP community at [https://appsecusa2015.sched.org/event/3r9k/using-the-owasp-benchmark-to-assess-automated-vulnerability-analysis-tools AppSec USA]&lt;br /&gt;
* Aug 27, 2015 - U.S. Dept. of Homeland Security (DHS) is financially supporting the Benchmark project.&lt;br /&gt;
* Aug 15, 2015 - Benchmark Version 1.2beta Released with full DAST Support. Checkmarx and ZAP scorecard generators also released.&lt;br /&gt;
* July 10, 2015 - Benchmark Scorecard generator and open source scorecards released&lt;br /&gt;
* May 23, 2015 - Benchmark Version 1.1 Released&lt;br /&gt;
* April 15, 2015 - Benchmark Version 1.0 Released&lt;br /&gt;
&lt;br /&gt;
==Classifications==&lt;br /&gt;
&lt;br /&gt;
   {| width=&amp;quot;200&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot; rowspan=&amp;quot;2&amp;quot;| [[File:Owasp-incubator-trans-85.png|link=https://www.owasp.org/index.php/OWASP_Project_Stages#tab=Incubator_Projects]]&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-builders-small.png|link=]]  &lt;br /&gt;
   |-&lt;br /&gt;
   | align=&amp;quot;center&amp;quot; valign=&amp;quot;top&amp;quot; width=&amp;quot;50%&amp;quot;| [[File:Owasp-defenders-small.png|link=]]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [http://choosealicense.com/licenses/gpl-2.0/ GNU General Public License v2.0]&lt;br /&gt;
   |-&lt;br /&gt;
   | colspan=&amp;quot;2&amp;quot; align=&amp;quot;center&amp;quot;  | [[File:Project_Type_Files_CODE.jpg|link=]]&lt;br /&gt;
   |}&lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= Test Cases =&lt;br /&gt;
&lt;br /&gt;
Version 1.0 of the Benchmark was published on April 15, 2015 and had 20,983 test cases. On May 23, 2015, version 1.1 of the Benchmark was released. The 1.1 release improves on the previous version by making sure that there are both true positives and false positives in every vulnerability area. Version 1.2 was released on June 5, 2016 (and the 1.2beta August 15, 2015).&lt;br /&gt;
&lt;br /&gt;
From version 1.2 onward, the Benchmark is a fully executable web application, which means it is scannable by any kind of vulnerability detection tool. The 1.2 release has been limited to slightly less than 3,000 test cases, to make it easier for DAST tools to scan it (so scans don't take so long, run out of memory, or blow up the size of the tool's database). The 1.2 release covers the same vulnerability areas that 1.1 covers; we added a few Spring database SQL Injection tests, but that's it. The bulk of the work was turning each test case into something that actually runs correctly AND is fully exploitable, and then generating a working UI on top of it to turn the test cases into a real running application.&lt;br /&gt;
&lt;br /&gt;
You can still download Benchmark version 1.1 by cloning the release marked with the Git tag '1.1'.&lt;br /&gt;
&lt;br /&gt;
The test case areas and quantities for the Benchmark releases are:&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable nowraplinks&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
! Vulnerability Area&lt;br /&gt;
! # of Tests in v1.1&lt;br /&gt;
! # of Tests in v1.2&lt;br /&gt;
! CWE Number&lt;br /&gt;
|-&lt;br /&gt;
| [[Command Injection]]&lt;br /&gt;
| 2708&lt;br /&gt;
| 251&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/78.html 78]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Cryptography&lt;br /&gt;
| 1440&lt;br /&gt;
| 246&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/327.html 327]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Hashing&lt;br /&gt;
| 1421&lt;br /&gt;
| 236&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/328.html 328]&lt;br /&gt;
|-&lt;br /&gt;
| [[LDAP injection | LDAP Injection]]&lt;br /&gt;
| 736&lt;br /&gt;
| 59&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/90.html 90]&lt;br /&gt;
|-&lt;br /&gt;
| [[Path Traversal]]&lt;br /&gt;
| 2630&lt;br /&gt;
| 268&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/22.html 22]&lt;br /&gt;
|-&lt;br /&gt;
| Secure Cookie Flag&lt;br /&gt;
| 416&lt;br /&gt;
| 67&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/614.html 614]&lt;br /&gt;
|-&lt;br /&gt;
| [[SQL Injection]]&lt;br /&gt;
| 3529&lt;br /&gt;
| 504&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/89.html 89]&lt;br /&gt;
|-&lt;br /&gt;
| [[Trust Boundary Violation]]&lt;br /&gt;
| 725&lt;br /&gt;
| 126&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/501.html 501]&lt;br /&gt;
|-&lt;br /&gt;
| Weak Randomness&lt;br /&gt;
| 3640&lt;br /&gt;
| 493&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/330.html 330]&lt;br /&gt;
|-&lt;br /&gt;
| [[XPATH Injection]]&lt;br /&gt;
| 347&lt;br /&gt;
| 35&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/643.html 643]&lt;br /&gt;
|-&lt;br /&gt;
| [[XSS]] (Cross-Site Scripting)&lt;br /&gt;
| 3449&lt;br /&gt;
| 455&lt;br /&gt;
| [https://cwe.mitre.org/data/definitions/79.html 79]&lt;br /&gt;
|-&lt;br /&gt;
| Total Test Cases&lt;br /&gt;
| 21,041&lt;br /&gt;
| 2,740&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Each Benchmark version comes with a spreadsheet that lists every test case, the vulnerability category, the CWE number, and the expected result (true finding/false positive). Look for the file: expectedresults-VERSION#.csv in the project root directory.&lt;br /&gt;
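Reading that spreadsheet programmatically is straightforward. The sketch below assumes a simple four-column layout (test name, category, real-vulnerability flag, CWE number); the layout is an assumption for illustration, so check the actual expectedresults file for the real column order:&lt;br /&gt;

```java
// Sketch of parsing one expectedresults row. The four-column layout
// (test name, category, real-vulnerability flag, CWE number) is an
// assumption for illustration; check the actual CSV header for the real order.
public class ExpectedResult {
    public final String testName;
    public final String category;
    public final boolean realVulnerability;
    public final int cwe;

    public ExpectedResult(String csvLine) {
        String[] cols = csvLine.split(",");
        this.testName = cols[0].trim();
        this.category = cols[1].trim();
        this.realVulnerability = Boolean.parseBoolean(cols[2].trim());
        this.cwe = Integer.parseInt(cols[3].trim());
    }
}
```

A scorecard generator can build a map from test name to ExpectedResult and then compare each tool finding against it, as described on the FAQ tab.&lt;br /&gt;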
&lt;br /&gt;
Every test case is:&lt;br /&gt;
* a servlet or JSP (currently they are all servlets, but we plan to add JSPs)&lt;br /&gt;
* either a true vulnerability or a false positive for a single issue&lt;br /&gt;
&lt;br /&gt;
The Benchmark is intended to help determine how well analysis tools correctly analyze a broad array of application and framework behavior, including:&lt;br /&gt;
&lt;br /&gt;
* HTTP request and response problems&lt;br /&gt;
* Simple and complex data flow&lt;br /&gt;
* Simple and complex control flow&lt;br /&gt;
* Popular frameworks&lt;br /&gt;
* Inversion of control&lt;br /&gt;
* Reflection&lt;br /&gt;
* Class loading&lt;br /&gt;
* Annotations&lt;br /&gt;
* Popular UI technologies (particularly JavaScript frameworks)&lt;br /&gt;
&lt;br /&gt;
Not all of these are yet tested by the Benchmark, but future enhancements are intended to provide more coverage of these issues.&lt;br /&gt;
&lt;br /&gt;
Additional future enhancements could cover:&lt;br /&gt;
* All vulnerability types in the [[Top10 | OWASP Top 10]]&lt;br /&gt;
* Does the tool find flaws in libraries?&lt;br /&gt;
* Does the tool find flaws spanning custom code and libraries?&lt;br /&gt;
* Does the tool handle web services (REST, XML, GWT, etc.)?&lt;br /&gt;
* Does the tool work with different app servers and Java platforms?&lt;br /&gt;
&lt;br /&gt;
== Example Test Case ==&lt;br /&gt;
&lt;br /&gt;
Each test case is a simple Java EE servlet. BenchmarkTest00001 in version 1.0 of the Benchmark was an LDAP Injection test with the following metadata in the accompanying BenchmarkTest00001.xml file:&lt;br /&gt;
&lt;br /&gt;
  &amp;lt;test-metadata&amp;gt;&lt;br /&gt;
    &amp;lt;category&amp;gt;ldapi&amp;lt;/category&amp;gt;&lt;br /&gt;
    &amp;lt;test-number&amp;gt;00001&amp;lt;/test-number&amp;gt;&lt;br /&gt;
    &amp;lt;vulnerability&amp;gt;true&amp;lt;/vulnerability&amp;gt;&lt;br /&gt;
    &amp;lt;cwe&amp;gt;90&amp;lt;/cwe&amp;gt;&lt;br /&gt;
  &amp;lt;/test-metadata&amp;gt;&lt;br /&gt;
&lt;br /&gt;
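The scorecard machinery relies on this per-test metadata. As a rough illustration (assuming only the four tags shown above; this is not the Benchmark's own parsing code), the JDK's built-in DOM parser is enough to read it:&lt;br /&gt;

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Illustrative sketch: extract one field from a test-metadata XML snippet
// using the JDK's built-in DOM parser. Not the Benchmark's own code.
public class TestMetadata {
    public static String field(String xml, String tag) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            // Return the text content of the first element with this tag name
            return doc.getElementsByTagName(tag).item(0).getTextContent();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```
&lt;br /&gt;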
BenchmarkTest00001.java in the OWASP Benchmark 1.0 simply reads in all the cookie values, looks for a cookie named &amp;quot;foo&amp;quot;, and uses the value of this cookie when performing an LDAP query. Here's the code for BenchmarkTest00001.java:&lt;br /&gt;
&lt;br /&gt;
  package org.owasp.benchmark.testcode;&lt;br /&gt;
  &lt;br /&gt;
  import java.io.IOException;&lt;br /&gt;
  &lt;br /&gt;
  import javax.servlet.ServletException;&lt;br /&gt;
  import javax.servlet.annotation.WebServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServlet;&lt;br /&gt;
  import javax.servlet.http.HttpServletRequest;&lt;br /&gt;
  import javax.servlet.http.HttpServletResponse;&lt;br /&gt;
  &lt;br /&gt;
  @WebServlet(&amp;quot;/BenchmarkTest00001&amp;quot;)&lt;br /&gt;
  public class BenchmarkTest00001 extends HttpServlet {&lt;br /&gt;
  	&lt;br /&gt;
  	private static final long serialVersionUID = 1L;&lt;br /&gt;
  	&lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		doPost(request, response);&lt;br /&gt;
  	}&lt;br /&gt;
  &lt;br /&gt;
  	@Override&lt;br /&gt;
  	public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {&lt;br /&gt;
  		// some code&lt;br /&gt;
  &lt;br /&gt;
  		javax.servlet.http.Cookie[] cookies = request.getCookies();&lt;br /&gt;
  		&lt;br /&gt;
  		String param = null;&lt;br /&gt;
  		boolean foundit = false;&lt;br /&gt;
  		if (cookies != null) {&lt;br /&gt;
  			for (javax.servlet.http.Cookie cookie : cookies) {&lt;br /&gt;
  				if (cookie.getName().equals(&amp;quot;foo&amp;quot;)) {&lt;br /&gt;
  					param = cookie.getValue();&lt;br /&gt;
  					foundit = true;&lt;br /&gt;
  				}&lt;br /&gt;
  			}&lt;br /&gt;
  			if (!foundit) {&lt;br /&gt;
  				// no cookie found in collection&lt;br /&gt;
  				param = &amp;quot;&amp;quot;;&lt;br /&gt;
  			}&lt;br /&gt;
  		} else {&lt;br /&gt;
  			// no cookies&lt;br /&gt;
  			param = &amp;quot;&amp;quot;;&lt;br /&gt;
  		}&lt;br /&gt;
  		&lt;br /&gt;
  		try {&lt;br /&gt;
  			javax.naming.directory.DirContext dc = org.owasp.benchmark.helpers.Utils.getDirContext();&lt;br /&gt;
  			Object[] filterArgs = {&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;};&lt;br /&gt;
  			dc.search(&amp;quot;name&amp;quot;, param, filterArgs, new javax.naming.directory.SearchControls());&lt;br /&gt;
  		} catch (javax.naming.NamingException e) {&lt;br /&gt;
  			throw new ServletException(e);&lt;br /&gt;
  		}&lt;br /&gt;
  	}&lt;br /&gt;
  }&lt;br /&gt;
&lt;br /&gt;
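What makes this test case a true positive is that the cookie value flows, unmodified, into the LDAP search filter. One common remediation, sketched here as RFC 4515-style escaping of filter metacharacters (a hedged illustration, not the Benchmark's own or recommended fix), would neutralize the injection:&lt;br /&gt;

```java
// Hedged sketch: minimal RFC 4515-style escaping of a value before it is
// embedded in an LDAP search filter. Illustrative only.
public class LdapFilterEscape {
    public static String escape(String value) {
        StringBuilder sb = new StringBuilder();
        for (char c : value.toCharArray()) {
            switch (c) {
                case '\\': sb.append("\\5c"); break; // backslash
                case '*':  sb.append("\\2a"); break; // wildcard
                case '(':  sb.append("\\28"); break; // filter open
                case ')':  sb.append("\\29"); break; // filter close
                case '\0': sb.append("\\00"); break; // NUL
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```
&lt;br /&gt;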
= Test Case Details =&lt;br /&gt;
&lt;br /&gt;
The following describes situations in the Benchmark that have come up for debate as to the validity/accuracy of the test cases in these scenarios. &lt;br /&gt;
&lt;br /&gt;
== Cookies as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 and early versions of the 1.2beta included test cases that used cookies as a source of data flowing into XSS vulnerabilities. The Benchmark treated these tests as False Positives because the Benchmark team reasoned that an attacker would need to exploit an existing XSS vulnerability just to set the cookie value, so it wasn't fair/reasonable to consider an XSS vulnerability whose data source is a cookie value as actually exploitable. However, we got feedback from some tool vendors, like Fortify, Burp, and Arachni, who disagreed with this analysis and felt that cookies are, in fact, a valid source of attack for XSS vulnerabilities. Given that there are good arguments on both sides of this safe vs. unsafe question, we decided on Aug 25, 2015 to simply remove those test cases from the Benchmark. If, in the future, we decide who is right, we may add such test cases back in.&lt;br /&gt;
&lt;br /&gt;
== Headers as a Source of Attack for XSS ==&lt;br /&gt;
&lt;br /&gt;
Similarly, the Benchmark team believes that the names of headers aren't a valid source of XSS attack, for the same reason we thought cookie values aren't: it would require an XSS vulnerability to have been exploited in the first place to set them. In fact, we feel this argument is much stronger for header names than for cookie values. Right now, the Benchmark doesn't include any header names as sources for XSS test cases, but we plan to add them and mark them as false positives in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
The Benchmark does include header values as sources for some XSS test cases, but only the 'referer' header is treated as a valid XSS source (i.e., a true positive), because other header values are not viable XSS sources. Other headers are, of course, valid sources for other attack vectors, like SQL Injection or Command Injection.&lt;br /&gt;
&lt;br /&gt;
== False Positive Scenario: Static Values Passed to Unsafe (Weak) Sinks ==&lt;br /&gt;
&lt;br /&gt;
The Benchmark has MANY test cases where unsafe data flows in from the browser, but that data is replaced with static content as it passes through the propagators in that specific test case. This static (safe) data then flows to the sink, which may be a weak/unsafe sink, for example an unsafely constructed SQL statement. The Benchmark treats those test cases as false positives because there is absolutely no way for that weakness to be exploited. The NSA Juliet SAST benchmark treats such test cases exactly the same way, as false positives. We do recognize that there are weaknesses in those test cases, even though they aren't exploitable.&lt;br /&gt;
&lt;br /&gt;
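A minimal sketch of this pattern (hypothetical code, not an actual Benchmark test case): tainted input enters, a propagator discards it in favor of a constant, and the constant reaches a string-concatenated (weak) SQL construction:&lt;br /&gt;

```java
// Hypothetical sketch of the "static value into an unsafe sink" pattern;
// not an actual Benchmark test case.
public class StaticValueToWeakSink {
    // Propagator: takes tainted input but returns a fixed, safe value.
    static String propagate(String tainted) {
        return "fixed_account"; // the taint is dropped here
    }

    // The sink uses string concatenation (a weak construction), but the
    // value reaching it is a constant, so the flaw is not exploitable.
    static String buildQuery(String userInput) {
        String safe = propagate(userInput);
        return "SELECT * FROM accounts WHERE name = '" + safe + "'";
    }
}
```
A data-flow engine that tracks values, not just paths, can recognize that no attacker-controlled data ever reaches the sink here.&lt;br /&gt;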
Some SAST tool vendors feel it is appropriate to point out those weaknesses, and that's fine. However, if a tool points those weaknesses out and does not distinguish them from truly exploitable vulnerabilities, then the Benchmark treats those findings as false positives. If the tool allows a user to differentiate these non-exploitable weaknesses from exploitable vulnerabilities, then the Benchmark scorecard generator can use that information to filter out these extra findings (along with any other similarly marked findings) so they don't count against that tool when calculating its Benchmark score. In the real world, it's important for analysts to be able to filter out such findings if they only have time to deal with the most critical, actually exploitable, vulnerabilities. A tool that doesn't make it easy for an analyst to distinguish the two situations is doing the analyst a disservice.&lt;br /&gt;
&lt;br /&gt;
This issue doesn't affect DAST tools: they only report what appears to them to be exploitable, so it has no effect on their scores.&lt;br /&gt;
&lt;br /&gt;
If you are a SAST tool vendor or user, and you believe the Benchmark scorecard generator is counting such findings against a tool even though there is a way to tell them apart, please let the project team know so the scorecard generator can be adjusted to not count those findings against the tool. The Benchmark project's goal is to generate the most fair and accurate results it can. If such an adjustment is made to how a scorecard is generated for a tool, we plan to document that this was done for that tool, and explain how others could perform the same filtering within that tool in order to get the same focused set of results.&lt;br /&gt;
&lt;br /&gt;
== Dead Code ==&lt;br /&gt;
&lt;br /&gt;
Some SAST tools point out weaknesses in dead code because such code might eventually end up being used, and it can serve as a bad coding example (think cut/paste of code). We think this is fine/appropriate. However, there is no dead code in the OWASP Benchmark (at least not intentionally), so dead code should not be causing any tool to report unnecessary false positives.&lt;br /&gt;
&lt;br /&gt;
= Tool Support/Results =&lt;br /&gt;
&lt;br /&gt;
The results for 5 free tools (PMD, FindBugs, FindBugs with the FindSecBugs plugin, SonarQube, and OWASP ZAP) against version 1.2 of the Benchmark are available here: https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html. We've included multiple versions of the FindSecBugs and ZAP results so you can see the improvements they are making in finding vulnerabilities in the Benchmark.&lt;br /&gt;
&lt;br /&gt;
We have Benchmark results for all of the following tools, but haven't publicly released the results for any commercial tool. However, we have included a 'Commercial Average' page, which summarizes the results for 6 commercial SAST tools along with an anonymized version of each SAST tool's scorecard.&lt;br /&gt;
&lt;br /&gt;
The Benchmark can generate results for the following tools: &lt;br /&gt;
&lt;br /&gt;
'''Free Static Application Security Testing (SAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://pmd.github.io/ PMD] (which really has no security rules) - .xml results file&lt;br /&gt;
* [http://findbugs.sourceforge.net/ Findbugs] - .xml results file (Note: The 'new' Findbugs is now at: https://spotbugs.github.io/)&lt;br /&gt;
* FindBugs with the [http://find-sec-bugs.github.io/ FindSecurityBugs plugin] - .xml results file&lt;br /&gt;
* [https://www.sonarqube.org/downloads/ SonarQube] - .xml results file&lt;br /&gt;
&lt;br /&gt;
Note: We looked into supporting [http://checkstyle.sourceforge.net/ Checkstyle] but it has no security rules, just like PMD. The [http://fb-contrib.sourceforge.net/ fb-contrib] FindBugs plugin doesn't have any security rules either. We did test [http://errorprone.info/ Error Prone], and found that it does report some use of [http://errorprone.info/bugpattern/InsecureCipherMode insecure ciphers (CWE-327)], but that's it.&lt;br /&gt;
&lt;br /&gt;
'''Commercial SAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.castsoftware.com/products/application-intelligence-platform CAST Application Intelligence Platform (AIP)] - .xml results file&lt;br /&gt;
* [https://www.checkmarx.com/products/static-application-security-testing/ Checkmarx CxSAST] - .xml results file&lt;br /&gt;
* [https://www.synopsys.com/content/dam/synopsys/sig-assets/datasheets/SAST-Coverity-datasheet.pdf Synopsys Static Analysis (Formerly Coverity Code Advisor) (On-Demand and stand-alone versions)] - .json results file (You can scan Benchmark w/Coverity for free. See: https://scan.coverity.com/)&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/static-code-analysis-sast/overview Micro Focus (Formerly HPE) Fortify (On-Demand and stand-alone versions)] - .fpr results file&lt;br /&gt;
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source IBM AppScan Source (Standalone and Cloud)] - .ozasmt or .xml results file&lt;br /&gt;
* [https://juliasoft.com/solutions/julia-for-security/ Julia Analyzer] - .xml results file&lt;br /&gt;
* [https://www.parasoft.com/products/jtest/ Parasoft Jtest] - .xml results file&lt;br /&gt;
* [https://www.shiftleft.io/product/ ShiftLeft SAST] - .sl results file (Benchmark specific format. Ask vendor how to generate this)&lt;br /&gt;
* [https://www.sourcemeter.com/features/ SourceMeter] - .txt results file of ALL results from VulnerabilityHunter&lt;br /&gt;
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] - .xml results file&lt;br /&gt;
* [http://www.veracode.com/products/binary-static-analysis-sast Veracode SAST] - .xml results file&lt;br /&gt;
* [https://www.rigs-it.com/xanitizer/ XANITIZER] - .xml results file ([https://www.rigs-it.com/wp-content/uploads/2018/03/howtosetupxanitizerforowaspbenchmarkproject.pdf Their white paper on how to set up Xanitizer to scan the Benchmark.]) (Free trial available)&lt;br /&gt;
&lt;br /&gt;
We are looking for results for other commercial static analysis tools like: [http://www.grammatech.com/codesonar Grammatech CodeSonar], [http://www.klocwork.com/products-services/klocwork Klocwork], etc. If you have a license for any static analysis tool not already listed above and can run it on the Benchmark and send us the results file that would be very helpful. &lt;br /&gt;
&lt;br /&gt;
The free SAST tools come bundled with the Benchmark so you can run them yourselves. If you have a license for any commercial SAST tool, you can also run them against the Benchmark. Just put your results files in the /results folder of the project, and then run the BenchmarkScore script for your platform (.sh / .bat) and it will generate a scorecard in the /scorecard directory for all the tools you have results for that are currently supported.&lt;br /&gt;
&lt;br /&gt;
'''Free Dynamic Application Security Testing (DAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
Note: While we support scorecard generators for these Free and Commercial DAST tools, we haven't been able to get a full/clean run against the Benchmark from most of these tools. As such, some of these scorecard generators might still need some work to properly reflect their results. If you notice any problems, let us know.&lt;br /&gt;
&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - .xml results file&lt;br /&gt;
** To generate .xml, run: ./bin/arachni_reporter &amp;quot;Your_AFR_Results_Filename.afr&amp;quot; --reporter=xml:outfile=Benchmark1.2-Arachni.xml&lt;br /&gt;
* [https://www.owasp.org/index.php/ZAP OWASP ZAP] - .xml results file. To generate a complete ZAP XML results file so you can generate a valid scorecard, make sure you:&lt;br /&gt;
** Tools &amp;gt; Options &amp;gt; Alerts - and set the Max alert instances to something like 500.&lt;br /&gt;
** Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
'''Commercial DAST Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.acunetix.com/vulnerability-scanner/ Acunetix Web Vulnerability Scanner (WVS)] - .xml results file (Generated using [https://www.acunetix.com/resources/wvs7manual.pdf command line interface (see Chapter 10.)] /ExportXML switch)&lt;br /&gt;
* [https://portswigger.net/burp/ Burp Pro] - .xml results file&lt;br /&gt;
**You must use Burp Pro v1.6.30+ to scan the Benchmark due to a limitation fixed in v1.6.30.&lt;br /&gt;
* [https://software.microfocus.com/en-us/products/webinspect-dynamic-analysis-dast/overview Micro Focus (Formerly HPE) WebInspect] - .xml results file&lt;br /&gt;
* [http://www-03.ibm.com/software/products/en/appscan IBM AppScan] (IBM has since sold this product off to HCL) - .xml results file&lt;br /&gt;
* [https://www.netsparker.com/web-vulnerability-scanner/ Netsparker] - .xml results file&lt;br /&gt;
* [https://www.rapid7.com/products/appspider/ Rapid7 AppSpider] - .xml results file&lt;br /&gt;
&lt;br /&gt;
* Qualys - We ran Qualys against v1.2 of the Benchmark and, as far as we could tell, it found none of the vulnerabilities we test for, so we haven't implemented a scorecard generator for it. If you get results where you think it does find some real issues, send us the results file and, if confirmed, we'll produce a scorecard generator for it.&lt;br /&gt;
&lt;br /&gt;
If you have access to other DAST Tools, PLEASE RUN THEM FOR US against the Benchmark, and send us the results file so we can build a scorecard generator for that tool.&lt;br /&gt;
&lt;br /&gt;
'''Commercial Interactive Application Security Testing (IAST) Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] - .zip results file (You can scan Benchmark w/Contrast for free. See: https://www.contrastsecurity.com/contrast-community-edition)&lt;br /&gt;
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection (IAST)] - .hlg results file&lt;br /&gt;
&lt;br /&gt;
'''Commercial Hybrid Analysis Application Security Testing Tools:'''&lt;br /&gt;
&lt;br /&gt;
* [http://www.iappsecure.com/products.html Fusion Lite Insight] - .xml results file&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It may be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.'''&lt;br /&gt;
&lt;br /&gt;
The project has automated test harnesses for these vulnerability detection tools, so we can repeatably run the tools against each version of the Benchmark and automatically produce scorecards in our desired format.&lt;br /&gt;
&lt;br /&gt;
We want to test as many tools as possible against the Benchmark. If you are:&lt;br /&gt;
&lt;br /&gt;
* A tool vendor and want to participate in the project&lt;br /&gt;
* Someone who wants to help score a free tool against the project&lt;br /&gt;
* Someone who has a license to a commercial tool and the terms of the license allow you to publish tool results, and you want to participate&lt;br /&gt;
&lt;br /&gt;
please let [mailto:dave.wichers@owasp.org me] know!&lt;br /&gt;
&lt;br /&gt;
= Quick Start =&lt;br /&gt;
&lt;br /&gt;
==What is in the Benchmark?==&lt;br /&gt;
The Benchmark is a Java Maven project. Its primary component is thousands of test cases (e.g., BenchmarkTest00001.java), each of which is a single Java servlet that contains a single vulnerability (either a true positive or a false positive). The vulnerabilities currently span about a dozen different types and are expected to expand significantly in the future.&lt;br /&gt;
&lt;br /&gt;
An expectedresults.csv file is published with each version of the Benchmark (e.g., expectedresults-1.1.csv), and it lists the expected result for each test case. Here's what the first two rows in this file look like for version 1.1 of the Benchmark:&lt;br /&gt;
&lt;br /&gt;
 # test name		category	real vulnerability	CWE	Benchmark version: 1.1	2015-05-22&lt;br /&gt;
 BenchmarkTest00001	crypto		TRUE			327&lt;br /&gt;
&lt;br /&gt;
This simply means that the first test case is a crypto test case (use of weak cryptographic algorithms), that it is a real vulnerability (as opposed to a false positive), and that this issue maps to CWE 327. It also indicates this expected results file is for Benchmark version 1.1 (produced May 22, 2015). There is a row in this file for each of the thousands of test cases in the Benchmark. Each time a new version of the Benchmark is published, a new corresponding results file is generated, and each test case can be completely different from one version to the next.&lt;br /&gt;
&lt;br /&gt;
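A row like the one above can be consumed with a few lines of code. This sketch assumes a simple comma-separated layout with the four columns described (test name, category, real vulnerability, CWE); the exact layout of a given release's file may differ, so treat it as illustrative:&lt;br /&gt;

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: parse one data row of an expectedresults CSV.
// Column order follows the example above; a given Benchmark release's
// file may differ in delimiter or quoting.
public class ExpectedResultsRow {
    public static Map<String, String> parse(String line) {
        String[] f = line.split(",");
        Map<String, String> row = new LinkedHashMap<>();
        row.put("testName", f[0].trim());
        row.put("category", f[1].trim());
        row.put("realVulnerability", f[2].trim().toUpperCase());
        row.put("cwe", f[3].trim());
        return row;
    }
}
```
&lt;br /&gt;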
The Benchmark also comes with a bunch of different utilities, commands, and prepackaged open source security analysis tools, all of which can be executed through Maven goals, including:&lt;br /&gt;
&lt;br /&gt;
* Open source vulnerability detection tools to be run against the Benchmark&lt;br /&gt;
* A scorecard generator, which computes a scorecard for each of the tools you have results files for.&lt;br /&gt;
&lt;br /&gt;
==What Can You Do With the Benchmark?==&lt;br /&gt;
* Compile all the software in the Benchmark project (e.g., mvn compile)&lt;br /&gt;
* Run a static vulnerability analysis tool (SAST) against the Benchmark test case code&lt;br /&gt;
&lt;br /&gt;
* Scan a running version of the Benchmark with a dynamic application security testing tool (DAST)&lt;br /&gt;
** Instructions on how to run it are provided below&lt;br /&gt;
&lt;br /&gt;
* Generate scorecards for each of the tools you have results files for&lt;br /&gt;
** See the Tool Support/Results page for the list of tools the Benchmark supports generating scorecards for&lt;br /&gt;
&lt;br /&gt;
==Getting Started==&lt;br /&gt;
Before downloading or using the Benchmark make sure you have the following installed and configured properly:&lt;br /&gt;
&lt;br /&gt;
 GIT: http://git-scm.com/ or https://github.com/&lt;br /&gt;
 Maven: https://maven.apache.org/  (Version: 3.2.3 or newer works. We heard that 3.0.5 throws an error.)&lt;br /&gt;
 Java: http://www.oracle.com/technetwork/java/javase/downloads/index.html (Java 7 or 8) (64-bit) - Compiling the Benchmark takes a LOT of memory.&lt;br /&gt;
&lt;br /&gt;
==Getting, Building, and Running the Benchmark==&lt;br /&gt;
&lt;br /&gt;
To download and build everything:&lt;br /&gt;
&lt;br /&gt;
 $ git clone https://github.com/OWASP/benchmark &lt;br /&gt;
 $ cd benchmark&lt;br /&gt;
 $ mvn compile   (This compiles it)&lt;br /&gt;
 $ runBenchmark.sh/.bat - This compiles and runs it.&lt;br /&gt;
&lt;br /&gt;
Then navigate to: https://localhost:8443/benchmark/ to reach its home page. It uses a self-signed SSL certificate, so you'll get a security warning when you hit the home page.&lt;br /&gt;
&lt;br /&gt;
Note: We have set the Benchmark app to use up to 6 GB of RAM, which it may need when it is fully scanned by a DAST scanner. The DAST tool itself probably requires 3+ GB of RAM as well. As such, we recommend a 16 GB machine if you are going to attempt a full DAST scan, and at least 4 GB (ideally 8 GB) if you just want to play around with the running Benchmark app.&lt;br /&gt;
&lt;br /&gt;
== Using a VM instead ==&lt;br /&gt;
We provide several preconstructed VMs, and instructions on how to build one, that you can use instead:&lt;br /&gt;
&lt;br /&gt;
* Docker: A Dockerfile is checked into the project [https://github.com/OWASP/Benchmark/blob/master/VMs/Dockerfile here]. This Docker file should automatically produce a Docker VM with the latest Benchmark project files. After you have Docker installed, run: &lt;br /&gt;
 scripts/buildDockerImage.sh --&amp;gt; This builds the Docker Benchmark VM (This will take a WHILE)&lt;br /&gt;
 docker images  --&amp;gt; You should see the new benchmark:latest image in the list provided&lt;br /&gt;
 # The Benchmark Docker Image only has to be created once. &lt;br /&gt;
&lt;br /&gt;
 To run the Benchmark in your Docker VM, just run:&lt;br /&gt;
   scripts/runDockerImage.sh  --&amp;gt; This clones the latest Benchmark from github, builds everything, and starts a remotely accessible Benchmark web app.&lt;br /&gt;
 If successful, you should see this at the end:&lt;br /&gt;
   [INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]&lt;br /&gt;
   [INFO] Press Ctrl-C to stop the container...&lt;br /&gt;
 Then simply navigate to: https://localhost:8443/benchmark from the machine you are running Docker&lt;br /&gt;
 &lt;br /&gt;
 Or if you want to access from a different machine:&lt;br /&gt;
  run: docker-machine ls (in a different window) --&amp;gt; To get IP Docker VM is exporting (e.g., tcp://192.168.99.100:2376)&lt;br /&gt;
  Navigate to: https://192.168.99.100:8443/benchmark (using the above IP as an example)&lt;br /&gt;
&lt;br /&gt;
* Amazon Web Services (AWS) - Here's how you set up the Benchmark on an AWS VM:&lt;br /&gt;
 sudo yum install git&lt;br /&gt;
 sudo yum install maven&lt;br /&gt;
 sudo yum install mvn&lt;br /&gt;
 sudo wget http://repos.fedorapeople.org/repos/dchen/apache-maven/epel-apache-maven.repo -O /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo sed -i s/\$releasever/6/g /etc/yum.repos.d/epel-apache-maven.repo&lt;br /&gt;
 sudo yum install -y apache-maven&lt;br /&gt;
 git clone https://github.com/OWASP/benchmark&lt;br /&gt;
 cd benchmark&lt;br /&gt;
 chmod 755 *.sh&lt;br /&gt;
 ./runBenchmark.sh -- to run it locally on the VM.&lt;br /&gt;
 ./runRemoteAccessibleBenchmark.sh -- to run it so it can be accessed outside the VM (on port 8443).&lt;br /&gt;
&lt;br /&gt;
==Running Free Static Analysis Tools Against the Benchmark==&lt;br /&gt;
There are scripts for running each of the free SAST vulnerability detection tools included with the Benchmark against the Benchmark test cases. On Linux, you might have to make them executable (e.g., chmod 755 *.sh) before you can run them.&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for PMD:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runPMD.sh (Linux) or .\scripts\runPMD.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindBugs.sh (Linux) or .\scripts\runFindBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
Generating Test Results for FindBugs with the FindSecBugs plugin:&lt;br /&gt;
&lt;br /&gt;
 $ ./scripts/runFindSecBugs.sh (Linux) or .\scripts\runFindSecBugs.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
In each case, the script will generate a results file and put it in the /results directory. For example: &lt;br /&gt;
&lt;br /&gt;
 Benchmark_1.2-findbugs-v3.0.1-1026.xml&lt;br /&gt;
&lt;br /&gt;
This results file name is carefully constructed to encode the following: it is a set of results for version 1.2 of the OWASP Benchmark, FindBugs was the analysis tool, version 3.0.1 of FindBugs was used, and the analysis took 1026 seconds to run.&lt;br /&gt;
&lt;br /&gt;
NOTE: If you create a results file yourself, e.g., by running a commercial tool, you can add the version number and the scan time to the filename in the same way, and the Benchmark scorecard generator will pick this information up and include it in the generated scorecard. If you don't, the scorecard generator might still do this automatically, depending on what metadata is included in the tool's results.&lt;br /&gt;
&lt;br /&gt;
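For illustration, here is a hedged sketch of how such a file name could be decomposed. It assumes exactly the Benchmark_VERSION-tool-vVERSION-SECONDS.ext shape described above; real files may omit the optional trailing fields, in which case this sketch simply returns null:&lt;br /&gt;

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: decompose a results file name of the form
// Benchmark_<benchmark version>-<tool>-v<tool version>-<seconds>.<ext>
public class ResultsFileName {
    private static final Pattern P =
        Pattern.compile("Benchmark_([\\d.]+)-([A-Za-z]+)-v([\\d.]+)-(\\d+)\\.\\w+");

    // Returns {benchmark version, tool, tool version, scan seconds},
    // or null if the name doesn't follow the full convention.
    public static String[] parse(String name) {
        Matcher m = P.matcher(name);
        if (!m.matches()) return null;
        return new String[] { m.group(1), m.group(2), m.group(3), m.group(4) };
    }
}
```
&lt;br /&gt;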
==Generating Scorecards==&lt;br /&gt;
The scorecard generation application, BenchmarkScore, is included with the Benchmark. It parses the output files generated by any of the supported security tools run against the Benchmark, compares them against the expected results, and produces a set of web pages that detail the accuracy and speed of the tools involved. For the list of currently supported tools, check out the Tool Support/Results tab. If you are using a tool that is not yet supported, simply send us a results file from that tool and we'll write a parser for it and add it to the supported tools list.&lt;br /&gt;
&lt;br /&gt;
The following command will compute a Benchmark scorecard for all the results files in the '''/results''' directory. The generated scorecard is put into the '''/scorecard''' directory.&lt;br /&gt;
&lt;br /&gt;
 createScorecard.sh (Linux) or createScorecard.bat (Windows)&lt;br /&gt;
&lt;br /&gt;
An example of a real scorecard for some open source tools is provided at the top of the Tool Support/Results tab so you can see what one looks like.&lt;br /&gt;
&lt;br /&gt;
We recommend including the Benchmark version number in any results file name, in order to help prevent mismatches between the expected results and the actual results files.  A tool will not score well against the wrong expected results.&lt;br /&gt;
&lt;br /&gt;
===Customizing Your Scorecard Generation===&lt;br /&gt;
&lt;br /&gt;
The createScorecard scripts are very simple. They only have one line. Here's what the 1.2 version looks like:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.2.csv results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This Maven command runs the BenchmarkScore application, passing in two parameters. The first is the Benchmark expected results file to compare the tool results against. The second is the name of the directory that contains all the results from tools run against that version of the Benchmark. If you have tool results older than the current version of the Benchmark, 1.1 results for example, then you would do something like this instead:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
To keep things organized, we actually put the expected results file inside the same results folder for that version of the Benchmark, so our command looks like this:&lt;br /&gt;
&lt;br /&gt;
 mvn validate -Pbenchmarkscore -Dexec.args=&amp;quot;1.1_results/expectedresults-1.1.csv 1.1_results&amp;quot;&lt;br /&gt;
&lt;br /&gt;
In all cases, the generated scorecard is put in the /scorecard folder.&lt;br /&gt;
&lt;br /&gt;
'''WARNING: If you generate results for a commercial tool, be careful who you distribute it to. Each tool has its own license defining when any results it produces can be released/made public. It is likely to be against the terms of a commercial tool's license to publicly release that tool's score against the OWASP Benchmark. The OWASP Benchmark project takes no responsibility if someone else releases such results.''' It is for just this reason that the Benchmark project isn't releasing such results itself.&lt;br /&gt;
&lt;br /&gt;
= Tool Scanning Tips =&lt;br /&gt;
&lt;br /&gt;
People frequently have difficulty scanning the Benchmark with various tools for many reasons, including the size of the Benchmark app and its codebase, and the complexity of the tools used. Here is some guidance for some of the tools we have used to scan the Benchmark. If you've learned any tricks for getting better or easier results from a particular tool against the Benchmark, let us know or update this page directly.&lt;br /&gt;
&lt;br /&gt;
== Generic Tips ==&lt;br /&gt;
&lt;br /&gt;
Because of the size of the Benchmark, you may need to give your tool more memory before it starts the scan. If it's a Java-based tool, you can pass it more memory like this:&lt;br /&gt;
&lt;br /&gt;
 -Xmx4G (This gives the Java application 4 GB of memory)&lt;br /&gt;
&lt;br /&gt;
== SAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Checkmarx ===&lt;br /&gt;
&lt;br /&gt;
The Checkmarx SAST tool (CxSAST) is ready to scan the OWASP Benchmark out of the box.&lt;br /&gt;
Note that the OWASP Benchmark “hides” some vulnerabilities in dead code areas, for example:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;java&amp;quot;&amp;gt;&lt;br /&gt;
if (0&amp;gt;1)&lt;br /&gt;
{&lt;br /&gt;
  //vulnerable code&lt;br /&gt;
}&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
By default, CxSAST will find these vulnerabilities since Checkmarx believes that including dead code in the scan results is a SAST best practice. &lt;br /&gt;
&lt;br /&gt;
Checkmarx's experience shows that security experts expect to find these types of code vulnerabilities, and demand that their developers fix them. However, the OWASP Benchmark considers the flagging of these vulnerabilities to be false positives, which as a result lowers Checkmarx's overall score.&lt;br /&gt;
&lt;br /&gt;
Therefore, in order to receive an OWASP score untainted by dead code, re-configure CxSAST as follows:&lt;br /&gt;
# Open the CxAudit client for editing Java queries.&lt;br /&gt;
# Override the &amp;quot;Find_Dead_Code&amp;quot; query.&lt;br /&gt;
# Add the commented text of the original query to the new override query.&lt;br /&gt;
# Save the queries.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./scripts/runFindBugs.(sh or bat). If you want to run a different version of FindBugs, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== FindBugs with FindSecBugs ===&lt;br /&gt;
&lt;br /&gt;
[http://h3xstream.github.io/find-sec-bugs/ FindSecurityBugs] is a great plugin for FindBugs that significantly increases FindBugs' ability to find security issues. We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./scripts/runFindSecBugs.(sh or bat). If you want to run a different version of FindSecBugs, just change the version number of the findsecbugs-plugin artifact in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Micro Focus (Formerly HP) Fortify ===&lt;br /&gt;
&lt;br /&gt;
If you are using the Audit Workbench, you can give it more memory and make sure you invoke it in 64-bit mode by doing this:&lt;br /&gt;
&lt;br /&gt;
  set AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  export AWB_VM_OPTS=&amp;quot;-Xmx2G -XX:MaxPermSize=256m&amp;quot;&lt;br /&gt;
  auditworkbench -64&lt;br /&gt;
&lt;br /&gt;
We found it was easier to use the Maven support in Fortify to scan the Benchmark, and to do it in two phases: translate, then scan. We did something like this:&lt;br /&gt;
&lt;br /&gt;
  Translate Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_17.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx2G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:clean&lt;br /&gt;
  mvn sca:translate&lt;br /&gt;
&lt;br /&gt;
  Scan Phase:&lt;br /&gt;
  export JAVA_HOME=$(/usr/libexec/java_home)&lt;br /&gt;
  export PATH=$PATH:/Applications/HP_Fortify/HP_Fortify_SCA_and_Apps_4.10/bin&lt;br /&gt;
  export SCA_VM_OPTS=&amp;quot;-Xmx10G -version 1.7&amp;quot;&lt;br /&gt;
  mvn sca:scan&lt;br /&gt;
&lt;br /&gt;
=== PMD ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's all dialed in. Simply run the script: ./script/runPMD.(sh or bat). If you want to run a different version of PMD, just change its version number in the Benchmark pom.xml file. (NOTE: PMD doesn't find any security issues. We include it because it's interesting to know that it doesn't.)&lt;br /&gt;
&lt;br /&gt;
=== SonarQube ===&lt;br /&gt;
&lt;br /&gt;
We include this free tool in the Benchmark and it's mostly dialed in, but it's a bit tricky because SonarQube requires two parts. There is a standalone scanner for Java, and a web application that accepts the results and in turn can produce the results file required by the Benchmark scorecard generator for SonarQube. Running the script runSonarQube.(sh or bat) will generate the results, but if the SonarQube Web Application isn't running where the runSonarQube script expects it to be, then the script will fail.&lt;br /&gt;
&lt;br /&gt;
If you want to run a different version of SonarQube, just change its version number in the Benchmark pom.xml file.&lt;br /&gt;
&lt;br /&gt;
=== Xanitizer ===&lt;br /&gt;
&lt;br /&gt;
The vendor has written their own guide to [http://www.rigs-it.net/opendownloads/whitepapers/HowToSetUpXanitizerForOWASPBenchmarkProject.pdf How to Set Up Xanitizer for OWASP Benchmark].&lt;br /&gt;
&lt;br /&gt;
== DAST Tools ==&lt;br /&gt;
&lt;br /&gt;
=== Burp Pro ===&lt;br /&gt;
&lt;br /&gt;
You must use Burp Pro v1.6.29 or greater to scan the Benchmark due to an earlier limitation in Burp Pro where the path attribute for cookies was not honored. This issue was fixed in the v1.6.29 release.&lt;br /&gt;
&lt;br /&gt;
To scan, first spider the entire Benchmark, and then select the /Benchmark URL and actively scan that branch. You can skip all the .html pages and any other pages that Burp says have no parameters.&lt;br /&gt;
&lt;br /&gt;
NOTE: We have been unable to simply run Burp Pro against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get Burp Pro to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
=== OWASP ZAP ===&lt;br /&gt;
&lt;br /&gt;
ZAP may require additional memory to be able to scan the Benchmark. To configure the amount of memory:&lt;br /&gt;
* Tools --&amp;gt; Options --&amp;gt; JVM: Recommend setting to: -Xmx2048m (or larger). (Then restart ZAP).&lt;br /&gt;
&lt;br /&gt;
To run ZAP against Benchmark:&lt;br /&gt;
# Because Benchmark uses Cookies and Headers as sources of attack for many test cases: Tools --&amp;gt; Options --&amp;gt; Active Scan Input Vectors: Then check the HTTP Headers, All Requests, and Cookie Data checkboxes and hit OK&lt;br /&gt;
# Click on Show All Tabs button (if spider tab isn't visible)&lt;br /&gt;
# Go to Spider tab (the black spider) and click on New Scan button&lt;br /&gt;
# Enter: https://localhost:8443/benchmark/  into the 'Starting Point' box and hit 'Start Scan'&lt;br /&gt;
#* Do this again. For some reason it takes 2 passes with the Spider before it stops finding more Benchmark endpoints.&lt;br /&gt;
# When Spider completes, click on 'benchmark' folder in Site Map, right click and select: 'Attack --&amp;gt; Active Scan'&lt;br /&gt;
#* It will take several hours (3+) to complete (it's actually likely to simply freeze before completing the scan - see the NOTE below)&lt;br /&gt;
&lt;br /&gt;
For a faster active scan you can:&lt;br /&gt;
* Disable the ZAP DB log (in ZAP 2.5.0+):&lt;br /&gt;
** Disable it via Options / Database / Recover Log&lt;br /&gt;
** Set it on the command line using &amp;quot;-config database.recoverylog=false&amp;quot;&lt;br /&gt;
* Disable unnecessary plugins / Technologies: When you launch the Active Scan&lt;br /&gt;
** On the Policy tab, disable all plugins except: XSS (Reflected), Path Traversal, SQLi, OS Command Injection&lt;br /&gt;
** Go to the Technology tab, disable everything, and only enable: MySQL, YOUR_OS, Tomcat&lt;br /&gt;
** Note: This 2nd performance improvement step is a bit like cheating as you wouldn't do this for a normal site scan. You'd want to leave all this on in case these other plugins/technologies are helpful in finding more issues. So a fair performance comparison of ZAP to other tools would leave all this on.&lt;br /&gt;
&lt;br /&gt;
To generate the ZAP XML results file so you can generate its scorecard:&lt;br /&gt;
* Tools &amp;gt; Options &amp;gt; Alerts - And set the Max alert instances to something like 500.&lt;br /&gt;
* Then: Report &amp;gt; Generate XML Report...&lt;br /&gt;
&lt;br /&gt;
NOTE: Similar to Burp, we can't simply run ZAP against the entire Benchmark in one shot. In our experience, it eventually freezes/stops scanning. We've had to run it against each test area one at a time. If you figure out how to get ZAP to scan all of Benchmark in one shot, let us know how you did it!&lt;br /&gt;
&lt;br /&gt;
Things we tried that didn't improve the score:&lt;br /&gt;
* AJAX Spider - the traditional spider appears to find all (or 99%) of the test cases so the AJAX Spider does not appear to be needed against Benchmark v1.2&lt;br /&gt;
* XSS (Persistent) - There are 3 of these plugins that run by default. There aren't any stored XSS in Benchmark, so you can disable these plugins for a faster scan.&lt;br /&gt;
* DOM XSS Plugin - This is an optional plugin that didn't seem to find any additional XSS issues. There aren't any DOM-specific XSS issues in Benchmark v1.2, so that's not surprising.&lt;br /&gt;
&lt;br /&gt;
== IAST Tools ==&lt;br /&gt;
&lt;br /&gt;
Interactive Application Security Testing (IAST) tools work differently than scanners.  IAST tools monitor an application as it runs to identify application vulnerabilities using context from inside the running application. Typically these tools run continuously, immediately notifying users of vulnerabilities, but you can also get a full report of an entire application. To do this, we simply run the Benchmark application with an IAST agent and use a crawler to hit all the pages.&lt;br /&gt;
&lt;br /&gt;
=== Contrast Assess ===&lt;br /&gt;
&lt;br /&gt;
To use Contrast Assess, we simply add the Java agent to the Benchmark environment and run the BenchmarkCrawler. The entire process should only take a few minutes. We provided a few scripts, which simply add the -javaagent:contrast.jar flag to the Benchmark launch configuration. We have tested on MacOS, Ubuntu, and Windows. Be sure your VM has at least 4GB of memory.&lt;br /&gt;
&lt;br /&gt;
* Ensure your environment has Java, Maven, and git installed, then build the Benchmark project&lt;br /&gt;
   '''$ git clone https://github.com/OWASP/Benchmark.git'''&lt;br /&gt;
   '''$ cd Benchmark'''&lt;br /&gt;
   '''$ mvn compile'''&lt;br /&gt;
&lt;br /&gt;
* Download a licensed copy of the Contrast Assess Java Agent (contrast.jar) from your Contrast TeamServer account and put it in the /Benchmark/tools/Contrast directory.&lt;br /&gt;
   '''$ cp ~/Downloads/contrast.jar tools/Contrast'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 1, launch the Benchmark application and wait until it starts&lt;br /&gt;
   '''$  ./runBenchmark_wContrast.sh''' (.bat on Windows)&lt;br /&gt;
   '''[INFO] Scanning for projects...'''&lt;br /&gt;
   '''[INFO]'''&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
   '''[INFO] Building OWASP Benchmark Project 1.2'''&lt;br /&gt;
   '''[INFO] ------------------------------------------------------------------------'''&lt;br /&gt;
   '''[INFO]'''&lt;br /&gt;
   '''...'''&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]'''&lt;br /&gt;
   '''[INFO] Press Ctrl-C to stop the container...'''&lt;br /&gt;
&lt;br /&gt;
* In Terminal 2, launch the crawler and wait a minute or two for the crawl to complete.&lt;br /&gt;
   '''$ ./runCrawler.sh''' (.bat on Windows)&lt;br /&gt;
&lt;br /&gt;
* A Contrast report is generated in /Benchmark/tools/Contrast/working/contrast.log. This report is automatically copied (and renamed with the version number) to the /Benchmark/results directory.&lt;br /&gt;
   '''$ more tools/Contrast/working/contrast.log'''&lt;br /&gt;
   '''2016-04-22 12:29:29,716 [main b] INFO - Contrast Runtime Engine'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Copyright (C) 2012'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Pat. 8,458,789 B2'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - Contrast Security, Inc.'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - All Rights Reserved'''&lt;br /&gt;
   '''2016-04-22 12:29:29,717 [main b] INFO - https://www.contrastsecurity.com/'''&lt;br /&gt;
   '''...'''&lt;br /&gt;
&lt;br /&gt;
* Press Ctrl-C to stop the Benchmark in Terminal 1. Note: on Windows, select &amp;quot;N&amp;quot; when asked &amp;quot;Terminate batch job (Y/N)?&amp;quot;&lt;br /&gt;
   '''[INFO] [talledLocalContainer] Tomcat 8.x is stopped'''&lt;br /&gt;
   '''Copying Contrast report to results directory'''&lt;br /&gt;
&lt;br /&gt;
* Generate scorecards in /Benchmark/scorecard&lt;br /&gt;
   '''$ ./createScorecards.sh''' (.bat on Windows)&lt;br /&gt;
   '''Analyzing results from Benchmark_1.2-Contrast.log'''&lt;br /&gt;
   '''Actual results file generated: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.csv'''&lt;br /&gt;
   '''Report written to: /Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
* Open the Benchmark Scorecard in your browser&lt;br /&gt;
   '''/Users/owasp/Projects/Benchmark/scorecard/Benchmark_v1.2_Scorecard_for_Contrast.html'''&lt;br /&gt;
&lt;br /&gt;
=== Hdiv Detection ===&lt;br /&gt;
&lt;br /&gt;
Hdiv has written their own instructions on how to run the detection component of their product on the Benchmark here: https://hdivsecurity.com/docs/features/benchmark/#how-to-run-hdiv-in-owasp-benchmark-project. You'll see that these instructions involve using the same crawler used to exercise all the test cases in the Benchmark, just like Contrast above.&lt;br /&gt;
&lt;br /&gt;
= RoadMap =&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.0 - Released April 15, 2015 - This initial release included over 20,000 test cases in 11 different vulnerability categories. As this initial version was not a runnable application, it was only suitable for assessing static analysis tools (SAST).&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.1 - Released May 23, 2015 - This update fixed some inaccurate test cases, and made sure that every vulnerability area included both True Positives and False Positives.&lt;br /&gt;
&lt;br /&gt;
Benchmark Scorecard Generator - Released July 10, 2015 - The ability to automatically and repeatably produce a scorecard of how well tools do against the Benchmark was released for most of the SAST tools supported by the Benchmark. Scorecards present graphical as well as statistical data on how well a tool does against the Benchmark, down to exactly how it did against each individual test case. [https://rawgit.com/OWASP/Benchmark/master/scorecard/OWASP_Benchmark_Home.html Here are the latest public scorecards]. Support for producing scorecards for additional tools is being added all the time, and the current full set is documented on the '''Tool Support/Results''' and '''Quick Start''' tabs of this wiki.&lt;br /&gt;
&lt;br /&gt;
Benchmark v1.2beta - Released Aug 15, 2015 - The first release of a fully runnable version of the Benchmark to support assessing all types of vulnerability detection and prevention technologies, including DAST, IAST, RASP, WAFs, etc. This involved creating a user interface for every test case, and enhancing each test case to make sure it's actually exploitable, not just using something theoretically weak. This release contains under 3,000 test cases to make it practical to scan the entire Benchmark with a DAST tool in a reasonable amount of time on commodity hardware.&lt;br /&gt;
&lt;br /&gt;
Benchmark 1.2 - Released June 5, 2016 -  Based on feedback from a number of DAST tool developers, and other vendors as well, we made the Benchmark more realistic in a number of ways to facilitate external DAST scanning, and also made the Benchmark more resilient against attack so it could properly survive various DAST vulnerability detection and exploit verification techniques.&lt;br /&gt;
&lt;br /&gt;
Plans for Benchmark 1.3:&lt;br /&gt;
&lt;br /&gt;
While we don't have hard and fast rules of exactly what we are going to do next, enhancements in the following areas are planned for the next release:&lt;br /&gt;
&lt;br /&gt;
* Add new vulnerability categories (e.g., Hibernate Injection)&lt;br /&gt;
* Add support for popular server side Java frameworks (e.g., Spring)&lt;br /&gt;
* Add web services test cases&lt;br /&gt;
&lt;br /&gt;
We are also starting to work on the ability to score WAFs/RASPs and other defensive technology against Benchmark.&lt;br /&gt;
&lt;br /&gt;
= FAQ =&lt;br /&gt;
&lt;br /&gt;
==1. How are the scores computed for the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
Each test case has a single vulnerability of a specific type. It's either a real vulnerability (a True Positive) or not (a False Positive). We document all the test cases for each version of the Benchmark in the expectedresults-VERSION#.csv file (e.g., expectedresults-1.1.csv). This file lists the test case name, the CWE type of the vulnerability, and whether it is a True Positive or not. The Benchmark supports scorecard generators for computing exactly how a tool did when analyzing a version of the Benchmark. The full list of supported tools is on the Tools Support/Results tab. For each tool there is a parser that can parse the native results format for that tool (usually XML). For each test case, this parser simply checks whether the tool reported a vulnerability of the type expected in the test case source code file (for SAST) or the test case URL (for DAST/IAST). If it did, and the test case is a True Positive, the tool gets credit for finding it. If it is a False Positive test and the tool reports that type of finding, it's recorded as a False Positive. If the tool didn't report that type of vulnerability for a test case, it gets either a False Negative or a True Negative, as appropriate. After calculating all of the individual test case results, a scorecard is generated providing a chart and statistics for that tool across all the vulnerability categories, and pages are also created comparing different tools to each other in each vulnerability category (if multiple tools are being scored together).&lt;br /&gt;
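The per-test-case bookkeeping described above boils down to a simple four-way classification. Here is a minimal sketch of that logic; the class and method names are ours for illustration, not the Benchmark's actual scorecard generator code:

```java
public class ScoreDemo {
    // Classify one Benchmark test case result.
    // expectedVuln: the expectedresults CSV marks this test as a True Positive test.
    // toolReported: the tool flagged the expected CWE type for this test case.
    static String classify(boolean expectedVuln, boolean toolReported) {
        if (expectedVuln) {
            return toolReported ? "TP" : "FN"; // credit for finding it, or a miss
        }
        return toolReported ? "FP" : "TN";     // flagged safe code, or correctly quiet
    }

    public static void main(String[] args) {
        System.out.println(classify(true, true));   // TP: real vuln, reported
        System.out.println(classify(false, true));  // FP: fake vuln, reported anyway
    }
}
```

The per-category statistics on the scorecard (true positive rate, false positive rate) then fall out of counting these four outcomes across all test cases.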
&lt;br /&gt;
A detailed file explaining exactly how that tool did against each individual test case in that version of the Benchmark is produced as part of scorecard generation, and is available via the Actual Results link on each tool's scorecard page. (e.g., Benchmark_v1.1_Scorecard_for_FindBugs.csv).&lt;br /&gt;
&lt;br /&gt;
==2. What if the tool I'm using doesn't have a scorecard generator for it?==&lt;br /&gt;
&lt;br /&gt;
Send us the results file! We'll be happy to create a parser for that tool so it's supported too.&lt;br /&gt;
&lt;br /&gt;
==3. What if a tool finds other unexpected vulnerabilities?==&lt;br /&gt;
&lt;br /&gt;
We are sure there are vulnerabilities we didn't intend to be there and we are eliminating them as we find them. If you find some, let us know and we'll fix them too. We are primarily focused on unintentional vulnerabilities in the categories of vulnerabilities the Benchmark currently supports, since that is what is actually measured.&lt;br /&gt;
&lt;br /&gt;
Right now, two types of vulnerabilities that get reported are ignored by the scorecard generator:&lt;br /&gt;
# Vulnerabilities in categories not yet supported&lt;br /&gt;
# Vulnerabilities of a type that is supported, but reported in test cases not of that type&lt;br /&gt;
&lt;br /&gt;
In the case of #2, false positives reported in unexpected areas (primarily a DAST problem) are also completely ignored. We are thinking about including them in the false positive score in some fashion, but we just haven't decided how yet.&lt;br /&gt;
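The two ignore rules above can be sketched as a simple filter. This is our illustrative approximation only, not the actual parser code, and the supported CWE set shown is a made-up subset:

```java
import java.util.Set;

public class IgnoreRulesDemo {
    // Hypothetical subset of CWE ids the scorecard generator scores
    // (79 = XSS, 89 = SQL Injection, 78 = OS Command Injection, 22 = Path Traversal).
    static final Set<Integer> SUPPORTED_CWES = Set.of(79, 89, 78, 22);

    // A reported finding counts toward the score only if its CWE category is
    // supported (rule #1) and it matches the CWE the test case is about (rule #2).
    // Anything else is silently dropped rather than scored as a false positive.
    static boolean isScored(int findingCwe, int testCaseCwe) {
        return SUPPORTED_CWES.contains(findingCwe) && findingCwe == testCaseCwe;
    }
}
```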
&lt;br /&gt;
==4. How should I configure my tool to scan the Benchmark?==&lt;br /&gt;
&lt;br /&gt;
All tools support various levels of configuration in order to improve their results. The Benchmark project, in general, is trying to '''compare out of the box capabilities of tools'''. However, if a few simple tweaks to a tool can improve that tool's score, that's fine. We'd like to understand what those simple tweaks are and document them here, so others can repeat those tests in exactly the same way. For example, turning on a 'test cookies and headers' flag that is off by default, or turning on an 'advanced' scan so the tool works harder and finds more vulnerabilities. It's simple things like this we are talking about, not an extensive effort to teach the tool about the app, or 'expert' configuration of the tool.&lt;br /&gt;
&lt;br /&gt;
So, if you know of some simple tweaks to improve a tool's results, let us know what they are and we'll document them here so everyone can benefit and make it easier to do apples to apples comparisons. And we'll link to that guidance once we start documenting it, but we don't have any such guidance right now.&lt;br /&gt;
&lt;br /&gt;
==5. I'm having difficulty scanning the Benchmark with a DAST tool. How can I get it to work?==&lt;br /&gt;
&lt;br /&gt;
We've run into two primary issues that give DAST tools problems.&lt;br /&gt;
&lt;br /&gt;
a) The Benchmark Generates Lots of Cookies&lt;br /&gt;
&lt;br /&gt;
The Burp team pointed out a cookies bug in the 1.2beta Benchmark. Each Weak Randomness test case generates its own cookie, 1 per test case. This caused the creation of so many cookies that servers would eventually start returning 400 errors because there were simply too many cookies being submitted in a request. This was fixed in the Aug 27, 2015 update to the Benchmark by setting the path attribute for each of these cookies to be the path to that individual test case. Now, only at most one of these cookies should be submitted with each request, eliminating this 'too many cookies' problem. However, if a DAST tool doesn't honor this path attribute, it may continue to send too many cookies, making the Benchmark unscannable for that tool. Burp Pro prior to 1.6.29 had this issue, but it was fixed in the 1.6.29 release.&lt;br /&gt;
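The fix amounts to scoping each generated cookie to its own test case path. Here is a minimal sketch using java.net.HttpCookie (the Benchmark's servlet code uses the servlet Cookie API instead; the cookie name, value, and path format here are illustrative assumptions):

```java
import java.net.HttpCookie;

public class CookieScopeDemo {
    // Give each Weak Randomness test case its own narrowly-scoped cookie.
    // With the path attribute set, a client that honors RFC 6265 path matching
    // sends the cookie back only on requests for that one test case URL,
    // so at most one of these cookies accompanies any given request.
    static HttpCookie scopedCookie(int testNum, String value) {
        HttpCookie c = new HttpCookie("rememberMe" + testNum, value);
        c.setPath("/benchmark/BenchmarkTest" + String.format("%05d", testNum));
        return c;
    }

    public static void main(String[] args) {
        HttpCookie c = scopedCookie(42, "some-random-token");
        System.out.println(c.getPath()); // /benchmark/BenchmarkTest00042
    }
}
```

A scanner that ignores the path attribute effectively undoes this scoping and sends every cookie on every request, which is exactly the 'too many cookies' failure mode described above.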
&lt;br /&gt;
b) The Benchmark is a BIG Application&lt;br /&gt;
&lt;br /&gt;
Yes. It is, so you might have to give your scanner more memory than it normally uses by default in order to successfully scan the entire Benchmark. Please consult your tool vendor's documentation on how to give it more memory.&lt;br /&gt;
&lt;br /&gt;
Your machine itself might not have enough memory in the first place. For example, we were not able to successfully scan the 1.2beta with OWASP ZAP with only 8 GB of RAM. So, you might need a more powerful machine or a cloud-provided machine to successfully scan the Benchmark with certain DAST tools. You may have similar problems with SAST tools against large versions of the Benchmark, like the 1.1 release.&lt;br /&gt;
&lt;br /&gt;
= Acknowledgements =&lt;br /&gt;
&lt;br /&gt;
The following people, organizations, and many others, have contributed to this project and their contributions are much appreciated!&lt;br /&gt;
&lt;br /&gt;
* Lots of Vendors - Many vendors have provided us with either trial licenses we can use, or they have run their tools themselves and either sent us results files, or written and contributed scorecard generators for their tool. Many have also provided valuable feedback so we can make the Benchmark more accurate and more realistic.&lt;br /&gt;
* Juan Gama - Development of initial release and continued support&lt;br /&gt;
* Ken Prole - Assistance with automated scorecard development using CodeDx&lt;br /&gt;
* Nick Sanidas - Development of initial release&lt;br /&gt;
* Denim Group - Contribution of scan results to facilitate scorecard development&lt;br /&gt;
* Tasos Laskos - Significant feedback on the DAST version of the Benchmark&lt;br /&gt;
* Ann Campbell - From SonarSource - for fixing our SonarQube results parser&lt;br /&gt;
* Dhiraj Mishra - OWASP Member - contributed SQLi/XSS fuzz vectors as initial contribution towards adding support for WAF/RASP scoring&lt;br /&gt;
&lt;br /&gt;
[[File:CWE_Logo.jpeg|link=https://cwe.mitre.org/]] - The CWE project for providing a mapping mechanism to easily map test cases to issues found by vulnerability detection tools.&lt;br /&gt;
&lt;br /&gt;
We are looking for volunteers. Please contact [mailto:dave.wichers@owasp.org Dave Wichers] if you are interested in contributing new test cases, tool results run against the benchmark, or anything else.&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &amp;lt;headertabs /&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244572</id>
		<title>Free for Open Source Application Security Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244572"/>
				<updated>2018-10-23T21:56:57Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Detecting Known Vulnerable Components */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Introduction ==&lt;br /&gt;
OWASP's mission is to help the world improve the security of its software. One of the best ways OWASP can do that is to help Open Source developers improve the software they are producing that everyone else relies on. As such, the following lists of '''automated vulnerability detection tools''' that are '''free for open source''' projects have been gathered together here to raise awareness of their availability.&lt;br /&gt;
&lt;br /&gt;
We would encourage open source projects to use the following types of tools to improve the security and quality of their code:&lt;br /&gt;
* Static Application Security Testing ([[SAST]]) Tools &lt;br /&gt;
* Dynamic Application Security Testing ([[DAST]]) Tools - (Primarily for web apps)&lt;br /&gt;
* Interactive Application Security Testing (IAST) Tools - (Primarily for web apps and web APIs)&lt;br /&gt;
* Keeping Open Source libraries up-to-date (to avoid [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]])&lt;br /&gt;
* Static Code Quality Tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' &amp;lt;b&amp;gt;OWASP does not endorse any of the Vendors or Scanning Tools by listing them below. They are simply listed if we believe they are free for use by open source projects. We have made every effort to provide this information as accurately as possible. If you are the vendor of a free for open source tool and think this information is incomplete or incorrect, please send an e-mail to dave.wichers (at) owasp.org and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free for Open Source Tools ==&lt;br /&gt;
Tools that are free for open source projects in each of the above categories are listed below.&lt;br /&gt;
&lt;br /&gt;
=== SAST Tools ===&lt;br /&gt;
OWASP already maintains a page of known SAST tools: [[Source Code Analysis Tools]], which includes a list of those that are &amp;quot;Open Source or Free Tools Of This Type&amp;quot;. Any such tools could certainly be used.&lt;br /&gt;
&lt;br /&gt;
In addition, we are aware of the following commercial SAST tools that are free for Open Source projects:&lt;br /&gt;
* [https://scan.coverity.com/ Coverity Scan Static Analysis] - Can be integrated with Travis-CI so scans run automatically using online resources. Supports over a dozen programming languages as documented here in the section [https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html Comprehensive support for these programming languages and frameworks].&lt;br /&gt;
&lt;br /&gt;
=== DAST Tools ===&lt;br /&gt;
If your project has a web application component, we recommend running automated scans against it to look for vulnerabilities. OWASP maintains a page of known DAST Tools: [[:Category:Vulnerability Scanning Tools|Vulnerability Scanning Tools]], and the '''Licence''' column on this page indicates which of those tools have free capabilities. Our primary recommendation is to use one of these:&lt;br /&gt;
* [[OWASP Zed Attack Proxy Project|OWASP ZAP]] - A full featured free and open source DAST tool that includes both automated scanning for vulnerabilities and tools to assist expert manual web app pen testing.&lt;br /&gt;
** The ZAP team has also been working hard to make it easier to integrate ZAP into your CI/CD pipeline. (e.g., here's a [https://www.we45.com/blog/how-to-integrate-zap-into-jenkins-ci-pipeline-we45-blog blog post on how to integrate ZAP with Jenkins]).&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - Arachni is a commercially supported scanner, but its free for most use cases, including scanning open source projects.&lt;br /&gt;
We are not aware of any other commercial grade tools that offer their full featured DAST product free for open source projects.&lt;br /&gt;
&lt;br /&gt;
=== IAST Tools ===&lt;br /&gt;
IAST tools are typically geared to analyze Web Applications and Web APIs, but that is vendor specific. There may be IAST products that can perform good security analysis on non-web applications as well.&lt;br /&gt;
&lt;br /&gt;
We are aware of only one IAST Tool that is free for open source at this time:&lt;br /&gt;
* [https://www.contrastsecurity.com/contrast-community-edition Contrast Community Edition (CE)] - Fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java only.&lt;br /&gt;
&lt;br /&gt;
=== Open Source Software (OSS) Security Tools ===&lt;br /&gt;
OSS refers to the open source libraries or components that application developers leverage to quickly develop new applications and add features to existing apps. Gartner refers to the analysis of the security of these components as software composition analysis (SCA). So OSS Analysis and SCA are the same thing.&lt;br /&gt;
&lt;br /&gt;
OWASP recommends that all software projects generally try to keep the libraries they use as up-to-date as possible to reduce the likelihood of [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]]. There are two recommended approaches for this:&lt;br /&gt;
&lt;br /&gt;
==== Keeping Your Libraries Updated ====&lt;br /&gt;
Using the latest version of each library is recommended because security issues are frequently fixed 'silently' by the component maintainer. By silently, we mean without publishing a [https://cve.mitre.org/ CVE] for the security fix.&lt;br /&gt;
* [https://www.mojohaus.org/versions-maven-plugin/ Maven Versions plugin]&lt;br /&gt;
** For Maven projects, can be used to generate a report of all dependencies used and when upgrades are available for them. Either a direct report, or part of the overall project documentation using: mvn site.&lt;br /&gt;
* Dependabot - https://dependabot.com/&lt;br /&gt;
** A GitHub only service that creates pull requests to keep your dependencies up-to-date. It automatically generates a pull request for each dependency you can upgrade, which you can then ignore, or accept, as you like. It supports tons of languages.&lt;br /&gt;
** Recommended for all open source projects maintained on GitHub!&lt;br /&gt;
&lt;br /&gt;
==== Detecting Known Vulnerable Components ====&lt;br /&gt;
As an alternative, or in addition to, trying to keep all your components up-to-date, a project can specifically monitor whether any of the components they use have known vulnerable components.&lt;br /&gt;
&lt;br /&gt;
Free tools of this type:&lt;br /&gt;
* OWASP has its own free open source tool [[OWASP Dependency Check]] that is free for anyone to use.&lt;br /&gt;
* GitHub: Security alerts for vulnerable dependencies - https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/&lt;br /&gt;
** A native GitHub feature that reports known vulnerable dependencies in your GitHub projects. Supports: Java, .NET, JavaScript, Ruby, and Python. Your GitHub projects are automatically signed up for this service.&lt;br /&gt;
Commercial tools of this type that are free for open source:&lt;br /&gt;
* Contrast Community Edition (CE) (mentioned earlier) also has both Known Vulnerable Component detection and Available Updates reporting for OSS. CE supports Java only.&lt;br /&gt;
* Snyk - https://www.snyk.io - Supports Node.js, Ruby, Java, Python, Scala, Golang, .NET, PHP - Latest list here: https://snyk.io/docs&lt;br /&gt;
** A Commercial tool that identifies vulnerable components and integrates with numerous CI/CD pipelines. It is free for open source: https://snyk.io/plans&lt;br /&gt;
** If you don't want to grant Snyk write access to your repo (which it needs to auto-create pull requests), you can use the Command Line Interface (CLI) instead. See: https://snyk.io/docs/using-snyk. If you do this and want it to be free, you have to configure Snyk so it knows it's open source: https://support.snyk.io/snyk-cli/how-can-i-set-a-snyk-cli-project-as-open-source&lt;br /&gt;
*** Another benefit of using the Snyk CLI is that it won't auto-create pull requests for you (which can make these 'issues' more public than you might prefer)&lt;br /&gt;
** They also provide detailed information and remediation guidance for known vulnerabilities here: https://snyk.io/vuln&lt;br /&gt;
* SourceClear - https://www.sourceclear.com/ - Supports: Java, Ruby, JavaScript, Python, Objective C, GO, PHP&lt;br /&gt;
** They have a free trial right from their [https://www.sourceclear.com/ home page]. When the 30 day trial expires, it converts into a free &amp;quot;Personal Account&amp;quot; per: &amp;quot;Upgrade at any time to get the features that matter most to you, or choose the Personal plan when your trial ends.&amp;quot; Personal Account described here: https://www.sourceclear.com/pricing/&lt;br /&gt;
** They also make their component vulnerability data (for publicly known vulns) free to search: https://www.sourceclear.com/vulnerability-database/search#_ (Very useful when trying to research a particular library)&lt;br /&gt;
* WhiteSource Bolt - Supports 200+ programming languages. https://www.whitesourcesoftware.com/&lt;br /&gt;
** Azure version: https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt&lt;br /&gt;
** GitHub version: https://github.com/apps/whitesource-bolt-for-github Available starting in Nov. 2018.&lt;br /&gt;
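The Snyk CLI workflow described above can be sketched as follows (a minimal example, not a definitive setup; it assumes Node.js/npm is available and that you have a Snyk account to authenticate against):&lt;br /&gt;

```shell
# Install the Snyk CLI (distributed via npm)
npm install -g snyk

# Authenticate the CLI against your Snyk account
snyk auth

# From your project directory: check dependencies for known vulnerabilities
snyk test

# Optionally, take a snapshot so Snyk can alert you when new
# vulnerabilities are disclosed in these dependencies
snyk monitor
```
&lt;br /&gt;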
&lt;br /&gt;
=== Code Quality tools ===&lt;br /&gt;
Quality has a significant correlation to security. As such, we recommend open source projects also consider using good code quality tools. A few that we are aware of are:&lt;br /&gt;
* SpotBugs (https://github.com/spotbugs/spotbugs) - Open source code quality tool for Java&lt;br /&gt;
** This is the active fork of FindBugs, so if you use FindBugs, you should switch to SpotBugs.&lt;br /&gt;
** SpotBugs users should add the FindSecBugs plugin (http://find-sec-bugs.github.io/) to their SpotBugs setup, as it significantly improves on the very basic security checking native to SpotBugs.&lt;br /&gt;
&lt;br /&gt;
* SonarQube (https://www.sonarqube.org/)&lt;br /&gt;
** This is a commercially supported, very popular, free (and commercial) code quality tool. It includes most, if not all, of the FindSecBugs security rules, plus many more quality rules, and offers a free online CI setup to run it against your open source projects. SonarQube supports numerous languages: https://www.sonarqube.org/features/multi-languages/&lt;br /&gt;
&lt;br /&gt;
Please let us know if you are aware of any other high quality application security tools that are free for open source (or simply add them to this page). We are particularly interested in identifying and listing commercial tools that are free for open source, as they tend to be better and easier to use than open source (free) tools. If you are aware of any missing from this list, please add them, or let us know (dave.wichers (at) owasp.org) and we'll confirm they are free, and add them for you. Please encourage your favorite commercial tool vendor to make their tool free for open source projects as well!!&lt;br /&gt;
&lt;br /&gt;
Finally, please forward this page to the open source projects you rely on and encourage them to use these free tools!&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244571</id>
		<title>Free for Open Source Application Security Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244571"/>
				<updated>2018-10-23T21:42:40Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Add SourceClear to list of Detecting Vulnerable Components Tools&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Introduction ==&lt;br /&gt;
OWASP's mission is to help the world improve the security of its software. One of the best ways OWASP can do that is to help Open Source developers improve the software they are producing that everyone else relies on. As such, the following lists of '''automated vulnerability detection tools''' that are '''free for open source''' projects have been gathered together here to raise awareness of their availability.&lt;br /&gt;
&lt;br /&gt;
We would encourage open source projects to use the following types of tools to improve the security and quality of their code:&lt;br /&gt;
* Static Application Security Testing ([[SAST]]) Tools &lt;br /&gt;
* Dynamic Application Security Testing ([[DAST]]) Tools - (Primarily for web apps)&lt;br /&gt;
* Interactive Application Security Testing (IAST) Tools - (Primarily for web apps and web APIs)&lt;br /&gt;
* Keeping Open Source libraries up-to-date (to avoid [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]])&lt;br /&gt;
* Static Code Quality Tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' &amp;lt;b&amp;gt;OWASP does not endorse any of the Vendors or Scanning Tools by listing them below. They are simply listed if we believe they are free for use by open source projects. We have made every effort to provide this information as accurately as possible. If you are the vendor of a free for open source tool and think this information is incomplete or incorrect, please send an e-mail to dave.wichers (at) owasp.org and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free for Open Source Tools ==&lt;br /&gt;
Tools that are free for open source projects in each of the above categories are listed below.&lt;br /&gt;
&lt;br /&gt;
=== SAST Tools ===&lt;br /&gt;
OWASP already maintains a page of known SAST tools: [[Source Code Analysis Tools]], which includes a list of those that are &amp;quot;Open Source or Free Tools Of This Type&amp;quot;. Any such tools could certainly be used.&lt;br /&gt;
&lt;br /&gt;
In addition, we are aware of the following commercial SAST tools that are free for Open Source projects:&lt;br /&gt;
* [https://scan.coverity.com/ Coverity Scan Static Analysis] - Can be integrated with Travis-CI so scans run automatically using online resources. Supports over a dozen programming languages as documented in the section [https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html Comprehensive support for these programming languages and frameworks].&lt;br /&gt;
&lt;br /&gt;
=== DAST Tools ===&lt;br /&gt;
If your project has a web application component, we recommend running automated scans against it to look for vulnerabilities. OWASP maintains a page of known DAST Tools: [[:Category:Vulnerability Scanning Tools|Vulnerability Scanning Tools]], and the '''Licence''' column on this page indicates which of those tools have free capabilities. Our primary recommendation is to use one of these:&lt;br /&gt;
* [[OWASP Zed Attack Proxy Project|OWASP ZAP]] - A full featured free and open source DAST tool that includes both automated scanning for vulnerabilities and tools to assist expert manual web app pen testing.&lt;br /&gt;
** The ZAP team has also been working hard to make it easier to integrate ZAP into your CI/CD pipeline. (e.g., here's a [https://www.we45.com/blog/how-to-integrate-zap-into-jenkins-ci-pipeline-we45-blog blog post on how to integrate ZAP with Jenkins]).&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - Arachni is a commercially supported scanner, but it's free for most use cases, including scanning open source projects.&lt;br /&gt;
We are not aware of any other commercial grade tools that offer their full featured DAST product free for open source projects.&lt;br /&gt;
&lt;br /&gt;
=== IAST Tools ===&lt;br /&gt;
IAST tools are typically geared toward analyzing web applications and web APIs, but that is vendor specific. There may be IAST products that can perform good security analysis on non-web applications as well.&lt;br /&gt;
&lt;br /&gt;
We are aware of only one IAST Tool that is free for open source at this time:&lt;br /&gt;
* [https://www.contrastsecurity.com/contrast-community-edition Contrast Community Edition (CE)] - Fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java only.&lt;br /&gt;
&lt;br /&gt;
=== Open Source Software (OSS) Security Tools ===&lt;br /&gt;
OSS refers to the open source libraries or components that application developers leverage to quickly develop new applications and add features to existing apps. Gartner refers to the analysis of the security of these components as software composition analysis (SCA). So OSS Analysis and SCA are the same thing.&lt;br /&gt;
&lt;br /&gt;
OWASP recommends that all software projects generally try to keep the libraries they use as up-to-date as possible to reduce the likelihood of [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]]. There are two recommended approaches for this:&lt;br /&gt;
&lt;br /&gt;
==== Keeping Your Libraries Updated ====&lt;br /&gt;
Using the latest version of each library is recommended because security issues are frequently fixed 'silently' by the component maintainer. By silently, we mean without publishing a [https://cve.mitre.org/ CVE] for the security fix.&lt;br /&gt;
* [https://www.mojohaus.org/versions-maven-plugin/ Maven Versions plugin]&lt;br /&gt;
** For Maven projects, this plugin can generate a report of all dependencies used and any available upgrades for them, either as a standalone report or as part of the overall project documentation generated with: mvn site.&lt;br /&gt;
* Dependabot - https://dependabot.com/&lt;br /&gt;
** A GitHub only service that creates pull requests to keep your dependencies up-to-date. It automatically generates a pull request for each dependency you can upgrade, which you can then ignore, or accept, as you like. It supports tons of languages.&lt;br /&gt;
** Recommended for all open source projects maintained on GitHub!&lt;br /&gt;
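For Maven projects, the version checks described above can be run directly from the command line (a sketch; the goal names below are the Versions plugin's documented goals):&lt;br /&gt;

```shell
# Report newer versions available for your declared dependencies
mvn versions:display-dependency-updates

# Report newer versions available for the build plugins you use
mvn versions:display-plugin-updates

# Alternatively, include the report in the generated project site
mvn site
```
&lt;br /&gt;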
&lt;br /&gt;
==== Detecting Known Vulnerable Components ====&lt;br /&gt;
As an alternative to, or in addition to, trying to keep all your components up-to-date, a project can specifically monitor whether any of the components it uses have known vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
Free tools of this type:&lt;br /&gt;
* OWASP has its own free open source tool [[OWASP Dependency Check]] that is free for anyone to use.&lt;br /&gt;
* GitHub: Security alerts for vulnerable dependencies - https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/&lt;br /&gt;
** A native GitHub feature that reports known vulnerable dependencies in your GitHub projects. Supports: Java, .NET, JavaScript, Ruby, and Python. Your GitHub projects are automatically signed up for this service.&lt;br /&gt;
Commercial tools of this type that are free for open source:&lt;br /&gt;
* Contrast Community Edition (CE) (mentioned earlier) also has both Known Vulnerable Component detection and Available Updates reporting for OSS. CE supports Java only.&lt;br /&gt;
* Snyk - https://www.snyk.io - Supports Node.js, Ruby, Java, Python, Scala, Golang, .NET, PHP - Latest list here: https://snyk.io/docs&lt;br /&gt;
** A commercial tool that identifies vulnerable components and integrates with numerous CI/CD pipelines. It is free for open source: https://snyk.io/plans&lt;br /&gt;
** If you don't want to grant Snyk write access to your repo (since it can auto-create pull requests), you can use the Command Line Interface (CLI) instead. See: https://snyk.io/docs/using-snyk. If you do this and want it to be free, you have to configure Snyk so it knows the project is open source: https://support.snyk.io/snyk-cli/how-can-i-set-a-snyk-cli-project-as-open-source&lt;br /&gt;
*** Another benefit of using the Snyk CLI is that it won't auto-create pull requests for you (auto-created pull requests can make these 'issues' more public than you might prefer)&lt;br /&gt;
* SourceClear - https://www.sourceclear.com/ - Supports: Java, Ruby, JavaScript, Python, Objective-C, Go, PHP&lt;br /&gt;
** They offer a free trial right from their [https://www.sourceclear.com/ home page]. When the 30-day trial expires, it converts into a free &amp;quot;Personal Account&amp;quot; per: &amp;quot;Upgrade at any time to get the features that matter most to you, or choose the Personal plan when your trial ends.&amp;quot; The Personal Account is described here: https://www.sourceclear.com/pricing/&lt;br /&gt;
** They also make their component vulnerability data free to search: https://www.sourceclear.com/vulnerability-database/search#_ (Very useful when trying to research a particular library)&lt;br /&gt;
* WhiteSource Bolt - Supports 200+ programming languages. https://www.whitesourcesoftware.com/&lt;br /&gt;
** Azure version: https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt&lt;br /&gt;
** GitHub version: https://github.com/apps/whitesource-bolt-for-github Available starting in Nov. 2018.&lt;br /&gt;
&lt;br /&gt;
=== Code Quality tools ===&lt;br /&gt;
Quality has a significant correlation to security. As such, we recommend open source projects also consider using good code quality tools. A few that we are aware of are:&lt;br /&gt;
* SpotBugs (https://github.com/spotbugs/spotbugs) - Open source code quality tool for Java&lt;br /&gt;
** This is the active fork of FindBugs, so if you use FindBugs, you should switch to SpotBugs.&lt;br /&gt;
** SpotBugs users should add the FindSecBugs plugin (http://find-sec-bugs.github.io/) to their SpotBugs setup, as it significantly improves on the very basic security checking native to SpotBugs.&lt;br /&gt;
&lt;br /&gt;
* SonarQube (https://www.sonarqube.org/)&lt;br /&gt;
** This is a commercially supported, very popular, free (and commercial) code quality tool. It includes most, if not all, of the FindSecBugs security rules, plus many more quality rules, and offers a free online CI setup to run it against your open source projects. SonarQube supports numerous languages: https://www.sonarqube.org/features/multi-languages/&lt;br /&gt;
&lt;br /&gt;
Please let us know if you are aware of any other high quality application security tools that are free for open source (or simply add them to this page). We are particularly interested in identifying and listing commercial tools that are free for open source, as they tend to be better and easier to use than open source (free) tools. If you are aware of any missing from this list, please add them, or let us know (dave.wichers (at) owasp.org) and we'll confirm they are free, and add them for you. Please encourage your favorite commercial tool vendor to make their tool free for open source projects as well!!&lt;br /&gt;
&lt;br /&gt;
Finally, please forward this page to the open source projects you rely on and encourage them to use these free tools!&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244570</id>
		<title>Free for Open Source Application Security Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244570"/>
				<updated>2018-10-23T21:03:23Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: Add Snyk CLI instructions&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Introduction ==&lt;br /&gt;
OWASP's mission is to help the world improve the security of its software. One of the best ways OWASP can do that is to help Open Source developers improve the software they are producing that everyone else relies on. As such, the following lists of '''automated vulnerability detection tools''' that are '''free for open source''' projects have been gathered together here to raise awareness of their availability.&lt;br /&gt;
&lt;br /&gt;
We would encourage open source projects to use the following types of tools to improve the security and quality of their code:&lt;br /&gt;
* Static Application Security Testing ([[SAST]]) Tools &lt;br /&gt;
* Dynamic Application Security Testing ([[DAST]]) Tools - (Primarily for web apps)&lt;br /&gt;
* Interactive Application Security Testing (IAST) Tools - (Primarily for web apps and web APIs)&lt;br /&gt;
* Keeping Open Source libraries up-to-date (to avoid [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]])&lt;br /&gt;
* Static Code Quality Tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' &amp;lt;b&amp;gt;OWASP does not endorse any of the Vendors or Scanning Tools by listing them below. They are simply listed if we believe they are free for use by open source projects. We have made every effort to provide this information as accurately as possible. If you are the vendor of a free for open source tool and think this information is incomplete or incorrect, please send an e-mail to dave.wichers (at) owasp.org and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free for Open Source Tools ==&lt;br /&gt;
Tools that are free for open source projects in each of the above categories are listed below.&lt;br /&gt;
&lt;br /&gt;
=== SAST Tools ===&lt;br /&gt;
OWASP already maintains a page of known SAST tools: [[Source Code Analysis Tools]], which includes a list of those that are &amp;quot;Open Source or Free Tools Of This Type&amp;quot;. Any such tools could certainly be used.&lt;br /&gt;
&lt;br /&gt;
In addition, we are aware of the following commercial SAST tools that are free for Open Source projects:&lt;br /&gt;
* [https://scan.coverity.com/ Coverity Scan Static Analysis] - Can be integrated with Travis-CI so scans run automatically using online resources. Supports over a dozen programming languages as documented in the section [https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html Comprehensive support for these programming languages and frameworks].&lt;br /&gt;
&lt;br /&gt;
=== DAST Tools ===&lt;br /&gt;
If your project has a web application component, we recommend running automated scans against it to look for vulnerabilities. OWASP maintains a page of known DAST Tools: [[:Category:Vulnerability Scanning Tools|Vulnerability Scanning Tools]], and the '''Licence''' column on this page indicates which of those tools have free capabilities. Our primary recommendation is to use one of these:&lt;br /&gt;
* [[OWASP Zed Attack Proxy Project|OWASP ZAP]] - A full featured free and open source DAST tool that includes both automated scanning for vulnerabilities and tools to assist expert manual web app pen testing.&lt;br /&gt;
** The ZAP team has also been working hard to make it easier to integrate ZAP into your CI/CD pipeline. (e.g., here's a [https://www.we45.com/blog/how-to-integrate-zap-into-jenkins-ci-pipeline-we45-blog blog post on how to integrate ZAP with Jenkins]).&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - Arachni is a commercially supported scanner, but it's free for most use cases, including scanning open source projects.&lt;br /&gt;
We are not aware of any other commercial grade tools that offer their full featured DAST product free for open source projects.&lt;br /&gt;
&lt;br /&gt;
=== IAST Tools ===&lt;br /&gt;
IAST tools are typically geared toward analyzing web applications and web APIs, but that is vendor specific. There may be IAST products that can perform good security analysis on non-web applications as well.&lt;br /&gt;
&lt;br /&gt;
We are aware of only one IAST Tool that is free for open source at this time:&lt;br /&gt;
* [https://www.contrastsecurity.com/contrast-community-edition Contrast Community Edition (CE)] - Fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java only.&lt;br /&gt;
&lt;br /&gt;
=== Open Source Software (OSS) Security Tools ===&lt;br /&gt;
OSS refers to the open source libraries or components that application developers leverage to quickly develop new applications and add features to existing apps. Gartner refers to the analysis of the security of these components as software composition analysis (SCA). So OSS Analysis and SCA are the same thing.&lt;br /&gt;
&lt;br /&gt;
OWASP recommends that all software projects generally try to keep the libraries they use as up-to-date as possible to reduce the likelihood of [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]]. There are two recommended approaches for this:&lt;br /&gt;
&lt;br /&gt;
==== Keeping Your Libraries Updated ====&lt;br /&gt;
Using the latest version of each library is recommended because security issues are frequently fixed 'silently' by the component maintainer. By silently, we mean without publishing a [https://cve.mitre.org/ CVE] for the security fix.&lt;br /&gt;
* [https://www.mojohaus.org/versions-maven-plugin/ Maven Versions plugin]&lt;br /&gt;
** For Maven projects, this plugin can generate a report of all dependencies used and any available upgrades for them, either as a standalone report or as part of the overall project documentation generated with: mvn site.&lt;br /&gt;
* Dependabot - https://dependabot.com/&lt;br /&gt;
** A GitHub only service that creates pull requests to keep your dependencies up-to-date. It automatically generates a pull request for each dependency you can upgrade, which you can then ignore, or accept, as you like. It supports tons of languages.&lt;br /&gt;
** Recommended for all open source projects maintained on GitHub!&lt;br /&gt;
&lt;br /&gt;
==== Detecting Known Vulnerable Components ====&lt;br /&gt;
As an alternative to, or in addition to, trying to keep all your components up-to-date, a project can specifically monitor whether any of the components it uses have known vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
Free tools of this type:&lt;br /&gt;
* OWASP has its own free open source tool [[OWASP Dependency Check]] that is free for anyone to use.&lt;br /&gt;
* GitHub: Security alerts for vulnerable dependencies - https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/&lt;br /&gt;
** A native GitHub feature that reports known vulnerable dependencies in your GitHub projects. Supports: Java, .NET, JavaScript, Ruby, and Python. Your GitHub projects are automatically signed up for this service.&lt;br /&gt;
Commercial tools of this type that are free for open source:&lt;br /&gt;
* Contrast Community Edition (CE) (mentioned earlier) also has both Known Vulnerable Component detection and Available Updates reporting for OSS. CE supports Java only.&lt;br /&gt;
* Snyk - https://www.snyk.io - Supports Node.js, Ruby, Java, Python, Scala, Golang, .NET, PHP - Latest list here: https://snyk.io/docs&lt;br /&gt;
** A commercial tool that identifies vulnerable components and integrates with numerous CI/CD pipelines. It is free for open source: https://snyk.io/plans&lt;br /&gt;
** If you don't want to grant Snyk write access to your repo (since it can auto-create pull requests), you can use the Command Line Interface (CLI) instead. See: https://snyk.io/docs/using-snyk. If you do this and want it to be free, you have to configure Snyk so it knows the project is open source: https://support.snyk.io/snyk-cli/how-can-i-set-a-snyk-cli-project-as-open-source&lt;br /&gt;
*** Another benefit of using the Snyk CLI is that it won't auto-create pull requests for you (auto-created pull requests can make these 'issues' more public than you might prefer)&lt;br /&gt;
* WhiteSource Bolt - Supports 200+ programming languages. https://www.whitesourcesoftware.com/&lt;br /&gt;
** Azure version: https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt&lt;br /&gt;
** GitHub version: https://github.com/apps/whitesource-bolt-for-github Available starting in Nov. 2018.&lt;br /&gt;
&lt;br /&gt;
=== Code Quality tools ===&lt;br /&gt;
Quality has a significant correlation to security. As such, we recommend open source projects also consider using good code quality tools. A few that we are aware of are:&lt;br /&gt;
* SpotBugs (https://github.com/spotbugs/spotbugs) - Open source code quality tool for Java&lt;br /&gt;
** This is the active fork of FindBugs, so if you use FindBugs, you should switch to SpotBugs.&lt;br /&gt;
** SpotBugs users should add the FindSecBugs plugin (http://find-sec-bugs.github.io/) to their SpotBugs setup, as it significantly improves on the very basic security checking native to SpotBugs.&lt;br /&gt;
&lt;br /&gt;
* SonarQube (https://www.sonarqube.org/)&lt;br /&gt;
** This is a commercially supported, very popular, free (and commercial) code quality tool. It includes most, if not all, of the FindSecBugs security rules, plus many more quality rules, and offers a free online CI setup to run it against your open source projects. SonarQube supports numerous languages: https://www.sonarqube.org/features/multi-languages/&lt;br /&gt;
&lt;br /&gt;
Please let us know if you are aware of any other high quality application security tools that are free for open source (or simply add them to this page). We are particularly interested in identifying and listing commercial tools that are free for open source, as they tend to be better and easier to use than open source (free) tools. If you are aware of any missing from this list, please add them, or let us know (dave.wichers (at) owasp.org) and we'll confirm they are free, and add them for you. Please encourage your favorite commercial tool vendor to make their tool free for open source projects as well!!&lt;br /&gt;
&lt;br /&gt;
Finally, please forward this page to the open source projects you rely on and encourage them to use these free tools!&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244358</id>
		<title>Free for Open Source Application Security Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Free_for_Open_Source_Application_Security_Tools&amp;diff=244358"/>
				<updated>2018-10-18T14:55:19Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* Detecting Known Vulnerable Components */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__&lt;br /&gt;
== Introduction ==&lt;br /&gt;
OWASP's mission is to help the world improve the security of its software. One of the best ways OWASP can do that is to help Open Source developers improve the software they are producing that everyone else relies on. As such, the following lists of '''automated vulnerability detection tools''' that are '''free for open source''' projects have been gathered together here to raise awareness of their availability.&lt;br /&gt;
&lt;br /&gt;
We would encourage open source projects to use the following types of tools to improve the security and quality of their code:&lt;br /&gt;
* Static Application Security Testing ([[SAST]]) Tools &lt;br /&gt;
* Dynamic Application Security Testing ([[DAST]]) Tools - (Primarily for web apps)&lt;br /&gt;
* Interactive Application Security Testing (IAST) Tools - (Primarily for web apps and web APIs)&lt;br /&gt;
* Keeping Open Source libraries up-to-date (to avoid [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]])&lt;br /&gt;
* Static Code Quality Tools&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' &amp;lt;b&amp;gt;OWASP does not endorse any of the Vendors or Scanning Tools by listing them below. They are simply listed if we believe they are free for use by open source projects. We have made every effort to provide this information as accurately as possible. If you are the vendor of a free for open source tool and think this information is incomplete or incorrect, please send an e-mail to dave.wichers (at) owasp.org and we will make every effort to correct this information.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Free for Open Source Tools ==&lt;br /&gt;
Tools that are free for open source projects in each of the above categories are listed below.&lt;br /&gt;
&lt;br /&gt;
=== SAST Tools ===&lt;br /&gt;
OWASP already maintains a page of known SAST tools: [[Source Code Analysis Tools]], which includes a list of those that are &amp;quot;Open Source or Free Tools Of This Type&amp;quot;. Any such tools could certainly be used.&lt;br /&gt;
&lt;br /&gt;
In addition, we are aware of the following commercial SAST tools that are free for Open Source projects:&lt;br /&gt;
* [https://scan.coverity.com/ Coverity Scan Static Analysis] - Can be integrated with Travis-CI so scans run automatically using online resources. Supports over a dozen programming languages as documented in the section [https://www.synopsys.com/software-integrity/security-testing/static-analysis-sast.html Comprehensive support for these programming languages and frameworks].&lt;br /&gt;
&lt;br /&gt;
=== DAST Tools ===&lt;br /&gt;
If your project has a web application component, we recommend running automated scans against it to look for vulnerabilities. OWASP maintains a page of known DAST Tools: [[:Category:Vulnerability Scanning Tools|Vulnerability Scanning Tools]], and the '''Licence''' column on this page indicates which of those tools have free capabilities. Our primary recommendation is to use one of these:&lt;br /&gt;
* [[OWASP Zed Attack Proxy Project|OWASP ZAP]] - A full featured free and open source DAST tool that includes both automated scanning for vulnerabilities and tools to assist expert manual web app pen testing.&lt;br /&gt;
** The ZAP team has also been working hard to make it easier to integrate ZAP into your CI/CD pipeline. (e.g., here's a [https://www.we45.com/blog/how-to-integrate-zap-into-jenkins-ci-pipeline-we45-blog blog post on how to integrate ZAP with Jenkins]).&lt;br /&gt;
* [http://www.arachni-scanner.com/ Arachni] - Arachni is a commercially supported scanner, but it's free for most use cases, including scanning open source projects.&lt;br /&gt;
We are not aware of any other commercial grade tools that offer their full featured DAST product free for open source projects.&lt;br /&gt;
&lt;br /&gt;
=== IAST Tools ===&lt;br /&gt;
IAST tools are typically geared toward analyzing web applications and web APIs, but that is vendor specific. There may be IAST products that can perform good security analysis on non-web applications as well.&lt;br /&gt;
&lt;br /&gt;
We are aware of only one IAST Tool that is free for open source at this time:&lt;br /&gt;
* [https://www.contrastsecurity.com/contrast-community-edition Contrast Community Edition (CE)] - Fully featured version for 1 app and up to 5 users (some Enterprise features disabled). Contrast CE supports Java only.&lt;br /&gt;
&lt;br /&gt;
=== Open Source Software (OSS) Security Tools ===&lt;br /&gt;
OSS refers to the open source libraries or components that application developers leverage to quickly develop new applications and add features to existing apps. Gartner refers to the analysis of the security of these components as software composition analysis (SCA). So OSS Analysis and SCA are the same thing.&lt;br /&gt;
&lt;br /&gt;
OWASP recommends that all software projects generally try to keep the libraries they use as up-to-date as possible to reduce the likelihood of [[Top 10-2017 A9-Using Components with Known Vulnerabilities|Using Components with Known Vulnerabilities (OWASP Top 10-2017 A9)]]. There are two recommended approaches for this:&lt;br /&gt;
&lt;br /&gt;
==== Keeping Your Libraries Updated ====&lt;br /&gt;
Using the latest version of each library is recommended because security issues are frequently fixed 'silently' by the component maintainer. By silently, we mean without publishing a [https://cve.mitre.org/ CVE] for the security fix.&lt;br /&gt;
* [https://www.mojohaus.org/versions-maven-plugin/ Maven Versions plugin]&lt;br /&gt;
** For Maven projects, this plugin can generate a report of all dependencies used and any available upgrades for them, either as a standalone report or as part of the overall project documentation generated with: mvn site.&lt;br /&gt;
* Dependabot - https://dependabot.com/&lt;br /&gt;
** A GitHub only service that creates pull requests to keep your dependencies up-to-date. It automatically generates a pull request for each dependency you can upgrade, which you can then ignore, or accept, as you like. It supports tons of languages.&lt;br /&gt;
** Recommended for all open source projects maintained on GitHub!&lt;br /&gt;
&lt;br /&gt;
==== Detecting Known Vulnerable Components ====&lt;br /&gt;
As an alternative to, or in addition to, trying to keep all your components up-to-date, a project can specifically monitor whether any of the components it uses have known vulnerabilities.&lt;br /&gt;
&lt;br /&gt;
Free tools of this type:&lt;br /&gt;
* OWASP's own open source tool, [[OWASP Dependency Check]], is free for anyone to use.&lt;br /&gt;
* GitHub: Security alerts for vulnerable dependencies - https://help.github.com/articles/about-security-alerts-for-vulnerable-dependencies/&lt;br /&gt;
** A native GitHub feature that reports known vulnerable dependencies in your GitHub projects. Supports: Java, .NET, JavaScript, Ruby, and Python. Your GitHub projects are automatically signed up for this service.&lt;br /&gt;
Commercial tools of this type that are free for open source:&lt;br /&gt;
* Contrast Community Edition (CE) (mentioned earlier) also has both Known Vulnerable Component detection and Available Updates reporting for OSS. CE supports Java only.&lt;br /&gt;
* Snyk - https://www.snyk.io &lt;br /&gt;
** A commercial tool that identifies vulnerable components and integrates with numerous CI/CD pipelines. It is free for open source: https://snyk.io/plans&lt;br /&gt;
** Supports Node.js, Ruby, Java, Python, Scala, Golang, .NET, PHP - Latest list here: https://snyk.io/docs&lt;br /&gt;
* WhiteSource Bolt - Supports 200+ programming languages. https://www.whitesourcesoftware.com/&lt;br /&gt;
** Azure version: https://marketplace.visualstudio.com/items?itemName=whitesource.ws-bolt&lt;br /&gt;
** GitHub version: https://github.com/apps/whitesource-bolt-for-github Available starting in Nov. 2018.&lt;br /&gt;
&lt;br /&gt;
=== Code Quality tools ===&lt;br /&gt;
Code quality correlates significantly with security, so we recommend that open source projects also consider using good code quality tools. A few that we are aware of are:&lt;br /&gt;
* SpotBugs (https://github.com/spotbugs/spotbugs) - Open source code quality tool for Java&lt;br /&gt;
** This is the active fork of FindBugs, so if you use FindBugs, you should switch to SpotBugs.&lt;br /&gt;
** SpotBugs users should add the FindSecBugs plugin (http://find-sec-bugs.github.io/) to their SpotBugs setup, as it significantly improves on the very basic security checking native to SpotBugs.&lt;br /&gt;
&lt;br /&gt;
* SonarQube (https://www.sonarqube.org/)&lt;br /&gt;
** This is a commercially supported, very popular code quality tool, available in both free and commercial editions. It includes most, if not all, of the FindSecBugs security rules plus many more quality rules, and offers a free online CI setup for running it against your open source projects. SonarQube supports numerous languages: https://www.sonarqube.org/features/multi-languages/&lt;br /&gt;
&lt;br /&gt;
Please let us know if you are aware of any other high-quality application security tools that are free for open source (or simply add them to this page). We are particularly interested in identifying and listing commercial tools that are free for open source, as they tend to be more capable and easier to use than open source (free) tools. If you are aware of any missing from this list, please add them, or let us know (dave.wichers (at) owasp.org) and we will confirm they are free and add them for you. Please also encourage your favorite commercial tool vendor to make their tool free for open source projects!&lt;br /&gt;
&lt;br /&gt;
Finally, please forward this page to the open source projects you rely on and encourage them to use these free tools!&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	<entry>
		<id>https://wiki.owasp.org/index.php?title=Category:Vulnerability_Scanning_Tools&amp;diff=244320</id>
		<title>Category:Vulnerability Scanning Tools</title>
		<link rel="alternate" type="text/html" href="https://wiki.owasp.org/index.php?title=Category:Vulnerability_Scanning_Tools&amp;diff=244320"/>
				<updated>2018-10-17T15:27:20Z</updated>
		
		<summary type="html">&lt;p&gt;Wichers: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Description  ==&lt;br /&gt;
&lt;br /&gt;
Web Application Vulnerability Scanners are automated tools that scan web applications, normally from the outside, to look for security vulnerabilities such as [[Cross-site scripting]], [[SQL Injection]], [[Command Injection]], [[Path Traversal]] and insecure server configuration. This category of tools is frequently referred to as [https://www.techopedia.com/definition/30958/dynamic-application-security-testing-dast Dynamic Application Security Testing] (DAST) Tools. A large number of both commercial and open source tools of this type are available, and all of these tools have their own strengths and weaknesses. If you are interested in the effectiveness of DAST tools, check out the OWASP [[Benchmark]] project, which is scientifically measuring the effectiveness of all types of vulnerability detection tools, including DAST.&lt;br /&gt;
&lt;br /&gt;
Here we provide a list of vulnerability scanning tools currently available on the market.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; '''Disclaimer:''' The tools listed in the table below are presented in alphabetical order. &amp;lt;b&amp;gt;OWASP does not endorse any of the vendors or scanning tools by listing them in the table below. We have made every effort to provide this information as accurately as possible. If you are the vendor of a tool below and think this information is incomplete or incorrect, please send an e-mail to our [mailto:owasp_ha_vulnerability_scanner_project@lists.owasp.org mailing list] and we will make every effort to correct it.&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OWASP is aware of the [http://sectooladdict.blogspot.com/ '''Web Application Vulnerability Scanner Evaluation Project (WAVSEP)''']. WAVSEP is completely unrelated to OWASP and we do not endorse its results, nor any of the DAST tools it evaluates. However, the results provided by WAVSEP may be helpful to someone interested in researching or selecting free and/or commercial DAST tools for their projects. This project has far more detail on DAST tools and their features than this OWASP DAST page.&lt;br /&gt;
&lt;br /&gt;
== Tools Listing  ==&lt;br /&gt;
&lt;br /&gt;
{{:Template:OWASP Tool Headings}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.acunetix.com/ Acunetix WVS] || tool_owner = Acunetix || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.ibm.com/us-en/marketplace/application-security-on-cloud Application Security on Cloud] || tool_owner = IBM || tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www-03.ibm.com/software/products/en/appscan-standard AppScan] || tool_owner = IBM || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.trustwave.com/Products/Application-Security/App-Scanner-Family/App-Scanner-Enterprise/ App Scanner] || tool_owner = Trustwave || tool_licence = Commercial || tool_platforms = Windows }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.rapid7.com/products/appspider/ AppSpider] || tool_owner = Rapid7 || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://apptrana.indusface.com/basic/ AppTrana Basic] || tool_owner = AppTrana || tool_licence = Free (Limited Capability) || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.arachni-scanner.com/ Arachni] || tool_owner = Arachni|| tool_licence = Free for most use cases || tool_platforms = Most platforms supported}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.scanmyserver.com/ AVDS] || tool_owner = Beyond Security || tool_licence = Commercial / Free (Limited Capability)|| tool_platforms = SaaS }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.blueclosure.com BlueClosure BC Detect] || tool_owner = BlueClosure || tool_licence = Commercial, 2 weeks trial || tool_platforms = Most platforms supported}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.portswigger.net/ Burp Suite] || tool_owner = PortSwigger || tool_licence = Commercial / Free (Limited Capability)|| tool_platforms = Most platforms supported }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://contrastsecurity.com Contrast] || tool_owner = Contrast Security || tool_licence = Commercial / Free (Full featured for 1 App) || tool_platforms = SaaS or On-Premises }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://detectify.com/ Detectify] || tool_owner = Detectify || tool_licence = Commercial || tool_platforms = SaaS }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.digifort.se/en/scanner Digifort- Inspect] || tool_owner = Digifort|| tool_licence = Commercial || tool_platforms = SaaS }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.edgescan.com/ edgescan] || tool_owner = edgescan|| tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.gamasec.com/Gamascan.aspx GamaScan] || tool_owner = GamaSec || tool_licence = Commercial || tool_platforms = Windows }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://rgaucher.info/beta/grabber/ Grabber] || tool_owner = Romain Gaucher || tool_licence = Open Source || tool_platforms = Python 2.4, BeautifulSoup and PyXML}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://gravityscan.com/ Gravityscan] || tool_owner = Defiant, Inc. || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://sourceforge.net/p/grendel/code/ci/c59780bfd41bdf34cc13b27bc3ce694fd3cb7456/tree/ Grendel-Scan] || tool_owner = David Byrne || tool_licence = Open Source || tool_platforms = Windows, Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.golismero.com GoLismero] || tool_owner = GoLismero Team || tool_licence = GPLv2.0 || tool_platforms = Windows, Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.ikare-monitoring.com/ IKare] || tool_owner = ITrust || tool_licence = Commercial || tool_platforms = N/A }}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.htbridge.com/immuniweb/ ImmuniWeb] || tool_owner = High-Tech Bridge || tool_licence = Commercial  / Free (Limited Capability)|| tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.indusface.com/index.php/products/web-application-scanning Indusface Web Application Scanning] || tool_owner = Indusface || tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.nstalker.com/ N-Stealth] || tool_owner = N-Stalker || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.tenable.com/products/tenable-io/web-application-scanning/ Nessus] || tool_owner = Tenable || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.mavitunasecurity.com/ Netsparker] || tool_owner = MavitunaSecurity || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.rapid7.com/products/nexpose-community-edition.jsp Nexpose] || tool_owner = Rapid7 || tool_licence = Commercial / Free (Limited Capability)|| tool_platforms = Windows/Linux}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.cirt.net/nikto2 Nikto] || tool_owner = CIRT || tool_licence = Open Source|| tool_platforms = Unix/Linux}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.milescan.com/ ParosPro] || tool_owner = MileSCAN || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://probely.com Probe.ly] || tool_owner = Probe.ly || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.websecurify.com/desktop/proxy.html Proxy.app] || tool_owner = Websecurify || tool_licence = Commercial || tool_platforms = Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.qualys.com/products/qg_suite/was/ QualysGuard] || tool_owner = Qualys || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.beyondtrust.com/Products/RetinaNetworkSecurityScanner/ Retina] || tool_owner = BeyondTrust || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.orvant.com Securus] || tool_owner = Orvant, Inc || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.whitehatsec.com/home/services/services.html Sentinel] || tool_owner = WhiteHat Security || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.parasoft.com/products/article.jsp?articleId=3169&amp;amp;redname=webtesting&amp;amp;referred=webtesting SOATest] || tool_owner = Parasoft || tool_licence = Commercial || tool_platforms = Windows / Linux / Solaris}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.tinfoilsecurity.com Tinfoil Security] || tool_owner = Tinfoil Security, Inc. || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = SaaS or On-Premises}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.trustwave.com/external-vulnerability-scanning.php Trustkeeper Scanner] || tool_owner = Trustwave SpiderLabs || tool_licence = Commercial || tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://subgraph.com/vega/ Vega] || tool_owner = Subgraph || tool_licence = Open Source || tool_platforms = Windows, Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://wapiti.sourceforge.net/ Wapiti] || tool_owner = Informática Gesfor || tool_licence = Open Source || tool_platforms = Windows, Unix/Linux and Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.defensecode.com/webscanner.php Web Security Scanner] || tool_owner = DefenseCode || tool_licence = Commercial || tool_platforms = On-Premises}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.tripwire.com/it-security-software/enterprise-vulnerability-management/web-application-vulnerability-scanning/ WebApp360] || tool_owner = TripWire || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://webcookies.org WebCookies] || tool_owner = WebCookies || tool_licence = Free|| tool_platforms = SaaS}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www8.hp.com/us/en/software-solutions/software.html?compURI=1341991#.Uuf0KBAo4iw WebInspect] || tool_owner = HP || tool_licence = Commercial || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.websecurify.com/desktop/webreaver.html WebReaver] || tool_owner = Websecurify || tool_licence = Commercial || tool_platforms = Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.german-websecurity.com/en/products/webscanservice/product-details/overview/ WebScanService] || tool_owner = German Web Security || tool_licence = Commercial || tool_platforms = N/A}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://suite.websecurify.com/ Websecurify Suite] || tool_owner = Websecurify || tool_licence = Commercial / Free (Limited Capability) || tool_platforms = Windows, Linux, Macintosh}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.sensepost.com/research/wikto/ Wikto] || tool_owner = Sensepost || tool_licence = Open Source || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [http://www.w3af.org/ w3af] || tool_owner = w3af.org || tool_licence = GPLv2.0 || tool_platforms = Linux and Mac}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.owasp.org/index.php/OWASP_Xenotix_XSS_Exploit_Framework Xenotix XSS Exploit Framework] || tool_owner = OWASP || tool_licence = Open Source || tool_platforms = Windows}}&lt;br /&gt;
{{OWASP Tool Info || tool_name = [https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project Zed Attack Proxy] || tool_owner = OWASP || tool_licence = Open Source || tool_platforms = Windows, Unix/Linux and Macintosh}}&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== References  ==&lt;br /&gt;
&lt;br /&gt;
*[[Source_Code_Analysis_Tools | SAST Tools]] - OWASP page with similar information on Static Application Security Testing (SAST) Tools&lt;br /&gt;
*http://sectooladdict.blogspot.com/ - Web Application Vulnerability Scanner Evaluation Project (WAVSEP)&lt;br /&gt;
*http://projects.webappsec.org/Web-Application-Security-Scanner-Evaluation-Criteria - v1.0 (2009)&lt;br /&gt;
*http://www.slideshare.net/lbsuto/accuracy-and-timecostsofwebappscanners - White Paper: Analyzing the Accuracy and Time Costs of Web Application Security Scanners - By Larry Suto (2010)&lt;br /&gt;
*http://samate.nist.gov/index.php/Web_Application_Vulnerability_Scanners.html - NIST home page which links to: NIST Special Publication 500-269: Software Assurance Tools: Web Application Security Scanner Functional Specification Version 1.0 (21 August, 2007)&lt;br /&gt;
*http://www.softwareqatest.com/qatweb1.html#SECURITY - A list of Web Site Security Test Tools. (Has both DAST and SAST tools)&lt;br /&gt;
&lt;br /&gt;
[[Category:OWASP_Tools_Project]]&lt;/div&gt;</summary>
		<author><name>Wichers</name></author>	</entry>

	</feed>