Difference between revisions of "Benchmark"
From OWASP
= Main =
<div style="width:100%;height:100px;border:0;margin:0;overflow: hidden;">[[File:Lab_big.jpg|link=OWASP_Project_Stages#tab.3DLab_Projects]]</div>
{| style="padding: 0;margin:0;margin-top:10px;text-align:left;"
|-
| valign="top" style="border-right: 1px dotted gray;padding-right:25px;" |
The OWASP Benchmark for Security Automation (OWASP Benchmark) is a free and open test suite designed to evaluate the speed, coverage, and accuracy of automated software vulnerability detection tools and services (henceforth simply referred to as 'tools'). Without the ability to measure these tools, it is difficult to understand their strengths and weaknesses or to compare them to each other. Each version of the OWASP Benchmark contains thousands of test cases that are fully runnable and exploitable, each of which maps to the appropriate CWE number for that vulnerability.

You can use the OWASP Benchmark with [[Source_Code_Analysis_Tools | Static Application Security Testing (SAST)]] tools, [[:Category:Vulnerability_Scanning_Tools | Dynamic Application Security Testing (DAST)]] tools like OWASP [[ZAP]], and Interactive Application Security Testing (IAST) tools. The Benchmark is implemented in Java; future versions may expand to include other languages.
==Benchmark Project Scoring Philosophy==
Anyone can use this Benchmark to evaluate vulnerability detection tools. The basic steps are (see the example commands after this list):
# Download the Benchmark from GitHub
# Run your tools against the Benchmark
# Run the BenchmarkScore tool on the reports from your tools
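For example, a minimal command-line sketch of these steps (the scoring script name shown is the one used in the Contrast walkthrough later on this page; use the .bat variants on Windows):
 git clone https://github.com/OWASP/Benchmark.git   --> Step 1: get the Benchmark from GitHub
 cd Benchmark
 # Step 2: run your tool against the Benchmark and save its report into the results/ folder
 ./createScorecards.sh   --> Step 3: runs BenchmarkScore and writes scorecards to the scorecard/ directory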
'''Free Static Application Security Testing (SAST) Tools:'''
* [https://pmd.github.io/ PMD] (which really has no security rules) - .xml results file
* [http://findbugs.sourceforge.net/ FindBugs] - .xml results file (Note: FindBugs hasn't been updated since 2015. Use SpotBugs instead (see below))
* [https://www.sonarqube.org/downloads/ SonarQube] - .xml results file
* [https://spotbugs.github.io/ SpotBugs] - .xml results file. This is the successor to FindBugs.
* SpotBugs with the [http://find-sec-bugs.github.io/ FindSecurityBugs plugin] - .xml results file

Note: We looked into supporting [http://checkstyle.sourceforge.net/ Checkstyle] but it has no security rules, just like PMD. The [http://fb-contrib.sourceforge.net/ fb-contrib] FindBugs plugin doesn't have any security rules either. We did test [http://errorprone.info/ Error Prone], and found that it does report some use of [http://errorprone.info/bugpattern/InsecureCipherMode insecure ciphers (CWE-327)], but that's it.
'''Commercial SAST Tools:'''
* [https://www.castsoftware.com/products/application-intelligence-platform CAST Application Intelligence Platform (AIP)] - .xml results file
* [https://www.checkmarx.com/products/static-application-security-testing/ Checkmarx CxSAST] - .xml results file
* [https://www.ibm.com/us-en/marketplace/ibm-appscan-source IBM AppScan Source (Standalone and Cloud)] - .ozasmt or .xml results file
* [https://juliasoft.com/solutions/julia-for-security/ Julia Analyzer] - .xml results file
* [https://www.kiuwan.com/code-security-sast/ Kiuwan Code Security] - .threadfix results file
* [https://software.microfocus.com/en-us/products/static-code-analysis-sast/overview Micro Focus (Formerly HPE) Fortify (On-Demand and stand-alone versions)] - .fpr results file
* [https://www.parasoft.com/products/jtest/ Parasoft Jtest] - .xml results file
* [https://semmle.com/lgtm Semmle LGTM] - .sarif results file
* [https://www.shiftleft.io/product/ ShiftLeft SAST] - .sl results file (Benchmark-specific format. Ask the vendor how to generate this.)
* [https://snappycodeaudit.com/category/static-code-analysis Snappycode Audit's SnappyTick Source Edition (SAST)] - .xml results file
* [https://www.sourcemeter.com/features/ SourceMeter] - .txt results file of ALL results from VulnerabilityHunter
* [https://www.synopsys.com/content/dam/synopsys/sig-assets/datasheets/SAST-Coverity-datasheet.pdf Synopsys Static Analysis (Formerly Coverity Code Advisor) (On-Demand and stand-alone versions)] - .json results file (You can scan Benchmark w/Coverity for free. See: https://scan.coverity.com/)
* [https://www.defensecode.com/thunderscan.php Thunderscan SAST] - .xml results file
* [https://www.veracode.com/products/binary-static-analysis-sast Veracode SAST] - .xml results file
* [https://www.rigs-it.com/xanitizer/ XANITIZER] - .xml results file ([https://www.rigs-it.com/wp-content/uploads/2018/03/howtosetupxanitizerforowaspbenchmarkproject.pdf Their white paper on how to set up Xanitizer to scan the Benchmark.]) (Free trial available)

We are looking for results for other commercial static analysis tools like [https://www.grammatech.com/products/codesonar Grammatech CodeSonar], [https://www.roguewave.com/products-services/klocwork RogueWave's Klocwork], etc. If you have a license for any static analysis tool not already listed above and can run it against the Benchmark, sending us the results file would be very helpful.

The free SAST tools come bundled with the Benchmark so you can run them yourselves. If you have a license for any commercial SAST tool, you can also run it against the Benchmark. Just put your results files in the /results folder of the project and run the BenchmarkScore script for your platform (.sh / .bat). It will generate a scorecard in the /scorecard directory for every supported tool you have results for, as in the example below.
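For instance, scoring a single commercial tool's report might look like this (the results file name is made up for illustration):
 cp MyTool-Benchmark-results.xml results/   --> put your tool's report in the project's results folder (example file name)
 ./createScorecards.sh                      --> regenerates scorecards in the scorecard/ directory (.bat on Windows)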
'''Commercial DAST Tools:'''
* [https://www.acunetix.com/vulnerability-scanner/ Acunetix Web Vulnerability Scanner (WVS)] - .xml results file (Generated using the [https://www.acunetix.com/resources/wvs7manual.pdf command line interface (see Chapter 10)] /ExportXML switch)
* [https://portswigger.net/burp Burp Pro] - .xml results file
* [https://www.ibm.com/us-en/marketplace/appscan-standard IBM AppScan] - .xml results file
* [https://software.microfocus.com/en-us/products/webinspect-dynamic-analysis-dast/overview Micro Focus (Formerly HPE) WebInspect] - .xml results file
* [https://www.netsparker.com/web-vulnerability-scanner/ Netsparker] - .xml results file
* [https://www.qualys.com/apps/web-app-scanning/ Qualys Web App Scanner] - .xml results file
* [https://www.rapid7.com/products/appspider/ Rapid7 AppSpider] - .xml results file

If you have access to other DAST Tools, PLEASE RUN THEM FOR US against the Benchmark, and send us the results file so we can build a scorecard generator for that tool.
'''Commercial Interactive Application Security Testing (IAST) Tools:'''
* [https://www.contrastsecurity.com/interactive-application-security-testing-iast Contrast Assess] - .zip results file (You can scan Benchmark w/Contrast for free. See: https://www.contrastsecurity.com/contrast-community-edition)
* [https://hdivsecurity.com/interactive-application-security-testing-iast Hdiv Detection (IAST)] - .hlg results file
* [https://www.synopsys.com/software-integrity/security-testing/interactive-application-security-testing.html Seeker IAST] - .csv results file

'''Commercial Hybrid Analysis Application Security Testing Tools:'''
GIT: http://git-scm.com/ or https://github.com/
Maven: https://maven.apache.org/ (Version: 3.2.3 or newer works.)
Java: http://www.oracle.com/technetwork/java/javase/downloads/index.html (Java 7 or 8) (64-bit)
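To confirm the prerequisites are in place, you can check the installed versions from a terminal:
 git --version
 mvn -version    --> should report Maven 3.2.3 or newer
 java -version   --> should report a 64-bit Java 7 or 8 runtime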
==Getting, Building, and Running the Benchmark==
We have several preconstructed VMs or instructions on how to build one that you can use instead:
* Docker: A Dockerfile is checked into the project [https://github.com/OWASP/Benchmark/blob/master/VMs/Dockerfile here]. This Dockerfile should automatically produce a Docker VM with the latest Benchmark project files. After you have Docker installed, cd to /VMs and then run:
 ./buildDockerImage.sh   --> This builds the Docker Benchmark VM (This will take a WHILE)
 docker images           --> You should see the new benchmark:latest image in the list provided
 # The Benchmark Docker Image only has to be created once.

To run the Benchmark in your Docker VM, just run:
 ./runDockerImage.sh     --> This pulls in any updates to Benchmark since the image was built, builds everything, and starts a remotely accessible Benchmark web app.
If successful, you should see this at the end:
 [INFO] [talledLocalContainer] Tomcat 8.x started on port [8443]
 [INFO] Press Ctrl-C to stop the container...
Then simply navigate to: https://localhost:8443/benchmark from the machine you are running Docker on.

Or, if you want to access it from a different machine:
 docker-machine ls (in a different terminal)   --> To get the IP the Docker VM is exporting (e.g., tcp://192.168.99.100:2376)
Navigate to: https://192.168.99.100:8443/benchmark in your browser (using the above IP as an example)
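To quickly check from the Docker host that the web app is responding, something like the following should work (the -k flag skips certificate validation, assuming the app serves HTTPS with a self-signed certificate):
 curl -k https://localhost:8443/benchmark   --> should return the Benchmark home page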
* Amazon Web Services (AWS) - Here's how you set up the Benchmark on an AWS VM:
 sudo yum install git
 sudo yum install maven
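From there, the flow is the usual get/build/run sequence; a minimal sketch, assuming the project's standard GitHub location (the run script name is an assumption - check the scripts in your checkout):
 git clone https://github.com/OWASP/Benchmark.git
 cd Benchmark
 ./runBenchmark.sh   --> assumed script name; builds with Maven and starts the Benchmark web app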
[http://h3xstream.github.io/find-sec-bugs/ FindSecurityBugs] is a great plugin for FindBugs that significantly increases FindBugs' ability to find security issues. We include this free tool in the Benchmark, and it's all dialed in. Simply run the script: ./script/runFindSecBugs.(sh or bat). If you want to run a different version of FindSecBugs, just change the version number of the findsecbugs-plugin artifact in the Benchmark pom.xml file.

=== Kiuwan Code Security ===

Kiuwan Code Security includes a predefined model for executing the OWASP Benchmark. Refer to the [https://www.kiuwan.com/blog/owasp-benchmark-diy/ step-by-step instructions] on the Kiuwan website.
=== Micro Focus (Formerly HP) Fortify ===
* In Terminal 1, launch the Benchmark application and wait until it starts
 '''$ cd tools/Contrast'''
 '''$ ./runBenchmark_wContrast.sh''' (.bat on Windows)
 '''[INFO] Scanning for projects...
 '''[INFO]
 '''Copying Contrast report to results directory'''
* In Terminal 2, generate scorecards in /Benchmark/scorecard
 '''$ ./createScorecards.sh''' (.bat on Windows)
 '''Analyzing results from Benchmark_1.2-Contrast.log
While we don't have hard-and-fast rules about exactly what we will do next, enhancements in the following areas are planned for the next release:
* Add new vulnerability categories (e.g., XXE, Hibernate Injection)
* Add support for popular server-side Java frameworks (e.g., Spring)
* Add web services test cases