Top 10 2013/ProjectMethodology
Goal
The goal of this page is to provide the baseline of knowledge needed to begin a thoughtful conversation about enhancements and changes that will continue to grow the OWASP Top 10.
About
This page is intended to provide greater clarity on the development methodology of the OWASP Top 10. It describes the data sources used as input to the Top 10, the current development process, suggestions for improving involvement and participation, and an FAQ covering common questions and concerns.
This is a wiki and editable by anyone with an OWASP account. Please contribute constructively to the conversation. Additional discussions should also take place on the OWASP Top 10 mailing list.
Current Methodology
The 2010 and later versions of the OWASP Top 10 are organized based on the OWASP_Risk_Rating_Methodology, adjusted for the fact that the Top 10 is independent of any particular system. This adjusted methodology is documented in the OWASP Top 10 for 2010 here: Top_10_2010-Notes_About_Risk.
This resulted in four risk factors being used to calculate the order of the Top 10: three likelihood factors and one impact factor. These factors are:
- Likelihood of an Application Having that Vulnerability (Prevalence)
- Likelihood of an Attacker Discovering that Vulnerability (Detectability)
- Likelihood of an Attacker Successfully Exploiting that Vulnerability (Exploitability)
- Typical Technical Impact if that Vulnerability is Successfully Exploited (Impact)
Each of these factors is scored on a scale from 1 through 3, except for XSS, which has a prevalence score of 4.
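The exact arithmetic for turning these factor scores into an ordering is not spelled out on this page. As an illustration only, here is a minimal sketch in Python that assumes the three likelihood factors are averaged and then multiplied by the impact score (the approach later documented for the 2017 edition); the risk names and factor values are placeholders, not the official scores.

```python
# Illustrative sketch only: combine per-risk factor scores into an ordering.
# Assumes the three likelihood factors are averaged and multiplied by impact.
# The factor values below are placeholders, not the official OWASP scores.

risks = {
    # name: (prevalence, detectability, exploitability, impact), each 1-3
    # (prevalence can reach 4, as noted above for XSS)
    "Injection": (2, 3, 3, 3),
    "XSS": (4, 3, 3, 2),
    "CSRF": (2, 2, 2, 2),
}

def risk_score(factors):
    prevalence, detectability, exploitability, impact = factors
    likelihood = (prevalence + detectability + exploitability) / 3.0
    return likelihood * impact

# Higher score means a higher position in the Top 10 ordering.
for name in sorted(risks, key=lambda n: risk_score(risks[n]), reverse=True):
    print(f"{name}: {risk_score(risks[name]):.2f}")
```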
The Top 10 uses prevalence data provided by a variety of companies (see Top_10_2013/ProjectMethodology#Current_Data_Sources) to calculate Vulnerability Prevalence. We would love to use similar data to help calculate the scores for the other risk factors if such data becomes available (this is one of the improvement suggestions listed below).
The process for producing an update to the OWASP Top 10 is generally as follows:
- Collect prevalence data from data suppliers.
- Rank prevalence data for each supplier and then aggregate the results to create an overall prevalence ranking for this update to the Top 10 (a rough sketch of one such aggregation appears after this list).
- Determine the values of the other risk factors based on professional opinion. (This step was not done prior to 2010.)
- Adjustments are sometimes made based on professional opinion (such as adding CSRF in 2007 and Vulnerable Libraries in 2013)
- Calculate the Top 10 order
- Write a Draft/Release Candidate
- For 2010, a Draft was reviewed by the Data Contributors and other members of the OWASP Community, and the core commenters/contributors to the Release Candidate and Final Release were acknowledged here: Top_10_2010-Introduction (About 15 individuals/groups)
- Note: for 2013, this internal project review step was eliminated in order to get the Release Candidate out for public comment faster
- Publish Release Candidate for public comment (Prior to 2010 there was no release candidate, just a final release)
- Accept comments during a comment period
- Interact with comment providers to update the Top 10
- Publish all provided comments
- Publish a Final Release
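The aggregation in the second step above is described only at a high level. As a rough, non-authoritative sketch, one simple way to do it is to rank the vulnerability categories within each supplier's data set and then average those ranks across suppliers; the supplier names and counts below are hypothetical, and the actual aggregation used for the Top 10 may differ.

```python
# Hypothetical illustration of aggregating per-supplier prevalence data into
# a single overall prevalence ranking. Supplier names and counts are made up;
# the actual aggregation used for the Top 10 may differ.

supplier_data = {
    "SupplierA": {"Injection": 120, "XSS": 340, "CSRF": 80},
    "SupplierB": {"Injection": 200, "XSS": 150, "CSRF": 90},
}

def rank_within_supplier(counts):
    # 1 = most prevalent category for this supplier.
    ordered = sorted(counts, key=counts.get, reverse=True)
    return {category: rank for rank, category in enumerate(ordered, start=1)}

per_supplier_ranks = [rank_within_supplier(data) for data in supplier_data.values()]

# Average each category's rank across suppliers; a lower average rank means
# the category is more prevalent overall.
categories = sorted({c for ranks in per_supplier_ranks for c in ranks})
overall = {
    c: sum(ranks[c] for ranks in per_supplier_ranks) / len(per_supplier_ranks)
    for c in categories
}

for category in sorted(overall, key=overall.get):
    print(f"{category}: average rank {overall[category]:.1f}")
```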
This certainly doesn't cover every nuance of what it takes to produce the Top 10. For example, one of the most common comments is "Why don't you combine these two items into one to make room for my favorite Risk?", such as folding XSS into Injection, since XSS is really just a client-side injection issue. The goal of the OWASP Top 10 is to raise awareness of the most important Risks, not to include every possible Risk we can stuff into the Top 10. So, in some cases, we've kept related issues separate to try to increase awareness of each one. There is always debate as to what's best, which is clearly subjective. In the 2013 Release Candidate, we combined cryptographic storage and communications into a single category, and pulled use of known vulnerable libraries out of the Security Misconfiguration category because we believe this is an extremely important issue that deserves more attention as the use of libraries becomes more and more prevalent. In past updates, to make room for new issues, we dropped the least important issues, such as error handling and denial of service, which some people agreed with and others did not.
In 2010, the Release Candidate process worked like this:
- It was open for public comment for several months
- Dave Wichers, primarily, interacted via email with each comment provider to address their comments or to explain why making no change was thought to be the most appropriate response.
- All provided comments were published at:
- The final version was published.
In 2013, the process currently stands as follows:
- The public comment period for the Release Candidate runs from February through the end of March 2013. (This can be extended if necessary.)
- We were planning to follow the same process as in 2010 to complete the Top 10, but clearly the OWASP Community wants to get more heavily involved in producing the Final Release, which is great.
- So, at this point, the process for completing the 2013 Top 10 is TBD, subject to your input/suggestions.
Current Prevalence Data Sources
- Aspect Security
- HP (Results for both Fortify and WebInspect)
- Minded Security – Statistics
- Softtek
- Trustwave SpiderLabs – Statistics
- Veracode – Statistics
- WhiteHat Security – Statistics
If you would like to contribute your vulnerability statistics to the OWASP Top 10 project, please send your data to: [email protected]. Please indicate whether it is OK for OWASP to publish this raw data. If you have already published this data, please provide us a link to the public posting.
Note: In the first version of the Top 10 in 2003, we started with the MITRE CVE data, and each update expanded the number of prevalence data contributors. Unfortunately, the CVE data for 2011/2012 wasn't available for the 2013 release, which is why it's not included this year.
Suggested Enhancements
Note: This is a wiki; please add new suggestions to this page.
- Use a public wiki or Google Issues to capture feedback; mailing lists are hard to follow and things get lost
- Use a public wiki to allow for public edits, not just feedback. All edits would be tracked with rollback capability, and the full Top 10 edit history would be public for maximum openness and visibility.
- Establish a Top 10 panel to evaluate and make final decisions on inclusion & ranking
- It is not feasible for everyone to vote on every item
- A diverse panel representing various verticals (vendor, enterprise, offense/defense, etc.)
- Additional data sources could be considered (please add links)
- WASC Web Hacking Incident Database
- FireHost's Web Application Attack Reports
- Imperva's Web Application Attack Reports
- Prolexic Attack Report
- Additional reports could be considered:
- Annual Symantec Internet Security Threat Reports
- DataLossDB
- IBM X-Force threat reports
- Akamai State of the Internet Reports
- Public forum to brainstorm and discuss key topics
- Remove corporate logos from the Top Ten PDF and provide a "Who we are" link similar to Apache for maximum vendor neutrality: http://hadoop.apache.org/who.html
FAQ
1. Why is the OWASP Top 10 released on a three-year cycle?
- Here are the reasons:
- a) The field does evolve fairly quickly, but the Top 10 risks do not change substantially every single year. A release every year would be too frequent.
- b) It takes a lot of work to produce an OWASP Top 10 update, and spacing releases out balances the effort to produce them against the amount of change in each update.
- c) Lots of organizations, tools, etc., align themselves to the OWASP Top 10. If it were published every single year, everyone who chooses to align with each update would have to redo that work every year.
2. ...