
Summit 2011 Working Sessions/Session058

Revision as of 23:48, 7 February 2011 by Alexandre Miguel Aniceto (talk | contribs)



Counting and scoring application security defects
Please see/use the 'discussion' page for more details about this Working Session
Working Sessions Operational Rules - Please see the general set of rules here.
Short Working Session Description
One of the biggest challenges of running an application security program is assembling the vulnerability findings from disparate tools, services, and consultants in a meaningful fashion. There are numerous standards for classifying vulnerabilities but little agreement on severity, exploitability, or business impact. One consultant may subjectively rate a vulnerability as critical while another calls it moderate. Some tools attempt to gauge exploitability levels (a black art in and of itself); others don't. Tools use everything from CWE to the OWASP Top Ten to the WASC TC to CAPEC. Security consultants often disregard vulnerability classification taxonomies in favor of their own "proprietary" systems. Sophisticated organizations may build an internal system for normalizing this output, but others can't afford such an effort. Until tool vendors and service providers standardize on one methodology -- or at most a few -- for counting and scoring application defects, they are doing their customers a disservice.
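The normalization problem described above can be sketched in code: each tool or consultant reports severity in its own vocabulary, and an organization maps those labels onto one shared numeric scale. The mapping tables and source names below are illustrative assumptions for the sketch, not any vendor's actual scheme.

```python
# Hypothetical sketch: normalize severity labels from multiple sources
# onto a common 0-10 scale. The maps below are made-up placeholders.
SEVERITY_MAPS = {
    "tool_a": {"critical": 9.0, "high": 7.0, "moderate": 5.0, "low": 2.0},
    "tool_b": {"severe": 8.5, "important": 6.5, "minor": 3.0},
}

def normalize(source, label):
    """Return a 0-10 score for a (source, label) pair, or None if unknown."""
    return SEVERITY_MAPS.get(source, {}).get(label.lower())

# Findings from different sources, normalized to one scale:
findings = [("tool_a", "Critical"), ("tool_b", "important"), ("tool_a", "moderate")]
print([normalize(src, lbl) for src, lbl in findings])  # [9.0, 6.5, 5.0]
```

Even this trivial sketch shows where the disagreements surface: someone must decide, per source, what "severe" or "important" means on the shared scale, which is exactly the subjective judgment a common standard would remove.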
Related Projects (if any)

Email Contacts & Roles
Chair
Chris Eng @
Chris Wysopal @
Operational Manager
Mailing list
Subscription Page
  1. Discuss existing methods for counting and scoring defects, by vendors and practitioners willing to share their methodologies.
  2. Discuss advantages and disadvantages of a standardized approach.
  3. Discuss the CWSS 0.1 draft and how it might be incorporated into a standard.
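As background for goal 3, later CWSS releases compute a score as the product of three subscores (a Base Finding subscore on a 0-100 scale, scaled down by Attack Surface and Environmental subscores in the 0-1 range); the 0.1 draft under discussion may differ, and the values below are placeholders, not CWSS factor weights.

```python
# Illustrative CWSS-style multiplicative score (modeled on the structure
# of later CWSS releases; the 0.1 draft may differ). Inputs are placeholders.
def cwss_style_score(base_finding, attack_surface, environment):
    """base_finding in [0, 100]; the other two subscores in [0.0, 1.0]."""
    assert 0 <= base_finding <= 100
    assert 0.0 <= attack_surface <= 1.0 and 0.0 <= environment <= 1.0
    return base_finding * attack_surface * environment

print(cwss_style_score(80, 0.9, 0.5))  # 36.0
```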

Venue/Date&Time/Model
Venue/Room
OWASP Global Summit Portugal 2011
Date & Time

Discussion Model
participants and attendees

Projector, whiteboards, markers, Internet connectivity, power

Proposed by Working Group
Approved by OWASP Board

White paper sketching out a standard for rating risks that accommodates everything from individual minor defects to architectural flaws (which may represent many individual defects)

After the Board Meeting - fill in here.


Working Session Participants


Name Company Notes & reason for participating, issues to be discussed/addressed
Jason Taylor @

Justin Clarke @
Gotham Digital Science

Sherif Koussa @
Software Secured

Vishal Garg @
AppSecure Labs Ltd

Matteo Meucci @
Minded Security

Elke Roth-Mandutz @
GSO-University of Applied Science

Mateo Martinez @

Doug Wilson @
I would like to see a convergence occur, but it strikes me as a holy grail. Suggest considering that no one standard will ever work, so look at transformations and conversions amongst a small group.
Ofer Maor @

Wojciech Dworakowski @

Alexandre Miguel Aniceto @