
Difference between revisions of "Summit 2011 Working Sessions/Session058"

Line 109:
− | short_working_session_description = We all know that you can’t control what you can’t measure, and that you need to measure the right things or you won’t be steering towards the right outcome. For this session we will define the right outcome as “low risk to an organization from vulnerabilities in applications.” What are the right things to measure? How can we measure them? How can we use these application security metrics to drive towards low application risk? It would also be great if this could be translated into monetary risk, to determine whether an organization’s investment in applications is too much or too little. Some of the concepts discussed will be taking a portfolio view of application risk, assigning business risk to applications, counting defects, and measuring SDLC process performance. This is a big unsolved problem, so come prepared with ideas and be willing to take part in a discussion. Includes discussion of CWSS 0.1.
+ | short_working_session_description = One of the biggest challenges of running an application security program is assembling the vulnerability findings from disparate tools, services, and consultants in a meaningful fashion. (The full revised description appears in the rendered revision below.)

Line 130:
− | summit_session_objective_name1 =
+ | summit_session_objective_name1 = Discuss existing methods for counting and scoring defects, by vendors who volunteer to share their methodologies.
− | summit_session_objective_name2 =
+ | summit_session_objective_name2 = Discuss advantages and disadvantages of a standardized approach.
− | summit_session_objective_name3 =
+ | summit_session_objective_name3 = Discuss the CWSS 0.1 draft and how it might be incorporated into a standard.

Line 175:
− | summit_session_leader_name1 = Chris Wysopal
− | summit_session_leader_email1 = cwysopal@Veracode.com
+ | summit_session_leader_name1 = Chris Eng
+ | summit_session_leader_email1 = ceng@Veracode.com
− | summit_session_leader_name2 = Chris Eng
− | summit_session_leader_email2 = ceng@Veracode.com
+ | summit_session_leader_name2 = Chris Wysopal
+ | summit_session_leader_email2 = cwysopal@Veracode.com
Revision as of 20:34, 22 January 2011

Global Summit 2011 Home Page
Global Summit 2011 Tracks

Counting and scoring application security defects
Please see/use the 'discussion' page for more details about this Working Session
Working Sessions Operational Rules - see here for the general rules.
WORKING SESSION IDENTIFICATION
Short Work Session Description
One of the biggest challenges of running an application security program is assembling the vulnerability findings from disparate tools, services, and consultants in a meaningful fashion. There are numerous standards for classifying vulnerabilities but little agreement on severity, exploitability, and/or business impact. One consultant may subjectively rate a vulnerability as critical while another will call it moderate. Some tools will attempt to gauge exploitability levels (which can be a black art in and of itself), others won't. Tools use everything from CWE to the OWASP Top Ten to the WASC TC to CAPEC. Security consultants often disregard vulnerability classification taxonomies in favor of their own "proprietary" systems. Sophisticated organizations may create their own internal system for normalizing output, but others can't afford to undertake such an effort. Until tool vendors and service providers can standardize on one methodology -- or maybe a couple -- for counting and scoring application defects, they are doing their customers a disservice.
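As a discussion aid, here is a minimal sketch (in Python) of one way findings from disparate sources could be normalized into a common record with a shared severity scale. The tool names, field names, scales, and mappings below are invented for illustration; they do not correspond to any real tool's output format.

from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Finding:
    source: str            # which tool or consultant reported the finding
    cwe_id: Optional[int]  # shared taxonomy anchor, when the source gives one
    severity: int          # normalized scale: 1 (low) .. 5 (critical)
    title: str

# Each source rates on its own scale, so everything is mapped onto 1..5.
WORD_SCALE = {"low": 1, "moderate": 2, "medium": 3, "high": 4, "critical": 5}

def from_tool_a(raw: Dict) -> Finding:
    # "tool_a" (hypothetical) reports word severities and CWE ids.
    return Finding("tool_a", raw.get("cwe"), WORD_SCALE[raw["severity"]], raw["name"])

def from_tool_b(raw: Dict) -> Finding:
    # "tool_b" (hypothetical) reports a 0.0..10.0 numeric score; rescale to 1..5.
    sev = min(5, max(1, round(float(raw["score"]) / 2)))
    return Finding("tool_b", raw.get("cwe_id"), sev, raw["title"])

findings = [
    from_tool_a({"cwe": 89, "severity": "critical", "name": "SQL injection in login"}),
    from_tool_b({"cwe_id": 79, "score": 6.1, "title": "Reflected XSS in search"}),
]
for f in sorted(findings, key=lambda f: f.severity, reverse=True):
    print("[%d] CWE-%s %s (%s)" % (f.severity, f.cwe_id, f.title, f.source))

Even a mapping this small surfaces the contentious choices this session is about: whose "high" becomes a 4, and what to do when a source supplies no taxonomy identifier at all.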
Related Projects (if any)


Email Contacts & Roles
Chair: Chris Eng (ceng@Veracode.com), Chris Wysopal (cwysopal@Veracode.com)
Operational Manager:
Mailing list: Subscription Page
WORKING SESSION SPECIFICS
Objectives
  1. Discuss existing methods for counting and scoring defects, by vendors who volunteer to share their methodologies.
  2. Discuss advantages and disadvantages of a standardized approach.
  3. Discuss the CWSS 0.1 draft and how it might be incorporated into a standard (an illustrative scoring sketch follows this list).
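One concrete shape such a standard could take is a weighted numeric score computed from a few factor groups. The sketch below illustrates a multiplicative scheme of that kind; the factor names, normalization, and weights are assumptions made for discussion and are not the actual CWSS 0.1 draft formula.

def cwss_like_score(impact, exploit_likelihood, attack_surface, environment):
    """All four factors are normalized to 0.0..1.0; returns a 0..100 score.
    Factor names and the multiplicative form are illustrative assumptions,
    not the CWSS 0.1 draft formula."""
    base = 100.0 * impact * exploit_likelihood   # base-finding subscore, 0..100
    return base * attack_surface * environment   # exposure/context multipliers

# Example: severe impact, moderately exploitable, broad exposure, typical environment.
print(cwss_like_score(impact=0.9, exploit_likelihood=0.6,
                      attack_surface=0.8, environment=1.0))   # -> 43.2

A multiplicative form means a finding with no reachable attack surface scores zero regardless of its raw impact, which is one way to reconcile the severity-versus-exploitability disagreements described above.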

Venue/Date & Time/Model
Venue/Room: OWASP Global Summit Portugal 2011
Date & Time:


Discussion Model: participants and attendees

WORKING SESSION OPERATIONAL RESOURCES
Projector, whiteboards, markers, Internet connectivity, power

WORKING SESSION ADDITIONAL DETAILS
WORKING SESSION OUTCOMES / DELIVERABLES
Proposed by Working Group | Approved by OWASP Board

After the Board Meeting - fill in here.

Working Session Participants

(Add your name by clicking "edit" on the tab on the upper left side of this page)

WORKING SESSION PARTICIPANTS
Name | Company | Notes & reason for participating, issues to be discussed/addressed
Jason Taylor


Justin Clarke | Gotham Digital Science