
Summit 2011 Working Sessions/Session058




Counting and scoring application security defects
Please see/use the 'discussion' page for more details about this Working Session.
Working Sessions Operational Rules - please see here for the general framework of rules.
WORKING SESSION IDENTIFICATION
Short Work Session Description
One of the biggest challenges of running an application security program is assembling the vulnerability findings from disparate tools, services, and consultants in a meaningful fashion. There are numerous standards for classifying vulnerabilities, but little agreement on severity, exploitability, or business impact. One consultant may subjectively rate a vulnerability as critical while another will call it moderate. Some tools attempt to gauge exploitability levels (which can be a black art in and of itself); others don't. Tools use everything from CWE to the OWASP Top Ten to the WASC TC to CAPEC. Security consultants often disregard vulnerability classification taxonomies in favor of their own "proprietary" systems. Sophisticated organizations may build an internal system for normalizing this output, but others can't afford to undertake such an effort. Until tool vendors and service providers standardize on one methodology -- or maybe a couple -- for counting and scoring application defects, they are doing their customers a disservice.
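
To make the normalization problem concrete, below is a minimal sketch (in Python) of the kind of internal mapping layer such organizations build: each source's native severity labels are translated onto one shared ordinal scale. The tool names, labels, and 0-4 scale are illustrative assumptions, not any vendor's or OWASP's actual scheme.

  # Hypothetical mapping layer: tool names, severity labels, and the 0-4
  # scale below are illustrative assumptions, not any vendor's real scheme.

  # Common ordinal scale shared by every source.
  NORMALIZED = {0: "informational", 1: "low", 2: "medium", 3: "high", 4: "critical"}

  # Per-source maps from each tool's native labels onto the common scale.
  SOURCE_MAPS = {
      "static_analyzer": {"info": 0, "warning": 2, "error": 3, "severe": 4},
      "dast_scanner":    {"low": 1, "moderate": 2, "important": 3, "critical": 4},
      "consultant_a":    {"note": 0, "low": 1, "medium": 2, "high": 3, "critical": 4},
  }

  def normalize(source: str, label: str) -> int:
      """Map a tool-specific severity label onto the common 0-4 scale."""
      try:
          return SOURCE_MAPS[source][label.lower()]
      except KeyError:
          raise ValueError(f"no mapping for {label!r} from source {source!r}")

  findings = [
      ("dast_scanner", "moderate", "Reflected XSS in search form"),
      ("consultant_a", "critical", "SQL injection in login"),
      ("static_analyzer", "error", "Hard-coded credential"),
  ]

  for source, label, title in findings:
      rank = normalize(source, label)
      print(f"{NORMALIZED[rank]:>13}  {title}  (reported as {label!r} by {source})")

Even a toy mapping like this exposes the core disagreement: whether "moderate" from one source and "error" from another should land on the same rung is exactly the kind of judgment a standard would have to settle.
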
Related Projects (if any)


Email Contacts & Roles
Chair: Chris Eng, Chris Wysopal
Operational Manager:
Mailing list: Subscription Page
WORKING SESSION SPECIFICS
Objectives
  1. Discuss existing methods for counting and scoring defects, by vendors who volunteer to share their methodologies.
  2. Discuss advantages and disadvantages of a standardized approach.
  3. Discuss the CWSS 0.1 draft and how it might be incorporated into a standard (see the scoring sketch below).
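
As an aid to objective 3, the sketch below shows the general shape of a weighted-factor scoring scheme in the spirit of CWSS, where several rated factors combine multiplicatively into a 0-100 score. The factor names, value tables, and combination rule are assumptions for discussion only; they are not the CWSS 0.1 formulas.

  # Weighted-factor scoring in the spirit of CWSS. Factor names, value
  # tables, and the multiplicative combination below are assumptions for
  # discussion only; they are NOT the CWSS 0.1 formulas.

  TECHNICAL_IMPACT = {"critical": 1.0, "high": 0.9, "medium": 0.6, "low": 0.3, "none": 0.0}
  LIKELIHOOD       = {"high": 1.0, "medium": 0.6, "low": 0.2}
  BUSINESS_VALUE   = {"high": 1.0, "medium": 0.7, "low": 0.4}

  def score(impact: str, likelihood: str, business_value: str) -> float:
      """Combine three rated factors into a single 0-100 score."""
      return round(
          100.0
          * TECHNICAL_IMPACT[impact]
          * LIKELIHOOD[likelihood]
          * BUSINESS_VALUE[business_value],
          1,
      )

  # A high-impact, easily exploited flaw in a high-value application
  # outranks a medium flaw that is hard to reach in a less-critical one.
  print(score("high", "high", "high"))      # 90.0
  print(score("medium", "low", "medium"))   # 8.4

A multiplicative design means any factor rated near zero drives the whole score toward zero, which is one of the trade-offs a standardized approach would need to debate.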

Venue/Date&Time/Model
Venue/Room: OWASP Global Summit Portugal 2011
Date & Time:
Discussion Model: participants and attendees

WORKING SESSION OPERATIONAL RESOURCES
Projector, whiteboards, markers, Internet connectivity, power

WORKING SESSION ADDITIONAL DETAILS
WORKING SESSION OUTCOMES / DELIVERABLES
Proposed by Working Group:
  1. Cooperation between tool vendors and service providers.
  2. Determine potential OWASP involvement.
Approved by OWASP Board: After the Board Meeting - fill in here.


WORKING SESSION PARTICIPANTS
Name | Company | Notes & reason for participating, issues to be discussed/addressed
Jason Taylor | |
Justin Clarke | Gotham Digital Science |