Summit 2011 Working Sessions/Session058
| Counting and scoring application security defects |
|---|
| Please see/use the 'discussion' page for more details about this Working Session. |
| Working Sessions Operational Rules - please see here for the general set of rules. |
| WORKING SESSION IDENTIFICATION | |
|---|---|
| Short Work Session Description | One of the biggest challenges of running an application security program is assembling the vulnerability findings from disparate tools, services, and consultants in a meaningful fashion. There are numerous standards for classifying vulnerabilities but little agreement on severity, exploitability, and/or business impact. One consultant may subjectively rate a vulnerability as critical while another will call it moderate. Some tools will attempt to gauge exploitability levels (which can be a black art in and of itself); others won't. Tools use everything from CWE to the OWASP Top Ten to the WASC TC to CAPEC. Security consultants often disregard vulnerability classification taxonomies in favor of their own "proprietary" systems. Sophisticated organizations may create their own internal system for normalizing output, but others can't afford to undertake such an effort. Until tool vendors and service providers can standardize on one methodology -- or maybe a couple -- for counting and scoring application defects, they are doing their customers a disservice. (A minimal sketch of this normalization problem follows the table.) |
| Related Projects (if any) | |
| Email Contacts & Roles | Chair: Chris Eng @ ceng@Veracode.com; Chris Wysopal @ cwysopal@Veracode.com |
| Operational Manager | |
| Mailing list | Subscription Page |
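The normalization problem described above lends itself to a short illustration. The sketch below is minimal and hypothetical, assuming invented tool names, severity vocabularies, and mapping tables (none of them any vendor's actual schema); it only shows what "normalizing output" from disparate tools onto a common classification and scale might look like.

```python
# A minimal sketch of normalizing findings from disparate tools onto one
# classification (CWE, by assumption) and one 0-10 severity scale.
# All tool names, vocabularies, and mappings below are hypothetical.
from dataclasses import dataclass

# Hypothetical per-source mappings from native ratings to a common 0-10 scale.
SEVERITY_MAPS = {
    "scanner_a": {"critical": 9.0, "high": 7.0, "moderate": 5.0, "low": 2.0},
    "consultant_b": {"sev1": 9.5, "sev2": 6.5, "sev3": 3.0},
}

# Hypothetical mapping from assorted taxonomy labels to CWE identifiers.
TAXONOMY_TO_CWE = {
    "OWASP-A1": "CWE-89",  # SQL injection
    "WASC-19": "CWE-89",   # SQL injection under a different taxonomy
    "OWASP-A2": "CWE-79",  # cross-site scripting
}

@dataclass
class NormalizedFinding:
    source: str   # which tool or consultant reported it
    cwe_id: str   # common classification
    score: float  # common 0-10 severity scale

def normalize(source: str, label: str, native_severity: str) -> NormalizedFinding:
    """Translate one source-specific finding into the common representation."""
    return NormalizedFinding(
        source=source,
        cwe_id=TAXONOMY_TO_CWE.get(label, "CWE-unknown"),
        score=SEVERITY_MAPS[source][native_severity],
    )

# Two sources reporting the same underlying defect in different vocabularies:
f1 = normalize("scanner_a", "OWASP-A1", "critical")
f2 = normalize("consultant_b", "WASC-19", "sev2")
print(f1)  # same CWE either way ...
print(f2)  # ... but the scores still disagree (9.0 vs 6.5), hence this session
```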
| WORKING SESSION SPECIFICS | |
|---|---|
| Objectives | 1. Discuss existing methods for counting and scoring defects, presented by vendors who volunteer to share their methodologies. 2. Discuss the advantages and disadvantages of a standardized approach. 3. Discuss the CWSS 0.1 draft and how it might be incorporated into a standard. (A CWSS-style scoring sketch follows the table.) |
| Venue/Room | OWASP Global Summit Portugal 2011 |
| Date & Time | |
| Discussion Model | Participants and attendees |
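For context on the third objective: CWSS (MITRE's Common Weakness Scoring System) produces a 0-100 score by combining weighted factor groups. The sketch below assumes the general shape the CWSS specification later settled on (a base-finding subscore scaled by attack-surface and environmental multipliers); the exact factors and weights in the 0.1 draft may differ, so treat it as illustrative only.

```python
# Illustrative CWSS-style scoring: a 0-100 base-finding subscore scaled by
# two 0-1 multipliers. The shape follows the general CWSS structure; the
# specific inputs and weights are assumptions, not the 0.1 draft's values.
def cwss_style_score(base_finding: float,
                     attack_surface: float,
                     environmental: float) -> float:
    """base_finding in [0, 100]; both multipliers in [0, 1]."""
    assert 0.0 <= base_finding <= 100.0
    assert 0.0 <= attack_surface <= 1.0
    assert 0.0 <= environmental <= 1.0
    return base_finding * attack_surface * environmental

# A remotely reachable, high-impact weakness in a business-critical system:
print(cwss_style_score(base_finding=85.0, attack_surface=0.9, environmental=1.0))
# -> 76.5 on the 0-100 scale
```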
| WORKING SESSION OPERATIONAL RESOURCES |
|---|
| Projector, whiteboards, markers, Internet connectivity, power |
| WORKING SESSION ADDITIONAL DETAILS |
|---|
| |
| WORKING SESSION OUTCOMES / DELIVERABLES | |
|---|---|
| Proposed by Working Group | Approved by OWASP Board |
| After the Board Meeting - fill in here. | After the Board Meeting - fill in here. |
Working Session Participants

(Add your name by clicking "edit" on the tab on the upper left side of this page)
| WORKING SESSION PARTICIPANTS | | |
|---|---|---|
| Name | Company | Notes & reason for participating, issues to be discussed/addressed |
| Jason Taylor @ | | |
| Justin Clarke @ | Gotham Digital Science | |