
AppSecEU08 Scanstud - Evaluating static analysis tools
Latest revision as of 13:04, 22 May 2008

The presentation

In late 2007 and early 2008, the Siemens CERT and the security group at the University of Hamburg (SVS) jointly conducted a project to evaluate the capabilities of commercial static analysis tools with respect to finding security vulnerabilities in source code.

For this purpose, a mature evaluation methodology was developed which allows:

  • Automatic test execution and evaluation
  • Easy and reliable testcase creation
  • Deterministic correlation between individual testcases and the respective tool responses
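The talk, not this page, presents the actual methodology. As a hedged illustration of the last point only (the names and structure below are assumptions, not the project's real harness), deterministic correlation can be achieved by giving each testcase a unique identifier, such as its file name, and attributing any tool finding in that file back to exactly one testcase:

```python
# Hypothetical harness sketch: each testcase is a single source file containing
# at most one known flaw, so any finding a tool reports in that file can be
# attributed deterministically to that one testcase.

from dataclasses import dataclass

@dataclass
class Testcase:
    case_id: str    # unique identifier, also used as the source file name
    has_flaw: bool  # ground truth: does this case contain a real vulnerability?

@dataclass
class Finding:
    file: str       # file the tool flagged; maps back to a case_id

def correlate(testcases, findings):
    """Return a per-testcase verdict: TP, FP, FN, or TN."""
    flagged = {f.file for f in findings}
    verdicts = {}
    for tc in testcases:
        reported = tc.case_id in flagged
        if tc.has_flaw and reported:
            verdicts[tc.case_id] = "TP"   # real flaw, found
        elif tc.has_flaw:
            verdicts[tc.case_id] = "FN"   # real flaw, missed
        elif reported:
            verdicts[tc.case_id] = "FP"   # clean case, flagged anyway
        else:
            verdicts[tc.case_id] = "TN"   # clean case, correctly silent
    return verdicts

cases = [Testcase("sqli_001.c", True), Testcase("sqli_002.c", False)]
tool_output = [Finding("sqli_001.c"), Finding("sqli_002.c")]
print(correlate(cases, tool_output))  # {'sqli_001.c': 'TP', 'sqli_002.c': 'FP'}
```

The one-flaw-per-file convention is what makes the correlation deterministic: no heuristic matching of tool messages to expected findings is needed.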

The talk will present our methodology, our approach to creating suitable testcases, and our experiences from the actual evaluation.

Note: We won't present the precise results of the evaluation, as we do not consider the actual outcome to be of lasting value. The result of such an evaluation is always only a snapshot that ages quickly (it becomes invalid with the next version of the respective tools). However, we will share general information about our results: the overall performance of the tools, the typical ratio of false negatives to false positives, differences between C and Java analysis, anecdotes, and trivia.
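The concrete figures are withheld, but the false-negative and false-positive rates underlying such a comparison are straightforward to derive from per-testcase verdicts. A generic sketch (not the project's actual tooling; the verdict labels TP/FP/FN/TN are assumed):

```python
from collections import Counter

def rates(verdicts):
    """Compute (false-negative rate, false-positive rate) from a mapping
    of testcase id -> verdict in {"TP", "FP", "FN", "TN"}."""
    c = Counter(verdicts.values())
    # FN rate: missed flaws as a fraction of all real flaws.
    fn_rate = c["FN"] / (c["FN"] + c["TP"]) if (c["FN"] + c["TP"]) else 0.0
    # FP rate: flagged clean cases as a fraction of all clean cases.
    fp_rate = c["FP"] / (c["FP"] + c["TN"]) if (c["FP"] + c["TN"]) else 0.0
    return fn_rate, fp_rate

# Example: 10 flawed cases (8 found, 2 missed), 10 clean cases (1 flagged).
verdicts = {f"case{i}": v
            for i, v in enumerate(["TP"] * 8 + ["FN"] * 2 + ["FP"] + ["TN"] * 9)}
print(rates(verdicts))  # (0.2, 0.1)
```

Comparing tools by these two rates, rather than by raw finding counts, is what makes results across different scanners commensurable.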

Slides: https://www.owasp.org/images/7/76/Johns_jodeit_-_ScanStud_OWASP_Europe_2008.pdf

The speakers

The talk will be presented by one or more of the following individuals:

  • Martin Johns: Security researcher and PhD candidate at the University of Hamburg
  • Moritz Jodeit: Master's student at the University of Hamburg
  • Wolfgang Koeppl: Member of the Siemens CERT
  • Martin Wimmer: Member of the Siemens CERT