AppSecEU08 Scanstud - Evaluating static analysis tools

Revision as of 10:28, 4 April 2008

The presentation

In late 2007 and early 2008, the Siemens CERT and the security group at the University of Hamburg (SVS, http://www.informatik.uni-hamburg.de/SVS) jointly conducted a project to evaluate the capabilities of commercial static analysis tools with respect to finding security vulnerabilities in source code.

For this purpose, a mature evaluation methodology was developed which allows:

  • Automatic test execution and evaluation
  • Easy and reliable testcase creation
  • Deterministic correlation between individual testcases and the respective tool response
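To illustrate the kind of evaluation this setup enables, the following sketch (an assumption for illustration, not the project's actual harness) correlates tool findings with testcases via a unique per-testcase ID, assuming one testcase per file, and derives the basic detection metrics automatically:

```python
# Hypothetical sketch: each testcase file carries a unique ID, so a tool
# finding reported in that file maps deterministically back to exactly
# one testcase, and evaluation reduces to simple set membership.

def evaluate(testcases, findings):
    """testcases: dict mapping testcase ID -> True if it contains a real
    vulnerability; findings: set of testcase IDs the tool flagged."""
    tp = sum(1 for tc, vuln in testcases.items() if vuln and tc in findings)
    fn = sum(1 for tc, vuln in testcases.items() if vuln and tc not in findings)
    fp = sum(1 for tc, vuln in testcases.items() if not vuln and tc in findings)
    tn = sum(1 for tc, vuln in testcases.items() if not vuln and tc not in findings)
    return {"TP": tp, "FN": fn, "FP": fp, "TN": tn}

# Example: two vulnerable and two benign testcases, tool flags two of them.
cases = {"tc001": True, "tc002": True, "tc003": False, "tc004": False}
print(evaluate(cases, {"tc001", "tc003"}))
# → {'TP': 1, 'FN': 1, 'FP': 1, 'TN': 1}
```

Because every finding resolves to exactly one testcase, no manual triage of the tool output is needed, which is what makes fully automatic evaluation feasible.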

The talk will present our methodology, our approach to creating suitable testcases, and our experiences from the actual evaluation.

Note: We won't present the precise results of the evaluation, as we do not consider the actual outcome particularly valuable: the result of such an evaluation is always only a snapshot that ages quickly, becoming invalid with the next version of the respective tools. However, we will share general information about our results (overall performance of the tools, the average ratio of false negatives to false positives, differences between C and Java analysis, anecdotes, and trivia).

The speakers

The talk will be presented by one or more of the following individuals:

  • Martin Johns: Security researcher and PhD candidate at the University of Hamburg
  • Moritz Jodeit: Master's student at the University of Hamburg
  • Wolfgang Koeppl: Member of the Siemens CERT
  • Martin Wimmer: Member of the Siemens CERT