Sweden, revised 2010-08-12T09:04:23Z by Michael Boman (s/fro/for/)
<hr />
<div>{{Chapter Template|chaptername=Sweden|extra=The chapter leader is [mailto:John.Wilander@omegapoint.se John Wilander]<br />
<paypal>Sweden</paypal><br />
|mailinglistsite=http://lists.owasp.org/mailman/listinfo/owasp-sweden|emailarchives=http://lists.owasp.org/pipermail/owasp-sweden}}<br />
<br />
== The OWASP Sweden blog ==<br />
<br />
For lengthy news and event reports please visit the [http://owaspsweden.blogspot.com/ OWASP Sweden blog] (in Swedish).<br />
<br />
== Local News ==<br />
<br />
'''OWASP-Sweden + FOSS Sthlm "Community Hack" September 4-5 2010'''<br />
The first weekend of September, OWASP Sweden and FOSS Sthlm invite our members to Community Hack II in Stockholm. A full weekend of hacking on open projects, testing new security hacks, trying out tools (for instance the favorite OWASP tool you've always wanted to learn), or writing new, open guidelines.<br />
<br />
Go to [http://communityhack2.eventbrite.com/ EventBrite] and register for free now!<br />
<br />
<br />
'''OWASP-Sweden Meeting January 21st 2010 -- The Big Protocols'''<br />
Stiftelsen för Internetinfrastruktur (.SE) and Swedish Network Users' Society (SNUS) invite us to three seminars on the big protocols: BGP, DNSSEC, and SSL/TLS.<br />
<br />
Program and invitation (in Swedish): [[File:OWASP_Sweden_-_De_stora_protokollen_2010-01-21.pdf]]<br />
<br />
<br />
'''OWASP-Sweden Meeting December 2nd 2009 -- OWASP Top 10 2010 (rc1)'''<br />
Omegapoint invites us to discuss the release candidate of OWASP Top 10 2010 that was presented at OWASP AppSec DC November 13th. The invitation in Swedish is found [[File:OWASP_Sweden_Top_10_december_2009.pdf | here]]. <br />
'''Don't forget to send an email to John Wilander (john.wilander@owasp.org) no later than November 23rd to say you're coming.''' Seats usually fill up fast.<br />
<br />
<br />
'''OWASP AppSec Research 2010, June 21-24 in Stockholm, Sweden'''<br />
OWASP Sweden, Norway, and Denmark invite you to OWASP AppSec Research 2010, June 21-24 in Stockholm. Read more on the [https://www.owasp.org/index.php/OWASP_AppSec_Research_2010_-_Stockholm%2C_Sweden conference wiki page].<br />
<br />
<br />
'''OWASP-Sweden Meeting April 28th 2009 -- Code Analysis and Review'''<br />
<br />
The second chapter meeting of 2009 will be held on Tuesday April 28th at Clarion Hotel Stockholm. The focus is code analysis and code review. Fortify sponsors the event and welcomes the chapter members to refreshments, starting at 17.30.<br />
<br />
The program:<br />
<br />
* Fredrik Möller (Fortify) will briefly present Fortify and their support of OWASP<br />
* David Anumudu (Fortify) will present and do a live demo of Fortify Solution<br />
* James Dickson (Simovits Consulting) will give a talk on code review<br />
<br />
'''Don't forget to send an email to John Wilander (john.wilander@omegapoint.se) no later than April 23rd to say you're coming.''' We need to know how many will turn up.<br />
<br />
<br />
'''OWASP-Sweden Meeting March 26th 2009 -- XSS & CSRF'''<br />
<br />
The first meeting of 2009 will be held Thursday March 26th at LabCenter, Oxtorgsgränd 2, Stockholm. The focus is cross-site scripting and cross-site request forgery, attacks and countermeasures. Inspect it and LabCenter sponsor the event and welcome the chapter members to refreshments, starting at 17.00.<br />
<br />
The program:<br />
<br />
* Hasain Alshakarti, TrueSec: "XSS & CSRF -- A Deadly Cocktail"<br />
* Sergio Molero, Concrete IT: "Skydd mot XSS och CSRF" ("Protection against XSS and CSRF")<br />
<br />
'''Don't forget to send an email to Mattias Bergling (mattias.bergling@inspectit.se) no later than March 23rd to say you're coming.''' We need to know how many will turn up.<br />
<br />
<br />
'''OWASP-Sweden Meeting November 19th 2008 -- PCI DSS'''<br />
<br />
The next chapter meeting is Wednesday November 19th. The focus of the seminars is on PCI DSS, i.e. security in payment card handling on the Internet. <br />
The program:<br />
* Mats Henriksson, Pan Nordic Card Assoc: "PCI DSS - Tre goda anledningar" ("PCI DSS: Three Good Reasons")<br />
* Pål Göran Stensson, Defensor Sverige AB: "PCI DSS - Externa krav och konsulten" ("PCI DSS: External Requirements and the Consultant")<br />
* Bengt Berg, Cybercom Sweden East AB: "Olika angreppssätt på PCI DSS" ("Different Approaches to PCI DSS")<br />
<br />
'''The meeting is fully booked. But do send an email to John Wilander (john.wilander@omegapoint.se) to say you're interested and we'll let you know if seats become available.'''<br />
<br />
<br />
'''OWASP Sweden Hosts the OWASP AppSec Europe Conference 2010'''<br />
<br />
We're hosting the European OWASP AppSec conference in 2010! Please read the [http://www.owasp.org/index.php/OWASP_AppSec_Europe_2010_-_Sweden announcement].<br />
<br />
<br />
'''OWASP-Sweden Meeting October 6th 2008 -- Security in the Open Source Process'''<br />
<br />
The next chapter meeting is Monday October 6th at Clarion Hotel Stockholm (Skanstull). The focus of the seminars will be on "Security in the Open Source Process". Refreshments will be served from 16:30 and the seminars will commence at 17:30. Except for a closing panel discussion the program contains the following:<br />
<br />
* Simon Josefsson, SJD: ”Anekdoter och lärdomar från granskning av säkerhetsprogram” ("Anecdotes and Lessons from Reviewing Security Software")<br />
* Daniel Stenberg, daniel.haxx.se: ”Säker kod och utveckling i cURL-projektet” ("Secure Code and Development in the cURL Project")<br />
* Anders Karlsson, MySQL and Sun Microsystems: ”MySQL: Säkerhet i ett kommersiellt open source-projekt” ("MySQL: Security in a Commercial Open Source Project")<br />
<br />
'''Don't forget to send an email to Robert Malmgren (anmalan@romab.com) no later than September 29th to say you're coming.''' We need to know how many will turn up.<br />
<br />
<br />
'''OWASP-Sweden Meeting May 27th 2008 - SQL Injection, Web Scarab'''<br />
<br />
OWASP-Sweden welcomes its members to the next chapter meeting - Tuesday May 27th at Clarion Hotel Stockholm. Refreshments will be served from 17:00, demos will be shown from 17:30, and the seminars will commence at 18:00. The main attractions are:<br />
<br />
* Patrik Karlson, Inspect it: "SQL injection, identifiering och utnyttjande" ("SQL Injection, Identification and Exploitation")<br />
* Johannes Gumbel, TrueSec: "WebScarab—funktioner, fördelar och nackdelar" ("WebScarab: Features, Pros and Cons")<br />
<br />
'''Don't forget to send an email to Mattias Bergling (mattias.bergling@inspectit.se) no later than May 21st to say you're coming.''' We need to know how many will turn up.<br />
<br />
<br />
'''Kick-Off Meeting for OWASP-Sweden April 1st 2008'''<br />
<br />
The OWASP-Sweden kick-off will be held at WTC in Stockholm on April 1st. Yeah, it's April Fool's Day but we go under the tagline "Application Security is Not a Joke". The presentation program includes:<br />
<br />
* Andrei Sabelfeld, well-known security researcher from Chalmers<br />
* Michael Anderberg, Chief Security Advisor at Microsoft Sweden<br />
* Per Mellstrand, software analyst at Sony Ericsson and researcher at Blekinge Institute of Technology<br />
<br />
'''Don't forget to send an email to John Wilander (john.wilander@omegapoint.se) no later than March 27 to say you're coming.''' We need to know how many will turn up.<br />
<br />
We're kicking off!<br />
<br />
<br />
'''OWASP-Sweden in Computer Sweden - 08:44, 19 Dec 2007 (EDT)'''<br />
<br />
Today the Swedish national IT newspaper 'Computer Sweden' published an article on the new OWASP-Sweden chapter - [http://computersweden.idg.se/2.2683/1.137387 ''Mecka för säker programmering till Sverige''], or ''A Mecca for Secure Programming Comes to Sweden'' in English. While OWASP is more than a programmer's guide, Mattias Bergling and I are very happy to get the news out to a large part of Sweden's IT industry.<br />
<br />
'''To become a member of Owasp-Sweden just join the [http://lists.owasp.org/mailman/listinfo/owasp-sweden mailing list].'''<br />
<br />
<br />
'''OWASP-Sweden opens! - 22:25, 01 Oct 2007 (EDT)'''<br />
<br />
Finally, Sweden has joined the OWASP movement and John Wilander, the local chapter leader, welcomes members to the Stockholm-based OWASP-Sweden. Please join our mailing list. Plans for meetings and seminars will be made.<br />
<br />
Are you interested in helping out? Do you have ideas for great invited speakers or workshop meetings? Feel free to contact the chapter.</div>

Application Threat Modeling, revised 2010-08-12T08:57:34Z by Michael Boman (Resized the UseAndMisuseCase image)
<hr />
<div>[[OWASP Code Review Guide Table of Contents]]__TOC__<br />
<br />
<br />
<br><br />
<br />
----<br />
<br />
===Introduction===<br />
Threat modeling is an approach for analyzing the security of an application. It is a structured approach that enables you to identify, quantify, and address the security risks associated with an application. Threat modeling is not an approach to reviewing code, but it does complement the security code review process. The inclusion of threat modeling in the SDLC can help to ensure that applications are being developed with security built in from the very beginning. This, combined with the documentation produced as part of the threat modeling process, can give the reviewer a greater understanding of the system. This allows the reviewer to see where the entry points to the application are and the threats associated with each entry point. The concept of threat modeling is not new, but there has been a clear mindset change in recent years. Modern threat modeling looks at a system from a potential attacker's perspective, as opposed to a defender's viewpoint. Microsoft has been a strong advocate of the process over the past several years. The company has made threat modeling a core component of its SDLC, which it claims to be one of the reasons for the increased security of its products in recent years. <br />
<br />
When source code analysis is performed outside the SDLC, such as on existing applications, the results of the threat modeling help reduce the complexity of the source code analysis by promoting a depth-first approach over a breadth-first approach. Instead of reviewing all source code with equal focus, you can prioritize the security code review of components that the threat modeling has ranked as high risk. <br />
<br />
The threat modeling process can be decomposed into three high-level steps:<br />
<br />
'''Step 1:''' Decompose the Application. <br />
The first step in the threat modeling process is concerned with gaining an understanding of the application and how it interacts with external entities. This involves creating use cases to understand how the application is used, identifying entry points to see where a potential attacker could interact with the application, identifying assets, i.e. items/areas that the attacker would be interested in, and identifying trust levels, which represent the access rights that the application will grant to external entities. This information is documented in the threat model document and is also used to produce data flow diagrams (DFDs) for the application. The DFDs show the different paths through the system, highlighting the privilege boundaries. <br />
<br />
'''Step 2:''' Determine and rank threats.<br />
Critical to the identification of threats is using a threat categorization methodology. A threat categorization such as STRIDE can be used, or the Application Security Frame (ASF), which defines threat categories such as Auditing & Logging, Authentication, Authorization, Configuration Management, Data Protection in Storage and Transit, Data Validation, and Exception Management. The goal of the threat categorization is to help identify threats both from the attacker's perspective (STRIDE) and from the defensive perspective (ASF). DFDs produced in step 1 help to identify the potential threat targets from the attacker's perspective, such as data sources, processes, data flows, and interactions with users. These threats can be identified further as the roots for threat trees; there is one tree for each threat goal. From the defensive perspective, ASF categorization helps to identify the threats as weaknesses of the security controls that should counter them. Common threat lists with examples can help in the identification of such threats. Use and abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such protection exists. The security risk for each threat can be determined using a value-based risk model such as DREAD, or a less subjective qualitative risk model based upon general risk factors (e.g. likelihood and impact).<br />
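As a concrete illustration of value-based ranking, the five DREAD factors (Damage, Reproducibility, Exploitability, Affected users, Discoverability) can be averaged into a single score. The sketch below is a minimal Python example; the 0-10 scale and the threat names are illustrative assumptions, not part of the model itself.<br />

```python
# Sketch of ranking threats with DREAD. The scale and the example
# threats (for the college library website) are illustrative assumptions.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five DREAD factors into a single risk value."""
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

# Hypothetical threats against the college library website example.
threats = {
    "SQL injection via search page": dread_score(8, 10, 7, 10, 10),
    "Session hijacking of a librarian login": dread_score(7, 5, 5, 6, 6),
    "Brute force of user credentials": dread_score(5, 8, 6, 4, 8),
}

# Sort from highest to lowest risk to prioritize mitigation.
for name, score in sorted(threats.items(), key=lambda t: t[1], reverse=True):
    print(f"{score:4.1f}  {name}")
```

Sorting the scores from highest to lowest yields the prioritized list that the mitigation step works through.<br />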
<br />
'''Step 3:''' Determine countermeasures and mitigation.<br />
A lack of protection against a threat might indicate a vulnerability whose risk exposure could be mitigated with the implementation of a countermeasure. Such countermeasures can be identified using threat-countermeasure mapping lists. Once a risk ranking is assigned to the threats, it is possible to sort them from highest to lowest risk and to prioritize the mitigation effort, for example by responding to the highest-ranked threats with the identified countermeasures. The risk mitigation strategy might involve evaluating these threats from the business impact they pose and reducing the risk. Other options include accepting the risk, assuming the business impact is acceptable because of compensating controls, informing the user of the threat, removing the risk posed by the threat completely, or, the least preferable option, doing nothing. <br />
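The prioritization described above can be sketched as a lookup against a threat-countermeasure mapping list; the mapping entries, threat names, and risk values below are illustrative assumptions.<br />

```python
# Sketch of prioritizing mitigation: sort ranked threats and look up a
# countermeasure in a threat-countermeasure mapping list. All entries
# here are illustrative, not from a published mapping.

countermeasures = {
    "SQL injection": "Use parameterized queries and input validation",
    "Session hijacking": "Regenerate session IDs and use secure cookies",
}

# (threat, category, risk) tuples from the ranking step.
ranked = [
    ("Search form tampering", "SQL injection", 9.0),
    ("Stolen librarian session", "Session hijacking", 5.8),
]

# Highest risk first; fall back to a risk-acceptance decision when no
# countermeasure is mapped.
for threat, category, risk in sorted(ranked, key=lambda t: t[2], reverse=True):
    action = countermeasures.get(category, "accept, transfer, or do nothing")
    print(f"risk {risk}: {threat} -> {action}")
```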
<br />
Each of the above steps is documented as it is carried out. The resulting document is the threat model for the application. This guide will use an example to help explain the concepts behind threat modeling. The same example will be used throughout each of the three steps as a learning aid. The example that will be used is a college library website. At the end of the guide we will have produced the threat model for the college library website. Each of the steps in the threat modeling process is described in detail below.<br />
<br />
== Decompose the Application ==<br />
The goal of this step is to gain an understanding of the application and how it interacts with external entities. This goal is achieved by information gathering and documentation. The information gathering process is carried out using a clearly defined structure, which ensures the correct information is collected. This structure also defines how the information should be documented to produce the Threat Model. <br />
<br />
==Threat Model Information==<br />
The first item in the threat model is the information relating to the threat model. <br />
This must include the following:<br />
<br />
# '''Application Name''' - The name of the application.<br />
# '''Application Version''' - The version of the application.<br />
# '''Description''' - A high level description of the application.<br />
# '''Document Owner''' - The owner of the threat modeling document. <br />
# '''Participants''' - The participants involved in the threat modeling process for this application.<br />
# '''Reviewer''' - The reviewer(s) of the threat model.<br/><br />
Example:<br/><br />
<table align="center" cellspacing="1" CELLPADDING="7"><br />
<br />
<tr bgcolor="#cccccc"><br />
<th colspan="2" align="center">Threat Model Information</th><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<th align="left">Application Name:</th><br />
<td>College Library Website</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th align="left">Application Version:</th><br />
<td>1.0</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<th align="left"> Description:</th><br />
<td>The college library website is the first implementation of a website to provide librarians and library patrons (students and college staff) with online services. <br />
As this is the first implementation of the website, the functionality will be limited. There will be three users of the application: <br/><br />
1. Students<br/><br />
2. Staff<br/><br />
3. Librarians<br/><br />
Staff and students will be able to log in and search for books, and staff members can request books. Librarians will be able to log in, add books, add users, and search for books.</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th align="left">Document Owner:</th><br />
<td>David Lowry</td><br />
</tr><br />
<br />
<br />
<tr bgcolor="#dddddd"><br />
<th align="left">Participants:</th><br />
<td>David Rook</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th align="left">Reviewer:</th><br />
<td>Eoin Keary</td><br />
</tr><br />
<br />
</table><br />
<br/><br />
<br />
==External Dependencies==<br />
External dependencies are items external to the code of the application that may pose a threat to the application. These items are typically still within the control of the organization, but possibly not within the control of the development team. The first area to look at when investigating external dependencies is how the application will be deployed in a production environment, and the requirements surrounding that deployment. This involves looking at how the application is, or is not, intended to be run. For example, if the application is expected to run on a server that has been hardened to the organization's hardening standard and is expected to sit behind a firewall, then this information should be documented in the external dependencies section. External dependencies should be documented as follows:<br />
<br />
# '''ID''' - A unique ID assigned to the external dependency.<br />
# '''Description''' - A textual description of the external dependency.<br />
<br/><br />
Example:<br />
<br/><br />
<table align="center" cellspacing="1" CELLPADDING="7"><br />
<br />
<tr bgcolor="#cccccc"><br />
<th colspan="2" align="center">External Dependencies</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th>ID</th><br />
<th>Description</th><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>1</td><br />
<td>The college library website will run on a Linux server running Apache. This server will be hardened as per the college's server hardening standard. This includes the application of the latest operating system and application security patches.</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>2</td><br />
<td>The database server will be MySQL and it will run on a Linux server. This server will be hardened as per the college's server hardening standard. This will include the application of the latest operating system and application security patches.</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>3</td><br />
<td>The connection between the Web Server and the database server will be over a private network.</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>4</td><br />
<td>The Web Server is behind a firewall and the only communication available is TLS.</td><br />
</tr><br />
<br />
<br />
</table><br />
<br/><br />
<br />
==Entry Points==<br />
Entry points define the interfaces through which potential attackers can interact with the application or supply it with data. In order for a potential attacker to attack an application, entry points must exist. Entry points in an application can be layered, for example each web page in a web application may contain multiple entry points. Entry points should be documented as follows: <br />
<br />
# '''ID''' - A unique ID assigned to the entry point. This will be used to cross reference the entry point with any threats or vulnerabilities that are identified. In the case of layered entry points, a major.minor notation should be used.<br />
# '''Name''' - A descriptive name identifying the entry point and its purpose.<br />
# '''Description''' - A textual description detailing the interaction or processing that occurs at the entry point.<br />
# '''Trust Levels''' - The level of access required at the entry point is documented here. These will be cross referenced with the trust levels defined later in the document.<br />
<br/><br />
Example:<br />
<br/><br />
<table align="center" cellspacing="1" CELLPADDING="7"><br />
<br />
<tr bgcolor="#cccccc"><br />
<th colspan="4" align="center">Entry Points</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th width="5%">ID</th><br />
<th width="15%">Name</th><br />
<th width="45%">Description</th><br />
<th width="25%">Trust Levels</th><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>1</td><br />
<td>HTTPS Port</td><br />
<td>The college library website will only be accessible via TLS. All pages within the college library website are layered on this entry point.</td><br />
<td>(1) Anonymous Web User<br/><br />
(2) User with Valid Login Credentials<br/><br />
(3) User with Invalid Login Credentials<br/><br />
(4) Librarian<br/><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>1.1</td><br />
<td>Library Main Page</td><br />
<td>The splash page for the college library website is the entry point for all users.</td><br />
<td>(1) Anonymous Web User<br/><br />
(2) User with Valid Login Credentials<br/><br />
(3) User with Invalid Login Credentials<br/><br />
(4) Librarian<br/><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>1.2</td><br />
<td>Login Page</td><br />
<td>Students, faculty members and librarians must log in to the college library website before they can carry out any of the use cases.</td><br />
<td>(1) Anonymous Web User<br/><br />
(2) User with Valid Login Credentials<br/><br />
(3) User with Invalid Login Credentials<br/><br />
(4) Librarian<br/><br />
</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>1.2.1</td><br />
<td>Login Function</td><br />
<td>The login function accepts user supplied credentials and compares them with those in the database.</td><br />
<td><br />
(2) User with Valid Login Credentials<br/><br />
(3) User with Invalid Login Credentials<br/><br />
(4) Librarian</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>1.3</td><br />
<td>Search Entry Page</td><br />
<td>The page used to enter a search query.</td><br />
<td><br />
(2) User with Valid Login Credentials<br/><br />
(4) Librarian</td><br />
</tr><br />
<br />
</table><br />
<br/><br />
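A minimal sketch of the layered, major.minor entry point notation, using the IDs from the example table above; the helper function is an illustrative assumption, not part of the methodology.<br />

```python
# Sketch of layered entry point IDs using the major.minor notation:
# "1.2.1" (Login Function) is layered under "1.2" (Login Page), which is
# layered under "1" (HTTPS Port). The helper is illustrative.

def is_layered_under(child_id: str, parent_id: str) -> bool:
    """True if child_id is nested (directly or indirectly) under parent_id."""
    return child_id.startswith(parent_id + ".")

# Entry point IDs and names from the example table.
entry_points = {
    "1": "HTTPS Port",
    "1.1": "Library Main Page",
    "1.2": "Login Page",
    "1.2.1": "Login Function",
    "1.3": "Search Entry Page",
}

# All entry points layered on the HTTPS port:
nested = [eid for eid in entry_points if is_layered_under(eid, "1")]
print(nested)  # ['1.1', '1.2', '1.2.1', '1.3']
```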
<br />
==Assets==<br />
The system must have something that the attacker is interested in; these items/areas of interest are defined as assets. Assets are essentially threat targets, i.e. they are the reason threats will exist. Assets can be both physical assets and abstract assets. For example, an asset of an application might be a list of clients and their personal information; this is a physical asset. An abstract asset might be the reputation of an organization. Assets are documented in the threat model as follows: <br />
<br />
# '''ID''' - A unique ID is assigned to identify each asset. This will be used to cross reference the asset with any threats or vulnerabilities that are identified.<br />
# '''Name''' - A descriptive name that clearly identifies the asset.<br />
# '''Description''' - A textual description of what the asset is and why it needs to be protected.<br />
# '''Trust Levels''' - The level of access required to interact with the asset is documented here. These will be cross referenced with the trust levels defined in the next step.<br />
<br/><br />
Example:<br />
<br/><br />
<table align="center" cellspacing="1" CELLPADDING="7"><br />
<br />
<tr bgcolor="#cccccc"><br />
<th colspan="4" align="center">Assets</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th width="5%">ID</th><br />
<th width="15%">Name</th><br />
<th width="55%">Description</th><br />
<th width="25%">Trust Levels</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>1</td><br />
<td>Library Users and Librarian</td><br />
<td>Assets relating to students, faculty members, and librarians.</td><br />
<td></td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>1.1</td><br />
<td>User Login Details</td><br />
<td>The login credentials that a student or a faculty member will use to log into the College Library website.</td><br />
<td><br />
(2) User with Valid Login Credentials<br/><br />
(4) Librarian <br/><br />
(5) Database Server Administrator <br/><br />
(7) Web Server User Process<br/><br />
(8) Database Read User<br/><br />
(9) Database Read/Write User<br />
</td></tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>1.2</td><br />
<td>Librarian Login Details</td><br />
<td>The login credentials that a Librarian will use to log into the College Library website.</td><br />
<td><br />
(4) Librarian <br/><br />
(5) Database Server Administrator <br/><br />
(7) Web Server User Process<br/><br />
(8) Database Read User<br/><br />
(9) Database Read/Write User<br />
</td></tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>1.3</td><br />
<td>Personal Data</td><br />
<td>The College Library website will store personal information relating to the students, faculty members, and librarians.</td><br />
<td><br />
(4) Librarian <br/><br />
(5) Database Server Administrator <br/><br />
(6) Website Administrator <br/><br />
(7) Web Server User Process<br/><br />
(8) Database Read User<br/><br />
(9) Database Read/Write User<br />
<br />
</td></tr><br />
<br />
<br />
<tr bgcolor="#cccccc"><br />
<td>2</td><br />
<td>System</td><br />
<td>Assets relating to the underlying system.</td><br />
<td></td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>2.1</td><br />
<td>Availability of College Library Website</td><br />
<td>The College Library website should be available 24 hours a day and accessible to all students, college faculty members, and librarians.</td><br />
<td><br />
(5) Database Server Administrator <br/><br />
(6) Website Administrator <br/><br />
</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>2.2</td><br />
<td>Ability to Execute Code as a Web Server User</td><br />
<td>This is the ability to execute source code on the web server as a web server user.</td><br />
<td><br />
(6) Website Administrator <br/><br />
(7) Web Server User Process <br/><br />
</td><br />
</tr><br />
<br />
<br />
<tr bgcolor="#dddddd"><br />
<td>2.3</td><br />
<td>Ability to Execute SQL as a Database Read User</td><br />
<td>This is the ability to execute SQL select queries on the database, and thus retrieve any information stored within the College Library database.</td><br />
<td><br />
(5) Database Server Administrator<br/><br />
(8) Database Read User<br/><br />
(9) Database Read/Write User<br/><br />
</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>2.4</td><br />
<td>Ability to Execute SQL as a Database Read/Write User</td><br />
<td>This is the ability to execute SQL select, insert, and update queries on the database, and thus have read and write access to any information stored within the College Library database.</td><br />
<td><br />
(5) Database Server Administrator<br/><br />
(9) Database Read/Write User<br/><br />
</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>3</td><br />
<td>Website</td><br />
<td>Assets relating to the College Library website.</td><br />
<td></td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>3.1</td><br />
<td>Login Session</td><br />
<td>This is the login session of a user to the College Library website. This user could be a student, a member of the college faculty, or a Librarian.</td><br />
<td><br />
(2) User with Valid Login Credentials<br/><br />
(4) Librarian<br/><br />
</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>3.2</td><br />
<td>Access to the Database Server</td><br />
<td>Access to the database server allows you to administer the database, giving you full access to the database users and all data contained within the database.</td><br />
<td><br />
(5) Database Server Administrator<br/><br />
</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>3.3</td><br />
<td>Ability to Create Users</td><br />
<td>The ability to create users would allow an individual to create new users on the system. These could be student users, faculty member users, and librarian users.</td><br />
<td><br />
(4) Librarian<br/><br />
(6) Website Administrator<br/><br />
</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>3.4</td><br />
<td>Access to Audit Data</td><br />
<td>The audit data shows all auditable events that occurred within the College Library application, whether triggered by students, staff, or librarians.</td><br />
<td><br />
(6) Website Administrator<br/><br />
</td><br />
</tr><br />
<br />
</table><br />
<br />
<br/><br />
<br />
==Trust Levels==<br />
Trust levels represent the access rights that the application will grant to external entities. The trust levels are cross referenced with the entry points and assets. This allows us to define the access rights or privileges required at each entry point, and those required to interact with each asset. Trust levels are documented in the threat model as follows: <br />
<br />
# '''ID''' - A unique number is assigned to each trust level. This is used to cross reference the trust level with the entry points and assets.<br />
# '''Name''' - A descriptive name that allows you to identify the external entities that have been granted this trust level.<br />
# '''Description''' - A textual description of the trust level detailing the external entity who has been granted the trust level.<br />
<br/><br />
Example:<br />
<br/><br />
<table align="center" cellspacing="1" CELLPADDING="7"><br />
<br />
<tr bgcolor="#cccccc"><br />
<th colspan="4" align="center">Trust Levels</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th width="5%">ID</th><br />
<th width="25%">Name</th><br />
<th width="70%">Description</th><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>1</td><br />
<td>Anonymous Web User</td><br />
<td>A user who has connected to the college library website but has not provided valid credentials.</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>2</td><br />
<td>User with Valid Login Credentials</td><br />
<td>A user who has connected to the college library website and has logged in using valid login credentials.</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>3</td><br />
<td>User with Invalid Login Credentials</td><br />
<td>A user who has connected to the college library website and is attempting to log in using invalid login credentials.</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>4</td><br />
<td>Librarian</td><br />
<td>The librarian can create users on the library website and view their personal information.</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>5</td><br />
<td>Database Server Administrator</td><br />
<td>The database server administrator has read and write access to the database that is used by the college library website.</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>6</td><br />
<td>Website Administrator</td><br />
<td>The Website administrator can configure the college library website.</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>7</td><br />
<td>Web Server User Process</td><br />
<td>This is the process/user account that the web server uses to execute code and to authenticate itself against the database server.</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>8</td><br />
<td>Database Read User</td><br />
<td>The database user account used to access the database for read access.</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>9</td><br />
<td>Database Read/Write User</td><br />
<td>The database user account used to access the database for read and write access.</td><br />
</tr><br />
</table><br />
<br/><br />
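The cross-referencing between trust levels, entry points, and assets can be checked mechanically: every trust level ID that an entry point or asset refers to should be defined in the trust level table. The sketch below abbreviates the example tables, and the validation check is an illustrative assumption.<br />

```python
# Sketch of validating the trust level cross-references. The data below
# abbreviates the example tables for the college library website.

trust_levels = {
    1: "Anonymous Web User",
    2: "User with Valid Login Credentials",
    3: "User with Invalid Login Credentials",
    4: "Librarian",
}

# Entry points cross-reference trust levels by ID.
entry_points = {
    "1.2": {"name": "Login Page", "trust": [1, 2, 3, 4]},
    "1.3": {"name": "Search Entry Page", "trust": [2, 4]},
}

# Flag any reference to a trust level that was never defined.
undefined = {tl for ep in entry_points.values()
             for tl in ep["trust"] if tl not in trust_levels}
print(undefined or "all trust level references are defined")
```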
<br />
==Data Flow Diagrams==<br />
All of the information collected allows us to accurately model the application through the use of Data Flow Diagrams (DFDs). The DFDs will allow us to gain a better understanding of the application by providing a visual representation of how the application processes data. The focus of the DFDs is on how data moves through the application and what happens to the data as it moves. DFDs are hierarchical in structure, so they can be used to decompose the application into subsystems and lower-level subsystems. The high level DFD will allow us to clarify the scope of the application being modeled. The lower level iterations will allow us to focus on the specific processes involved when processing specific data. There are a number of symbols that are used in DFDs for threat modeling. These are described below:<br />
<br />
'''External Entity'''<br/><br />
The external entity shape is used to represent any entity outside the application that interacts with the application via an entry point.<br/><br/><br />
[[Image:DFD_external_entity.gif]]<br />
<br/><br/><br />
<br />
'''Process'''<br/><br />
The process shape represents a task that handles data within the application. The task may process the data or perform an action based on the data.<br/><br/><br />
[[Image:DFD_process.gif]]<br />
<br/><br/><br />
<br />
'''Multiple Process'''<br/><br />
The multiple process shape is used to represent a collection of subprocesses. The multiple process can be broken down into its subprocesses in another DFD.<br/><br/><br />
[[Image:DFD_multiple_process.gif]]<br />
<br/><br/><br />
<br />
'''Data Store'''<br/><br />
The data store shape is used to represent locations where data is stored. Data stores do not modify the data, they only store data.<br/><br/><br />
[[Image:DFD_data_store.gif]]<br />
<br/><br/><br />
<br />
<br />
'''Data Flow'''<br/><br />
The data flow shape represents data movement within the application. The direction of the data movement is represented by the arrow.<br/><br/><br />
[[Image:DFD_data_flow.gif]]<br />
<br/><br/><br />
'''Privilege Boundary'''<br/><br />
The privilege boundary shape is used to represent the change of privilege levels as the data flows through the application.<br/><br/><br />
[[Image:DFD_privilge_boundary.gif]]<br />
<br/><br/><br />
<br />
<br />
<br />
===Example===<br />
<br/> '''Data Flow Diagram for the College Library Website'''<br />
<br/><br/><br />
[[Image:Data flow1.jpg]]<br />
<br/><br/><br />
'''User Login Data Flow Diagram for the College Library Website'''<br />
<br/><br/><br />
[[Image:Data flow2.jpg]]<br />
<br/><br/><br />
<br />
== Determine and Rank Threats ==<br />
===Threat Categorization===<br />
The first step in the determination of threats is adopting a threat categorization. A threat categorization provides a set of threat categories with corresponding examples so that threats can be systematically identified in the application in a structured and repeatable manner. <br />
<br />
====STRIDE====<br />
A threat categorization such as STRIDE is useful in the identification of threats by classifying attacker goals such as:<br />
*Spoofing<br />
*Tampering<br />
*Repudiation<br />
*Information Disclosure<br />
*Denial of Service<br />
*Elevation of Privilege.<br />
<br />
A threat list of generic threats organized in these categories with examples and the affected security controls is provided in the following table:<br />
<br />
<br/><br />
<table align="center" cellspacing="1" CELLPADDING="7"><br />
<br />
<tr bgcolor="#cccccc"><br />
<th colspan="3" align="center">STRIDE Threat List</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th>Type</th><br />
<th>Examples</th><br />
<th>Security Control</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>Spoofing</td><br />
<td>Threat action aimed at illegally accessing and using another user's credentials, such as username and password.</td><br />
<td>Authentication</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>Tampering</td><br />
<td>Threat action aimed at maliciously modifying persistent data, such as records in a database, or altering data in transit between two computers over an open network, such as the Internet.</td><br />
<td>Integrity</td><br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>Repudiation</td><br />
<td>Threat action aimed at performing prohibited operations in a system that lacks the ability to trace such operations.</td><br />
<td>Non-Repudiation</td> <br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>Information disclosure</td><br />
<td>Threat action aimed at reading a file that one was not granted access to, or at reading data in transit. </td><br />
<td>Confidentiality</td> <br />
</tr><br />
<br />
<tr bgcolor="#dddddd"><br />
<td>Denial of service</td><br />
<td>Threat action aimed at denying access to valid users, such as by making a web server temporarily unavailable or unusable.<br />
</td><br />
<td>Availability</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>Elevation of privilege</td><br />
<td>Threat action aimed at gaining privileged access to resources in order to obtain unauthorized access to information or to compromise a system.</td><br />
<td>Authorization</td><br />
</tr><br />
<br />
</table><br />
<br/><br />
<br />
==Security Controls==<br />
Once the basic threat agents and business impacts are understood, the review team should try to identify the set of controls that could prevent these threat agents from causing those impacts. The primary focus of the code review should be to ensure that these security controls are in place, that they work properly, and that they are correctly invoked in all the necessary places. The checklist below can help to ensure that all the likely risks have been considered.<br />
<br />
'''Authentication:'''<br />
*Ensure all internal and external connections (user and entity) go through an appropriate and adequate form of authentication. Be assured that this control cannot be bypassed. <br />
*Ensure all pages enforce the requirement for authentication. <br />
*Ensure that authentication credentials and any other sensitive information are accepted only via the HTTP “POST” method, never via the HTTP “GET” method. <br />
*Any page deemed by the business or the development team as being outside the scope of authentication should be reviewed in order to assess any possibility of security breach. <br />
*Ensure that authentication credentials do not traverse the wire in clear text form. <br />
*Ensure development/debug backdoors are not present in production code. <br />
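The POST-only rule for credentials can be sketched as a small helper; this is a minimal illustration, and the function name and field list are hypothetical, not from the guide:

```python
# Illustrative check: sensitive fields are accepted only on POST requests.
# GET requests carrying credentials end up in logs, proxies, and history.
SENSITIVE_FIELDS = {"username", "password", "token"}

def credentials_allowed(method, params):
    """Return True if the request may carry the given parameters."""
    if method.upper() == "POST":
        return True
    # On any other method, reject requests that carry sensitive fields.
    return not (SENSITIVE_FIELDS & set(params))
```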
<br />
'''Authorization: '''<br />
*Ensure that there are authorization mechanisms in place. <br />
*Ensure that the application has clearly defined the user types and the rights of said users. <br />
*Ensure there is a least privilege stance in operation. <br />
*Ensure that the Authorization mechanisms work properly, fail securely, and cannot be circumvented. <br />
*Ensure that authorization is checked on every request. <br />
*Ensure development/debug backdoors are not present in production code. <br />
<br />
'''Cookie Management: '''<br />
*Ensure that sensitive information is not compromised. <br />
*Ensure that unauthorized activities cannot take place via cookie manipulation. <br />
*Ensure that proper encryption is in use. <br />
*Ensure secure flag is set to prevent accidental transmission over “the wire” in a non-secure manner. <br />
*Determine if all state transitions in the application code properly check for the cookies and enforce their use. <br />
*Ensure the session data is being validated. <br />
*Ensure cookies contain as little private information as possible. <br />
*Ensure entire cookie is encrypted if sensitive data is persisted in the cookie. <br />
*Define all cookies being used by the application, their name, and why they are needed. <br />
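The secure-flag item above can be illustrated with the standard library's cookie support; a minimal sketch, with the cookie name chosen for illustration:

```python
from http.cookies import SimpleCookie

def make_session_cookie(session_id):
    """Build a Set-Cookie header value with Secure and HttpOnly set."""
    cookie = SimpleCookie()
    cookie["SESSIONID"] = session_id
    cookie["SESSIONID"]["secure"] = True    # never sent over plain HTTP
    cookie["SESSIONID"]["httponly"] = True  # not readable from JavaScript
    cookie["SESSIONID"]["path"] = "/"
    return cookie["SESSIONID"].OutputString()
```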
<br />
'''Data/Input Validation: '''<br />
*Ensure that a data validation mechanism is present. <br />
*Ensure all input that can (and will) be modified by a malicious user, such as HTTP headers, input fields, hidden fields, drop down lists, and other web components, is properly validated. <br />
*Ensure that the proper length checks on all input exist. <br />
*Ensure that all fields, cookies, http headers/bodies, and form fields are validated. <br />
*Ensure that the data is well formed and contains only known good characters if possible. <br />
*Ensure that the data validation occurs on the server side. <br />
*Examine where data validation occurs and if a centralized model or decentralized model is used. <br />
*Ensure there are no backdoors in the data validation model. <br />
*'''Golden Rule: All external input, no matter what it is, is examined and validated. '''<br />
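Server-side whitelist ("known good") validation can be sketched as below; the field names and patterns are hypothetical examples, not requirements from the guide:

```python
import re

# Illustrative whitelist: each field is matched against a pattern of
# known-good characters; anything else is rejected.
PATTERNS = {
    "student_id": re.compile(r"[0-9]{6,10}"),
    "name":       re.compile(r"[A-Za-z][A-Za-z '\-]{0,49}"),
}

def validate(field, value):
    """Reject unknown fields and any value outside the whitelist."""
    pattern = PATTERNS.get(field)
    return bool(pattern and pattern.fullmatch(value))
```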
<br />
'''Error Handling/Information leakage: '''<br />
*Ensure that all method/function calls that return a value have proper error handling and return value checking. <br />
*Ensure that exceptions and error conditions are properly handled. <br />
*Ensure that no system errors can be returned to the user. <br />
*Ensure that the application fails in a secure manner. <br />
*Ensure resources are released if an error occurs. <br />
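Failing securely while still releasing resources can be sketched with a try/except/finally; the `db`/`conn` interface here is hypothetical, used only to show the shape:

```python
# Illustrative sketch of "fail securely, release resources".
def read_record(db, record_id):
    conn = db.connect()
    try:
        return {"ok": True, "data": conn.fetch(record_id)}
    except Exception:
        # Log details server-side; return a generic message so no raw
        # system error text reaches the user.
        return {"ok": False, "error": "Request could not be completed."}
    finally:
        conn.close()  # released on both the success and the error path
```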
<br />
'''Logging/Auditing: '''<br />
*Ensure that no sensitive information is logged in the event of an error. <br />
*Ensure the payload being logged is of a defined maximum length and that the logging mechanism enforces that length. <br />
*Ensure no sensitive data can be logged; e.g. cookies, HTTP “GET” method, authentication credentials. <br />
*Examine if the application will audit the actions being taken by the application on behalf of the client (particularly data manipulation/Create, Update, Delete (CUD) operations). <br />
*Ensure successful and unsuccessful authentication is logged. <br />
*Ensure application errors are logged. <br />
*Examine the application for debug logging with the view to logging of sensitive data. <br />
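Scrubbing sensitive values and capping payload length before logging can be sketched as follows; the key names and maximum length are illustrative assumptions:

```python
import re

# Illustrative scrubber: mask values of known sensitive keys and
# truncate the payload to a defined maximum length before logging.
MAX_LOG_LEN = 256
SENSITIVE = re.compile(r"(password|token|ssn)=([^&\s]+)", re.IGNORECASE)

def scrub_for_log(payload):
    masked = SENSITIVE.sub(r"\1=***", payload)
    return masked[:MAX_LOG_LEN]
```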
<br />
'''Cryptography: '''<br />
*Ensure no sensitive data is transmitted in the clear, internally or externally. <br />
*Ensure the application is implementing known good cryptographic methods. <br />
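One known-good method available in the standard library is salted PBKDF2-HMAC-SHA256 for password storage; a minimal sketch (the iteration count is illustrative):

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor

def hash_password(password, salt=None):
    """Derive a salted PBKDF2-HMAC-SHA256 digest for storage."""
    salt = salt or os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)  # constant-time compare
```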
<br />
'''Secure Code Environment: '''<br />
*Examine the file structure. Are any components that should not be directly accessible available to the user?<br />
*Examine all memory allocations/de-allocations. <br />
*Examine the application for dynamic SQL and determine if it is vulnerable to injection. <br />
*Examine the application for “main()” executable functions and debug harnesses/backdoors.<br />
*Search for commented-out code and commented-out test code, which may contain sensitive information. <br />
*Ensure all logical decisions have a default clause. <br />
*Ensure no development environment kit is contained in the build directories. <br />
*Search for any calls to the underlying operating system or file open calls and examine the error possibilities. <br />
<br />
'''Session Management: '''<br />
*Examine how and when a session is created for a user, unauthenticated and authenticated. <br />
*Examine the session ID and verify if it is complex enough to fulfill requirements regarding strength. <br />
*Examine how sessions are stored: e.g. in a database, in memory etc. <br />
*Examine how the application tracks sessions. <br />
*Determine the actions the application takes if an invalid session ID occurs. <br />
*Examine session invalidation. <br />
*Determine how multithreaded/multi-user session management is performed. <br />
*Determine the session HTTP inactivity timeout. <br />
*Determine how the log-out functionality functions.<br />
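A session ID strong enough for these checks should come from a cryptographically secure RNG; a minimal sketch using the standard library (the 32-byte size is an illustrative choice):

```python
import secrets

def new_session_id():
    """Generate an unguessable, URL-safe session ID from a CSPRNG."""
    return secrets.token_urlsafe(32)  # 32 random bytes, ~256 bits of entropy
```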
<br />
==Threat Analysis==<br />
The prerequisite in the analysis of threats is the understanding of the generic definition of risk, that is, the probability that a threat agent will exploit a vulnerability to cause an impact on the application. From the perspective of risk management, threat modeling is a systematic and strategic approach for identifying and enumerating threats to an application environment, with the objective of minimizing risk and the associated impacts. <br />
<br />
Threat analysis as such is the identification of the threats to the application, and involves the analysis of each aspect of the application's functionality, architecture, and design to identify and classify potential weaknesses that could lead to an exploit. <br />
<br />
In the first threat modeling step, we have modeled the system showing data flows, trust boundaries, process components, and entry and exit points. An example of such modeling is shown in the Example: Data Flow Diagram for the College Library Website. <br />
<br />
Data flows show how data moves logically through the application end to end, and allow the identification of affected components through critical points (i.e. data entering or leaving the system, storage of data) and the flow of control through these components. Trust boundaries show any location where the level of trust changes. Process components show where data is processed, such as web servers, application servers, and database servers. Entry points show where data enters the system (i.e. input fields, methods) and exit points show where it leaves the system (i.e. dynamic output, methods), respectively. Entry and exit points define a trust boundary. <br />
<br />
Threat lists based on the STRIDE model are useful in the identification of threats with regards to the attacker goals. For example, if the threat scenario is attacking the login, would the attacker brute force the password to break the authentication? If the threat scenario is to try to elevate privileges to gain another user’s privileges, would the attacker try to perform forceful browsing? <br />
<br />
It is vital that all possible attack vectors are evaluated from the attacker's point of view. For this reason, it is also important to consider entry and exit points, since they could also allow the realization of certain kinds of threats. For example, the login page allows sending authentication credentials, and the input data accepted by an entry point has to be validated for potential malicious input that could exploit vulnerabilities such as SQL injection, cross site scripting, and buffer overflows. Additionally, the data flow passing through that point has to be used to determine the threats to the entry points of the next components along the flow. If the following components can be regarded as critical (e.g. they hold sensitive data), that entry point can be regarded as more critical as well. In an end to end data flow, for example, the input data (i.e. username and password) from a login page, passed on without validation, could be exploited for a SQL injection attack to manipulate a query for breaking the authentication or to modify a table in the database. <br />
<br />
Exit points might serve as attack points to the client (e.g. XSS vulnerabilities) as well as for the realization of information disclosure vulnerabilities. For example, in the case of exit points from components handling confidential data (e.g. data access components), exit points lacking security controls to protect confidentiality and integrity can lead to disclosure of such confidential information to an unauthorized user. <br />
<br />
In many cases threats enabled by exit points are related to the threats of the corresponding entry point. In the login example, error messages returned to the user via the exit point might allow for entry point attacks, such as account harvesting (e.g. username not found), or SQL injection (e.g. SQL exception errors). <br />
<br />
From the defensive perspective, identifying threats driven by a security control categorization such as ASF allows a threat analyst to focus on specific issues related to weaknesses (e.g. vulnerabilities) in security controls. Typically, the process of threat identification involves going through iterative cycles where initially all the possible threats in the threat list that apply to each component are evaluated. <br />
<br />
At the next iteration, threats are further analyzed by exploring the attack paths, the root causes (e.g. vulnerabilities, depicted as orange blocks) for the threat to be exploited, and the necessary mitigation controls (e.g. countermeasures, depicted as green blocks). A threat tree, as shown in Figure 2, is useful for performing such threat analysis. <br />
<br />
[[Image:Threat_Graph.gif|Figure 2: Threat Graph]]<br />
<br />
Once common threats, vulnerabilities, and attacks are assessed, a more focused threat analysis should take into consideration use and abuse cases. By thoroughly analyzing the use scenarios, weaknesses can be identified that could lead to the realization of a threat. Abuse cases should be identified as part of the security requirement engineering activity. These abuse cases can illustrate how existing protective measures could be bypassed, or where a lack of such protection exists. A use and misuse case graph for authentication is shown in the figure below:<br />
<br />
[[Image:UseAndMisuseCase.jpg|640px|Figure 3: Use and Misuse Case]]<br />
<br />
Finally, it is possible to bring all of this together by determining the types of threat to each component of the decomposed system. This can be done by using a threat categorization such as STRIDE or ASF, the use of threat trees to determine how the threat can be exposed by a vulnerability, and use and misuse cases to further validate the lack of a countermeasure to mitigate the threat.<br />
<br />
To apply STRIDE to the data flow diagram items the following table can be used: <br />
<br />
<table align="center" cellspacing="1" CELLPADDING="7"><br />
<tr bgcolor="#cccccc"><br />
<th colspan="7" align="center">STRIDE per DFD Element</th><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<th>Element</th><th>S</th><th>T</th><th>R</th><th>I</th><th>D</th><th>E</th><br />
</tr><br />
<tr bgcolor="#dddddd"><br />
<td>External Entity</td><td>X</td><td></td><td>X</td><td></td><td></td><td></td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Process</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td><td>X</td><br />
</tr><br />
<tr bgcolor="#dddddd"><br />
<td>Data Store</td><td></td><td>X</td><td>X</td><td>X</td><td>X</td><td></td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Data Flow</td><td></td><td>X</td><td></td><td>X</td><td>X</td><td></td><br />
</tr><br />
</table><br />
<br />
==Ranking of Threats==<br />
Threats can be ranked from the perspective of risk factors. By determining the risk factor posed by the various identified threats, it is possible to create a prioritized list of threats to support a risk mitigation strategy, such as deciding on which threats have to be mitigated first. Different risk factors can be used to determine which threats can be ranked as High, Medium, or Low risk. In general, threat risk models use different factors to model risks such as those shown in figure below:<br />
<br />
<br />
[[Image:Riskfactors.JPG|Figure 4: Risk Model Factors]]<br />
<br />
==DREAD==<br />
In the Microsoft DREAD threat-risk ranking model, the technical risk factors for impact are Damage and Affected Users, while the ease of exploitation factors are Reproducibility, Exploitability and Discoverability. This risk factorization allows the assignment of values to the different influencing factors of a threat. To determine the ranking of a threat, the threat analyst has to answer basic questions for each factor of risk, for example: <br />
<br />
*For Damage: How big can the damage be?<br />
*For Reproducibility: How easy is it to reproduce the attack?<br />
*For Exploitability: How much time, effort, and expertise is needed to exploit the threat?<br />
*For Affected Users: If a threat were exploited, what percentage of users would be affected?<br />
*For Discoverability: How easy is it for an attacker to discover this threat?<br />
<br />
By referring to the college library website, it is possible to document sample threats related to the use cases, such as: <br />
<br />
'''Threat: Malicious user views confidential information of students, faculty members and librarians.'''<br />
# '''Damage potential:''' Threat to reputation as well as financial and legal liability: 8<br />
# '''Reproducibility:''' Fully reproducible: 10<br />
# '''Exploitability:''' Requires being on the same subnet or having compromised a router: 7<br />
# '''Affected users:''' Affects all users: 10<br />
# '''Discoverability:''' Can be found out easily: 10<br />
<br />
Overall DREAD score: (8+10+7+10+10) / 5 = 9<br />
<br />
In this case, a score of 9 on a 10-point scale is certainly a high-risk threat. <br />
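The averaging step shown above can be sketched directly; a minimal illustration, with the function name chosen for clarity rather than taken from any DREAD tool:

```python
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    """Average the five DREAD factors (each rated 0-10)."""
    factors = (damage, reproducibility, exploitability, affected, discoverability)
    return sum(factors) / len(factors)

# The sample threat from the text: (8+10+7+10+10) / 5
score = dread_score(8, 10, 7, 10, 10)
```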
<br />
==Generic Risk Model==<br />
A more generic risk model takes into consideration the Likelihood (e.g. probability of an attack) and the Impact (e.g. damage potential): <br />
<br />
'''Risk = Likelihood x Impact'''<br />
<br />
The likelihood or probability is defined by the ease of exploitation, which mainly depends on the type of threat and the system characteristics, and by the possibility to realize a threat, which is determined by the existence of an appropriate countermeasure. <br />
<br />
The following is a set of considerations for determining ease of exploitation: <br />
# Can an attacker exploit this remotely? <br />
# Does the attacker need to be authenticated?<br />
# Can the exploit be automated?<br />
<br />
The impact mainly depends on the damage potential and the extent of the impact, such as the number of components that are affected by a threat. <br />
<br />
Examples to determine the damage potential are:<br />
# Can an attacker completely take over and manipulate the system? <br />
# Can an attacker gain administration access to the system?<br />
# Can an attacker crash the system? <br />
# Can the attacker obtain access to sensitive information such as secrets or PII?<br />
<br />
Examples to determine the number of components that are affected by a threat:<br />
# How many data sources and systems can be impacted?<br />
# How “deep” into the infrastructure can the threat agent go?<br />
<br />
These examples help in the calculation of the overall risk values by assigning qualitative values such as High, Medium, and Low to the Likelihood and Impact factors. In this case, using qualitative values rather than numeric ones, as in the DREAD model, helps avoid the ranking becoming overly subjective.<br />
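The qualitative Risk = Likelihood x Impact combination can be sketched as a simple lookup; the level boundaries chosen here are illustrative assumptions, not prescribed by the model:

```python
# Hypothetical qualitative lookup for Risk = Likelihood x Impact.
LEVELS = {"Low": 1, "Medium": 2, "High": 3}

def qualitative_risk(likelihood, impact):
    """Combine qualitative Likelihood and Impact into a risk rating."""
    product = LEVELS[likelihood] * LEVELS[impact]
    if product >= 6:
        return "High"
    if product >= 3:
        return "Medium"
    return "Low"
```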
<br />
==Countermeasure Identification==<br />
The purpose of countermeasure identification is to determine whether there is some kind of protective measure (e.g. security control, policy measure) in place that can prevent each threat previously identified via threat analysis from being realized. Vulnerabilities are then those threats that have no countermeasures. Since each of these threats has been categorized either with STRIDE or ASF, it is possible to find appropriate countermeasures in the application within the given category. <br />
<br />
Provided below is a brief and limited checklist which is by no means an exhaustive list for identifying countermeasures for specific threats. <br />
<br />
Example of countermeasures for ASF threat types are included in the following table: <br />
<br />
<table align="center" cellspacing="1" CELLPADDING="7"><br />
<tr bgcolor="#cccccc"><br />
<th colspan="2" align="center">ASF Threat & Countermeasures List</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th>Threat Type</th><br />
<th>Countermeasure</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>Authentication</td><br />
<td><br />
#Credentials and authentication tokens are protected with encryption in storage and transit<br />
#Protocols are resistant to brute force, dictionary, and replay attacks<br />
#Strong password policies are enforced<br />
#Trusted server authentication is used instead of SQL authentication<br />
#Passwords are stored with salted hashes<br />
#Password resets do not reveal password hints and valid usernames<br />
#Account lockouts do not result in a denial of service attack<br />
</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>Authorization</td><br />
<td><br />
#Strong ACLs are used for enforcing authorized access to resources<br />
#Role-based access controls are used to restrict access to specific operations<br />
#The system follows the principle of least privilege for user and service accounts<br />
#Privilege separation is correctly configured within the presentation, business and data access layers</td><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>Configuration Management</td><br />
<td><br />
#Least privileged processes are used and service accounts with no administration capability<br />
#Auditing and logging of all administration activities is enabled<br />
#Access to configuration files and administrator interfaces is restricted to administrators<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Data Protection in Storage and Transit</td><br />
<td><br />
#Standard encryption algorithms and correct key sizes are being used<br />
#Hashed message authentication codes (HMACs) are used to protect data integrity<br />
#Secrets (e.g. keys, confidential data ) are cryptographically protected both in transport and in storage<br />
#Built-in secure storage is used for protecting keys<br />
#No credentials and sensitive data are sent in clear text over the wire<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Data Validation / Parameter Validation</td><br />
<td><br />
#Data type, format, length, and range checks are enforced<br />
#All data sent from the client is validated<br />
#No security decision is based upon parameters (e.g. URL parameters) that can be manipulated<br />
#Input filtering via white list validation is used<br />
#Output encoding is used<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Error Handling and Exception Management</td><br />
<td><br />
#All exceptions are handled in a structured manner<br />
#Privileges are restored to the appropriate level in case of errors and exceptions<br />
#Error messages are scrubbed so that no sensitive information is revealed to the attacker<br />
<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>User and Session Management</td><br />
<td><br />
#No sensitive information is stored in clear text in the cookie<br />
#The contents of authentication cookies are encrypted<br />
#Cookies are configured to expire<br />
#Sessions are resistant to replay attacks<br />
#Secure communication channels are used to protect authentication cookies<br />
#User is forced to re-authenticate when performing critical functions<br />
#Sessions are expired at logout<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Auditing and Logging</td><br />
<td><br />
#Sensitive information (e.g. passwords, PII) is not logged<br />
#Access controls (e.g. ACLs) are enforced on log files to prevent unauthorized access<br />
#Integrity controls (e.g. signatures) are enforced on log files to provide non-repudiation<br />
#Log files provide an audit trail for sensitive operations and logging of key events<br />
#Auditing and logging is enabled across the tiers on multiple servers<br />
</td><br />
</tr><br />
</table><br />
<br />
When using STRIDE, the following threat-mitigation table can be used to identify techniques that can be employed to mitigate the threats.<br />
<br />
<table align="center" cellspacing="1" CELLPADDING="7"><br />
<tr bgcolor="#cccccc"><br />
<th colspan="2" align="center">STRIDE Threat & Mitigation Techniques List</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<th>Threat Type</th><br />
<th>Mitigation Techniques</th><br />
</tr><br />
<br />
<tr bgcolor="#cccccc"><br />
<td>Spoofing Identity</td><br />
<td><br />
#Appropriate authentication<br />
#Protect secret data<br />
#Don't store secrets<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Tampering with data</td><br />
<td><br />
#Appropriate authorization<br />
#Hashes<br />
#MACs<br />
#Digital signatures<br />
#Tamper resistant protocols<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Repudiation</td><br />
<td><br />
#Digital signatures<br />
#Timestamps<br />
#Audit trails<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Information Disclosure</td><br />
<td><br />
#Authorization<br />
#Privacy-enhanced protocols<br />
#Encryption<br />
#Protect secrets<br />
#Don't store secrets<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Denial of Service</td><br />
<td><br />
#Appropriate authentication<br />
#Appropriate authorization<br />
#Filtering<br />
#Throttling<br />
#Quality of service<br />
</td><br />
</tr><br />
<tr bgcolor="#cccccc"><br />
<td>Elevation of privilege</td><br />
<td><br />
#Run with least privilege<br />
</td><br />
</tr><br />
</table><br />
<br />
Once threats and corresponding countermeasures are identified it is possible to derive a threat profile with the following criteria:<br />
<br />
# '''Non mitigated threats:''' Threats which have no countermeasures and represent vulnerabilities that can be fully exploited and cause an impact <br />
# '''Partially mitigated threats:''' Threats partially mitigated by one or more countermeasures which represent vulnerabilities that can only partially be exploited and cause a limited impact <br />
# '''Fully mitigated threats:''' These threats have appropriate countermeasures in place and do not expose vulnerability and cause impact<br />
<br />
===Mitigation Strategies===<br />
The objective of risk management is to reduce the impact that the exploitation of a threat can have on the application. This can be done by responding to a threat with a risk mitigation strategy. In general, there are five options for mitigating threats: <br />
# '''Do nothing:''' for example, hoping for the best<br />
# '''Informing about the risk:''' for example, warning user population about the risk<br />
# '''Mitigate the risk:''' for example, by putting countermeasures in place<br />
# '''Accept the risk:''' for example, after evaluating the impact of the exploitation (business impact)<br />
# '''Transfer the risk:''' for example, through contractual agreements and insurance<br />
<br />
The decision of which strategy is most appropriate depends on the impact an exploitation of a threat can have, the likelihood of its occurrence, and the costs of transferring (i.e. costs for insurance) or avoiding (i.e. costs or losses due to redesign) it. That is, such a decision is based on the risk a threat poses to the system. Therefore, the chosen strategy does not mitigate the threat itself but the risk it poses to the system. Ultimately, the overall risk has to take into account the business impact, since this is a critical factor for the business risk management strategy. One strategy could be to fix only the vulnerabilities for which the cost to fix is less than the potential business impact derived from the exploitation of the vulnerability. Another strategy could be to accept the risk when the loss of some security controls (e.g. Confidentiality, Integrity, and Availability) implies only a small degradation of the service, and not the loss of a critical business function. In some cases, transfer of the risk to another service provider might also be an option. <br />
<br />
[[Category:OWASP Code Review Project]]</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=Testing_Guide_Introduction&diff=87544Testing Guide Introduction2010-08-12T08:56:23Z<p>Michael Boman: Resized the UseAndMisuseCase image</p>
<hr />
<div>{{Template:OWASP Testing Guide v3}}<br />
<br />
=== The OWASP Testing Project ===<br />
----<br />
The OWASP Testing Project has been in development for many years. With this project, we wanted to help people understand the ''what'', ''why'', ''when'', ''where'', and ''how'' of testing their web applications, and not just provide a simple checklist or prescription of issues that should be addressed. The outcome of this project is a complete Testing Framework, from which others can build their own testing programs or qualify other people’s processes. The Testing Guide describes in detail both the general Testing Framework and the techniques required to implement the framework in practice.<br />
<br />
Writing the Testing Guide has proven to be a difficult task. It has been a challenge to obtain consensus and develop the content that allows people to apply the concepts described here, while enabling them to work in their own environment and culture. It has also been a challenge to change the focus of web application testing from penetration testing to testing integrated in the software development life cycle. <br />
<br />
However, we are very satisfied with the results we have reached. Many industry experts and those responsible for software security at some of the largest companies in the world are validating the Testing Framework. This framework helps organizations test their web applications in order to build reliable and secure software, rather than simply highlighting areas of weakness, although the latter is certainly a byproduct of many of OWASP’s guides and checklists. As such, we have made some hard decisions about the appropriateness of certain testing techniques and technologies, which we fully understand will not be agreed upon by everyone. However, OWASP is able to take the high ground and change culture over time through awareness and education based on consensus and experience.<br />
<br />
The rest of this guide is organized as follows. This introduction covers the pre-requisites of testing web applications: the scope of testing, the principles of successful testing, and testing techniques. Chapter 3 presents the OWASP Testing Framework and explains its techniques and tasks in relation to the various phases of the software development life cycle. Chapter 4 covers how to test for specific vulnerabilities (e.g., SQL Injection) by code inspection and penetration testing. <br />
<br />
'''Measuring (in)security: the Economics of Insecure Software'''<br><br />
A basic tenet of software engineering is that you can't control what you can't measure [1]. Security testing is no different. Unfortunately, measuring security is a notoriously difficult process. We will not cover this topic in detail here, since it would take a guide on its own (for an introduction, see [2]). <br />
<br />
One aspect that we want to emphasize, however, is that security measurements are, by necessity, about both the specific technical issues (e.g., how prevalent a certain vulnerability is) and how these issues affect the economics of software. We find that most technical people have at least a basic understanding of the vulnerabilities, and many a much deeper one. Sadly, few are able to translate that technical knowledge into monetary terms and thereby quantify the potential cost of vulnerabilities to the application owner's business. We believe that until this happens, CIOs will not be able to develop an accurate return on security investment and, subsequently, assign appropriate budgets for software security.<br/><br />
While estimating the cost of insecure software may appear a daunting task, recently there has been a significant amount of work in this direction. For example, in June 2002, the US National Institute of Standards and Technology (NIST) published a survey on the cost of insecure software to the US economy due to inadequate software testing [3]. Interestingly, they estimate that a better testing infrastructure would save more than a third of these costs, or about $22 billion a year. More recently, the links between economics and security have been studied by academic researchers. See [4] for more information about some of these efforts.<br />
<br />
The framework described in this document encourages people to measure security throughout their entire development process. They can then relate the cost of insecure software to the impact it has on their business, and consequently develop appropriate business decisions (resources) to manage the risk. Remember: measuring and testing web applications is even more critical than for other software, since web applications are exposed to millions of users through the Internet.<br />
<br />
'''What is Testing'''<br><br />
What do we mean by testing? During the development life cycle of a web application, many things need to be tested. The Merriam-Webster Dictionary describes testing as: <br />
* To put to test or proof. <br />
* To undergo a test. <br />
* To be assigned a standing or evaluation based on tests. <br />
For the purposes of this document, testing is a process of comparing the state of a system/application against a set of criteria. In the security industry, people frequently test against a set of mental criteria that are neither well defined nor complete. For this reason and others, many outsiders regard security testing as a black art. This document’s aim is to change that perception and to make it easier for people without in-depth security knowledge to make a difference. <br />
<br />
'''Why Testing'''<br><br />
This document is designed to help organizations understand what comprises a testing program, and to help them identify the steps that they need to undertake to build and operate that testing program on their web applications. It is intended to give a broad view of the elements required to make a comprehensive web application security program. This guide can be used as a reference and as a methodology to help determine the gap between your existing practices and industry best practices. This guide allows organizations to compare themselves against industry peers, understand the magnitude of resources required to test and maintain their software, or prepare for an audit. This chapter does not go into the technical details of how to test an application, as the intent is to provide a typical security organizational framework. The technical details about how to test an application, as part of a penetration test or code review, will be covered in the remaining parts of this document. <br />
<br />
'''When to Test'''<br><br />
Most people today don’t test the software until it has already been created and is in the deployment phase of its life cycle (i.e., code has been created and instantiated into a working web application). This is generally a very ineffective and cost-prohibitive practice. One of the best methods to prevent security bugs from appearing in production applications is to improve the Software Development Life Cycle (SDLC) by including security in each of its phases. An SDLC is a structure imposed on the development of software artifacts. If an SDLC is not currently being used in your environment, it is time to pick one! The following figure shows a generic SDLC model as well as the (estimated) increasing cost of fixing security bugs in such a model. <br />
<br />
<center>[[Image:SDLC.jpg]]<br><br />
''Figure 1: Generic SDLC Model'' </center><br />
<br />
Companies should inspect their overall SDLC to ensure that security is an integral part of the development process. SDLCs should include security tests to ensure security is adequately covered and controls are effective throughout the development process. <br />
<br />
'''What to Test'''<br><br />
It can be helpful to think of software development as a combination of people, process, and technology. If these are the factors that "create" software, then it is logical that these are the factors that must be tested. Today most people generally test the technology or the software itself. <br />
<br />
An effective testing program should have components that test ''People'' – to ensure that there is adequate education and awareness; ''Process'' – to ensure that there are adequate policies and standards and that people know how to follow these policies; ''Technology'' – to ensure that the process has been effective in its implementation. Unless a holistic approach is adopted, testing just the technical implementation of an application will not uncover management or operational vulnerabilities that could be present. By testing the people, policies, and processes, an organization can catch issues that would later manifest themselves into defects in the technology, thus eradicating bugs early and identifying the root causes of defects. Likewise, testing only some of the technical issues that can be present in a system will result in an incomplete and inaccurate security posture assessment. Denis Verdon, Head of Information Security at [http://www.fnf.com Fidelity National Financial] presented an excellent analogy for this misconception at the OWASP AppSec 2004 Conference in New York [5]: "If cars were built like applications [...] safety tests would assume frontal impact only. Cars would not be roll tested, or tested for stability in emergency maneuvers, brake effectiveness, side impact, and resistance to theft." <br><br />
<br />
'''Feedback and Comments'''<br><br />
As with all OWASP projects, we welcome comments and feedback. We especially like to know that our work is being used and that it is effective and accurate.<br />
<br />
==Principles of Testing==<br />
<br />
There are some common misconceptions when developing a testing methodology to weed out security bugs in software. This chapter covers some of the basic principles that should be taken into account by professionals when testing for security bugs in software. <br />
<br />
'''There is No Silver Bullet'''<br><br />
While it is tempting to think that a security scanner or application firewall will either provide a multitude of defenses or identify a multitude of problems, in reality there are no silver bullets to the problem of insecure software. Application security assessment software, while useful as a first pass to find low-hanging fruit, is generally immature and ineffective at in-depth assessments and at providing adequate test coverage. Remember that security is a process, not a product. <br />
<br />
'''Think Strategically, Not Tactically'''<br><br />
Over the last few years, security professionals have come to realize the fallacy of the patch-and-penetrate model that was pervasive in information security during the 1990s. The patch-and-penetrate model involves fixing a reported bug without properly investigating the root cause. This model is usually associated with the window of vulnerability shown in the figure below. The evolution of vulnerabilities in common software used worldwide has shown the ineffectiveness of this model. For more information about the window of vulnerability please refer to [6]. Vulnerability studies [7] have shown that, given the reaction time of attackers worldwide, the typical window of vulnerability does not provide enough time for patch installation, since the time between a vulnerability being uncovered and an automated attack against it being developed and released is decreasing every year. The patch-and-penetrate model also rests on several flawed assumptions: patches interfere with normal operations and might break existing applications, and not all users will be aware of a patch's availability or apply it in time.<br><br />
<br />
<br />
<center>[[Image:WindowExposure.jpg]]<br><br />
''Figure 2: Window of Vulnerability''</center><br> <br />
To prevent reoccurring security problems within an application, it is essential to build security into the Software Development Life Cycle (SDLC) by developing standards, policies, and guidelines that fit and work within the development methodology. Threat modeling and other techniques should be used to help assign appropriate resources to those parts of a system that are most at risk. <br />
<br><br />
'''The SDLC is King'''<br><br />
The SDLC is a process that is well-known to developers. By integrating security into each phase of the SDLC, it allows for a holistic approach to application security that leverages the procedures already in place within the organization. Be aware that while the names of the various phases may change depending on the SDLC model used by an organization, each conceptual phase of the archetype SDLC will be used to develop the application (i.e., define, design, develop, deploy, maintain). Each phase has security considerations that should become part of the existing process, to ensure a cost-effective and comprehensive security program. <br />
<br><br />
'''Test Early and Test Often'''<br><br />
When a bug is detected early within the SDLC, it can be addressed more quickly and at a lower cost. A security bug is no different from a functional or performance-based bug in this regard. A key step in making this possible is to educate the development and QA organizations about common security issues and the ways to detect and prevent them. Although new libraries, tools, or languages might help design better programs (with fewer security bugs), new threats arise constantly and developers must be aware of those that affect the software they are developing. Education in security testing also helps developers acquire the appropriate mindset to test an application from an attacker's perspective. This allows each organization to consider security issues as part of their existing responsibilities.<br />
<br><br />
'''Understand the Scope of Security'''<br><br />
It is important to know how much security a given project will require. The information and assets that are to be protected should be given a classification that states how they are to be handled (e.g., Confidential, Secret, Top Secret). Discussions should occur with legal counsel to ensure that any specific security needs will be met. In the USA they might come from federal regulations, such as the Gramm-Leach-Bliley Act [8], or from state laws, such as the California SB-1386 [9]. For organizations based in EU countries, both country-specific regulation and EU Directives may apply. For example, Directive 95/46/EC [10] makes it mandatory to treat personal data in applications with due care, whatever the application. <br />
<br><br />
'''Develop the Right Mindset'''<br><br />
Successfully testing an application for security vulnerabilities requires thinking "outside of the box." Normal use cases will test the normal behavior of the application when a user is using it in the manner that you expect. Good security testing requires going beyond what is expected and thinking like an attacker who is trying to break the application. Creative thinking can help to determine what unexpected data may cause an application to fail in an insecure manner. It can also help find what assumptions made by web developers are not always true and how they can be subverted. This is one of the reasons why automated tools are actually bad at automatically testing for vulnerabilities: this creative thinking must be done on a case-by-case basis and most web applications are being developed in a unique way (even if using common frameworks). <br />
<br><br />
'''Understand the Subject'''<br><br />
One of the first major initiatives in any good security program should be to require accurate documentation of the application. The architecture, data-flow diagrams, use cases, and more should be written in formal documents and made available for review. The technical specification and application documents should include information that lists not only the desired use cases, but also any specifically disallowed use case. Finally, it is good to have at least a basic security infrastructure that allows the monitoring and trending of attacks against an organization's applications and network (e.g., IDS systems). <br />
<br><br />
'''Use the Right Tools'''<br><br />
While we have already stated that there is no silver bullet tool, tools do play a critical role in the overall security program. There is a range of open source and commercial tools that can automate many routine security tasks. These tools can simplify and speed up the security process by assisting security personnel in their tasks. It is important to understand exactly what these tools can and cannot do, however, so that they are not oversold or used incorrectly. <br />
<br><br />
'''The Devil is in the Details'''<br><br />
It is critical not to perform a superficial security review of an application and consider it complete. This will instill a false sense of confidence that can be as dangerous as not having done a security review in the first place. It is vital to carefully review the findings and weed out any false positives that may remain in the report. Reporting an incorrect security finding can often undermine the valid message of the rest of a security report. Care should be taken to verify that every possible section of application logic has been tested, and that every use case scenario was explored for possible vulnerabilities. <br />
<br><br />
'''Use Source Code When Available'''<br><br />
While black box penetration test results can be impressive and useful to demonstrate how vulnerabilities are exposed in production, they are not the most effective way to secure an application. If the source code for the application is available, it should be given to the security staff to assist them while performing their review. It is possible to discover vulnerabilities within the application source that would be missed during a black box engagement. <br />
<br><br />
'''Develop Metrics'''<br><br />
An important part of a good security program is the ability to determine if things are getting better. It is important to track the results of testing engagements, and develop metrics that will reveal the application security trends within the organization. These metrics can show if more education and training are required, if there is a particular security mechanism that is not clearly understood by development, and if the total number of security related problems being found each month is going down. Consistent metrics that can be generated in an automated way from available source code will also help the organization in assessing the effectiveness of mechanisms introduced to reduce security bugs in software development. Metrics are not easily developed, so using standard metrics like those provided by the OWASP Metrics project and other organizations might be a good head start.<br><br />
'''Document the Test Results'''<br><br />
To conclude the testing process, it is important to produce a formal record of what testing actions were taken, by whom, when they were performed, and details of the test findings. It is wise to agree on an acceptable format for the report that is useful to all concerned parties, which may include developers, project management, business owners, IT department, audit, and compliance. The report must be clear to the business owner in identifying where material risks exist and sufficient to get their backing for subsequent mitigation actions. The report must be clear to the developer in pin-pointing the exact function that is affected by the vulnerability, with associated recommendations for resolution in a language that the developer will understand (no pun intended). Last but not least, the report writing should not be overly burdensome on the security testers themselves; security testers are not generally renowned for their creative writing skills, and agreeing on a complex report can lead to instances where test results do not get properly documented.<br />
<br />
==Testing Techniques Explained==<br />
<br />
This section presents a high-level overview of various testing techniques that can be employed when building a testing program. It does not present specific methodologies for these techniques, although Chapter 3 will address this information. This section is included to provide context for the framework presented in the next chapter and to highlight the advantages and disadvantages of some of the techniques that should be considered. In particular, we will cover:<br />
* Manual Inspections & Reviews <br />
* Threat Modeling <br />
* Code Review <br />
* Penetration Testing <br />
<br />
=== Manual Inspections & Reviews ===<br />
'''Overview'''<br><br />
Manual inspections are human-driven reviews that typically test the security implications of the people, policies, and processes, but can include inspection of technology decisions such as architectural designs. They are usually conducted by analyzing documentation or performing interviews with the designers or system owners. While the concept of manual inspections and human reviews is simple, they can be among the most powerful and effective techniques available. By asking someone how something works and why it was implemented in a specific way, the tester can quickly determine whether any security concerns are likely to be evident. Manual inspections and reviews are one of the few ways to test the software development life-cycle process itself and to ensure that there is an adequate policy or skill set in place. As with many things in life, when conducting manual inspections and reviews we suggest you adopt a trust-but-verify model. Not everything everyone tells you or shows you will be accurate. Manual reviews are particularly good for testing whether people understand the security process, have been made aware of policy, and have the appropriate skills to design or implement a secure application. Other activities, including manually reviewing the documentation, secure coding policies, security requirements, and architectural designs, should all be accomplished using manual inspections.<br />
<br />
'''Advantages:'''<br />
* Requires no supporting technology <br />
* Can be applied to a variety of situations<br />
* Flexible <br />
* Promotes teamwork <br />
* Early in the SDLC <br />
<br />
'''Disadvantages:'''<br />
* Can be time consuming <br />
* Supporting material not always available <br />
* Requires significant human thought and skill to be effective!<br />
<br />
=== Threat Modeling ===<br />
'''Overview'''<br><br />
Threat modeling has become a popular technique to help system designers think about the security threats that their systems/applications might face. Therefore, threat modeling can be seen as risk assessment for applications. In fact, it enables the designer to develop mitigation strategies for potential vulnerabilities and helps them focus their inevitably limited resources and attention on the parts of the system that most require it. It is recommended that all applications have a threat model developed and documented. Threat models should be created as early as possible in the SDLC, and should be revisited as the application evolves and development progresses. To develop a threat model, we recommend taking a simple approach that follows the NIST 800-30 [11] standard for risk assessment. This approach involves: <br />
* Decomposing the application – understand, through a process of manual inspection, how the application works, its assets, functionality, and connectivity. <br />
* Defining and classifying the assets – classify the assets into tangible and intangible assets and rank them according to business importance. <br />
* Exploring potential vulnerabilities - whether technical, operational, or management. <br />
* Exploring potential threats – develop a realistic view of potential attack vectors from an attacker’s perspective, by using threat scenarios or attack trees.<br />
* Creating mitigation strategies – develop mitigating controls for each of the threats deemed to be realistic. The output from a threat model itself can vary, but is typically a collection of lists and diagrams. The OWASP Code Review Guide outlines an Application Threat Modeling methodology that can be used as a reference for testing applications for potential security flaws in their design. There is no right or wrong way to develop threat models and perform information risk assessments on applications [12]. <br><br />
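The classification and ranking steps above can be sketched as a small data structure. The class below is purely illustrative (the names and the 1–3 likelihood/impact scales are our assumptions, not prescribed by NIST 800-30 or any OWASP API); it shows the common convention of ranking threats by likelihood times impact so that limited resources go to the highest scores first:

```java
// Minimal sketch of one threat-model entry with a simple risk ranking.
// Class, field, and scale choices are illustrative assumptions only.
public class ThreatEntry {
    final String asset;   // what is at risk, e.g. "customer PII database"
    final String threat;  // attack vector, e.g. "SQL injection via search form"
    final int likelihood; // 1 (low) .. 3 (high)
    final int impact;     // 1 (low) .. 3 (high)

    ThreatEntry(String asset, String threat, int likelihood, int impact) {
        this.asset = asset;
        this.threat = threat;
        this.likelihood = likelihood;
        this.impact = impact;
    }

    // Higher score = mitigate first: a likelihood x impact ranking.
    int riskScore() {
        return likelihood * impact;
    }
}
```

Sorting the entries by `riskScore()` gives a first-cut priority list for the mitigation step.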
<br />
'''Advantages:'''<br />
* Practical attacker's view of the system <br />
* Flexible <br />
* Early in the SDLC <br />
<br />
'''Disadvantages: <br>'''<br />
* Relatively new technique <br />
* Good threat models don’t automatically mean good software<br />
<br />
=== Source Code Review ===<br />
'''Overview'''<br><br />
Source code review is the process of manually checking a web application's source code for security issues. Many serious security vulnerabilities cannot be detected with any other form of analysis or testing. As the popular saying goes “if you want to know what’s really going on, go straight to the source." Almost all security experts agree that there is no substitute for actually looking at the code. All the information for identifying security problems is there in the code somewhere. Unlike testing third party closed software such as operating systems, when testing web applications (especially if they have been developed in-house) the source code should be made available for testing purposes. Many unintentional but significant security problems are also extremely difficult to discover with other forms of analysis or testing, such as penetration testing, making source code analysis the technique of choice for technical testing. With the source code, a tester can accurately determine what is happening (or is supposed to be happening) and remove the guess work of black box testing. Examples of issues that are particularly conducive to being found through source code reviews include concurrency problems, flawed business logic, access control problems, and cryptographic weaknesses as well as backdoors, Trojans, Easter eggs, time bombs, logic bombs, and other forms of malicious code. These issues often manifest themselves as the most harmful vulnerabilities in web sites. Source code analysis can also be extremely efficient to find implementation issues such as places where input validation was not performed or when fail open control procedures may be present. But keep in mind that operational procedures need to be reviewed as well, since the source code being deployed might not be the same as the one being analyzed herein [13].<br><br />
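As a concrete (and entirely hypothetical) illustration of an implementation issue that is obvious in a source code review but may be hard to confirm from the outside, consider a query built by string concatenation; the class and method names below are invented for the example:

```java
public class OrderDao {
    // VULNERABLE: user-controlled input is concatenated directly into SQL.
    // A code reviewer spots the missing validation/parameterization at a
    // glance; a black box tester may only detect it if error messages leak.
    static String buildQuery(String customerId) {
        return "SELECT * FROM orders WHERE customer_id = '" + customerId + "'";
    }
}
```

Passing a value such as `x' OR '1'='1` yields a query whose WHERE clause is always true; the usual fix is a parameterized `PreparedStatement` rather than string concatenation.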
<br />
'''Advantages:'''<br />
* Completeness and effectiveness <br />
* Accuracy <br />
* Fast (for competent reviewers) <br />
<br />
'''Disadvantages:'''<br />
* Requires highly skilled security developers <br />
* Can miss issues in compiled libraries <br />
* Cannot detect run-time errors easily <br />
* The source code actually deployed might differ from the one being analyzed<br />
<br />
'''For more on code review, checkout the [[OWASP Code Review Project|OWASP code review project]]'''.<BR><br />
<br />
=== Penetration Testing ===<br />
'''Overview'''<br><br />
Penetration testing has been a common technique used to test network security for many years. It is also commonly known as black box testing or ethical hacking. Penetration testing is essentially the “art” of testing a running application remotely, without knowing the inner workings of the application itself, to find security vulnerabilities. Typically, the penetration test team would have access to an application as if they were users. The tester acts like an attacker and attempts to find and exploit vulnerabilities. In many cases the tester will be given a valid account on the system. While penetration testing has proven to be effective in network security, the technique does not naturally translate to applications. When penetration testing is performed on networks and operating systems, the majority of the work is involved in finding and then exploiting known vulnerabilities in specific technologies. As web applications are almost exclusively bespoke, penetration testing in the web application arena is more akin to pure research. Penetration testing tools have been developed that automate the process, but, again, with the nature of web applications their effectiveness is usually poor. Many people today use web application penetration testing as their primary security testing technique. Whilst it certainly has its place in a testing program, we do not believe it should be considered as the primary or only testing technique. Gary McGraw in [14] summed up penetration testing well when he said, “If you fail a penetration test you know you have a very bad problem indeed. If you pass a penetration test you do not know that you don’t have a very bad problem”. However, focused penetration testing (i.e., testing that attempts to exploit known vulnerabilities detected in previous reviews) can be useful in detecting if some specific vulnerabilities are actually fixed in the source code deployed on the web site. <br><br />
<br />
'''Advantages:'''<br />
* Can be fast (and therefore cheap) <br />
* Requires a relatively lower skill-set than source code review <br />
* Tests the code that is actually being exposed <br />
<br />
'''Disadvantages:'''<br />
* Too late in the SDLC <br />
* Front impact testing only!<br />
<br />
=== The Need for a Balanced Approach ===<br />
With so many techniques and so many approaches to testing the security of web applications, it can be difficult to understand which techniques to use and when to use them.<br />
Experience shows that there is no right or wrong answer to exactly what techniques should be used to build a testing framework. The fact remains that all techniques should probably be used to ensure that all areas that need to be tested are tested. What is clear, however, is that there is no single technique that effectively covers all security testing that must be performed to ensure that all issues have been addressed. Many companies adopt one approach, which has historically been penetration testing. Penetration testing, while useful, cannot effectively address many of the issues that need to be tested, and is simply “too little too late” in the software development life cycle (SDLC). <br />
The correct approach is a balanced one that includes several techniques, from manual interviews to technical testing. The balanced approach is sure to cover testing in all phases of the SDLC. This approach leverages the most appropriate techniques available depending on the current SDLC phase. <br />
Of course there are times and circumstances where only one technique is possible; for example, a test on a web application that has already been created, and where the testing party does not have access to the source code. In this case, penetration testing is clearly better than no testing at all. However, we encourage the testing parties to challenge assumptions, such as no access to source code, and to explore the possibility of more complete testing. <br />
A balanced approach varies depending on many factors, such as the maturity of the testing process and corporate culture. However, it is recommended that a balanced testing framework look something like the representations shown in Figure 3 and Figure 4. The following figure shows a typical proportional representation overlaid onto the software development life cycle. In keeping with research and experience, it is essential that companies place a higher emphasis on the early stages of development.<br />
<center><br />
[[Image:ProportionSDLC.png]]<br />
<br>''Figure 3: Proportion of Test Effort in SDLC''<br />
</center><br />
The following figure shows a typical proportional representation overlaid onto testing techniques. <br><br />
<center><br />
[[Image:ProportionTest.png]]<br />
<br>''Figure 4: Proportion of Test Effort According to Test Technique''<br />
</center><br />
<br />
'''A Note about Web Application Scanners'''<br><br />
Many organizations have started to use automated web application scanners. While they undoubtedly have a place in a testing program, we want to highlight some fundamental issues about why we do not believe that automating black box testing is (or will ever be) effective. By highlighting these issues, we are not discouraging web application scanner use. Rather, we are saying that their limitations should be understood, and testing frameworks should be planned appropriately.<br />
NB: OWASP is currently working to develop a web application scanner-benchmarking platform. The following examples indicate why automated black box testing is not effective. <br />
<br><br />
'''Example 1: Magic Parameters'''<br><br />
Imagine a simple web application that accepts a name-value pair of “magic” and then the value. For simplicity, the GET request may be: ''<nowiki>http://www.host/application?magic=value</nowiki>'' <br> To further simplify the example, the values in this case can only be ASCII characters a – z (upper or lowercase) and integers 0 – 9. The designers of this application created an administrative backdoor during testing, but obfuscated it to prevent the casual observer from discovering it. By submitting the value sf8g7sfjdsurtsdieerwqredsgnfg8d (30 characters), the user will then be logged in and presented with an administrative screen with total control of the application. The HTTP request is now:<br> ''<nowiki>http://www.host/application?magic=sf8g7sfjdsurtsdieerwqredsgnfg8d</nowiki>'' <br><br />
Given that all of the other parameters were simple two- and three-character fields, it is not feasible to start guessing a value of approximately 30 characters. A web application scanner would need to brute force (or guess) the entire key space: with 62 possible characters in each of 30 positions, that is 62^30 permutations, or about 6 x 10^53 HTTP requests. That is an electron in a digital haystack! <br />
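As a sanity check on the scale of that search space: 26 lowercase letters, 26 uppercase letters, and 10 digits give 62 possible values for each of the 30 positions, so the full key space is 62^30, a 54-digit number. A few lines of Java make the point:

```java
import java.math.BigInteger;

public class KeySpace {
    // 26 + 26 + 10 = 62 possible characters per position, 30 positions.
    static BigInteger size() {
        return BigInteger.valueOf(62).pow(30);
    }
}
```

Even at a million requests per second, exhausting this space would take on the order of 10^40 years, which is why black box scanning cannot find such a backdoor.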
The code for this exemplar Magic Parameter check may look like the following: <br><br />
public void doPost(HttpServletRequest request, HttpServletResponse response) <br />
{ <br />
String magic = "sf8g7sfjdsurtsdieerwqredsgnfg8d"; // hard-coded backdoor value <br />
boolean admin = magic.equals(request.getParameter("magic")); <br />
if (admin) doAdmin(request, response); <br />
else .... // normal processing <br />
} <br />
By looking at the code, the vulnerability practically leaps off the page as a potential problem. <br />
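The infeasibility of brute forcing such a parameter can be quantified. The following is a minimal sketch of the key-space arithmetic, where the 62-character alphabet and the 31-character value length are taken from the example above:

```java
import java.math.BigInteger;

public class KeySpace {
    // Number of possible values for a string drawn from an alphabet of
    // alphabetSize characters with the given length: alphabetSize^length.
    public static BigInteger keySpace(int alphabetSize, int length) {
        return BigInteger.valueOf(alphabetSize).pow(length);
    }

    public static void main(String[] args) {
        // a-z, A-Z, 0-9 gives a 62-character alphabet; the backdoor value
        // in the example is 31 characters long.
        BigInteger total = keySpace(62, 31);
        System.out.println(total);                      // a 56-digit number
        System.out.println(total.toString().length());  // 56
    }
}
```

Even at millions of HTTP requests per second, exhausting a space of this size is far beyond any practical scan, which is why a black box scanner cannot stumble on the backdoor.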
<br><br />
'''Example 2: Bad Cryptography'''<br><br />
Cryptography is widely used in web applications. Imagine that a developer decided to write a simple cryptography algorithm to sign a user in from site A to site B automatically. In his/her wisdom, the developer decides that if a user is logged into site A, then he/she will generate a key using an MD5 hash function that comprises: ''Hash { username : date }'' <br><br />
When a user is passed to site B, the key is sent on the query string to site B in an HTTP redirect. Site B independently computes the hash and compares it to the hash passed on the request. If they match, site B signs the user in as the user they claim to be. Clearly, once the scheme is explained, its inadequacies can be worked out, and it can be seen how anyone who figures it out (or is told how it works, or downloads the information from Bugtraq) can log in as any user. Manual inspection, such as an interview, would have uncovered this security issue quickly, as would inspection of the code. A black-box web application scanner would have seen a 128-bit hash that changed with each user and, by the nature of hash functions, did not change in any predictable way.<br />
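The weakness is that the token is fully determined by public or guessable inputs. The following is a minimal sketch of the flawed scheme as described above; the class and method names are illustrative, and UTF-8 encoding is an assumption:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SsoToken {
    // Flawed single sign-on key: MD5 over "username:date". No secret is
    // involved, so anyone who knows the recipe can forge the token.
    public static String token(String username, String date) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest((username + ":" + date).getBytes(StandardCharsets.UTF_8));
            // Hex-encode the 128-bit digest
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // To a scanner this looks like an opaque 128-bit value per user...
        System.out.println(token("alice", "2010-08-12"));
        // ...but an attacker who learns the scheme computes the same token.
        System.out.println(token("alice", "2010-08-12").equals(token("alice", "2010-08-12")));  // true
    }
}
```

Because the token depends only on the username and the date, an attacker who learns the recipe can generate a valid token for any victim account.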
<br><br />
'''A Note about Static Source Code Review Tools'''<br><br />
Many organizations have started to use static source code scanners. While they undoubtedly have a place in a comprehensive testing program, we want to highlight some fundamental issues about why we do not believe this approach is effective when used alone. Static source code analysis alone cannot identify issues due to flaws in the design, since it cannot understand the context in which the code is constructed. Source code analysis tools are useful in determining security issues due to coding errors; however, significant manual effort is required to validate the findings. <br />
<br><br />
<br />
==Security Requirements Test Derivation==<br />
If you want to have a successful testing program, you need to know what the objectives of the testing are. These objectives are specified by security requirements. This section discusses in detail how to document requirements for security testing by deriving them from applicable standards and regulations and positive and negative application requirements. It also discusses how security requirements effectively drive security testing during the SDLC and how security test data can be used to effectively manage software security risks.<br />
<br />
'''Testing Objectives'''<br><br />
One of the objectives of security testing is to validate that security controls function as expected. This is documented via ''security requirements'' that describe the functionality of the security control. At a high level, this means proving confidentiality, integrity, and availability of the data as well as the service. The other objective is to validate that security controls are implemented with few or no vulnerabilities. These are common vulnerabilities, such as the [[OWASP Top Ten]], as well as vulnerabilities previously identified with security assessments during the SDLC, such as threat modeling, source code analysis, and penetration tests. <br />
<br />
'''Security Requirements Documentation'''<br><br />
The first step in the documentation of security requirements is to understand the ''business requirements''. A business requirement document could provide the initial, high-level information of the expected functionality for the application. For example, the main purpose of an application may be to provide financial services to customers or to allow shopping and purchasing of goods from an on-line catalogue. A security section of the business requirements should highlight the need to protect the customer data as well as to comply with applicable security documentation such as regulations, standards, and policies.<br />
<br />
A general checklist of the applicable regulations, standards, and policies serves the purpose of a preliminary security compliance analysis for web applications well. For example, compliance regulations can be identified by checking information about the business sector and the country/state where the application needs to function/operate. Some of these compliance guidelines and regulations might translate into specific technical requirements for security controls. For example, in the case of financial applications, compliance with the FFIEC guidelines for authentication [15] requires that financial institutions implement applications that mitigate weak authentication risks with multi-layered security controls and multi-factor authentication. <br />
<br />
Applicable industry standards for security also need to be captured by the general security requirement checklist. For example, in the case of applications that handle customer credit card data, compliance with the PCI DSS [16] standard forbids the storage of PINs and CVV2 data and requires that the merchant protect magnetic stripe data in storage and transmission with encryption and on display by masking. Such PCI DSS security requirements could be validated via source code analysis.<br />
<br />
Another section of the checklist needs to enforce general requirements for compliance with the organization information security standards and policies. From the functional requirements perspective, requirements for the security control need to map to a specific section of the information security standards. An example of such a requirement can be: "a password complexity of six alphanumeric characters must be enforced by the authentication controls used by the application." When security requirements map to compliance rules, a security test can validate the exposure of compliance risks. If violations of information security standards and policies are found, these will result in a risk that can be documented and that the business has to deal with (i.e., manage). For this reason, since these security compliance requirements are enforceable, they need to be well documented and validated with security tests. <br />
<br />
'''Security Requirements Validation'''<br><br />
From the functionality perspective, the validation of security requirements is the main objective of security testing, while, from the risk management perspective, this is the objective of information security assessments. At a high level, the main goal of information security assessments is the identification of gaps in security controls, such as lack of basic authentication, authorization, or encryption controls. In more depth, the security assessment objective is risk analysis, such as the identification of potential weaknesses in security controls that ensure the confidentiality, integrity, and availability of the data. For example, when the application deals with personally identifiable information (PII) and sensitive data, the security requirement to be validated is compliance with the company information security policy requiring encryption of such data in transit and in storage. Assuming encryption is used to protect the data, encryption algorithms and key lengths need to comply with the organization encryption standards. These might require that only certain algorithms and key lengths could be used. For example, a security requirement that can be security tested is verifying that only allowed algorithms are used (e.g., SHA-1, RSA, 3DES) with allowed minimum key lengths (e.g., more than 128 bits for symmetric and more than 1024 bits for asymmetric encryption).<br />
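Such a key-length requirement can itself be validated with a small dynamic test. The following is a minimal sketch using the standard Java JCA APIs; the 128-bit AES and 2048-bit RSA figures are illustrative policy choices, not values taken from this guide:

```java
import javax.crypto.KeyGenerator;
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

public class KeyLengthCheck {
    // Actual bit length of a freshly generated symmetric key
    public static int aesKeyBits(int requestedBits) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(requestedBits);
            return kg.generateKey().getEncoded().length * 8;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Actual modulus bit length of a freshly generated RSA key pair
    public static int rsaModulusBits(int requestedBits) {
        try {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(requestedBits);
            RSAPublicKey pub = (RSAPublicKey) kpg.generateKeyPair().getPublic();
            return pub.getModulus().bitLength();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // A security test would assert these meet the policy minimums
        System.out.println("AES key bits: " + aesKeyBits(128));
        System.out.println("RSA modulus bits: " + rsaModulusBits(2048));
    }
}
```

A security unit test would assert that the values returned meet or exceed the organization's documented minimums.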
<br />
From the security assessment perspective, security requirements can be validated at different phases of the SDLC by using different artifacts and testing methodologies. For example, threat modeling focuses on identifying security flaws during design, secure code analysis and reviews focus on identifying security issues in source code during development, and penetration testing focuses on identifying vulnerabilities in the application during testing/validation. <br />
<br />
Security issues that are identified early in the SDLC can be documented in a test plan so they can be validated later with security tests. By combining the results of different testing techniques, it is possible to derive better security test cases and increase the level of assurance of the security requirements. For example, distinguishing true vulnerabilities from the unexploitable ones is possible when the results of penetration tests and source code analysis are combined. Considering the security test for a SQL injection vulnerability, for example, a black box test might involve first a scan of the application to fingerprint the vulnerability. The first evidence of a potential SQL injection vulnerability that can be validated is the generation of a SQL exception. A further validation of the SQL vulnerability might involve manually injecting attack vectors to modify the grammar of the SQL query for an information disclosure exploit. This might involve a lot of trial-and-error analysis until the malicious query is executed. Assuming the tester has the source code, she might learn from source code analysis how to construct the SQL attack vector that can exploit the vulnerability (e.g., execute a malicious query returning confidential data to an unauthorized user).<br />
<br />
'''Threats and Countermeasures Taxonomies'''<br><br />
A ''threat and countermeasure classification'' that takes into consideration root causes of vulnerabilities is the critical factor to verify that security controls are designed, coded, and built so that the impact due to the exposure of such vulnerabilities is mitigated. In the case of web applications, the exposure of security controls to common vulnerabilities, such as the OWASP Top Ten, can be a good starting point to derive general security requirements. More specifically, the web application security frame [17] provides a classification (i.e., a taxonomy) of vulnerabilities that can be documented in different guidelines and standards and validated with security tests. <br />
<br />
The focus of a threat and countermeasure categorization is to define security requirements in terms of the threats and the root cause of the vulnerability. A threat can be categorized by using STRIDE [18], for example, as Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. The root cause can be categorized as a security flaw in design, a security bug in coding, or an issue due to insecure configuration. For example, the root cause of a weak authentication vulnerability might be the lack of mutual authentication when data crosses a trust boundary between the client and server tiers of the application. A security requirement that captures the threat of repudiation during an architecture design review allows for the documentation of the requirement for the countermeasure (e.g., mutual authentication) that can be validated later on with security tests.<br />
<br />
A threat and countermeasure categorization for vulnerabilities can also be used to document security requirements for secure coding such as secure coding standards. An example of a common coding error in authentication controls consists of applying a hash function to a password without applying a salt (seed) to the value. From the secure coding perspective, this is a vulnerability that affects the encryption used for authentication, with a vulnerability root cause in a coding error. Since the root cause is insecure coding, the security requirement can be documented in secure coding standards and validated through secure code reviews during the development phase of the SDLC.<br />
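The unsalted-hash coding error can be contrasted with the salted form that a secure coding standard would require. The following is a minimal sketch; SHA-256 and a 16-byte salt are illustrative choices, not requirements from this guide:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class SaltedHash {
    // Digest the salt together with the password so that identical
    // passwords produce different stored values, defeating precomputed
    // dictionary (rainbow table) attacks.
    public static String hash(String password, byte[] salt) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);  // seed the digest with the per-user salt
            byte[] digest = md.digest(password.getBytes(StandardCharsets.UTF_8));
            return String.format("%064x", new BigInteger(1, digest));
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Unpredictable per-user salt, stored alongside the hash
    public static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    public static void main(String[] args) {
        // Same password, different salts: the stored digests differ
        System.out.println(hash("secret", newSalt()).equals(hash("secret", newSalt())));
    }
}
```

A secure code review would flag any password hashing that omits the salt, and a unit test can assert that two users with the same password do not end up with the same stored digest.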
<br />
'''Security Testing and Risk Analysis'''<br><br />
Security requirements need to take into consideration the severity of the vulnerabilities to support a ''risk mitigation strategy''. Assuming that the organization maintains a repository of vulnerabilities found in applications, i.e., a vulnerability knowledge base, the security issues can be reported by type, issue, mitigation, root cause, and mapped to the applications where they are found. Such a vulnerability knowledge base can also be used to establish metrics to analyze the effectiveness of the security tests throughout the SDLC.<br />
<br />
For example, consider an input validation issue, such as a SQL injection, which was identified via source code analysis and reported with a coding error root cause and input validation vulnerability type. The exposure of such a vulnerability can be assessed via a penetration test, by probing input fields with several SQL injection attack vectors. This test might validate that special characters are filtered before reaching the database, mitigating the vulnerability. By combining the results of source code analysis and penetration testing, it is possible to determine the likelihood and exposure of the vulnerability and calculate its risk rating. By reporting vulnerability risk ratings in the findings (e.g., test report), it is possible to decide on the mitigation strategy. For example, high and medium risk vulnerabilities can be prioritized for remediation, while low risk ones can be fixed in further releases.<br />
<br />
By considering the threat scenarios of exploiting common vulnerabilities, it is possible to identify potential risks for which the application security control needs to be security tested. For example, the OWASP Top Ten vulnerabilities can be mapped to attacks such as phishing, privacy violations, identity theft, system compromise, data alteration or data destruction, financial loss, and reputation loss. Such issues should be documented as part of the threat scenarios. By thinking in terms of threats and vulnerabilities, it is possible to devise a battery of tests that simulate such attack scenarios. Ideally, the organization vulnerability knowledge base can be used to derive security risk driven test cases to validate the most likely attack scenarios. For example, if identity theft is considered high risk, negative test scenarios should validate the mitigation of impacts deriving from the exploit of vulnerabilities in authentication, cryptographic controls, input validation, and authorization controls.<br />
<br />
===Functional and Non Functional Test Requirements===<br />
'''Functional Security Requirements'''<br><br />
From the perspective of functional security requirements, the applicable standards, policies and regulations drive both the need of a type of security control as well as the control functionality. These requirements are also referred to as “positive requirements”, since they state the expected functionality that can be validated through security tests.<br />
Examples of positive requirements are: “the application will lock out the user after six failed logon attempts” or “passwords need to be a minimum of six characters, alphanumeric”. The validation of positive requirements consists of asserting the expected functionality and, as such, can be tested by re-creating the testing conditions, running the test according to predefined inputs, and asserting the expected outcome as a pass/fail condition.<br />
<br />
In order to validate security requirements with security tests, security requirements need to be function driven and highlight the expected functionality (the what) and implicitly the implementation (the how). Examples of high-level security design requirements for authentication can be:<br />
*Protect user credentials and shared secrets in transit and in storage<br />
*Mask any confidential data in display (e.g., passwords, accounts)<br />
*Lock the user account after a certain number of failed login attempts <br />
*Do not show specific validation errors to the user as a result of failed logon <br />
*Only allow passwords that are alphanumeric, include special characters, and are a minimum of six characters in length, to limit the attack surface<br />
*Allow for password change functionality only to authenticated users by validating the old password, the new password, and the user answer to the challenge question, to prevent brute forcing of a password via password change.<br />
*The password reset form should validate the user’s username and the user’s registered email before sending the temporary password to the user via email. The temporary password issued should be a one time password. A link to the password reset web page will be sent to the user. The password reset web page should validate the user temporary password, the new password, as well as the user answer to the challenge question.<br />
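Positive requirements like these can be asserted directly as pass/fail checks. The following is a minimal sketch for the password complexity requirement; the exact rule encoded here (minimum six alphanumeric characters with at least one letter and one digit) is an illustrative reading of the requirement as worded:

```java
public class PasswordPolicy {
    // Illustrative positive requirement: at least six characters, all
    // alphanumeric, containing at least one letter and one digit.
    public static boolean isCompliant(String password) {
        return password != null
                && password.length() >= 6
                && password.matches("[A-Za-z0-9]+")
                && password.matches(".*[A-Za-z].*")
                && password.matches(".*[0-9].*");
    }

    public static void main(String[] args) {
        System.out.println(isCompliant("abc123"));  // true: meets the rule
        System.out.println(isCompliant("abc12"));   // false: too short
        System.out.println(isCompliant("abcdef"));  // false: no digit
    }
}
```

Because the expected outcome is known for every input, each assertion becomes a deterministic pass/fail test condition.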
<br />
'''Risk Driven Security Requirements'''<br><br />
Security tests also need to be risk driven, that is, they need to validate the application for unexpected behavior. These are also called “negative requirements”, since they specify what the application should not do. <br />
Examples of "should not do" (negative) requirements are:<br />
* The application should not allow for the data to be altered or destroyed<br />
* The application should not be compromised or misused for unauthorized financial transactions by a malicious user.<br />
<br />
Negative requirements are more difficult to test, because there is no expected behavior to look for. This might require a threat analyst to come up with unforeseeable input conditions, causes, and effects. This is where security testing needs to be driven by risk analysis and threat modeling.<br />
The key is to document the threat scenarios and the functionality of the countermeasure as a factor to mitigate a threat. For example, in the case of authentication controls, the following security requirements can be documented from the threats and countermeasure perspective:<br />
*Encrypt authentication data in storage and transit to mitigate risk of information disclosure and authentication protocol attacks<br />
*Encrypt passwords using non-reversible encryption, such as a digest (e.g., hash) combined with a seed (salt), to prevent dictionary attacks<br />
*Lock out accounts after reaching a logon failure threshold and enforce password complexity to mitigate risk of brute force password attacks<br />
*Display generic error messages upon validation of credentials to mitigate risk of account harvesting/enumeration<br />
*Mutually authenticate client and server to mitigate repudiation threats and Man-in-the-Middle (MiTM) attacks<br />
<br />
Threat modeling artifacts such as threat trees and attack libraries can be useful to derive the negative test scenarios. A threat tree will assume a root attack (e.g., attacker might be able to read other users' messages) and identify different exploits of security controls (e.g., data validation fails because of a SQL injection vulnerability) and necessary countermeasures (e.g., implement data validation and parametrized queries) that could be validated to be effective in mitigating such attacks.<br />
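The data validation and parametrized query countermeasure from the threat tree above can be illustrated in code. The following is a minimal JDBC sketch; the table and column names are hypothetical, and the point is the contrast between string concatenation and parameter binding:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class MessageDao {
    // VULNERABLE: attacker-controlled input is concatenated into the SQL
    // text, so a value like "1 OR 1=1" changes the grammar of the query.
    static String vulnerableQuery(String userId) {
        return "SELECT body FROM messages WHERE owner = " + userId;
    }

    // COUNTERMEASURE: a parametrized query fixes the grammar up front and
    // binds the input strictly as data.
    static ResultSet safeQuery(Connection conn, String userId) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT body FROM messages WHERE owner = ?");
        ps.setString(1, userId);
        return ps.executeQuery();
    }

    public static void main(String[] args) {
        // The injected clause survives verbatim in the vulnerable statement
        System.out.println(vulnerableQuery("1 OR 1=1"));
    }
}
```

In the vulnerable form the injected `OR 1=1` clause widens the query to every user's messages, which is exactly the root attack the threat tree assumes; the parametrized form leaves it inert as a literal value.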
<br />
===Security Requirements Derivation Through Use and Misuse Cases===<br />
A prerequisite to describing the application functionality is to understand what the application is supposed to do and how. This can be done by describing ''use cases''. Use cases, in the graphical form as commonly used in software engineering, show the interactions of actors and their relations, and help to identify the actors in the application, their relationships, the intended sequence of actions for each scenario, alternative actions, special requirements, and pre- and post-conditions. Similar to use cases, ''misuse and abuse cases'' [19] describe unintended and malicious use scenarios of the application. These misuse cases provide a way to describe scenarios of how an attacker could misuse and abuse the application. By going through the individual steps in a use scenario and thinking about how it can be maliciously exploited, potential flaws or aspects of the application that are not well-defined can be discovered. The key is to describe all possible or, at least, the most critical use and misuse scenarios. Misuse scenarios allow the analysis of the application from the attacker's point of view and contribute to identifying potential vulnerabilities and the countermeasures that need to be implemented to mitigate the impact caused by the potential exposure to such vulnerabilities. Given all of the use and abuse cases, it is important to analyze them to determine which of them are the most critical ones and need to be documented in security requirements. The identification of the most critical misuse and abuse cases drives the documentation of security requirements and the necessary controls where security risks should be mitigated.<br />
<br />
To derive security requirements from use and misuse cases [20], it is important to define the functional scenarios and the negative scenarios, and put these in graphical form. In the case of derivation of security requirements for authentication, for example, the following step-by-step methodology can be followed.<br />
<br />
*Step 1: Describe the Functional Scenario: User authenticates by supplying username and password. The application grants access to users based upon authentication of user credentials by the application and provides specific errors to the user when validation fails.<br />
<br />
*Step 2: Describe the Negative Scenario: Attacker breaks the authentication through a brute force/dictionary attack of passwords and account harvesting vulnerabilities in the application. The validation errors provide specific information to an attacker to guess which accounts are actually valid, registered accounts (usernames). The attacker, then, will try to brute force the password for such a valid account. A brute force attack against four-character, all-digit passwords can succeed with a limited number of attempts (i.e., at most 10^4).<br />
<br />
*Step 3: Describe Functional and Negative Scenarios With Use and Misuse Case: The graphical example in Figure below depicts the derivation of security requirements via use and misuse cases. The functional scenario consists of the user actions (entering username and password) and the application actions (authenticating the user and providing an error message if validation fails). The misuse case consists of the attacker actions, i.e., trying to break authentication by brute forcing the password via a dictionary attack and by guessing the valid usernames from error messages. By graphically representing the threats to the user actions (misuses), it is possible to derive the countermeasures as the application actions that mitigate such threats.<br />
[[Image:UseAndMisuseCase.jpg|640px]]<br />
<br />
*Step 4: Elicit The Security Requirements. In this case, the following security requirements for authentication are derived: <br />
:1) Passwords need to be alphanumeric, lower and upper case, and a minimum of seven characters in length<br />
:2) Accounts need to lock out after five unsuccessful login attempts<br />
:3) Logon error messages need to be generic<br />
These security requirements need to be documented and tested.<br />
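Requirements 2 and 3 above lend themselves to a directly testable sketch; the class and method names here are illustrative, not taken from the guide:

```java
public class AccountLockout {
    private static final int MAX_ATTEMPTS = 5;  // requirement 2 above

    private int failedAttempts = 0;
    private boolean locked = false;

    // Requirement 3: the error message is generic; it never reveals
    // whether the username or the password was wrong, or that the
    // account is locked.
    public String logon(boolean credentialsValid) {
        if (locked) return "Logon failed";
        if (credentialsValid) {
            failedAttempts = 0;
            return "Welcome";
        }
        if (++failedAttempts >= MAX_ATTEMPTS) locked = true;
        return "Logon failed";
    }

    public boolean isLocked() { return locked; }

    public static void main(String[] args) {
        AccountLockout account = new AccountLockout();
        for (int i = 0; i < 5; i++) account.logon(false);
        System.out.println(account.isLocked());  // true: locked after five failures
    }
}
```

A security test would assert both the positive behavior (lockout engages after five failures) and the negative one (no error message distinguishes an invalid username from an invalid password).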
<br />
===Security Tests Integrated in Developers' and Testers' Workflows===<br />
'''Developers' Security Testing Workflow'''<br><br />
Security testing during the development phase of the SDLC represents the first opportunity for developers to ensure that individual software components that they have developed are security tested before they are integrated with other components and built into the application. Software components might consist of software artifacts such as functions, methods, and classes, as well as application programming interfaces, libraries, and executables. For security testing, developers can rely on the results of the source code analysis to verify statically that the developed source code does not include potential vulnerabilities and is compliant with the secure coding standards. Security unit tests can further verify dynamically (i.e., at run time) that the components function as expected. Before integrating both new and existing code changes in the application build, the results of the static and dynamic analysis should be reviewed and validated. <br />
The validation of source code before integration in application builds is usually the responsibility of the senior developer. This senior developer is also the subject matter expert in software security, and their role is to lead the secure code review and make decisions on whether to accept the code to be released in the application build or to require further changes and testing. This secure code review workflow can be enforced via formal acceptance, as well as a check in a workflow management tool. For example, assuming the typical defect management workflow used for functional bugs, security bugs that have been fixed by a developer can be reported on a defect or change management system. The build master can look at the test results reported by the developers in the tool and grant approvals for checking the code changes into the application build.<br />
<br />
'''Testers' Security Testing Workflow'''<br><br />
After components and code changes are tested by developers and checked in to the application build, the most likely next step in the software development process workflow is to perform tests on the application as a whole entity. This level of testing is usually referred to as integrated test and system level test. When security tests are part of these testing activities, they can be used to validate both the security functionality of the application as a whole, as well as the exposure to application level vulnerabilities. These security tests on the application include both white box testing, such as source code analysis, and black box testing, such as penetration testing. Gray box testing is similar to black box testing, except that the tester is assumed to have some partial knowledge of the application; for example, partial knowledge of the session management implementation can help in testing whether the logout and timeout functions are properly secured.<br />
<br />
The target for the security tests is the complete system, that is, the artifact that will potentially be attacked, which includes both the whole source code and the executable. One peculiarity of security testing during this phase is that it is possible for security testers to determine whether vulnerabilities can be exploited and expose the application to real risks. <br />
These include common web application vulnerabilities, as well as security issues that have been identified earlier in the SDLC with other activities such as threat modeling, source code analysis, and secure code reviews. <br />
<br />
Usually, testing engineers, rather than software developers, perform security tests when the application is in scope for integration system tests. Such testing engineers have security knowledge of web application vulnerabilities and black box and white box security testing techniques, and own the validation of security requirements in this phase. In order to perform such security tests, it is a pre-requisite that security test cases are documented in the security testing guidelines and procedures.<br />
<br />
A testing engineer who validates the security of the application in the integrated system environment might release the application for testing in the operational environment (e.g., user acceptance tests). At this stage of the SDLC (i.e., validation), the application functional testing is usually a responsibility of QA testers, while white-hat hackers/security consultants are usually responsible for security testing. Some organizations rely on their own specialized ethical hacking team in order to conduct such tests when a third party assessment is not required (such as for auditing purposes). <br />
<br />
Since these tests are the last resort for fixing vulnerabilities before the application is released to production, it is important that such issues are addressed as recommended by the testing team (e.g., the recommendations can include code, design, or configuration change). At this level, security auditors and information security officers discuss the reported security issues and analyze the potential risks according to information risk management procedures. Such procedures might require the developer team to fix all high risk vulnerabilities before the application could be deployed, unless such risks are acknowledged and accepted.<br />
<br />
===Developers' Security Tests===<br />
'''Security Testing in the Coding Phase: Unit Tests'''<br><br />
From the developer’s perspective, the main objective of security tests is to validate that code is being developed in compliance with secure coding standards requirements. Developers' own coding artifacts such as functions, methods, classes, APIs, and libraries need to be functionally validated before being integrated into the application build. <br />
<br />
The security requirements that developers have to follow should be documented in secure coding standards and validated with static and dynamic analysis. As a testing activity following a secure code review, unit tests can validate that code changes required by secure code reviews are properly implemented. Secure code reviews and source code analysis through source code analysis tools help developers identify security issues in source code as it is developed. By using unit tests and dynamic analysis (e.g., debugging), developers can validate the security functionality of components as well as verify that the countermeasures being developed mitigate any security risks previously identified through threat modeling and source code analysis. <br />
<br />
A good practice for developers is to build security test cases as a generic security test suite that is part of the existing unit testing framework. A generic security test suite could be derived from previously defined use and misuse cases to security test functions, methods and classes. A generic security test suite might include security test cases to validate both positive and negative requirements for security controls such as:<br />
* Authentication & Access Control<br />
* Input Validation & Encoding<br />
* Encryption<br />
* User and Session Management<br />
* Error and Exception Handling<br />
* Auditing and Logging<br />
<br />
Developers empowered with a source code analysis tool integrated into their IDE, secure coding standards, and a security unit testing framework can assess and verify the security of the software components being developed. Security test cases can be run to identify potential security issues that have root causes in source code: besides input and output validation of parameters entering and exiting the components, these issues include authentication and authorization checks done by the component, protection of the data within the component, secure exception and error handling, and secure auditing and logging. Unit test frameworks such as JUnit, NUnit, and CUnit can be adapted to verify security test requirements. In the case of security functional tests, unit level tests can test the functionality of security controls at the software component level, such as functions, methods, or classes. For example, a test case could validate input and output validation (e.g., variable sanitization) and boundary checks for variables by asserting the expected functionality of the component.<br />
<br />
The threat scenarios identified with use and misuse cases can be used to document the procedures for testing software components. In the case of authentication components, for example, security unit tests can assert the functionality of setting an account lockout as well as the fact that user input parameters cannot be abused to bypass the account lockout (e.g., by setting the account lockout counter to a negative number). At the component level, security unit tests can validate positive assertions as well as negative assertions, such as errors and exception handling. Exceptions should be caught without leaving the system in an insecure state, such as potential denial of service caused by resources not being deallocated (e.g., connection handles not closed within a finally block), as well as potential elevation of privileges (e.g., higher privileges acquired before the exception is thrown and not reset to the previous level before exiting the function). Secure error handling can validate potential information disclosure via informative error messages and stack traces. <br />
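The negative assertion mentioned above, that the lockout counter cannot be set to a negative number, can be captured in a unit-level sketch. Plain assertions are used here instead of a specific framework, and the component under test is hypothetical:

```java
public class LockoutCounterTest {
    // Hypothetical component under test: a failed-logon counter that
    // refuses values outside its valid range.
    static class LockoutCounter {
        private int failures = 0;

        void setFailures(int value) {
            if (value < 0) {
                throw new IllegalArgumentException("counter must be non-negative");
            }
            failures = value;
        }

        int getFailures() { return failures; }
    }

    public static void main(String[] args) {
        LockoutCounter counter = new LockoutCounter();
        counter.setFailures(3);       // positive assertion: valid input accepted
        boolean rejected = false;
        try {
            counter.setFailures(-1);  // negative assertion: abuse attempt refused
        } catch (IllegalArgumentException e) {
            rejected = true;
        }
        // The counter keeps its previous value and the bad input was rejected
        System.out.println(counter.getFailures() == 3 && rejected);  // true
    }
}
```

The same pattern, asserting both that valid input is accepted and that malicious input is rejected without corrupting state, generalizes to the other component-level negative tests described in this section.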
<br />
Unit level security test cases can be developed by a security engineer who is the subject matter expert in software security and is also responsible for validating that the security issues in the source code have been fixed and can be checked into the integrated system build. Typically, the manager of the application builds also makes sure that third-party libraries and executable files are security assessed for potential vulnerabilities before being integrated in the application build.<br />
<br />
Threat scenarios for common vulnerabilities that have root causes in insecure coding can also be documented in the developer’s security testing guide. When a fix is implemented for a coding defect identified with source code analysis, for example, security test cases can verify that the implementation of the code change follows the secure coding requirements documented in the secure coding standards. <br />
<br />
Source code analysis and unit tests can validate that the code change mitigates the vulnerability exposed by the previously identified coding defect. The results of automated secure code analysis can also be used as automatic check-in gates for version control: software artifacts cannot be checked into the build with high or medium severity coding issues.<br />
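Such a check-in gate can be sketched in a few lines (a hypothetical helper; field names and severity labels are assumptions, not a real scanner's output format):

```python
def checkin_gate(findings):
    """Hypothetical version-control gate: block the check-in when the static
    analysis results (a list of {"file": ..., "severity": ...} dicts)
    contain any high or medium severity coding issue."""
    blocking = [f for f in findings if f["severity"] in ("high", "medium")]
    return len(blocking) == 0, blocking
```

A build server would run this over the automated secure code analysis results and refuse the check-in whenever the first element of the returned pair is `False`.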
<br />
===Functional Testers' Security Tests===<br />
'''Security Testing During the Integration and Validation Phase: Integrated System Tests and Operation Tests'''<br><br />
The main objective of integrated system tests is to validate the “defense in depth” concept, that is, that the implementation of security controls provides security at different layers. For example, the lack of input validation when calling a component integrated with the application is often a factor that can be tested with integration testing. <br />
<br />
The integration system test environment is also the first environment where testers can simulate real attack scenarios as can be potentially executed by a malicious external or internal user of the application. Security testing at this level can validate whether vulnerabilities are real and can be exploited by attackers. For example, a potential vulnerability found in source code can be rated as high risk because of the exposure to potential malicious users, as well as because of the potential impact (e.g., access to confidential information).<br />
Real attack scenarios can be tested with both manual testing techniques and penetration testing tools. Security tests of this type are also referred to as ethical hacking tests. From the security testing perspective, these are risk driven tests and have the objective to test the application in the operational environment. The target is the application build that is representative of the version of the application being deployed into production.<br />
<br />
The execution of security tests in the integration and validation phase is critical to identifying vulnerabilities due to the integration of components, as well as to validating the exposure of such vulnerabilities. Since application security testing requires a specialized set of skills, which includes both software and security knowledge and is not typical of security engineers, organizations are often required to security-train their software developers on ethical hacking techniques, security assessment procedures, and tools. A realistic scenario is to develop such resources in-house and document them in security testing guides and procedures that take into account the developer’s security testing knowledge. A so-called “security test case cheat sheet or checklist”, for example, can provide simple test cases and attack vectors that testers can use to validate exposure to common vulnerabilities such as spoofing, information disclosure, buffer overflows, format strings, SQL injection and XSS injection, XML, SOAP, and canonicalization issues, denial of service, and attacks on managed code and ActiveX controls (e.g., .NET). A first battery of these tests can be performed manually with very basic knowledge of software security. The first objective of the security tests might be the validation of a set of minimum security requirements. These security test cases might consist of manually forcing the application into error and exceptional states and gathering knowledge from the application's behavior. For example, SQL injection vulnerabilities can be tested manually by injecting attack vectors through user input and by checking whether SQL exceptions are thrown back to the user. The evidence of a SQL exception error might be a manifestation of a vulnerability that can be exploited. A more in-depth security test might require the tester’s knowledge of specialized testing techniques and tools. Besides source code analysis and penetration testing, these techniques include, for example, source code and binary fault injection, fault propagation analysis and code coverage, fuzz testing, and reverse engineering. The security testing guide should provide procedures and recommend tools that security testers can use to perform such in-depth security assessments.<br />
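The manual SQL injection check described above (inject a probe such as a single quote, then look for database errors echoed to the user) can be sketched as a simple detector. The signature list here is a small hypothetical sample; real scanners ship far larger databases:

```python
# Hypothetical sample of database error signatures an injected quote may trigger.
SQL_ERROR_SIGNATURES = (
    "you have an error in your sql syntax",  # MySQL
    "unclosed quotation mark",               # Microsoft SQL Server
    "ora-01756",                             # Oracle
    "syntax error at or near",               # PostgreSQL
)

def looks_sql_injectable(response_body: str) -> bool:
    """Flag a response that echoes a database error after a probe such as a
    single quote (') was injected through a user input field."""
    body = response_body.lower()
    return any(sig in body for sig in SQL_ERROR_SIGNATURES)
```

A match is only the evidence of a possible vulnerability; confirming exploitability still requires the deeper techniques listed above.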
<br />
The next level of security testing after integration system tests is to perform security tests in the user acceptance environment. There are unique advantages to performing security tests in the operational environment. The user acceptance test (UAT) environment is the one most representative of the release configuration, with the exception of the data (e.g., test data is used in place of real data). A characteristic of security testing in UAT is testing for security configuration issues. In some cases these vulnerabilities might represent high risks. For example, the server that hosts the web application might not be configured with minimum privileges, a valid SSL certificate, a secure configuration, essential services disabled, and a web root directory cleaned of test and administration web pages.<br />
<br />
===Security Test Data Analysis and Reporting===<br />
'''Goals for Security Test Metrics and Measurements'''<br><br />
The definition of the goals for the security testing metrics and measurements is a prerequisite for using security testing data for risk analysis and management processes. For example, a measurement such as the total number of vulnerabilities found with security tests might quantify the security posture of the application. These measurements also help to identify security objectives for software security testing: for example, reducing the number of vulnerabilities to an acceptable number (minimum) before the application is deployed into production. <br />
<br />
Another manageable goal could be to compare the application security posture against a baseline to assess improvements in application security processes. For example, the security metrics baseline might consist of an application that was tested only with penetration tests. The security data obtained from an application that was also security tested during coding should show an improvement (e.g., fewer vulnerabilities) when compared with the baseline.<br />
<br />
In traditional software testing, the number of software defects, such as the bugs found in an application, could provide a measure of software quality. Similarly, security testing can provide a measure of software security. From the defect management and reporting perspective, software quality and security testing can use similar categorizations for root causes and defect remediation efforts. From the root cause perspective, a security defect can be due to an error in design (e.g., security flaws) or due to an error in coding (e.g., security bug). From the perspective of the effort required to fix a defect, both security and quality defects can be measured in terms of developer hours to implement the fix, the tools and resources required to fix, and, finally, the cost to implement the fix.<br />
<br />
A characteristic of security test data, compared to quality data, is the categorization in terms of the threat, the exposure of the vulnerability, and the potential impact posed by the vulnerability to determine the risk. Testing applications for security consists of managing technical risks to make sure that the application countermeasures meet acceptable levels. For this reason, security testing data needs to support the security risk strategy at critical checkpoints during the SDLC. For example, vulnerabilities found in source code with source code analysis represent an initial measure of risk. Such measure of risk (e.g., high, medium, low) for the vulnerability can be calculated by determining the exposure and likelihood factors and, further, by validating such vulnerability with penetration tests. The risk metrics associated to vulnerabilities found with security tests empower business management to make risk management decisions, such as to decide whether risks can be accepted, mitigated, or transferred at different levels within the organization (e.g., business as well as technical).<br />
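An illustrative (not official OWASP) way to derive the high/medium/low measure mentioned above from exposure-style and impact-style factors:

```python
def risk_rating(likelihood: int, impact: int) -> str:
    """Illustrative qualitative rating, not an official OWASP formula:
    likelihood and impact are each scored 1 (low) to 3 (high), and the
    risk is their product, bucketed into High/Medium/Low."""
    score = likelihood * impact
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"
```

A source code finding with high likelihood (exposed to potential malicious users) and high impact (access to confidential information) rates `High`; a later penetration test then confirms or downgrades that rating.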
<br />
When evaluating the security posture of an application, it is important to take into consideration certain factors, such as the size of the application being developed. Application size has been statistically proven to be related to the number of issues found in the application with tests. One measure of application size is the number of lines of code (LOC) of the application. Typically, software quality defects range from about 7 to 10 defects per thousand lines of new and changed code [21]. Since a single test pass can reduce the overall number of defects by about 25%, it is logical for larger applications to be tested more extensively and more often than smaller applications.<br />
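The arithmetic behind that sizing argument can be made explicit (the 8.5 midpoint is an assumption within the 7-10 range the text cites):

```python
def expected_defects(kloc: float, density: float = 8.5) -> float:
    """Expected defects in new/changed code at roughly 7-10 per KLOC [21];
    8.5 is just the midpoint of that range, chosen for illustration."""
    return kloc * density

def after_one_test_pass(defects: float, removal: float = 0.25) -> float:
    """A single testing pass removes about 25% of defects (per the text)."""
    return defects * (1 - removal)
```

For a 100 KLOC application this predicts about 850 defects, of which roughly 638 would remain after one testing pass, which is why larger applications warrant more rounds of testing.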
<br />
When security testing is done in several phases of the SDLC, the test data could prove the capability of the security tests in detecting vulnerabilities as soon as they are introduced, and prove the effectiveness of removing them by implementing countermeasures at different checkpoints of the SDLC. A measurement of this type is also defined as “containment metrics” and provides a measure of the ability of a security assessment performed at each phase of the development process to maintain security within each phase. These containment metrics are also a critical factor in lowering the cost of fixing the vulnerabilities, since it is less expensive to deal with the vulnerabilities when they are found (in the same phase of the SDLC), rather than fixing them later in another phase. <br />
<br />
Security test metrics can support security risk, cost, and defect management analysis when it is associated with tangible and timed goals such as: <br />
*Reducing the overall number of vulnerabilities by 30%<br />
*Security issues are expected to be fixed by a certain deadline (e.g., before beta release) <br />
<br />
Security test data can be absolute, such as the number of vulnerabilities detected during manual code review, as well as comparative, such as the number of vulnerabilities detected in code reviews vs. penetration tests. To answer questions about the quality of the security process, it is important to determine a baseline for what could be considered acceptable and good. <br />
<br />
Security test data can also support specific objectives of the security analysis such as compliance with security regulations and information security standards, management of security processes, the identification of security root causes and process improvements, and security costs vs. benefits analysis.<br />
<br />
When security test data is reported it has to provide metrics to support the analysis. The scope of the analysis is the interpretation of test data to find clues about the security of the software being produced as well the effectiveness of the process. <br />
Some examples of clues supported by security test data can be:<br />
*Are vulnerabilities reduced to an acceptable level for release?<br />
*How does the security quality of this product compare with similar software products?<br />
*Are all security test requirements being met? <br />
*What are the major root causes of security issues?<br />
*How numerous are security flaws compared to security bugs?<br />
*Which security activity is most effective in finding vulnerabilities?<br />
*Which team is more productive in fixing security defects and vulnerabilities?<br />
*Which percentage of overall vulnerabilities are high risk?<br />
*Which tools are most effective in detecting security vulnerabilities?<br />
*Which kinds of security tests are most effective in finding vulnerabilities (e.g., white box vs. black box tests)?<br />
*How many security issues are found during secure code reviews?<br />
*How many security issues are found during secure design reviews?<br />
<br />
In order to make a sound judgment using the testing data, it is important to have a good understanding of the testing process as well as the testing tools. A tool taxonomy should be adopted to decide which security tools should be used. Security tools can be qualified as being good at finding common, known vulnerabilities targeting different artifacts.<br />
The issue is that unknown security issues are not tested for: the fact that a tool reports nothing does not mean that the software or application is secure. Some studies [22] have demonstrated that, at best, tools can find only 45% of overall vulnerabilities. <br />
<br />
Even the most sophisticated automation tools are not a match for an experienced security tester: just relying on successful test results from automation tools will give security practitioners a false sense of security. Typically, the more experienced the security testers are with the security testing methodology and testing tools, the better the results of the security test and analysis will be. It is important that managers making an investment in security testing tools also consider an investment in hiring skilled human resources as well as security test training.<br />
<br />
'''Reporting Requirements'''<br><br />
The security posture of an application can be characterized from the perspective of the effect, such as number of vulnerabilities and the risk rating of the vulnerabilities, as well as from the perspective of the cause (i.e., origin) such as coding errors, architectural flaws, and configuration issues. <br />
<br />
Vulnerabilities can be classified according to different criteria. This can be a statistical categorization, such as the OWASP Top 10 and WASC (Web Application Security Statistics) project, or related to defensive controls as in the case of WASF (Web Application Security Framework) categorization.<br />
<br />
When reporting security test data, the best practice is to include the following information, besides the categorization of each vulnerability by type:<br />
*The security threat that the issue is exposed to<br />
*The root cause of the security issue (e.g., security bug, security flaw)<br />
*The testing technique used to find it<br />
*The remediation of the vulnerability (e.g., the countermeasure) <br />
*The risk rating of the vulnerability (High, Medium, Low)<br />
<br />
By describing what the security threat is, it will be possible to understand if and why the mitigation control is ineffective in mitigating the threat. <br />
<br />
Reporting the root cause of the issue can help pinpoint what needs to be fixed: in the case of white box testing, for example, the software security root cause of the vulnerability will be the offending source code. <br />
<br />
Once issues are reported, it is also important to provide guidance to the software developer on how to re-test and find the vulnerability. This might involve using a white box testing technique (e.g., security code review with a static code analyzer) to find if the code is vulnerable. If a vulnerability can be found via a black box technique (penetration test), the test report also needs to provide information on how to validate the exposure of the vulnerability to the front end (e.g., client).<br />
<br />
The information about how to fix the vulnerability should be detailed enough for a developer to implement a fix. It should provide secure coding examples, configuration changes, and provide adequate references.<br />
<br />
Finally the risk rating helps to prioritize the remediation effort. Typically, assigning a risk rating to the vulnerability involves a risk analysis based upon factors such as impact and exposure.<br />
<br />
'''Business Cases'''<br> <br />
For the security test metrics to be useful, they need to provide value back to the organization's security test data stakeholders, such as project managers, developers, information security offices, auditors, and chief information officers. The value can be in terms of the business case that each project stakeholder has in terms of role and responsibility.<br />
<br />
Software developers look at security test data to show that software is coded more securely and efficiently, so that they can make the case of using source code analysis tools as well as following secure coding standards and attending software security training. <br />
<br />
Project managers look for data that allows them to successfully manage and utilize security testing activities and resources according to the project plan. To project managers, security test data can show that projects are on schedule and moving on target for delivery dates and are getting better during tests. <br />
<br />
Security test data also helps the business case for security testing if the initiative comes from information security officers (ISOs). For example, it can provide evidence that security testing during the SDLC does not impact the project delivery, but rather reduces the overall workload needed to address vulnerabilities later in production. <br />
<br />
To compliance auditors, security test metrics provide a level of software security assurance and confidence that security standard compliance is addressed through the security review processes within the organization. <br />
<br />
Finally, Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs), who are responsible for the budget allocated to security resources, look to derive a cost/benefit analysis from security test data in order to make informed decisions about which security activities and tools to invest in. One of the metrics that supports such analysis is the Return On Investment (ROI) in Security [23]. To derive such metrics from security test data, it is important to quantify the differential between the risk due to the exposure of vulnerabilities and the effectiveness of the security tests in mitigating the security risk, and to factor this gap with the cost of the security testing activity or the testing tools adopted.<br />
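One common formulation of that ROI calculation can be sketched as follows; the formula shape and all input figures are illustrative assumptions, not mandated by [23]:

```python
def rosi(annual_loss_expectancy: float, mitigated_fraction: float,
         cost: float) -> float:
    """Return on Security Investment sketch (one common formulation):
    (risk reduction achieved by the security activity - its cost) / cost.
    All inputs are hypothetical annualized figures."""
    risk_reduction = annual_loss_expectancy * mitigated_fraction
    return (risk_reduction - cost) / cost
```

For example, a testing program costing 50,000 that mitigates 75% of an expected annual loss of 200,000 yields `rosi(200_000, 0.75, 50_000) == 2.0`: every unit spent avoids three units of expected loss, netting two.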
<br />
== References ==<br />
[1] T. DeMarco, ''Controlling Software Projects: Management, Measurement and Estimation'', Yourdon Press, 1982<br />
<br />
[2] S. Payne, ''A Guide to Security Metrics'' - http://www.sans.org/reading_room/whitepapers/auditing/55.php<br />
<br />
[3] NIST, ''The economic impacts of inadequate infrastructure for software testing'' - http://www.nist.gov/public_affairs/releases/n02-10.htm<br />
<br />
[4] Ross Anderson, ''Economics and Security Resource Page'' - http://www.cl.cam.ac.uk/users/rja14/econsec.html <br />
<br />
[5] Denis Verdon, ''Teaching Developers To Fish'' - [[OWASP AppSec NYC 2004]]<br />
<br />
[6] Bruce Schneier, ''Cryptogram Issue #9'' - http://www.schneier.com/crypto-gram-0009.html<br />
<br />
[7] Symantec, ''Threat Reports'' - http://www.symantec.com/business/theme.jsp?themeid=threatreport<br />
<br />
[8] FTC, ''The Gramm-Leach Bliley Act'' - http://www.ftc.gov/privacy/privacyinitiatives/glbact.html<br />
<br />
[9] Senator Peace and Assembly Member Simitian, ''SB 1386''- http://www.leginfo.ca.gov/pub/01-02/bill/sen/sb_1351-1400/sb_1386_bill_20020926_chaptered.html<br />
<br />
[10] European Union, ''Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data'' -<br />
http://ec.europa.eu/justice_home/fsj/privacy/docs/95-46-ce/dir1995-46_part1_en.pdf<br />
<br />
[11] NIST, '' Risk management guide for information technology systems'' - http://csrc.nist.gov/publications/nistpubs/800-30/sp800-30.pdf<br />
<br />
[12] SEI, Carnegie Mellon, ''Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE)'' - http://www.cert.org/octave/<br />
<br />
[13] Ken Thompson, ''Reflections on Trusting Trust'', reprinted from Communications of the ACM - http://cm.bell-labs.com/who/ken/trust.html<br />
<br />
[14] Gary McGraw, ''Beyond the Badness-ometer'' - http://www.ddj.com/security/189500001<br />
<br />
[15] FFIEC, '' Authentication in an Internet Banking Environment'' - http://www.ffiec.gov/pdf/authentication_guidance.pdf<br />
<br />
[16] PCI Security Standards Council, ''PCI Data Security Standard'' - https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml <br />
<br />
[17] MSDN, ''Cheat Sheet: Web Application Security Frame'' - http://msdn.microsoft.com/en-us/library/ms978518.aspx#tmwacheatsheet_webappsecurityframe <br />
<br />
[18] MSDN, ''Improving Web Application Security, Chapter 2, Threat And Countermeasures'' - http://msdn.microsoft.com/en-us/library/aa302418.aspx<br />
<br />
[19] Gil Regev, Ian Alexander,Alain Wegmann, ''Use Cases and Misuse Cases Model the Regulatory Roles of Business Processes'' - http://easyweb.easynet.co.uk/~iany/consultancy/regulatory_processes/regulatory_processes.htm<br />
<br />
[20] G. Sindre, A. Opdahl, ''Capturing Security Requirements Through Misuse Cases'' - http://folk.uio.no/nik/2001/21-sindre.pdf<br />
<br />
[21] Security Across the Software Development Lifecycle Task Force, ''Referred Data from Capers Jones, Software Assessments, Benchmarks and Best Practices'' - http://www.cyberpartnership.org/SDLCFULL.pdf<br />
<br />
[22] MITRE, ''Being Explicit About Weaknesses, Slide 30, Coverage of CWE'' - http://cwe.mitre.org/documents/being-explicit/BlackHatDC_BeingExplicit_Slides.ppt<br />
<br />
[23] Marco Morana, ''Building Security Into The Software Life Cycle, A Business Case'' - http://www.blackhat.com/presentations/bh-usa-06/bh-us-06-Morana-R3.0.pdf<br />
<br></div>Testing for SSL-TLS (OWASP-CM-001), 2010-08-11<p>Michael Boman: Added SSLScan tool description and example</p>
<hr />
<div>{{Template:OWASP Testing Guide v3}}<br />
<br />
== Brief Summary ==<br />
<br />
Due to historic export restrictions on high grade cryptography, both legacy and new web servers are often able, and configured, to handle weak cryptographic options.<br />
<br />
Even if high grade ciphers are normally used and installed, server misconfiguration can be exploited to force the use of a weaker cipher and gain access to the supposedly secure communication channel.<br />
<br />
==Testing SSL / TLS cipher specifications and requirements for site==<br />
<br />
The http clear-text protocol is normally secured via an SSL or TLS tunnel, resulting in https traffic. In addition to providing encryption of data in transit, https allows the identification of servers (and, optionally, of clients) by means of digital certificates.<br />
<br />
Historically, there have been limitations set in place by the U.S. government to allow cryptosystems to be exported only for key sizes of, at most, 40 bits, a key length which could be broken and would allow the decryption of communications. Since then, cryptographic export regulations have been relaxed (though some constraints still hold); however, it is important to check the SSL configuration being used to avoid putting in place cryptographic support which could be easily defeated. SSL-based services should not offer the possibility to choose weak ciphers.<br />
<br />
Technically, cipher determination is performed as follows. In the initial phase of a SSL connection setup, the client sends the server a Client Hello message specifying, among other information, the cipher suites that it is able to handle. A client is usually a web browser (most popular SSL client nowadays), but not necessarily, since it can be any SSL-enabled application; the same holds for the server, which needs not be a web server, though this is the most common case. (For example, a noteworthy class of SSL clients is that of SSL proxies such as stunnel (www.stunnel.org) which can be used to allow non-SSL enabled tools to talk to SSL services.) A cipher suite is specified by an encryption protocol (DES, RC4, AES), the encryption key length (such as 40, 56, or 128 bits), and a hash algorithm (SHA, MD5) used for integrity checking. Upon receiving a Client Hello message, the server decides which cipher suite it will use for that session. It is possible (for example, by means of configuration directives) to specify which cipher suites the server will honor. In this way you may control, for example, whether or not conversations with clients will support 40-bit encryption only.<br />
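As a sketch of the negotiation described above, Python's stdlib <code>ssl</code> module can show which cipher suites a client would list in its Client Hello, and how that offer can be restricted (the cipher string mirrors the kind of server-side configuration directive mentioned, e.g. Apache's SSLCipherSuite):

```python
import ssl

# A client advertises its cipher suites in the Client Hello; the stdlib
# ssl module lets us inspect what a default client context would offer.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
offered = [c["name"] for c in ctx.get_ciphers()]
print(len(offered), "suites offered, e.g.", offered[0])

# Restricting the offer controls what the server can possibly select,
# e.g. excluding unauthenticated and MD5-based suites:
ctx.set_ciphers("HIGH:!aNULL:!MD5")
```

The exact list depends on the OpenSSL build Python is linked against, which is itself a useful reminder that cipher negotiation is constrained by both endpoints.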
<br />
==SSL testing criteria==<br />
The large number of available cipher suites and the quick progress in cryptanalysis make judging an SSL server a non-trivial task. The following criteria are widely recognized as a minimum checklist:<br />
<br />
* SSLv2, due to known weaknesses in protocol design<br />
* Export (EXP) level cipher suites in SSLv3<br />
* Cipher suites with symmetric encryption algorithm smaller than 128 bits<br />
* X.509 certificates with RSA or DSA key smaller than 1024 bits<br />
* X.509 certificates signed using MD5 hash, due to known collision attacks on this hash<br />
* TLS Renegotiation vulnerability[http://www.phonefactor.com/sslgap/ssl-tls-authentication-patches]<br />
<br />
While there are known collision attacks on MD5 and known cryptanalytic attacks on RC4, their specific usage in SSL and TLS doesn't allow these attacks to be practical; SSLv3 or TLSv1 cipher suites using RC4 and MD5 with a key length of 128 bits are still considered sufficient[http://www.rsa.com/rsalabs/node.asp?id=2009].<br />
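The checklist's 128-bit floor can be checked against what the local OpenSSL build would even enable, again with a stdlib Python sketch (the `"ALL:COMPLEMENTOFALL"` cipher string widens the view beyond the defaults; results vary per build):

```python
import ssl

# Enable every suite this OpenSSL build knows about, then flag those whose
# symmetric encryption algorithm uses fewer than 128 bits.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ALL:COMPLEMENTOFALL")
weak = [c["name"] for c in ctx.get_ciphers() if c["alg_bits"] < 128]
print("suites under 128 bits:", weak or "none enabled in this build")
```

On modern OpenSSL builds the weak list is typically empty, because export and sub-128-bit suites have been compiled out; on older systems this quickly shows what a misconfigured service could negotiate.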
<br />
The following standards can be used as reference while assessing SSL servers:<br />
<br />
* [http://csrc.nist.gov/publications/nistpubs/800-52/SP800-52.pdf NIST SP 800-52] recommends that U.S. federal systems use at least TLS 1.0 with cipher suites based on RSA or DSA key agreement with ephemeral Diffie-Hellman, 3DES or AES for confidentiality, and SHA1 for integrity protection. NIST SP 800-52 specifically disallows non-FIPS compliant algorithms like RC4 and MD5. An exception is U.S. federal systems making connections to outside servers, where these algorithms can be used in SSL client mode.<br />
* [https://www.pcisecuritystandards.org/security_standards/pci_dss.shtml PCI-DSS v1.2] in point 4.1 requires compliant parties to use "strong cryptography" without precisely defining key lengths and algorithms. The common interpretation, partially based on previous versions of the standard, is that at least a 128-bit key cipher, no export-strength algorithms, and no SSLv2 should be used[http://www.digicert.com/news/DigiCert_PCI_White_Paper.pdf].<br />
* [https://www.ssllabs.com/projects/rating-guide/index.html SSL Server Rating Guide] has been proposed to standardize SSL server assessment and currently is in draft version.<br />
<br />
SSL Server Database can be used to assess configuration of publicly available SSL servers[https://www.ssllabs.com/ssldb/analyze.html] based on SSL Rating Guide[https://www.ssllabs.com/projects/rating-guide/index.html]<br />
<br />
==Black Box Test and example==<br />
<br />
In order to detect possible support of weak ciphers, the ports associated to SSL/TLS wrapped services must be identified. These typically include port 443, which is the standard https port; however, this may change because a) https services may be configured to run on non-standard ports, and b) there may be additional SSL/TLS wrapped services related to the web application. In general, a service discovery is required to identify such ports.<br />
<br />
The nmap scanner, via the “–sV” scan option, is able to identify SSL services. Vulnerability Scanners, in addition to performing service discovery, may include checks against weak ciphers (for example, the Nessus scanner has the capability of checking SSL services on arbitrary ports, and will report weak ciphers).<br />
<br />
'''Example 1'''. SSL service recognition via nmap.<br />
<br />
<pre><br />
[root@test]# nmap -F -sV localhost<br />
<br />
Starting nmap 3.75 ( http://www.insecure.org/nmap/ ) at 2005-07-27 14:41 CEST<br />
Interesting ports on localhost.localdomain (127.0.0.1):<br />
(The 1205 ports scanned but not shown below are in state: closed)<br />
<br />
PORT STATE SERVICE VERSION<br />
443/tcp open ssl OpenSSL<br />
901/tcp open http Samba SWAT administration server<br />
8080/tcp open http Apache httpd 2.0.54 ((Unix) mod_ssl/2.0.54 OpenSSL/0.9.7g PHP/4.3.11)<br />
8081/tcp open http Apache Tomcat/Coyote JSP engine 1.0<br />
<br />
Nmap run completed -- 1 IP address (1 host up) scanned in 27.881 seconds<br />
[root@test]# <br />
</pre><br />
<br />
'''Example 2'''. Identifying weak ciphers with Nessus.<br />
The following is an anonymized excerpt of a report generated by the Nessus scanner, corresponding to the identification of a server certificate allowing weak ciphers (see underlined text).<br />
<br />
'''https (443/tcp)'''<br />
'''Description'''<br />
Here is the SSLv2 server certificate:<br />
Certificate:<br />
Data:<br />
Version: 3 (0x2)<br />
Serial Number: 1 (0x1)<br />
Signature Algorithm: md5WithRSAEncryption<br />
Issuer: C=**, ST=******, L=******, O=******, OU=******, CN=******<br />
Validity<br />
Not Before: Oct 17 07:12:16 2002 GMT<br />
Not After : Oct 16 07:12:16 2004 GMT<br />
Subject: C=**, ST=******, L=******, O=******, CN=******<br />
Subject Public Key Info:<br />
Public Key Algorithm: rsaEncryption<br />
RSA Public Key: (1024 bit)<br />
Modulus (1024 bit):<br />
00:98:4f:24:16:cb:0f:74:e8:9c:55:ce:62:14:4e:<br />
6b:84:c5:81:43:59:c1:2e:ac:ba:af:92:51:f3:0b:<br />
ad:e1:4b:22:ba:5a:9a:1e:0f:0b:fb:3d:5d:e6:fc:<br />
ef:b8:8c:dc:78:28:97:8b:f0:1f:17:9f:69:3f:0e:<br />
72:51:24:1b:9c:3d:85:52:1d:df:da:5a:b8:2e:d2:<br />
09:00:76:24:43:bc:08:67:6b:dd:6b:e9:d2:f5:67:<br />
e1:90:2a:b4:3b:b4:3c:b3:71:4e:88:08:74:b9:a8:<br />
2d:c4:8c:65:93:08:e6:2f:fd:e0:fa:dc:6d:d7:a2:<br />
3d:0a:75:26:cf:dc:47:74:29<br />
Exponent: 65537 (0x10001)<br />
X509v3 extensions:<br />
X509v3 Basic Constraints:<br />
CA:FALSE<br />
Netscape Comment:<br />
OpenSSL Generated Certificate<br />
Page 10<br />
Network Vulnerability Assessment Report 25.05.2005<br />
X509v3 Subject Key Identifier:<br />
10:00:38:4C:45:F0:7C:E4:C6:A7:A4:E2:C9:F0:E4:2B:A8:F9:63:A8<br />
X509v3 Authority Key Identifier:<br />
keyid:CE:E5:F9:41:7B:D9:0E:5E:5D:DF:5E:B9:F3:E6:4A:12:19:02:76:CE<br />
DirName:/C=**/ST=******/L=******/O=******/OU=******/CN=******<br />
serial:00<br />
Signature Algorithm: md5WithRSAEncryption<br />
7b:14:bd:c7:3c:0c:01:8d:69:91:95:46:5c:e6:1e:25:9b:aa:<br />
8b:f5:0d:de:e3:2e:82:1e:68:be:97:3b:39:4a:83:ae:fd:15:<br />
2e:50:c8:a7:16:6e:c9:4e:76:cc:fd:69:ae:4f:12:b8:e7:01:<br />
b6:58:7e:39:d1:fa:8d:49:bd:ff:6b:a8:dd:ae:83:ed:bc:b2:<br />
40:e3:a5:e0:fd:ae:3f:57:4d:ec:f3:21:34:b1:84:97:06:6f:<br />
f4:7d:f4:1c:84:cc:bb:1c:1c:e7:7a:7d:2d:e9:49:60:93:12:<br />
0d:9f:05:8c:8e:f9:cf:e8:9f:fc:15:c0:6e:e2:fe:e5:07:81:<br />
82:fc<br />
Here is the list of available SSLv2 ciphers:<br />
RC4-MD5<br />
EXP-RC4-MD5<br />
RC2-CBC-MD5<br />
EXP-RC2-CBC-MD5<br />
DES-CBC-MD5<br />
DES-CBC3-MD5<br />
RC4-64-MD5<br />
<u>The SSLv2 server offers 5 strong ciphers, but also 0 medium strength and '''2 weak "export class" ciphers'''.<br />
The weak/medium ciphers may be chosen by an export-grade or badly configured client software. They only offer a limited protection against a brute force attack</u><br />
<u>Solution: disable those ciphers and upgrade your client software if necessary.</u><br />
See http://support.microsoft.com/default.aspx?scid=kb;en-us;216482<br />
or http://httpd.apache.org/docs-2.0/mod/mod_ssl.html#sslciphersuite<br />
This SSLv2 server also accepts SSLv3 connections.<br />
This SSLv2 server also accepts TLSv1 connections.<br />
<br />
Vulnerable hosts<br />
''(list of vulnerable hosts follows)''<br />
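The report's remediation advice ("disable those ciphers") can be illustrated for Apache with mod_ssl. The fragment below is a sketch, not a definitive configuration: it disables the SSLv2 protocol and excludes the export-grade and low-strength cipher classes, and the exact cipher string should be adapted to your environment.<br />

```apache
# Disable the SSLv2 protocol entirely
SSLProtocol all -SSLv2
# Allow only high/medium strength suites; exclude anonymous (ADH),
# export-grade (EXP), low-strength (LOW) and SSLv2-only suites
SSLCipherSuite HIGH:MEDIUM:!ADH:!EXP:!LOW:!SSLv2
```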
<br />
'''Example 3'''. Manually audit weak SSL cipher support with OpenSSL. The following attempts to connect to www.google.com using SSLv2 only (SSLv3 and TLSv1 are disabled via the -no_ssl3 and -no_tls1 switches).<br />
<pre><br />
[root@test]# openssl s_client -no_tls1 -no_ssl3 -connect www.google.com:443<br />
CONNECTED(00000003)<br />
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
verify error:num=20:unable to get local issuer certificate<br />
verify return:1<br />
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
verify error:num=27:certificate not trusted<br />
verify return:1<br />
depth=0 /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
verify error:num=21:unable to verify the first certificate<br />
verify return:1<br />
---<br />
Server certificate<br />
-----BEGIN CERTIFICATE-----<br />
MIIDYzCCAsygAwIBAgIQYFbAC3yUC8RFj9MS7lfBkzANBgkqhkiG9w0BAQQFADCB<br />
zjELMAkGA1UEBhMCWkExFTATBgNVBAgTDFdlc3Rlcm4gQ2FwZTESMBAGA1UEBxMJ<br />
Q2FwZSBUb3duMR0wGwYDVQQKExRUaGF3dGUgQ29uc3VsdGluZyBjYzEoMCYGA1UE<br />
CxMfQ2VydGlmaWNhdGlvbiBTZXJ2aWNlcyBEaXZpc2lvbjEhMB8GA1UEAxMYVGhh<br />
d3RlIFByZW1pdW0gU2VydmVyIENBMSgwJgYJKoZIhvcNAQkBFhlwcmVtaXVtLXNl<br />
cnZlckB0aGF3dGUuY29tMB4XDTA2MDQyMTAxMDc0NVoXDTA3MDQyMTAxMDc0NVow<br />
aDELMAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDU1v<br />
dW50YWluIFZpZXcxEzARBgNVBAoTCkdvb2dsZSBJbmMxFzAVBgNVBAMTDnd3dy5n<br />
b29nbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC/e2Vs8U33fRDk<br />
5NNpNgkB1zKw4rqTozmfwty7eTEI8PVH1Bf6nthocQ9d9SgJAI2WOBP4grPj7MqO<br />
dXMTFWGDfiTnwes16G7NZlyh6peT68r7ifrwSsVLisJp6pUf31M5Z3D88b+Yy4PE<br />
D7BJaTxq6NNmP1vYUJeXsGSGrV6FUQIDAQABo4GmMIGjMB0GA1UdJQQWMBQGCCsG<br />
AQUFBwMBBggrBgEFBQcDAjBABgNVHR8EOTA3MDWgM6Axhi9odHRwOi8vY3JsLnRo<br />
YXd0ZS5jb20vVGhhd3RlUHJlbWl1bVNlcnZlckNBLmNybDAyBggrBgEFBQcBAQQm<br />
MCQwIgYIKwYBBQUHMAGGFmh0dHA6Ly9vY3NwLnRoYXd0ZS5jb20wDAYDVR0TAQH/<br />
BAIwADANBgkqhkiG9w0BAQQFAAOBgQADlTbBdVY6LD1nHWkhTadmzuWq2rWE0KO3<br />
Ay+7EleYWPOo+EST315QLpU6pQgblgobGoI5x/fUg2U8WiYj1I1cbavhX2h1hda3<br />
FJWnB3SiXaiuDTsGxQ267EwCVWD5bCrSWa64ilSJTgiUmzAv0a2W8YHXdG08+nYc<br />
X/dVk5WRTw==<br />
-----END CERTIFICATE-----<br />
subject=/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com<br />
issuer=/C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/emailAddress=premium-server@thawte.com<br />
---<br />
No client certificate CA names sent<br />
---<br />
Ciphers common between both SSL endpoints:<br />
RC4-MD5 EXP-RC4-MD5 RC2-CBC-MD5<br />
EXP-RC2-CBC-MD5 DES-CBC-MD5 DES-CBC3-MD5<br />
RC4-64-MD5<br />
---<br />
SSL handshake has read 1023 bytes and written 333 bytes<br />
---<br />
New, SSLv2, Cipher is DES-CBC3-MD5<br />
Server public key is 1024 bit<br />
Compression: NONE<br />
Expansion: NONE<br />
SSL-Session:<br />
Protocol : SSLv2<br />
Cipher : DES-CBC3-MD5<br />
Session-ID: 709F48E4D567C70A2E49886E4C697CDE<br />
Session-ID-ctx:<br />
Master-Key: 649E68F8CF936E69642286AC40A80F433602E3C36FD288C3<br />
Key-Arg : E8CB6FEB9ECF3033<br />
Start Time: 1156977226<br />
Timeout : 300 (sec)<br />
Verify return code: 21 (unable to verify the first certificate)<br />
---<br />
closed<br />
</pre><br />
<br />
'''Example 4'''. Testing supported protocols and ciphers using SSLScan.<br />
<br />
SSLScan is a free command line tool that scans an HTTPS service to enumerate which protocols (SSLv2, SSLv3, and TLSv1 are supported) and which ciphers the service offers. It runs on both Linux and Windows (OS X untested) and is released under an open source license.<br />
<br />
<pre><br />
[user@test]$ ./SSLScan --no-failed mail.google.com<br />
_<br />
___ ___| |___ ___ __ _ _ __<br />
/ __/ __| / __|/ __/ _` | '_ \<br />
\__ \__ \ \__ \ (_| (_| | | | |<br />
|___/___/_|___/\___\__,_|_| |_|<br />
<br />
Version 1.9.0-win<br />
http://www.titania.co.uk<br />
Copyright 2010 Ian Ventura-Whiting / Michael Boman<br />
Compiled against OpenSSL 0.9.8n 24 Mar 2010<br />
<br />
Testing SSL server mail.google.com on port 443<br />
<br />
Supported Server Cipher(s):<br />
accepted SSLv3 256 bits AES256-SHA<br />
accepted SSLv3 128 bits AES128-SHA<br />
accepted SSLv3 168 bits DES-CBC3-SHA<br />
accepted SSLv3 128 bits RC4-SHA<br />
accepted SSLv3 128 bits RC4-MD5<br />
accepted TLSv1 256 bits AES256-SHA<br />
accepted TLSv1 128 bits AES128-SHA<br />
accepted TLSv1 168 bits DES-CBC3-SHA<br />
accepted TLSv1 128 bits RC4-SHA<br />
accepted TLSv1 128 bits RC4-MD5<br />
<br />
Prefered Server Cipher(s):<br />
SSLv3 128 bits RC4-SHA<br />
TLSv1 128 bits RC4-SHA<br />
<br />
SSL Certificate:<br />
Version: 2<br />
Serial Number: -4294967295<br />
Signature Algorithm: sha1WithRSAEncryption<br />
Issuer: /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA<br />
Not valid before: Dec 18 00:00:00 2009 GMT<br />
Not valid after: Dec 18 23:59:59 2011 GMT<br />
Subject: /C=US/ST=California/L=Mountain View/O=Google Inc/CN=mail.google.com<br />
Public Key Algorithm: rsaEncryption<br />
RSA Public Key: (1024 bit)<br />
Modulus (1024 bit):<br />
00:d9:27:c8:11:f2:7b:e4:45:c9:46:b6:63:75:83:<br />
b1:77:7e:17:41:89:80:38:f1:45:27:a0:3c:d9:e8:<br />
a8:00:4b:d9:07:d0:ba:de:ed:f4:2c:a6:ac:dc:27:<br />
13:ec:0c:c1:a6:99:17:42:e6:8d:27:d2:81:14:b0:<br />
4b:82:fa:b2:c5:d0:bb:20:59:62:28:a3:96:b5:61:<br />
f6:76:c1:6d:46:d2:fd:ba:c6:0f:3d:d1:c9:77:9a:<br />
58:33:f6:06:76:32:ad:51:5f:29:5f:6e:f8:12:8b:<br />
ad:e6:c5:08:39:b3:43:43:a9:5b:91:1d:d7:e3:cf:<br />
51:df:75:59:8e:8d:80:ab:53<br />
Exponent: 65537 (0x10001)<br />
X509v3 Extensions:<br />
X509v3 Basic Constraints: critical<br />
CA:FALSE<br />
X509v3 CRL Distribution Points:<br />
URI:http://crl.thawte.com/ThawteSGCCA.crl<br />
X509v3 Extended Key Usage:<br />
TLS Web Server Authentication, TLS Web Client Authentication, Netscape Server Gated Crypto<br />
Authority Information Access:<br />
OCSP - URI:http://ocsp.thawte.com<br />
CA Issuers - URI:http://www.thawte.com/repository/Thawte_SGC_CA.crt<br />
Verify Certificate:<br />
unable to get local issuer certificate<br />
<br />
<br />
Renegotiation requests supported<br />
</pre><br />
<br />
<br />
==White Box Test and example==<br />
<br />
Check the configuration of the web servers which provide https services. If the web application provides other SSL/TLS wrapped services, these should be checked as well.<br />
<br />
'''Example:''' The following registry path in Microsoft Windows 2003 defines the ciphers available to the server:<br />
<br />
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\<br />
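For example, a specific weak cipher can be disabled by setting the Enabled value to 0 under its subkey. The fragment below is a sketch in .reg format; the subkey names ("DES 56/56", "RC4 40/128", etc.) are the ones Microsoft documents for SCHANNEL, and any change should be tested before deployment:<br />

```reg
Windows Registry Editor Version 5.00

; Disable single DES (56-bit) for SCHANNEL-based services
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\DES 56/56]
"Enabled"=dword:00000000
```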
<br />
==Testing SSL certificate validity – client and server==<br />
<br />
When accessing a web application via the https protocol, a secure channel is established between the client (usually the browser) and the server. The identity of one (the server) or both parties (client and server) is then established by means of digital certificates. In order for the communication to be set up, a number of checks on the certificates must be passed. While discussing SSL and certificate based authentication is beyond the scope of this Guide, we will focus on the main criteria involved in ascertaining certificate validity: a) checking if the Certificate Authority (CA) is a known one (meaning one considered trusted), b) checking that the certificate is currently valid, and c) checking that the name of the site and the name reported in the certificate match.<br />
Remember to keep your browser up to date: CA certificates expire too, and every browser release ships a renewed set of trusted CA certificates. Moreover, it is important to update the browser because more and more web sites require ciphers stronger than 40 or 56 bits.<br />
<br />
Let’s examine each check in more detail.<br />
<br />
a) Each browser comes with a preloaded list of trusted CAs, against which the certificate signing CA is compared (this list can be customized and expanded at will). During the initial negotiations with an https server, if the server certificate relates to a CA unknown to the browser, a warning is usually raised. This happens most often because a web application relies on a certificate signed by a self-established CA. Whether this is to be considered a concern depends on several factors. For example, this may be fine for an Intranet environment (think of corporate web email being provided via https; here, obviously all users recognize the internal CA as a trusted CA). When a service is provided to the general public via the Internet, however (i.e. when it is important to positively verify the identity of the server we are talking to), it is usually imperative to rely on a trusted CA, one which is recognized by all the user base (and here we stop with our considerations; we won’t delve deeper into the implications of the trust model being used by digital certificates).<br />
<br />
b) Certificates have an associated period of validity, therefore they may expire. Again, the browser warns us about this. A public service needs a currently valid certificate; otherwise, it means we are talking with a server whose certificate was issued by someone we trust, but which has expired without being renewed.<br />
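When many hosts are in scope, this temporal check is easy to automate. As a minimal illustration in Python (the function name is ours; the timestamp format is the one Python's ssl module reports for the notBefore/notAfter fields, and a real assessment should of course still perform full certificate verification):<br />

```python
import ssl
import time

def is_expired(not_after, now=None):
    """Return True if a certificate 'notAfter' timestamp, in the format
    reported by Python's ssl module (e.g. 'Dec 18 23:59:59 2011 GMT'),
    lies in the past."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return (now if now is not None else time.time()) > expiry

# The certificate shown in Example 4 expired at the end of 2011:
print(is_expired("Dec 18 23:59:59 2011 GMT"))  # True
```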
<br />
c) What if the name on the certificate and the name of the server do not match? If this happens, it may look suspicious. For a number of reasons, this is not so rare to see. A system may host a number of name-based virtual hosts, which share the same IP address and are identified by means of the HTTP 1.1 Host: header information. In this case, since the SSL handshake checks the server certificate before the HTTP request is processed, it is not possible to assign different certificates to each virtual server. Therefore, if the name of the site and the name reported in the certificate do not match, we have a condition which is typically signalled by the browser. To avoid this, IP-based virtual servers must be used. [2] and [3] describe techniques to deal with this problem and allow name-based virtual hosts to be correctly referenced.<br />
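To make the name-matching check concrete, here is a deliberately simplified sketch of the RFC 2818 rules in Python. The function name is ours, it covers only the exact-match and leftmost-label-wildcard cases, and real clients should rely on their TLS library's built-in hostname verification rather than reimplementing it:<br />

```python
def hostname_matches(cert_name, hostname):
    """Simplified RFC 2818-style matching: an exact, case-insensitive
    match, or a wildcard in the leftmost label only. For example,
    '*.example.com' matches 'www.example.com' but neither 'example.com'
    nor 'a.b.example.com'."""
    cert_name = cert_name.lower()
    hostname = hostname.lower()
    if cert_name == hostname:
        return True
    if cert_name.startswith("*."):
        suffix = cert_name[1:]  # ".example.com"
        # The wildcard may only stand in for a single label, so the
        # label counts must agree as well as the suffix.
        return (hostname.endswith(suffix)
                and hostname.count(".") == cert_name.count("."))
    return False

# The mismatch from the screenshots below: a '.com' certificate on a '.it' site
print(hostname_matches("www.example.com", "www.example.it"))  # False
```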
<br />
<br />
===Black Box Testing and examples===<br />
<br />
Examine the validity of the certificates used by the application. Browsers will issue a warning when encountering expired certificates, certificates issued by untrusted CAs, and certificates which do not match namewise with the site to which they should refer. By clicking on the padlock which appears in the browser window when visiting an https site, you can look at information related to the certificate – including the issuer, period of validity, encryption characteristics, etc.<br />
<br />
If the application requires a client certificate, you probably have installed one to access it. Certificate information is available in the browser by inspecting the relevant certificate(s) in the list of the installed certificates.<br />
<br />
These checks must be applied to all visible SSL-wrapped communication channels used by the application. Though this is the usual https service running on port 443, there may be additional services involved depending on the web application architecture and on deployment issues (an https administrative port left open, https services on non-standard ports, etc.). Therefore, apply these checks to all SSL-wrapped ports which have been discovered. For example, the nmap scanner features a scanning mode (enabled by the -sV command line switch) which identifies SSL-wrapped services. The Nessus vulnerability scanner has the capability of performing SSL checks on all SSL/TLS-wrapped services.<br />
<br />
'''Examples'''<br />
<br />
Rather than providing a fictitious example, we have inserted an anonymized real-life example to stress how frequently one stumbles on https sites whose certificates are inaccurate with respect to naming.<br />
<br />
The following screenshots refer to a regional site of a high-profile IT company.<br />
<br />
<u>Warning issued by Microsoft Internet Explorer.</u> We are visiting an ''.it'' site and the certificate was issued to a ''.com'' site! Internet Explorer warns that the name on the certificate does not match the name of the site.<br />
<br />
<br />
[[Image:SSL Certificate Validity Testing IE Warning.gif]]<br />
<br />
<br />
<u>Warning issued by Mozilla Firefox.</u> The message issued by Firefox is different: Firefox complains because it cannot ascertain the identity of the ''.com'' site the certificate refers to, since it does not know the CA which signed the certificate. In fact, Internet Explorer and Firefox do not come preloaded with the same list of CAs. Therefore, the behavior experienced with various browsers may differ.<br />
<br />
<br />
[[Image:SSL Certificate Validity Testing Firefox Warning.gif]]<br />
<br />
===White Box Testing and examples===<br />
<br />
Examine the validity of the certificates used by the application at both server and client levels. The usage of certificates is primarily at the web server level; however, there may be additional communication paths protected by SSL (for example, towards the DBMS). You should check the application architecture to identify all SSL protected channels.<br />
<br />
==References==<br />
'''Whitepapers'''<br><br />
* [1] RFC2246. The TLS Protocol Version 1.0 (updated by RFC3546) - http://www.ietf.org/rfc/rfc2246.txt<br />
* [2] RFC2817. Upgrading to TLS Within HTTP/1.1 - http://www.ietf.org/rfc/rfc2817.txt<br />
* [3] RFC3546. Transport Layer Security (TLS) Extensions - http://www.ietf.org/rfc/rfc3546.txt<br />
* [4] www.verisign.net features various material on the topic<br />
<br />
'''Tools'''<br />
<br />
* https://www.ssllabs.com/ssldb/<br />
<br />
* Vulnerability scanners may include checks regarding certificate validity, including name mismatch and time expiration. They usually report other information as well, such as the CA which issued the certificate. Remember that there is no unified notion of a “trusted CA”; what is trusted depends on the configuration of the software and on the human assumptions made beforehand. Browsers come with a preloaded list of trusted CAs. If your web application relies on a CA which is not in this list (for example, because you rely on a self-made CA), you should take into account the process of configuring user browsers to recognize the CA.<br />
<br />
* The Nessus scanner includes a plugin to check for expired certificates or certificates which are going to expire within 60 days (plugin “SSL certificate expiry”, plugin id 15901). This plugin will check certificates installed on the server.<br />
<br />
* Vulnerability scanners may include checks against weak ciphers. For example, the Nessus scanner (http://www.nessus.org) has this capability and flags the presence of SSL weak ciphers (see example provided above).<br />
<br />
* You may also rely on specialized tools such as SSL Digger (http://www.foundstone.com/resources/proddesc/ssldigger.htm), or – for the command line oriented – experiment with the openssl tool, which provides access to OpenSSL cryptographic functions directly from a Unix shell (it may already be available on *nix boxes; otherwise see www.openssl.org).<br />
<br />
* To identify SSL-based services, use a vulnerability scanner or a port scanner with service recognition capabilities. The nmap scanner features a “-sV” scanning option which tries to identify services, while the Nessus vulnerability scanner has the capability of identifying SSL-based services on arbitrary ports and of running vulnerability checks on them regardless of whether they are configured on standard or non-standard ports.<br />
<br />
* In case you need to talk to an SSL service but your favourite tool doesn’t support SSL, you may benefit from an SSL proxy such as stunnel; stunnel will take care of tunneling the underlying protocol (usually http, but not necessarily so) and communicate with the SSL service you need to reach.<br />
<br />
* Finally, a word of advice. Though it may be tempting to use a regular browser to check certificates, there are various reasons for not doing so. Browsers have been plagued by various bugs in this area, and the way the browser will perform the check might be influenced by configuration settings that may not be evident. Instead, rely on vulnerability scanners or on specialized tools to do the job.<br />
<br />
* [http://www.owasp.org/index.php/Transport_Layer_Protection_Cheat_Sheet OWASP Transport Layer Protection Cheat Sheet]<br />
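As a sketch of the stunnel approach mentioned in the tool list above, the following minimal client-mode configuration (the service name and target host are placeholders) makes an SSL-wrapped service reachable through a local plaintext port, so that non-SSL-aware tools can talk to it:<br />

```ini
; stunnel in client mode: plaintext connections to 127.0.0.1:8080
; are forwarded over SSL to the remote service on port 443
client = yes

[ssl-relay]
accept  = 127.0.0.1:8080
connect = target.example.com:443
```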
<br />
[[Category:Cryptographic Vulnerability]]<br />
[[Category:SSL]]</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=File:UseAndMisuseCase.png&diff=87508File:UseAndMisuseCase.png2010-08-11T13:19:11Z<p>Michael Boman: </p>
<hr />
<div>A re-drawing of File:UseAndMisuseCase.jpg in high resolution. See [[File:UseAndMisuseCase.vsd]] for image source file.</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=File:UseAndMisuseCase.vsd&diff=87507File:UseAndMisuseCase.vsd2010-08-11T13:15:09Z<p>Michael Boman: Source file (MS Visio format) for UseAndMisuseCase.png</p>
<hr />
<div>Source file (MS Visio format) for UseAndMisuseCase.png</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=File:UseAndMisuseCase.png&diff=87506File:UseAndMisuseCase.png2010-08-11T13:14:01Z<p>Michael Boman: uploaded a new version of "File:UseAndMisuseCase.png":&#32;Reverted to version as of 17:52, 13 September 2009 (duplicate upload)</p>
<hr />
<div>A re-drawing of File:UseAndMisuseCase.jpg in high resolution</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=File:UseAndMisuseCase.png&diff=87505File:UseAndMisuseCase.png2010-08-11T13:13:26Z<p>Michael Boman: uploaded a new version of "File:UseAndMisuseCase.png":&#32;Higher resolution version</p>
<hr />
<div>A re-drawing of File:UseAndMisuseCase.jpg in high resolution</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=OWASP_AppSec_Research_2010_-_Stockholm,_Sweden&diff=83825OWASP AppSec Research 2010 - Stockholm, Sweden2010-05-23T17:34:59Z<p>Michael Boman: Updated Bradley Anstis and Vadim Pogulievsky presentation details</p>
<hr />
<div>__NOTOC__ <br />
<br />
==== Welcome ====<br />
<br />
== Invitation ==<br />
<br />
Ladies and Gentlemen, <br />
<br />
From June 21 to 24, 2010, let's all meet in beautiful Stockholm, Sweden. The OWASP chapters in [http://www.owasp.org/index.php/Sweden Sweden], [http://www.owasp.org/index.php/Norway Norway], and [http://www.owasp.org/index.php/Denmark Denmark] hereby invite you to OWASP AppSec Research 2010. <br />
<br />
If you have any questions, please email the conference chair: john.wilander at owasp.org <br />
<br />
[[Image:Stockholm old town small.jpg]] <br />
<br />
=== Sponsors ===<br />
<br />
Diamond sponsor:<br> [[Image:AppSec Research 2010 Microsoft diamond sponsor.jpg]] <br />
<br />
Gold sponsors:<br> [[Image:Cybercom logo.png]] [[Image:Portwise logo.png]]<br> [[Image:Fortify logo AppSec Research 2010.png]] [[Image:Omegapoint logo.png]] <br />
<br />
Silver sponsors (3 taken, 5 open):<br> [[Image:Mnemonic logo.png]] [[Image:AppSec Research 2010 sponsor Nixu logo.jpg]] <br><br />
[http://www.hps.se/ http://www.owasp.org/images/6/6f/Hps_logo.png]<br />
<br />
Dinner Party sponsor:<br> [http://www.google.com/EngineeringEMEA http://www.owasp.org/images/thumb/8/86/AppSec_Research_2010_Google_20k_sponsor.jpg/150px-AppSec_Research_2010_Google_20k_sponsor.jpg]<br />
<br />
<br />
Lunch sponsors (1 taken, 1 open):<br> [[Image:IIS logo.png]] <br />
<br />
Coffee break sponsors (1 taken, 3 open):<br> [[Image:MyNethouse logo.png]] <br />
<br />
Media sponsors:<br> [[Image:AppSec Research 2010 Help Net Security sponsor.jpg]] <br />
<br />
For full sponsoring program see the Sponsoring tab above.<br />
<br />
=== "AppSec Research".equals("AppSec Europe") ===<br />
<br />
This conference was formerly known as OWASP AppSec Europe. We have added 'Research' to highlight that we invite both industry and academia. All the regular AppSec Europe visitors and topics are welcome along with contributions from universities and research institutes. <br />
<br />
This will be ''the'' European conference for anyone interested in or working with application security. Co-host is the [http://dsv.su.se/en/ Department of Computer and Systems Science] at Stockholm University, offering a great venue in the fabulous Aula Magna. <br />
<br />
=== Countdown Challenges -- Free Tickets to Win! ===<br />
<br />
There will be a challenge posted on the conference wiki page the 21st every month up until the event. The winner will get free entrance to the conference. What are you waiting for? Go to the Challenges tab and have fun! <br />
<br />
=== Organizing Committee ===<br />
<br />
• John Wilander, chapter leader Sweden (chair)<br> • Mattias Bergling (vice chair)<br> • Alan Davidson, Stockholm University/Royal Institute of Technology (co-host)<br> • Ulf Munkedal, chapter leader Denmark<br> • Kåre Presttun, chapter leader Norway<br> • Stefan Pettersson (sponsoring coordinator)<br> • Carl-Johan Bostorp (schedule and event coordinator)<br> • Martin Holst Swende (coffee/lunch/dinner)<br> • Michael Boman (conference guide/attendee pack)<br> • Predrag Mitrovic, OWASP Sweden Board<br> • Kate Hartmann, OWASP<br> • Sebastien Deleersnyder, OWASP Board <br />
<br />
'''Welcome to Stockholm this year!'''<br> Regards, John Wilander <br />
<br />
==== June 21-22 (Training) ====<br />
<br />
== Training Registration is open ==<br />
<br />
Application security training is given the first two days, '''June 21-22'''. The price is '''€990''' (~$1,350) for a two-day course. Take the chance to learn from the best! <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
=== Course 1: Threat Modeling and Architecture Review (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Pravir Chandra.jpg]] <br />
<br />
Pravir Chandra, Fortify Software <br />
<br />
'''Abstract''': Threat Modeling and Architecture Review are the cornerstones of a preventative approach to Application Security. By combining these topics into a single comprehensive course, attendees get a complete understanding of the threats an application faces and of how the application will handle them. This enables the risk to be accurately assessed and appropriate changes or mitigating controls to be recommended. From the course outline:<br />
<br />
1. Overview<br />
* Scope and problem definition<br />
* High‐level view of the overall process<br />
* Core techniques<br />
2. Threat assessment and modeling<br />
* Overall threat modeling process<br />
* Preparation and background information<br />
* Capturing business and security goals<br />
* Identify vulnerabilities and other risks<br />
* Establish weighting and prioritization of risks<br />
* Guard against risks with compensating controls<br />
* EXERCISE - Threat model a real‐life problem<br />
3. Architecture review techniques<br />
* Authentication<br />
* Authorization<br />
* EXERCISE - Apply the techniques from Authentication and Authorization<br />
* Input validation<br />
* Output encoding<br />
* EXERCISE - Apply the techniques from Input Validation and Output Encoding<br />
* Error handling<br />
* Audit logging<br />
* EXERCISE - Apply the techniques from Error Handling and Audit Logging<br />
* Encryption<br />
* Configuration management<br />
* EXERCISE - Apply the techniques from Encryption and Configuration Management<br />
4. Specifying security requirements<br />
* Writing positive security requirements<br />
* Deriving security requirements from functional requirements<br />
* Thinking broadly about requirements coverage<br />
* Balancing security requirements with functionality<br />
<br />
'''Trainer Bio''': Pravir Chandra is Director of Strategic Services at Fortify where he works with clients to build and optimize software security assurance programs. Pravir is widely recognized in the industry for his expertise in software security and code analysis, and also for his ability to apply technical knowledge strategically from a business perspective. His book, Network Security with OpenSSL, is a popular reference on protecting software applications through cryptography and secure communications. His varied special project experience includes creating and leading the Open Software Assurance Maturity Model (OpenSAMM) project. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]'''<br />
<br />
=== Course 2: Introduction to Malware Analysis (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Jason Geffner.jpg]] <br />
<br />
Jason Geffner, Next Generation Security Software (NGS), and Scott Lambert, Microsoft <br />
<br />
'''Abstract''': Security researchers are facing a growing problem in the complexity of malicious executables. While dynamic black-box automation tools exist to discover what malware will do on a given execution, it is often important for an analyst to know the full capabilities of a given malware sample. What port does it listen on? What password does it expect for backdoor access? What files will it write to? What will it do tomorrow that it didn't do today? This class will focus on teaching attendees the steps required to understand the functionality of given malware samples. This is a hands-on course. Attendees will work on real-world malware through a series of lab exercises designed to build their expertise in understanding the analysis process. <br />
<br />
Learning Objectives: <br />
<br />
*An understanding of how to use reverse engineering tools <br />
*An understanding of low-level code and data flow <br />
*PE File format <br />
*x86 Assembly language <br />
*API functions often used by malware <br />
*Anti-analysis tricks and how to defeat them <br />
*Exploits and Shellcode <br />
*A methodology for analyzing malware with and without the use of specialized tools<br />
<br />
'''Trainer Bio''': Jason Geffner joined Next Generation Security Software Ltd. in June of 2007 as a Principal Security Consultant. Jason focuses on performing security reviews of source code and designs, reverse engineering software protection methods and DRM protection methods, deobfuscating and analyzing malware, penetration testing web applications and network infrastructures, and developing automated security analysis tools. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
=== Course 3: Building Secure Ajax and Web 2.0 Applications (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Dave Wichers.jpg]] <br />
<br />
Dave Wichers, Aspect Security <br />
<br />
'''Abstract''': Students gain hands-on testing experience with freely available web application security test tools to find and diagnose flaws and learn how to identify them in their own projects. Because finding flaws is worthless without effective communication, the course also covers the process of creating and communicating software security flaws effectively. In addition, Aspect’s engineers are leaders in the AppSec Community and will offer the students an amazing perspective. <br />
<br />
From the course outline:<br> CSS Attacks, Browser Add On Attacks, RSS / Data Feed Attacks, Microsoft Active X, Adobe Flash/Flex/AIR, Silverlight, Java FX, Ajax Mashups, Same Origin Policy, JavaScript, Web 2.0 CSRF Attacks, XHR JSON Forgery, Best Practice: Check HTTP Headers, Best Practice: Unique ID For XHR, JSON and XML Based XSS, How to use OWASP AntiSamy, Blended Threats, Dealing with Ajax Toolkits, Best Practice: Fuzzing ... <br />
<br />
'''Trainer Bio''': Dave Wichers is a member of the OWASP Board and a coauthor, along with Jeff Williams, of all previous versions of the OWASP Top Ten. Dave is also the Chief Operating Officer of Aspect Security, a company that specializes in application security services. Mr. Wichers brings over twenty years of experience in the information security field. Prior to cofounding Aspect, he ran the Application Security Services Group at a large data center company, Exodus Communications. His current work involves helping customers, from small e-commerce sites to Fortune 500 corporations and the U.S. Government, secure their applications by providing application security design, architecture, and SDLC support services: including code review, application penetration testing, security policy development, security consulting services, and developer training. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
=== Course 4: Assessing and Exploiting Web Apps with Samurai-WTF (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Justin Searle.jpg]] <br />
<br />
Justin Searle, InGuardians <br />
<br />
'''Abstract''': This course will focus on using open source tools to perform web application assessments. The course will take attendees through the process of application assessment using the open source tools included in the Samurai Web Testing Framework Live CD (Samurai-WTF). Day one will take students through the steps and open source tools used to assess applications for vulnerabilities. Day two will focus on the exploitation of web app vulnerabilities, spending half the day on server side attacks and the other half on client side attacks. The latest tools and techniques will be used throughout the course, including several tools developed by the trainers themselves. From the course outline:<br />
<br />
Samurai-WTF Project and Distribution (about, using ...)<br><br />
Web Application Assessment Methodology (pentest types, four step methodology ...)<br><br />
Step 1: Reconnaissance<br />
* Overview of Web Application Recon<br />
* Domain and IP Registration Databases (Labs: whois)<br />
* Google Hacking (Labs: gooscan, gpscan)<br />
* Social Networks (Labs: Reconnoiter)<br />
* DNS Interrogation (Labs: host, dig, nslookup, fierce)<br />
Step 2: Mapping<br />
* Overview of Mapping<br />
* Port Scanning and Fingerprinting (Labs: nmap, zenmap, Yokoso!)<br />
* Web Service Scanning (Labs: Nikto)<br />
* Spidering (Labs: wget, curl, Paros, WebScarab, BurpSuite)<br />
* Discovering "Non-Discoverable" URLs (Labs: DirBuster)<br />
Step 3: Discovery<br />
* Using Built-in Tools (Labs: Page Info, Error Console, DOM Inspector, View Source)<br />
* Poking and Prodding (Labs: Default User Agent, Cookie Editor, Tamper Data)<br />
* Interception Proxies (Labs: Paros, WebScarab, BurpSuite)<br />
* Semi-Automated Discovery (Labs: RatProxy)<br />
* Automated Discovery (Labs: Grendel-Scan, w3af)<br />
* Information Discovery (Labs: CeWL)<br />
* Fuzzing (Labs: JBroFuzz, BurpIntruder)<br />
* Finding XSS (Labs: TamperData, XSS-Me, BurpIntruder)<br />
* Finding SQL Injection (Labs: SQL Inject-Me, SQL Injection, BurpIntruder)<br />
* Decompiling Flash Objects (Labs: Flare)<br />
Step 4: Exploitation<br />
* Username Harvesting (Labs: python)<br />
* Brute Forcing Passwords (Labs: python)<br />
* Command Injection (Labs: w3af)<br />
* Exploiting SQL Injection (Labs: SQLMap, SQLNinja, Laudanum)<br />
* Exploiting XSS (Labs: Durzosploit)<br />
* Browser Exploitation (Labs: BeEF, BrowserRider, Yokoso!)<br />
* Advanced exploitation through tool integration (MSF + sqlninga/sqlmap/BeEF)<br />
<br />
'''Trainer Bio''': Justin Searle, a Senior Security Analyst with InGuardians, specializes in web application, network, and embedded penetration testing. Justin has presented at top security conferences including DEFCON, ToorCon, ShmooCon, and SANS. Justin has an MBA in International Technology and is CISSP and SANS GIAC-certified in incident handling and hacker techniques (GCIH) and intrusion analysis (GCIA). Justin is one of the founders and lead developers of Samurai-WTF. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
=== Course 5: Securing Web Services (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Jason Li.jpg]] <br />
<br />
Jason Li, Aspect Security <br />
<br />
'''Abstract''': Aspect Security offers a two-day course titled Securing Web Services, designed to focus on the most important messages regarding the development of secure web services. The objective of this course is to ensure that developers understand the real risks associated with Service-Oriented Architectures, what standards are available to help, and how to use those standards. The course combines lecture and demonstration to provide detailed guidance on the implementation of specific security principles and functions. <br />
<br />
From the course outline:<br />
<br />
* Web Service and SOA Threat Model<br />
* Data Formats: XML, JSON<br />
* Protocols: SOAP, REST<br />
* Overview of the Standards (WS-Security, SAML, XACML)<br />
* Common Communications Vulnerabilities<br />
* Using SSL for Simple Web Services<br />
* XML Encryption<br />
* XML Signature<br />
* WS-Security<br />
* How to Manage Web Service Identities<br />
* Federated Identities<br />
* Common Authentication Vulnerabilities<br />
* WSDL Examples of Implementing WS-Security<br />
* Common Access Control Vulnerabilities<br />
* How to Validate Web Service Input (XML Schema, Business Logic Validation)<br />
* Common XML Attacks (Recursion, References, Overflow, Transforms)<br />
* State Management<br />
* Using Interpreters Safely (SQL Injection, LDAP Injection, Command Injection, XPath Injection)<br />
* Denial of Service and Availability<br />
<br />
'''Trainer Bio''': Jason Li is a Senior Application Security Engineer for Aspect Security, where he performs application security assessments and architecture reviews, as well as application security training, for a wide variety of financial and government customers. Jason is an active OWASP leader, contributing to several OWASP projects and serving on the OWASP Global Projects Committee. He holds a Post-Masters certificate in Computer Science with a concentration in Information Security from Johns Hopkins University and a Masters degree in Computer Science from Cornell University. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
==== June 23 ====<br />
<br />
{| border="0" align="center" style="width: 80%;"<br />
|-<br />
| align="center" colspan="4" style="background: none repeat scroll 0% 0% rgb(64, 88, 160); color: white;" | '''Conference Day 1 - June 23, 2010''' <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] = Research paper [[Image:OWASP AppSec Research 2010 Demo D.gif]] = Demo [[Image:OWASP AppSec Research 2010 Presentation P.gif]] = Presentation <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | <br> <br />
| style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | Track 1 <br />
| style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | Track 2 <br />
| style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | Track 3<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 08:00-08:50 <br />
| align="left" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(194, 194, 194);" | Registration and Coffee<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 08:50-09:00 <br />
| align="center" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(242, 242, 242);" | Welcome to OWASP AppSec Research 2010 Conference (John Wilander &amp; Dave Wichers)<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 09:00-10:00 <br />
| align="center" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(252, 252, 150);" | [[#Keynote: Cross-Domain Theft and the Future of Browser Security]] <br />
''Chris Evans, Information Security Engineer, and Ian Fette, Product Manager for Chrome Security, Google'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 10:10-10:45 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Research R.gif]] [[#BitFlip: Determine a Data's Signature Coverage from Within the Application]] <br />
''Henrich Christopher Poehls, University of Passau''<br> <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#CsFire: Browser-Enforced Mitigation Against CSRF]] <br />
''Lieven&nbsp;Desmet&nbsp;and&nbsp;Philippe&nbsp;De&nbsp;Ryck,&nbsp;Katholieke Universiteit Leuven''<br> <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Deconstructing ColdFusion]] <br />
''Chris Eng,&nbsp;Veracode'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 10:45-11:10 <br />
| align="left" colspan="3" style="width: 90%; background: none repeat scroll 0% 0% rgb(194, 194, 194);" | Break - Expo - CTF kick-off, '''Coffee break sponsoring position open''' ($2,000)<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 11:10-11:45 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Research R.gif]] [[#Towards Building Secure Web Mashups]] <br />
''M Decat, P De Ryck, L Desmet, F Piessens, W Joosen,&nbsp;Katholieke Universiteit Leuven'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Automated vs. Manual Security: You Can't Filter "The Stupid"]]<br> <br />
''David Byrne and Charles Henderson, Trustwave'' <br />
<br />
<br> <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#How to Render SSL Useless]] <br />
''Ivan Ristic, Qualys<br>'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 11:55-12:30 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Enterprise Security Patterns for RESTful Web Services]] <br />
<br />
''Francois Lascelles,&nbsp;Layer 7 Technologies''<br> <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Web Frameworks and How They Kill Traditional Security Scanning]] <br />
''Christian Hang and Lars Andren,&nbsp;Armorize Technologies'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#The State of SSL in the World]] <br />
''Michael Boman, Omegapoint<br>'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 12:30-13:45 <br />
| align="left" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(194, 194, 194);" | Lunch - Expo - CTF, Lunch sponsor: [[Image:OWASP AppSec Research 2010 IIS logo for program.png]]<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 13:45-14:20 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Securing Web Applications with ESAPI]] <br />
''Ken Sipe,&nbsp;Perficient'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Beyond the Same-Origin Policy]] <br />
''Jasvir Nagra and Mike Samuel, Google<br>'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#SmashFileFuzzer - a New File Fuzzer Tool]] <br />
''Komal Randive, Symantec'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 14:30-15:05 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Security Toolbox for .NET Development and Testing]] <br />
''Johan Lindfors and Dag König, Microsoft'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Cross-Site Location Jacking (XSLJ) (not really)]] <br />
''David Lindsay, Cigital<br>Eduardo Vela Nava,&nbsp;sla.ckers.org''<br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Owning Oracle: Sessions and Credentials]] <br />
''Wendel G. Henrique and Steve Ocepek, Trustwave'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 15:05-15:30 <br />
| align="left" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(194, 194, 194);" | Break - Expo - CTF, '''Coffee break sponsoring position open''' ($2,000)<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 15:30-16:05 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Value Objects a la Domain-Driven Security: A Design Mindset to Avoid SQL Injection and Cross-Site Scripting]] <br />
''Dan Bergh Johnsson, Omegapoint'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#New Insights into Clickjacking]] <br />
''Marco Balduzzi,&nbsp;Eurecom<br><br>'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Research R.gif]] [[#Session Fixation - the Forgotten Vulnerability?]] <br />
''Michael Schrank and Bastian Braun, University of Passau<br>Martin Johns, SAP Research'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 16:15-17:00 <br />
| align="center" colspan="3" style="width: 90%; background: none repeat scroll 0% 0% rgb(242, 242, 242);" | Panel Discussion: To Be Announced<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 19:00-23:00 <br />
| align="center" colspan="1" style="background: none repeat scroll 0% 0% rgb(43, 58, 109);" | [[Image:OWASP_AppSec_Research_2010_Stockholm_City_Hall_exterior_small.jpg|Stockholm City Hall, photo by Yanan Li]]<br />
| align="center" colspan="1" style="background: none repeat scroll 0% 0% rgb(43, 58, 109); color: white;" | '''Gala Dinner''' at [http://international.stockholm.se/Tourism-and-history/The-Famous-City-Hall/Pictures-of-the-City-Hall/ <span style="color:rgb(163, 178, 229);">Stockholm City Hall<span>]<br>Sponsored by<br>[[Image:OWASP AppSec Research 2010 Google logo for program.png]] <br />
| align="center" colspan="1" style="background: none repeat scroll 0% 0% rgb(43, 58, 109);" | [[Image:OWASP_AppSec_Research_2010_Stockholm_City_Hall_Golden_Hall_small.jpg|The Golden Hall, photo by Yanan Li]]<br />
|}<br />
<center><br />
[[Image:AppSec Research 2010 Microsoft diamond sponsor.jpg|250px|Microsoft - Diamond Sponsor]] [[Image:AppSec Research 2010 Google 20k sponsor.jpg|150px|Google - Dinner Party and Expo Sponsor]] [[Image:Portwise logo.png|130px|PortWise - Gold and Badge Sponsor]] [[Image:Cybercom logo.png|100px|Cybercom - Gold Sponsor]] [[Image:Fortify logo AppSec Research 2010.png|120px|Fortify - Gold Sponsor]] [[Image:Omegapoint logo.png|110px|Omegapoint - Gold Sponsor]] [[Image:Mnemonic logo.png|100px|Mnemonic - Silver Sponsor]] [[Image:AppSec Research 2010 sponsor Nixu logo.jpg|100px|NIXU - Silver Sponsor]] [[Image:Hps_logo.png|120px|High Performance Systems - Silver Sponsor]] [[Image:IIS logo.png|100px|Stiftelsen för Internetinfrastruktur - Lunch Sponsor]] [[Image:MyNethouse logo.png|100px|MyNethouse - Coffee Break Sponsor]] [[Image:AppSec Research 2010 Help Net Security sponsor.jpg|100px|Help Net Security - Media Sponsor]] <br />
</center><br />
<br />
== Keynote: Cross-Domain Theft and the Future of Browser Security ==<br />
<br />
[[Image:Appsec research 2010 invited talk 1.jpg]] <br />
<br />
'''Chris Evans'''<br> Troublemaker, Information Security Engineer, and Tech Lead at Google Inc.<br> Also the sole author of vsftpd. <br />
<br />
'''Ian Fette'''<br> Product Manager for Chrome Security and Google's Anti-Malware initiative <br />
<br />
'''Abstract'''<br> The web browser, and associated machinery, is on the front line of attacks. We will first look at design-level problems with the traditional browser in terms of monolithic architecture and fundamental problems with the same-origin policy. We will then look at the types of solution that are starting to appear in browsers such as Google Chrome and Internet Explorer. We will look at other important browser-based defenses such as Safe Browsing. We will detail what a future browser might look like that has a much more secure design, but is still usable on the wide variety of web sites that people use daily. <br />
<br />
== DAY 1, TRACK 1 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] BitFlip: Determine a Data's Signature Coverage from Within the Application ===<br />
<br />
''Henrich Christopher Poehls, University of Passau - ISL'' <br />
<br />
Despite the use of cryptographic primitives, applications often work on data that was not actually protected by them. By abstracting the message flow between the application and the underlying wire, we show that protection is applied to a different data model. Using problems from real life, such as XML wrapping attacks and digital signatures on XML, we show that establishing the right linkage between the security checked at lower levels and the application above is difficult in practice. We propose an application-controlled check, the BitFlip test. With this simple test, an application can verify that the protection it assumes for a data value was indeed provided by the digital signature applied to the message that contained the value. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] Towards Building Secure Web Mashups ===<br />
<br />
''Maarten Decat, Philippe De Ryck, Lieven Desmet, Frank Piessens, and Wouter Joosen, Katholieke Universiteit Leuven'' <br />
<br />
Web mashups combine components from multiple sources into a single, interactive application. This kind of setup typically requires both interaction between the components to achieve the necessary functionality and separation between the components to achieve secure execution. Unfortunately, the traditional web is not designed to easily fulfill both requirements, which can be seen in the restrictions imposed by traditional development techniques. This paper gives an overview of these traditional techniques and investigates new developments specifically aimed at combining components in a secure manner. In addition, topics for further improvement are identified to ensure wide adoption of secure mashups. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Enterprise Security Patterns for RESTful Web Services ===<br />
<br />
''Francois Lascelles, Layer 7 Technologies'' <br />
<br />
This presentation discusses security mechanisms for RESTful Web services in cloud and enterprise deployments. Understand the relationship between REST principles and security for RESTful Web service. Learn about current practices involving SSL, HMAC authentication schemes, OAuth, SAML, and perimeter security patterns involving specialized infrastructure. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Securing Web Applications with ESAPI ===<br />
<br />
''Ken Sipe, Perficient'' <br />
<br />
When it comes to cross-cutting software concerns, we expect to have or build a common framework or utility to solve the problem. This concept is represented well in the Java world by the log4j framework, which abstracts the concerns of logging: what is logged, where it goes, and how logging is managed. The one cross-cutting concern that for most applications remains piecemeal is security. Security concerns include certificate generation, SSL, protection from SQL injection, protection from XSS, and user authorization and authentication. Each of these separate concerns tends to have its own standards and libraries, leaving it as an exercise for the development team to cobble together a solution that covers multiple needs... until now: the Enterprise Security API library from OWASP. <br />
<br />
This session will look at a number of security concerns and how the ESAPI library provides a unified solution for security. This includes authorization, authentication of services, encoding, encrypting, and validation. This session will discuss a number of issues which can be solved through standardizing on the open source Enterprise Security API. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Security Toolbox for .NET Development and Testing ===<br />
<br />
''Johan Lindfors and Dag König, Microsoft'' <br />
<br />
Being a developer on the Microsoft platform leveraging .NET doesn’t only involve keeping up with the continuous development of the underlying framework and technologies. It also means staying on top of the latest security threats and, naturally, the available mitigations and best practices to protect the customers and users of the applications and solutions being developed. <br />
<br />
In this session we will demonstrate how you as a .NET developer can leverage existing tools and technologies to build safer applications. During the demonstrations you will get more familiar with the existing tools within Visual Studio but also be introduced and educated in more tools that will help you build a toolbox for secure development and security testing. <br />
<br />
But one must also remember that tools will never replace knowledge, and hence we will also show you how to stay regularly updated with the latest security information from Microsoft, including how to apply the SDL (Security Development Lifecycle) within your own projects. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Value Objects a la Domain-Driven Security: A Design Mindset to Avoid SQL Injection and Cross-Site Scripting ===<br />
<br />
''Dan Bergh Johnsson, Omegapoint'' <br />
<br />
SQL Injection and Cross-Site Scripting have been topping the OWASP Top Ten for years now. It must be a top priority for the community to evolve designs and mindsets that help programmers avoid these traps in their day-to-day work, where so much besides security calls for their attention. The ambition of this presentation is to take design and coding practices that are well established in other fields of software development and put them to use to avoid these traps. We also show some small refactorings that can be immediately applied to an existing codebase to make significant improvements to its security. Attendees of the session should be able to go back to work Monday morning and finish an improvement in this style before Monday lunch. <br />
<br />
We take inspiration from Domain-Driven Design (DDD), which is characterized by its focus on what the software intends to represent. In particular, we make heavy use of the Value Object design pattern, where strict typing helps us enforce that incoming data is truthful to the restrictions of the domain. We start out with injection flaws and use the canonical username SQL injection attack ("' OR 1=1 --") as an example. Realizing that the mentioned string was never intended as a valid username, we elaborate the model to reflect this. Furthermore, we make this change explicit in the code by introducing a new type, the class Username. This also gives a natural place to put validation code, which is otherwise often placed in utility classes where it is easily forgotten and seldom called. In fact, we can even design service methods to require a validated Username, thus using strong typing to enforce validation in the calling client tier. <br />
<br />
Making this re-design, with its associated code changes, is performed as a demo, and en route we discuss other design options and their relative merits and drawbacks. Again using DDD, we proceed to analyse XSS. In the same way we see that XSS is, in the general case, not an input validation problem. An extended analysis proposes that it can instead be phrased as an output-encoding problem. Using a similar technique, we model the target domain of web content as the new type HTMLString, and can thereby enforce conversion from ordinary strings to strings with the proper encoding. If you have multiple content channels, each channel gets its own type. <br />
<br />
All steps needed are shown in code, starting with a vulnerable application and through controlled refactoring steps ending up with a version without the vulnerability. In summary, we will take an established quality practice from another field of software development and use it to get security improvements. The main benefits are two: firstly, the method gently guides and reminds the programmers to include validation and encoding in an unobtrusive way. Secondly, the work can be performed in very small steps, where the first can be finished before lunch Monday after the conference. <br />
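The Username value object described in the abstract can be sketched roughly as follows. This is a minimal illustration of the pattern, not the speaker's actual code; the validation rule (letters and digits, 1-30 characters) is a placeholder assumption.

```java
// Minimal sketch of the Value Object pattern for input validation.
// The Username class refuses to be constructed from strings that are
// not valid usernames, so "' OR 1=1 --" never reaches the data layer.
final class Username {
    private final String value;

    Username(String value) {
        // Placeholder rule: letters and digits only, 1-30 characters.
        if (value == null || !value.matches("[A-Za-z0-9]{1,30}")) {
            throw new IllegalArgumentException("Invalid username: " + value);
        }
        this.value = value;
    }

    String value() {
        return value;
    }
}
```

Service methods that accept a Username rather than a raw String then use the type system itself to guarantee that validation happened at the boundary.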
<br />
== DAY 1, TRACK 2 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] CsFire: Browser-Enforced Mitigation Against CSRF ===<br />
<br />
''Lieven Desmet and Philippe De Ryck, Katholieke Universiteit Leuven'' <br />
<br />
Cross-Site Request Forgery (CSRF) is a web application attack vector that can be leveraged by an attacker to force an unwitting user's browser to perform actions on a third party website, possibly reusing all cached authentication credentials of that user. <br />
<br />
Currently, a whole range of techniques exist to mitigate CSRF, either by protecting the server application or by protecting the end-user. Unfortunately, the server-side protection mechanisms are not yet widely adopted, and the client-side solutions provide only limited protection or cannot deal with complex web 2.0 applications, which use techniques such as AJAX, mashups or single sign-on (SSO). <br />
<br />
In this talk, we will present three interesting results of our research: (1) an extensive, real-world traffic analysis to gain more insight into cross-domain web interactions, (2) requirements for client-side mitigation against CSRF and an analysis of existing browser extensions, and (3) CsFire, our newly developed Firefox extension to mitigate CSRF. <br />
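The server-side protection mechanisms mentioned above commonly rely on a synchronizer token. A minimal sketch of that idea (names and the in-memory storage are illustrative assumptions; this is not CsFire, which works on the client side):

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal synchronizer-token sketch: issue a random token per session,
// embed it in forms, and reject state-changing requests that echo back
// a missing or wrong token.
final class CsrfTokens {
    private static final SecureRandom RNG = new SecureRandom();
    private final Map<String, String> tokensBySession = new ConcurrentHashMap<>();

    // Called when rendering a form: returns the token to embed as a hidden field.
    String issue(String sessionId) {
        byte[] raw = new byte[32];
        RNG.nextBytes(raw);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        tokensBySession.put(sessionId, token);
        return token;
    }

    // Called on every state-changing request: a forged cross-site request
    // cannot read the token, so it fails this check.
    boolean isValid(String sessionId, String submittedToken) {
        String expected = tokensBySession.get(sessionId);
        return expected != null && expected.equals(submittedToken);
    }
}
```

The check works because the same-origin policy prevents an attacking page from reading the victim's form, so a forged request cannot include the correct token.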
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Automated vs. Manual Security: You Can't Filter "The Stupid" ===<br />
<br />
''David Byrne and Charles Henderson, Trustwave'' <br />
<br />
Everyone wants to stretch their security budget, and automated application security tools are an appealing choice for doing so. However, manual security testing isn’t going anywhere until the HAL application scanner comes online. This presentation will use often humorous, real-world examples to illustrate the relative strengths and weaknesses of automated solutions and manual techniques. <br />
<br />
Automated tools certainly have some strengths (namely low incremental cost, detecting simple vulnerabilities, and performing highly repetitive tasks). In addition to preventing some attacks, WAFs also have advantages for some compliance frameworks. However, automated solutions are far from perfect. To begin with, there are entire classes of very important vulnerabilities that are theoretically impossible for automated software to detect (at least until HAL comes online). Examples include complex information leakage, race conditions, logic flaws, design flaws, subjective vulnerabilities such as CSRF, and multistage process attacks. <br />
<br />
Beyond that, there are many vulnerabilities that are too complicated or obscure to practically detect with an automated tool. Automated tools are designed to cover common application designs and platforms. Applications using an unusual layout or components will not be thoroughly protected by automated tools. Realistically, only the most vanilla of web applications written on common, simple platforms will receive solid code coverage from an automated tool. <br />
<br />
On the other hand, manual testing is far more versatile. An experienced penetration tester can identify complicated vulnerabilities in the same way that an attacker does. Specific, real-world examples of vulnerabilities only recognizable by humans will be provided. The diversity of vulnerabilities shown will clearly demonstrate that all applications have the potential for significant vulnerabilities not detectable by automated tools. <br />
<br />
Manual source code reviews present even more benefits by identifying vulnerabilities that require access to source code. Examples include “hidden” or unused application components, SQL injection with no evidence in the response, exotic injection attacks (e.g. mainframe session attacks), vulnerabilities in back-end systems, and intentional backdoors. Many organizations assume that this type of vulnerability is not a large threat, but source code can be obtained by disgruntled developers, by internal attackers when the repository isn’t properly secured, by exploiting platform bugs or path directory traversal attacks, and by external attackers using a Trojan horse or similar technique. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Web Frameworks and How They Kill Traditional Security Scanning ===<br />
<br />
''Christian Hang and Lars Andren, Armorize Technologies'' <br />
<br />
Modern web application frameworks present a challenge to static analysis technologies because they influence application behavior in ways that are not obvious from the source code. This prevents efficient security scanning and can cause up to 80% of potential issues to go undetected due to incorrect framework handling. After explaining the underlying problems, we demonstrate, in a real-world walkthrough, the use of code analysis to scan actual application code. By extending static analysis with framework-specific components, even applications using complex frameworks like Struts and Smarty can be inspected automatically, and the code coverage of security analysis can be greatly improved. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Beyond the Same-Origin Policy ===<br />
<br />
''Jasvir Nagra and Mike Samuel, Google Inc'' <br />
<br />
The same-origin policy has governed interaction between client-side code and user data since Netscape 2.0, but new development techniques are rendering it obsolete. Traditionally, a website consisted of server-side code written by trusted, in-house developers, and a minimum of client-side code written by the same in-house devs. The same-origin policy worked because it didn't matter whether code ran server-side or client-side; the user was interacting with code produced by the same organization. But today, complex applications are being written almost entirely in client-side code, requiring developers to specialize and share code across organizational boundaries. <br />
<br />
This talk will explain how the same-origin policy is breaking down, give examples of attacks, discuss the properties that any alternative must have, introduce a number of alternative models being examined by the Secure EcmaScript committee and other standards bodies, demonstrate how they do or don't thwart these attacks, and discuss how secure interactive documents could open up new markets for web developers. We assume a basic familiarity with web application protocols (HTTP, HTML, JavaScript, CSS) and common classes of attacks (XSS, XSRF, phishing). <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Cross-Site Location Jacking (XSLJ) (not really) ===<br />
<br />
''David Lindsay, Cigital Inc, and Eduardo Vela Nava, sla.ckers.org'' <br />
<br />
Redirects are commonly used on many websites and are an integral part of many web frameworks. However, subtle and not so subtle issues can lead to security holes and privacy issues. In this presentation, we will discuss several high and low level issues related to redirects and demonstrate how the issues can be exploited. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] New Insights into Clickjacking ===<br />
<br />
''Marco Balduzzi, Eurecom'' <br />
<br />
Over the past year, clickjacking has received extensive media coverage. News portals and security forums have been overloaded with posts claiming clickjacking to be the upcoming security threat. In a clickjacking attack, a malicious page is constructed (or a benign page is hijacked) to trick the user into performing unintended clicks that are advantageous for the attacker, such as propagating a web worm, stealing confidential information, or abusing the user's session. In this talk, we formally define the problem and introduce our novel solution for automated detection of clickjacking attacks. We present the details of the system architecture and its implementation, and we evaluate the results we obtained from the analysis of over a million unique Internet pages. We conclude by discussing the clickjacking phenomenon and its future implications. <br />
<br />
== DAY 1, TRACK 3 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Deconstructing ColdFusion ===<br />
<br />
''Chris Eng, Veracode'' <br />
<br />
This presentation is a technical survey of ColdFusion security, which will be of interest mostly to code auditors and penetration testers. We’ll cover the basics of ColdFusion markup, control flow, functions, and components and demonstrate how to identify common web application vulnerabilities at the source code level. We’ll also delve into ColdFusion J2EE internals, describing some of the unexpected properties we’ve observed while decompiling ColdFusion applications for static analysis. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] How to Render SSL Useless ===<br />
<br />
''Ivan Ristic, Feisty Duck'' <br />
<br />
SSL is the technology that secures the Internet, but it is effective only when deployed properly. While the SSL protocol itself is very robust and easy to use, the same cannot be said for the usability of the complete ecosystem, which includes server configuration, certificates, and application implementation details. In fact, SSL deployment is generally plagued with traps at every step of the way. As a result, too many web sites use insecure deployment practices that render SSL completely useless. In this talk I will present a list of the top ten (or thereabouts) deployment mistakes, based on my work on the SSL Labs assessment platform (https://www.ssllabs.com). <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] The State of SSL in the World ===<br />
<br />
''Michael Boman, Omegapoint'' <br />
<br />
What is the status of SSL deployments among the Fortune 500 companies and the top 10,000 websites (according to Alexa)? While developing a tool needed to perform test case OWASP-CM-001 (Testing for SSL-TLS), it was noticed that some sites had very good SSL configurations, sometimes unexpectedly, while other sites had very poor security configurations, even when one would expect the site to meet a high security standard. Does the organization behind a site have any bearing on the quality of its HTTPS support and configuration? The talk will highlight the findings as well as the tools and process used to obtain the underlying data, while also trying to answer these questions: How many of the Fortune 500 and top 10,000 websites offer an HTTPS-enabled browsing experience to their visitors? How are the HTTPS servers configured with regard to the SSL protocols offered, key exchange, and key lengths (bit size)? Is there any correlation between company size, industry, or popularity and the HTTPS-enabled browsing experience and configuration? <br />
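To illustrate the kind of aggregation such a survey implies, here is a toy grading and tally; the grading thresholds and the sample records are invented, not the speaker's actual methodology or data:<br />

```python
from collections import Counter

# Toy grading and tally for an HTTPS survey (invented rules and records).
def grade_https(protocol, key_bits):
    if protocol in ("SSLv2", "SSLv3") or key_bits < 1024:
        return "poor"
    if key_bits < 2048:
        return "adequate"
    return "good"

# (industry, best protocol offered, server key length in bits)
survey = [
    ("finance", "TLSv1", 2048),
    ("retail", "SSLv3", 1024),
    ("media", "TLSv1", 1024),
]

grades_by_industry = Counter(
    (industry, grade_https(proto, bits)) for industry, proto, bits in survey
)
```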
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] SmashFileFuzzer - a New File Fuzzer Tool ===<br />
<br />
''Komal Randive, Symantec'' <br />
<br />
SmashFileFuzzer is a tool designed and developed to address file fuzzing with ease. SmashFileFuzzer understands file formats, and the user can specify which fields in the file to fuzz. It acts on a sample file of the required format and generates multiple fuzzed copies from that sample. SmashFileFuzzer also supports adding custom file formats so that they can be fuzzed too, especially .dat formats. Compared with existing file fuzzers and frameworks, this fuzzer offers a simple language for adding new formats, many more fuzzing modes, and attack-oriented fuzzing. The highlights of the fuzzer are: <br />
<br />
*Support for understanding file formats and fuzzing specific fields with specified or random data <br />
*Understands the correlation between different fields and manipulates them in accordance with the fuzzed content. <br />
*Can generate valid fuzzed files even from a partial understanding of the format: only the portions of the file format understood by the user are needed to generate valid fuzzed files. <br />
*Understands custom formats for file types and also for configuration files (e.g. key-value pair formats or .dat formats) <br />
*The tool is designed to be easily extended to new file formats <br />
*Fuzz strings are read from a dictionary file, and users can add application-specific input strings to this dictionary for testing. <br />
*It is a Unix shell-based tool that can be easily scripted.<br />
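A minimal sketch of the dictionary-driven, field-aware fuzzing described above; the toy '|'-separated format and the payload dictionary are invented for illustration:<br />

```python
# Dictionary-driven field fuzzing over a toy '|'-separated format.
FUZZ_DICTIONARY = [b"A" * 1024, b"%n%n%n", b"\x00\xff\xfe", b"../../etc/passwd"]

def parse_fields(sample):
    """Split the toy format into its fields."""
    return sample.split(b"|")

def fuzz_files(sample, field_index):
    """Yield one mutated copy of the sample per dictionary entry,
    fuzzing only the selected field and keeping the rest intact."""
    fields = parse_fields(sample)
    for payload in FUZZ_DICTIONARY:
        mutated = list(fields)
        mutated[field_index] = payload
        yield b"|".join(mutated)
```

Each generated copy mutates exactly one field, so a crash can be attributed to a specific field and payload.<br />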
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Owning Oracle: Sessions and Credentials ===<br />
<br />
''Wendel G. Henrique and Steve Ocepek, Trustwave'' <br />
<br />
In a world of free, ever-present encryption libraries, many penetration testers still find a lot of great stuff on the wire. Database traffic is a common favorite, and with good reason: when the data includes PAN, Track, and CVV, it makes you stop and wonder why this stuff isn’t encrypted across the board. However, despite this weakness, we still need someone to issue queries before we see the data. Or maybe not… after all, it’s just plaintext. <br />
<br />
Wendel G. Henrique and Steve Ocepek of Trustwave’s SpiderLabs division offer a closer look at the world’s most popular relational database: Oracle. Through a combination of downgrade attacks and session take-over exploits, this talk introduces a unique approach to database account hijacking. Using a new tool, thicknet, the team will demonstrate how deadly injection and downgrade attacks can be to database security. <br />
<br />
The Oracle TNS/Net8 protocol was studied extensively in preparation for this talk. Very little public knowledge of this protocol exists today, and much of the information gained is, as far as we know, new to Oracle outsiders. <br />
<br />
During the presentation we will also offer attendees: <br />
<br />
*Knowledge about man-in-the-middle and downgrade attacks, especially the area of data injection. <br />
*A better understanding of the network protocol used by Oracle. <br />
*The ability to audit databases against this type of attack vector. <br />
*Ideas for how to prevent this type of attack, and an understanding of the value of encryption and digital signature technologies. <br />
*Understanding of methodologies used to reverse-engineer undocumented protocols.<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] Session Fixation - the Forgotten Vulnerability? ===<br />
<br />
''Michael Schrank and Bastian Braun, University of Passau, and Martin Johns, SAP Research'' <br />
<br />
The term 'Session Fixation vulnerability' subsumes issues in web applications that, under certain circumstances, enable the adversary to perform a session hijacking attack by controlling the victim's session identifier value. We explore this vulnerability pattern. First, we give an analysis of the root causes and document existing attack vectors. Then we take steps to assess the current attack surface of Session Fixation. Finally, we present a transparent server-side method for mitigating such vulnerabilities. <br />
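The classic server-side mitigation is to issue a fresh session identifier at authentication time, so an identifier fixated by the attacker becomes worthless; the sketch below illustrates that general idea (not necessarily the paper's specific transparent method):<br />

```python
import secrets

# Anti-fixation: regenerate the session identifier on login.
class SessionStore:
    def __init__(self):
        self._sessions = {}

    def create(self):
        sid = secrets.token_hex(16)
        self._sessions[sid] = {"authenticated": False}
        return sid

    def login(self, old_sid):
        """Invalidate the pre-login identifier and issue a fresh one."""
        data = self._sessions.pop(old_sid)   # the fixated id is now worthless
        data["authenticated"] = True
        new_sid = secrets.token_hex(16)
        self._sessions[new_sid] = data
        return new_sid

    def is_valid(self, sid):
        return sid in self._sessions
```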
<br />
==== June 24 ====<br />
<br />
{| style="width:80%" border="0" align="center"<br />
|-<br />
| colspan="4" align="center" style="background:#4058A0; color:white" | '''Conference Day 2 - June 24, 2010''' <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] = Research paper [[Image:OWASP AppSec Research 2010 Demo D.gif]] = Demo [[Image:OWASP AppSec Research 2010 Presentation P.gif]] = Presentation <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | <br />
| style="width:30%; background:#BC857A" | Track 1 <br />
| style="width:30%; background:#BCA57A" | Track 2 <br />
| style="width:30%; background:#99FF99" | Track 3<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 09:00-10:00 <br />
| colspan="3" style="width:80%; background:rgb(252, 252, 150)" align="center" | [[#Keynote: The Security Development Lifecycle - The Creation and Evolution of a Security Development Process]]<br>''Steve Lipner, Senior Director of Security Engineering Strategy, Microsoft Corporation''<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 10:10-10:45 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Building Security In Maturity Model: A Review of Successful Software Security Programs in Europe]] <br />
<br />
''Gabriele Giuseppini, Cigital'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Promon TestSuite: Client-Based Penetration Testing Tool]] <br />
<br />
''Folker den Braber and Tom Lysemose Hansen, Promon'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#A Taint Mode for Python via a Library]] <br />
<br />
''Juan José Conti, Universidad Tecnológica Nacional<br>Alejandro Russo, Chalmers Univ. of Technology'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 10:45-11:10 <br />
| colspan="3" style="width:90%; background:#C2C2C2" align="left" | Break - Expo - CTF, Coffee sponsor: [[Image:OWASP AppSec Research 2010 MyNethouse logo for program.png]]<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 11:10-11:45 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Microsoft's Security Development Lifecycle for Agile Development]] <br />
<br />
''Nick Coblentz, OWASP Kansas City Chapter and AT&T Consulting'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Detecting and Protecting Your Users from 100% of all Malware - How?]] <br />
<br />
''Bradley Anstis and Vadim Pogulievsky, M86 Security'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#OPA: Language Support for a Sane, Safe and Secure Web]] <br />
<br />
''David Rajchenbach-Teller and François-Régis Sinot, MLstate'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 11:55-12:30 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Secure Application Development for the Enterprise: Practical, Real-World Tips]] <br />
<br />
''Michael Craigue, Dell'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Responsibility for the Harm and Risk of Software Security Flaws]] <br />
<br />
''Cassio Goldschmidt, Symantec'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#Secure the Clones: Static Enforcement of Policies for Secure Object Copying]] <br />
<br />
''Thomas Jensen and David Pichardie, INRIA Rennes - Bretagne Atlantique'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 12:30-13:45 <br />
| colspan="3" style="width:80%; background:#C2C2C2" align="left" | Lunch - Expo - CTF, '''Lunch break sponsoring position open''' ($4,000)<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 13:45-14:20 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Product Security Management in Agile Product Management]] <br />
<br />
''Antti Vähä-Sipilä, Nokia'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Hacking by Numbers]] <br />
<br />
''Tom Brennan, WhiteHat Security and OWASP Foundation<br>'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#Safe Wrappers and Sane Policies for Self Protecting JavaScript]] <br />
<br />
''Jonas Magazinius, Phu H. Phung, and David Sands, Chalmers Univ. of Technology'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 14:30-15:05 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#OWASP_Top_10_2010]] <br />
<br />
''Dave Wichers, Aspect Security and OWASP Foundation<br>'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Application Security Scoreboard in the Sky]] <br />
<br />
''Chris Eng, Veracode'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#On the Privacy of File Sharing Services]] <br />
<br />
''N Nikiforakis, F Gadaleta, Y Younan, and W Joosen, Katholieke Universiteit Leuven'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 15:05-15:30 <br />
| colspan="3" style="width:80%; background:#C2C2C2" align="left" | Break - Expo - CTF, '''Coffee break sponsoring position open''' ($2,000)<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 15:30-16:00 <br />
| colspan="3" style="width:90%; background:#F2F2F2" align="center" | CTF Prize Ceremony, Announcement of OWASP AppSec EU 2011, Closing Notes<br />
|}<br />
<center><br />
[[Image:AppSec Research 2010 Microsoft diamond sponsor.jpg|250px|Microsoft - Diamond Sponsor]] [[Image:AppSec Research 2010 Google 20k sponsor.jpg|150px|Google - Dinner Party and Expo Sponsor]] [[Image:Portwise logo.png|130px|PortWise - Gold and Badge Sponsor]] [[Image:Cybercom logo.png|100px|Cybercom - Gold Sponsor]] [[Image:Fortify logo AppSec Research 2010.png|120px|Fortify - Gold Sponsor]] [[Image:Omegapoint logo.png|110px|Omegapoint - Gold Sponsor]] [[Image:Mnemonic logo.png|100px|Mnemonic - Silver Sponsor]] [[Image:AppSec Research 2010 sponsor Nixu logo.jpg|100px|NIXU - Silver Sponsor]] [[Image:Hps_logo.png|120px|High Performance Systems - Silver Sponsor]] [[Image:IIS logo.png|100px|Stiftelsen för Internetinfrastruktur - Lunch Sponsor]] [[Image:MyNethouse logo.png|100px|MyNethouse - Coffee Break Sponsor]] [[Image:AppSec Research 2010 Help Net Security sponsor.jpg|100px|Help Net Security - Media Sponsor]] <br />
</center> <br />
== Keynote: The Security Development Lifecycle - The Creation and Evolution of a Security Development Process ==<br />
<br />
[[Image:Appsec research 2010 invited talk 2.jpg]] <br />
<br />
'''Steve Lipner'''<br> Senior Director of Security Engineering Strategy, Trustworthy Computing Security, Microsoft Corporation.<br> Co-author of "The Security Development Lifecycle", Microsoft Press (book cover above). <br />
<br />
'''Abstract'''<br> This keynote will review the evolution of the Security Development Lifecycle (SDL) from its origins in the Microsoft “security pushes” of 2002-3 through its current status and application in 2010. It will emphasize the aspects of change and change management as the SDL and its user community have matured and grown and will conclude with a summary of some recent changes and additions to the SDL. Specific topics to be addressed include: <br />
<br />
*Motivations for introducing both the SDL and its predecessor processes. <br />
*Considerations in selling the process to management and sustaining a mandate over a prolonged period. <br />
*Scaling the SDL to an organization with tens of thousands of engineers. <br />
*Managing change. <br />
*The role of automation in the SDL. <br />
*Adaptation of the SDL to agile development processes. <br />
*Thoughts for organizations that are considering implementing the SDL.<br />
<br />
The presentation will cover technical aspects of the SDL including a brief review of requirements and tools, and results. <br />
<br />
'''Speaker Bio'''<br> Steven B. Lipner is senior director of Security Engineering Strategy at Microsoft Corp where he is responsible for programs that provide improved product security for Microsoft customers. Lipner leads Microsoft’s Security Development Lifecycle (SDL) team and is responsible for the definition of Microsoft’s SDL and for programs to make the SDL available to organizations beyond Microsoft. Lipner is also responsible for Microsoft’s corporate strategies related to government security evaluation of Microsoft products. <br />
<br />
Lipner is coauthor with Michael Howard of The Security Development Lifecycle (Microsoft Press, 2006) and is named as inventor on twelve U.S. patents and two pending applications in the field of computer and network security. He has authored numerous professional papers and conference presentations, and served on several National Research Council committees. He served two terms – a total of more than ten years – on the United States Information Security and Privacy Advisory Board and its predecessor. Lipner holds S.B. and S.M. degrees in Civil Engineering from the Massachusetts Institute of Technology and attended the Harvard Business School’s Program for Management Development. <br />
<br />
== DAY 2, TRACK 1 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Building Security In Maturity Model: A Review of Successful Software Security Programs in Europe ===<br />
<br />
''Gabriele Giuseppini, Cigital'' <br />
<br />
Most large organizations have practiced software security through many activities involving people, process and automation, but we are just now reaching the point where enough experience has been accumulated to compare notes and talk about what works at a macro level. In 2008, Gary McGraw, Brian Chess, and Sammy Migues interviewed the executives running nine software security initiatives at companies such as Adobe, The Depository Trust and Clearing Corporation (DTCC), EMC, Google, Microsoft, QUALCOMM, and Wells Fargo. The resulting data, drawn from real programs at different levels of maturity, was used to guide the construction of the Building Security In Maturity Model (BSIMM). <br />
<br />
BSIMM is a framework, a tool, and a measuring stick that can be used by organizations to gauge their software security initiatives and to highlight areas of discussion and intervention. Using BSIMM it is possible to compare initiatives with each other and unveil activities that might have been underdeveloped or that might have been adopted without sufficient foundation to achieve tangible results. <br />
<br />
In the past year BSIMM has expanded to collect data from dozens of additional companies, and enough data has been assembled to compare security initiatives in the United States with initiatives in the European Union. The BSIMM framework and the real-world information gathered through the interviews make it possible to identify the set of activities that seem to be common to successful programs, as well as highlight the differences and common points observed between the two regions. <br />
<br />
I will describe this observation-based maturity model, drawing examples from several real software security programs in the United States and in Europe. I will discuss the different ways that BSIMM can be used to organize, manage, and measure software security initiatives, and I will point out the interesting results that have been obtained from the analysis of the raw data and from the comparison of the data between the US and European regions. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Microsoft's Security Development Lifecycle for Agile Development ===<br />
<br />
''Nick Coblentz, OWASP Kansas City Chapter and AT&amp;T Consulting'' <br />
<br />
Many development and security teams believe Agile development cannot be accomplished securely. During this presentation, Nick Coblentz will discuss the recent guidance from Microsoft that enables development teams to include secure development activities within their Agile processes without compromising features or functionality. Nick will also demonstrate ASP.NET libraries, strategies, and automated tools to reduce the effort required by developers.<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Secure Application Development for the Enterprise: Practical, Real-World Tips ===<br />
<br />
''Michael Craigue, Dell'' <br />
<br />
Dell has a reputation for IT simplification and a lean cost structure. We take the same approach with our application security program. This talk covers money-saving tips in the creation and evolution of Dell's Security Development Lifecycle, including risk assessments, security reviews, threat modeling, source code scans, awareness/training, application security user groups, security consulting staff development, and assurance scans/penetration testing. We’ll discuss how we have adapted our program to our IT, Product Group, and Services organizations. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Product Security Management in Agile Product Management ===<br />
<br />
''Antti Vähä-Sipilä, Nokia'' <br />
<br />
This paper provides a model for product security risk management and security requirements elicitation in an agile product management framework, using the concepts of Scrum and an epics-based agile requirements model. The paper documents some real-life experiences of rolling out such a risk management model. The model addresses security threat analysis and risk acceptance, is agnostic to the actual security engineering practices employed in the Scrum teams, and is scalable over large and small enterprises. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] OWASP Top 10 2010 ===<br />
<br />
''Dave Wichers, Aspect Security and OWASP Foundation'' <br />
<br />
This presentation will cover the OWASP Top 10 - 2010 (final version). The OWASP Top 10 was originally released in 2003 to raise awareness of the importance of application security. As the field evolves, the Top 10 needs to be periodically updated to keep up with the times. The Top 10 was updated in 2004, and the last update was in 2007, which introduced Cross-Site Request Forgery (CSRF) as the big new emerging web application security risk. <br />
<br />
This update will be based on more sources of web application vulnerability information than the previous versions were. It will also present this information in a more concise, compelling, and consumable manner, and include strong references to the many new openly available resources that can help address each issue, particularly OWASP's new Enterprise Security API (ESAPI) and Application Security Verification Standard (ASVS) projects. <br />
<br />
A significant change for this update will be that the OWASP Top 10 will be focused on the Top 10 Risks to Web Applications, not just the most common vulnerabilities. <br />
<br />
== DAY 2, TRACK 2 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Promon TestSuite: Client-Based Penetration Testing Tool ===<br />
<br />
''Folker den Braber and Tom Lysemose Hansen, Promon'' <br />
<br />
Vulnerability analysis has a wide scope, covering both social and technical aspects. An important part of technical vulnerability analysis consists of penetration testing. In most cases, penetration testing is focused on either server-side or network-layer vulnerabilities. In this demonstration we will take a closer look at vulnerability analysis on the client side, while demonstrating the use of the Promon TestSuite testing tool. <br />
<br />
Promon TestSuite is designed to use the same vectors as common malware, but in a clear and visual way, with varying payloads to illustrate the security issues involved in giving injected code free access to a program's memory. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Detecting and Protecting Your Users from 100% of all Malware - How? ===<br />
<br />
''Bradley Anstis and Vadim Pogulievsky, M86 Security'' <br />
<br />
100% malware detection is the goal, but is it really achievable? This session looks at the traditional malware detection technologies and how well they perform today, then compares them with some newer approaches, with demonstrations of real-time code analysis and behavioral analysis technologies to see which fare better or worse.<br />
<br />
100% detection rates are the goal, but how close can we get with a single technology, or what combination of technologies can we use to get as close as possible?<br />
<br />
This session is all about challenging the existing accepted practices for malware protection. We want to open the minds of the attendees and encourage them to question existing solutions and the incumbent market-leading vendors. We also want you to re-evaluate your environment to see if improvements can be made.<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Responsibility for the Harm and Risk of Software Security Flaws ===<br />
<br />
''Cassio Goldschmidt, Symantec Corp'' <br />
<br />
Who is responsible for the harm and risk of security flaws? The advent of worldwide networks such as the Internet has made software security (or the lack of it) a problem of international proportions. There are no mathematical or statistical risk models available today to assess networked systems with interdependent failures. Without such tools, decision-makers are bound to overinvest in activities that don’t generate the desired return on investment, or to underinvest in mitigations, risking dreadful consequences. Experience suggests that no party is solely responsible for the harm and risk of software security flaws, but a model of partial responsibility can only emerge once the duties and motivations of all parties are examined and understood. <br />
<br />
State-of-the-art practices in software development won’t guarantee products free of flaws. The infinite precision of mathematics cannot be properly implemented in modern computer hardware without truncating numbers and calculations. Many of the most common operating systems, network protocols and programming languages used today were first conceived without the basic principles of security in mind. Compromises are made to maintain compatibility of newer versions of these systems with previous versions. Evolving software inherits all flaws and risks that are present in this layered and interdependent solution. Lastly, there is no formal way to prove software correctness using mathematics, nor any definitive authority to assert the absence of vulnerabilities. The slightest coding error can lead to a fatal flaw. Without a doubt, vulnerabilities in software applications will continue to be part of our daily lives for years to come. <br />
<br />
Decisions made by adopters, such as whether to install a patch, upgrade a system, or employ insecure configurations, create externalities that have implications for the security of other systems. Proper cyber hygiene and education are vital to stop the proliferation of computer worms, viruses and botnets. Furthermore, end users, corporations and large governments directly influence software vendors’ decisions to invest in security by voting with their money every time software is purchased or pirated. <br />
<br />
Security researchers largely influence the overall state of software security, depending on the approach taken to disclose findings. While many believe full-disclosure practices helped the software industry advance security in the past, several of the most devastating computer worms were created by borrowing from information detailed in researchers’ full disclosures. Both incentives and penalties have been created for security researchers: a number of stories of vendors suing security researchers are available in the press, and some countries have enacted laws banning the use and development of “hacking tools”. At the same time, companies such as iDefense promoted the creation of a market for security vulnerabilities, providing rewards larger than a year’s salary for a software practitioner in countries such as China and India. <br />
<br />
Effective policy and standards can serve as leverage to fix the problem, either by providing incentives or penalties. Attempts such as PCI created a perverse incentive that diverted decision-makers’ goals toward compliance instead of security. Stiff mandates and ineffective laws have been observed internationally. Given the fast pace of the industry, laws to combat software vulnerabilities may become obsolete before they are enacted. Alternatively, governments can use their own buying power to encourage adoption of good security standards. One example of this is the Federal Desktop Core Configuration (FDCC). <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Hacking by Numbers ===<br />
<br />
''Tom Brennan, WhiteHat Security and OWASP Foundation'' <br />
<br />
There is a difference between what is possible and what is probable, something we often lose sight of in the world of information security. For example, a vulnerability represents a possible way for an attacker to exploit an asset, but remember not all vulnerabilities are created equal. Obviously we must also keep in mind that just because a vulnerability exists does not necessarily mean it will be exploited, or indicate by whom or to what extent. Clearly, many vulnerabilities are very serious leaving the door open to compromise of sensitive information, financial loss, brand damage, violation of industry regulations, and downtime. Some vulnerabilities are more difficult to exploit than others and therefore attract different attackers. Autonomous worms &amp; viruses may attack one type of issue, while a sentient targeted attacker may prefer another path. Better understanding of these factors enables us to make informed business decisions about website risk management and what is probable. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Application Security Scoreboard in the Sky ===<br />
<br />
''Chris Eng, Veracode'' <br />
<br />
This presentation will discuss vulnerability metrics gathered from real-world applications. The statistics are derived from continuously updated data collected by Veracode’s cloud-based code analysis service. The anonymized data represents a total of nearly 1,600 applications submitted for analysis by large and small companies, commercial software providers, open source projects, and software outsourcers between February 2007 and January 2010. This is the first vulnerability analytics study of this magnitude that incorporates data from both static analysis and dynamic analysis. <br />
<br />
We will compare the relative security of applications by industry and origin, and we will examine detailed vulnerability distribution data in the context of taxonomies such as the OWASP Top Ten and the CWE/SANS Top 25 Programming Errors. <br />
<br />
== DAY 2, TRACK 3 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] A Taint Mode for Python via a Library ===<br />
<br />
''Juan José Conti, Universidad Tecnológica Nacional, and Alejandro Russo, Chalmers University of Technology'' <br />
<br />
Vulnerabilities in web applications present threats to online systems. SQL injection and cross-site scripting attacks are among the most common threats found nowadays. These attacks are often the result of improper or missing input validation. To help discover such vulnerabilities, taint analyses have been developed for popular web scripting languages like Perl, Ruby, PHP, and Python. Such analyses are often implemented as an execution monitor, where the interpreter needs to be adapted to provide a taint mode. However, modifying interpreters might be a major task in its own right. In fact, it is very probable that new releases of an interpreter will require the adaptation to be redone. Unlike previous approaches, we show how to provide a taint analysis for Python via a library written entirely in Python, thus avoiding modifications to the interpreter. The concepts of classes, decorators, and dynamic dispatch make our solution lightweight, easy to use, and particularly neat. With minimal or no effort, the library can be adapted to work with different Python interpreters. <br />
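As a toy illustration of the library approach (not the authors' actual implementation), taint can be tracked with a str subclass whose mark propagates through operations:<br />

```python
# Toy taint tracking via a library: a str subclass carries the taint mark,
# and helper operations propagate it.
class Tainted(str):
    pass

def taint(value):
    """Mark externally supplied data as untrusted."""
    return Tainted(value)

def is_tainted(value):
    return isinstance(value, Tainted)

def concat(a, b):
    """Concatenate two strings, propagating taint from either operand."""
    result = str(a) + str(b)
    return Tainted(result) if is_tainted(a) or is_tainted(b) else result

def sanitize(value):
    """An (illustrative) sanitizer: returns a plain str, clearing the mark."""
    return str(value)
```

A sink such as a query executor would then refuse any argument for which is_tainted() holds.<br />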
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] OPA: Language Support for a Sane, Safe and Secure Web ===<br />
<br />
''David Rajchenbach-Teller and François-Régis Sinot, MLstate'' <br />
<br />
Web applications and services have critical needs in terms of safety, security and privacy: they need to remain available constantly and can at any time be the object of attacks by malicious and anonymous distant users attempting to take control, alter data or steal it, or cause unwanted behaviors. Unfortunately, recent history shows numerous cases of popular web applications falling victim to such attacks, despite careful attempts to secure them. <br />
<br />
In this paper, we introduce OPA (One Pot Application), a new platform designed to make web development sane, safe and secure. OPA provides an integrated methodology where the complete application is written in one simple language with consistent semantics, enforces safe use of the infrastructure through compile-time static checking and a novel programming paradigm suited to the web, and encourages correct-by-construction development. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] Secure the Clones: Static Enforcement of Policies for Secure Object Copying ===<br />
<br />
''Thomas Jensen and David Pichardie, INRIA Rennes - Bretagne Atlantique'' <br />
<br />
Exchanging mutable data objects with untrusted code is a delicate matter because of the risk of creating a data space that is accessible to both trusted code and an attacker. Consequently, secure programming guidelines for Java stress the importance of using defensive copying before accepting or handing out references to an internal mutable object. However, the implementation of a copy method (like clone()) is entirely left to the programmer. It may not provide a sufficiently deep copy of an object and is subject to overriding by a malicious sub-class. Currently no language-based mechanism supports secure object cloning. <br />
<br />
This paper proposes a type-based annotation system for defining modular cloning policies for class-based object-oriented programs. It provides a static enforcement mechanism that will guarantee that all classes fulfill their copying policy, even in the presence of overriding of copy methods, and establishes the semantic correctness of the overall approach. <br />
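The paper targets Java's clone(), but the hazard of an insufficiently deep copy is easy to demonstrate in any language. A hypothetical Python illustration (not the paper's mechanism):

```python
import copy

class Account:
    """Holds mutable internal state that must not leak to callers."""
    def __init__(self, transactions):
        self._transactions = [dict(t) for t in transactions]

    def history_shallow(self):
        # Insufficiently deep copy: the inner dicts are still shared,
        # so a caller can mutate internal state through them.
        return copy.copy(self._transactions)

    def history_deep(self):
        # Defensive deep copy: the caller gets an independent snapshot.
        return copy.deepcopy(self._transactions)
```

Mutating an element of `history_shallow()`'s result corrupts the account's internal record, which is exactly the kind of copying-policy violation the paper's annotations are meant to rule out statically.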
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] Safe Wrappers and Sane Policies for Self Protecting JavaScript ===<br />
<br />
''Jonas Magazinius, Phu H. Phung, and David Sands, Chalmers Univ. of Technology'' <br />
<br />
Phung et al (ASIACCS’09) describe a method for wrapping built-in methods of JavaScript programs in order to enforce security policies. The method is appealing because it requires neither deep transformation of the code nor browser modification. Unfortunately the implementation outlined suffers from a range of vulnerabilities, and policy construction is restrictive and error prone. In this paper we address these issues to provide a systematic way to avoid the identified vulnerabilities, and make it easier for the policy writer to construct declarative policies – i.e. policies upon which attacker code has no side effects. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] On the Privacy of File Sharing Services ===<br />
<br />
''Nick Nikiforakis, Francesco Gadaleta, Yves Younan, and Wouter Joosen, Katholieke Universiteit Leuven'' <br />
<br />
File sharing services are used daily by tens of thousands of people as a way of sharing files. Almost all such services use a security-through-obscurity method of hiding the files of one user from others. For each uploaded file, the user is given a secret URL which supposedly cannot be guessed. The user can then share his uploaded file by sharing this URL with other users of his choice. Unfortunately, a number of file sharing services are incorrectly implemented, allowing an attacker to guess valid URLs for millions of files and thus to enumerate their file databases and access all of the uploaded files. In this paper, we study some of these services and record their incorrect implementations. We design automatic enumerators for two such services and a privacy-classifying module which characterises an uploaded file as private or public. Using this technique we gain access to thousands of private files ranging from private and company documents to personal photographs. We present a taxonomy of the private files found and ways that users and services can protect themselves against such attacks. <br />
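The secret-URL scheme the paper attacks only works if the identifiers are actually unguessable. A minimal sketch of the difference (illustrative only, not the paper's enumerators):

```python
import secrets

def guessable_id(counter):
    """Sequential IDs: an attacker can simply count upwards and
    enumerate every uploaded file in the database."""
    return "file{:06d}".format(counter)

def unguessable_id():
    """128 bits from a CSPRNG: enumeration by guessing is infeasible."""
    return secrets.token_urlsafe(16)
```

Services whose "secret" URLs follow a pattern like the first function are exactly those the authors could enumerate automatically.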
<br />
==== Registration ====<br />
<br />
== Registration is now OPEN ==<br />
<br />
'''[http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Click Here To Register]''' <br />
<br />
Note: To save on processing expenses, all fees paid for the OWASP conference are non-refundable. OWASP can accommodate transfers of registrations from one person to another, if such an adjustment becomes necessary. <br />
<br />
== Stay Informed ... and Tell Others ==<br />
<br />
[https://lists.owasp.org/mailman/listinfo/appsec_eu_2010 Subscribe to the conference '''mailing list''']. This is the official information channel and you'll be the first to know about the program, invited speakers, opening of registration for training etc. <br />
<br />
[http://events.linkedin.com/OWASP-AppSec-Research-2010/pub/185990 Add the event to your '''LinkedIn''' profile] to tell all your business contacts that AppSec Research 2010 is the place to be. <br />
<br />
Then get on the '''Twitter''' stream by using the tags '''#OWASP''' and '''#AppSecEU'''. <br />
<br />
== Conference Fees (June 23-24) ==<br />
<br />
*Regular registration: €350 <br />
*OWASP individual member (not just chapter member): €300 <br />
*Full-time students*: €225<br />
<br />
<nowiki>*</nowiki> We need some kind of proof of your full-time student status. Either ask your local OWASP chapter leader to vouch for you by email to Kate.Hartmann@owasp.org, or email Kate a scanned image of your student ID (please compress the file size&nbsp;:). <br />
<br />
== Training Fee (June 21-22) ==<br />
<br />
*Training fee is €990 for two days, see Training tab above<br />
<br />
==== Travel &amp; Hotels ====<br />
<br />
== Travel ==<br />
<br />
Stockholm's foremost international airport is Arlanda (ARN). Clean and convenient high-speed trains will take you between Arlanda and Stockholm Central in 20 minutes. You can also fly to Stockholm Skavsta (NYO) or Stockholm Västerås (VST), where coaches take you to Stockholm Central in 1 h 20 min. <br />
<br />
== Accommodation ==<br />
<br />
You can choose hotel/hostel freely in Stockholm but we provide three suggestions with pre-booked rooms. Before you book '''check with sites like [http://www.hotels.com hotels.com] since they might have better prices for the very same hotels!''' <br />
<br />
[[Image:Stockholm map with hotels and public transportation.jpg]] <br />
<br />
Subways and buses are convenient and safe and will take you right up to the venue (station/stop "Universitetet") from these three hotels: <br />
<br />
'''Best Western Time Hotel'''<br> Why? Closest to the university, direct bus or subway to the conference<br> [http://www.timehotel.se/index.aspx?languageID=5 Best Western Time Hotel]<br> Single room: 1395 SEK/€145/$195<br> Double room: 1575 SEK/€160/$220<br> Rooms pre-booked until May 18 under code "G#73641 OWASP"<br> <br />
<br />
'''Scandic Continental'''<br> Why? Right at the Central Station, convenient travel to and from airport, direct subway to the conference<br> [http://www.scandichotels.com/en/Hotels/Countries/Sweden/Stockholm/Hotels/Scandic-Continental-Stockholm/ Scandic Continental]<br> Single room: 1590 SEK/€165/$220<br> Double room: 1690 SEK/€175/$235<br> Rooms pre-booked until early May under code "OWASP"<br> <br />
<br />
'''Fridhemsplan's Hostel'''<br> Why? Affordable stay in Stockholm's nicest hostel, direct bus to the conference<br> [http://fridhemsplan.se/?p=Main&c= Fridhemsplan's Hostel]<br> Rooms cost €35-€55 ($50-$80)<br> Booking via John Wilander (john.wilander@owasp.org). First-come-first-served with priority to students or people who have the need&nbsp;;). <br />
<br />
==== Venue ====<br />
<br />
The venue for both training and conference is Aula Magna at Stockholm University.<br />
<br />
'''Address''' (for instance for deliveries):<br><br />
Aula Magna<br><br />
Stockholms universitet<br><br />
Frescativägen 6<br><br />
SE-106 91 Stockholm<br><br />
Sweden<br><br />
<br />
[[Image:AppSec Research 2010 Aula Magna.jpg]] <br />
<br />
==== Sponsoring ====<br />
<center><br />
[[Image:AppSec Research 2010 Microsoft diamond sponsor.jpg|250px|Microsoft - Diamond Sponsor]] [[Image:AppSec Research 2010 Google 20k sponsor.jpg|150px|Google - Dinner Party and Expo Sponsor]] [[Image:Portwise logo.png|130px|PortWise - Gold and Badge Sponsor]] [[Image:Cybercom logo.png|100px|Cybercom - Gold Sponsor]] [[Image:Fortify logo AppSec Research 2010.png|120px|Fortify - Gold Sponsor]] [[Image:Omegapoint logo.png|110px|Omegapoint - Gold Sponsor]] [[Image:Mnemonic logo.png|100px|Mnemonic - Silver Sponsor]] [[Image:AppSec Research 2010 sponsor Nixu logo.jpg|100px|NIXU - Silver Sponsor]] [[Image:hps_logo.png|130px|High Performance Systems - Silver sponsor]] [[Image:IIS logo.png|100px|Stiftelsen för Internetinfrastruktur - Lunch Sponsor]] [[Image:MyNethouse logo.png|100px|MyNethouse - Coffee Break Sponsor]] [[Image:AppSec Research 2010 Help Net Security sponsor.jpg|100px|Help Net Security - Media Sponsor]] <br />
</center> <br />
We are now welcoming sponsors for OWASP AppSec Research 2010. Take the opportunity to support next year's major appsec event in Europe! The full sponsoring program is available as pdfs: <br />
<br />
Sponsoring program in English:&nbsp;[[Image:OWASP Sponsorship AppSec Research 2010 (eng).pdf]] <br />
<br />
Sponsoring program in Swedish:&nbsp;[[Image:OWASP Sponsorship AppSec Research 2010 (swe).pdf]] <br />
<br />
[[Image:Owasp appsec research 2010 diamond gold silver sponsoring.png|left|Part of the sponsoring program]] [[Image:Owasp appsec research 2010 sponsoring 2.png|left|Part of the sponsoring program]] <br />
<br />
==== Challenges ====<br />
<br />
=== Countdown Challenges -- Free Tickets to Win! ===<br />
<br />
There will be a challenge posted on the conference wiki page the 21st every month up until the event. The winner will get free entrance to the conference. Be sure to sign up for [https://lists.owasp.org/mailman/listinfo/appsec_eu_2010 the conference mailing list] to get a monthly reminder.<br />
<br />
== AppSec Research Final Challenge: Internet Treasure Hunt ==<br />
<br />
It's May 21st, one month to AppSec Research 2010, and '''the last chance to win a free ticket''' to this year's number one conference in appsec.<br />
<br />
<br />
'''The Treasure Hunt in a Nutshell'''<br><br />
Your mission is to find several small AppSec Research logotypes hidden among the websites of our sponsors and hosts. Every logo found is associated with a keyword (a dictionary word) in some way. When you've found all the keywords you email them to us.<br />
<br />
[[Image:Owasp_appsec_research_2010_logo_by_daniel_kozlowski.jpg|40px|OWASP AppSec Research 2010 logo by Daniel Kozlowski]]<br />
<br />
<br />
'''Instructions'''<br><br />
* Please don't do anything malicious during your hunt. And don't produce considerable load on the websites. You should be able to find the keywords anyway :).<br />
* To check if you found all keywords you compare the md5 of all keywords concatenated in alphabetical order with this hash: 1a7b54ba9cee6cccd9890e7800b83208<br />
* You can calculate the hash by doing the following in a shell: echo "Keywords concatenated in alphabetical order" | md5<br />
* To ensure your hash function produces the same result as ours, you can try: echo "owasp" | md5 ... which should result in the hash 2bdce47b1a6c527b134d4b658b033702<br />
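The shell recipe above can also be reproduced in Python. Note that `echo` appends a trailing newline, which must be included in the hashed data (a sketch, assuming the keywords are plain ASCII dictionary words):

```python
import hashlib

def keyword_hash(keywords):
    """MD5 of the keywords concatenated in alphabetical order,
    plus the trailing newline that `echo` adds."""
    data = "".join(sorted(keywords)) + "\n"
    return hashlib.md5(data.encode("ascii")).hexdigest()

# Sanity check against the organizers' sample: echo "owasp" | md5
print(keyword_hash(["owasp"]))
```

Since the function sorts the keywords itself, the order in which you discover the logos does not matter.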
<br />
<br />
'''How to Win'''<br><br />
To win you email all keywords (not the hash) concatenated in alphabetical order to stefan dot pettersson at owasp dot org. Stefan will let you know if you were the first one with the correct answer!<br />
<br />
<br />
'''Example:'''<br><br />
* You found three logos and the keywords were: golf, king, apple<br />
* You calculate the hash by doing: echo "applegolfking" | md5<br />
* If the hash matches 1a7b54ba9cee6cccd9890e7800b83208 you email applegolfking to Stefan.<br />
<br />
Let the best hunter win!<br />
<br />
==== Archive ====<br />
<br />
== Call for Papers and Proposals (closed) ==<br />
<br />
[[Image:AppSec Research 2010 2nd cfp.png]] <br />
<br />
<br> 1. '''Publish or Perish'''. Peer-reviewed 12 page papers to be published in formal proceedings by Springer-Verlag ([http://www.springer.com/lncs Lecture Notes in Computer Science, LNCS]). Presentation slides and video takes will be posted on the OWASP wiki after the conference.<br> 2. '''Demo or Die'''. A demo proposal should consist of a pdf with a 1 page abstract summarizing the matter proposed by the speaker(s) ''and'' 1 page containing demo screenshot(s). Demos will have ordinary speaker slots but the speakers are expected to run a demo during the talk (live coding counts as a demo), not just a slideshow. Presentation slides and video takes will be posted on the OWASP wiki after the conference.<br> 3. '''Present or Repent'''. A presentation proposal should consist of a 2 page extended abstract representing the essential matter proposed by the speaker(s). Presentation slides and video takes will be posted on the OWASP wiki after the conference. <br />
<br />
If you have any questions regarding submissions etc, please email john.wilander@owasp.org. <br />
<br />
=== Topics of Interest ===<br />
<br />
We encourage the publication and presentation of new tools, new methods, empirical data, novel ideas, and lessons learned in the following areas: <br />
<br />
•&nbsp; &nbsp; Web application security<br> • &nbsp; &nbsp;Security aspects of new/emerging web technologies/paradigms (mashups, web 2.0,&nbsp; offline support, etc)<br> •&nbsp; &nbsp; Security in web services, REST, and service oriented architectures<br> •&nbsp; &nbsp; Security in cloud-based services<br> •&nbsp; &nbsp; Security of frameworks (Struts, Spring, ASP.Net MVC etc)<br> •&nbsp; &nbsp; New security features in platforms or languages<br> •&nbsp; &nbsp; Next-generation browser security<br> •&nbsp; &nbsp; Security for the mobile web<br> •&nbsp; &nbsp; Secure application development (methods, processes etc)<br> •&nbsp; &nbsp; Threat modeling of applications<br> •&nbsp; &nbsp; Vulnerability analysis (code review, pentest, static analysis etc)<br> •&nbsp; &nbsp; Countermeasures for application vulnerabilities<br> •&nbsp; &nbsp; Metrics for application security<br> • &nbsp; &nbsp;Application security awareness and education <br />
<br />
=== Submission Deadline and Instructions ===<br />
<br />
'''Update''': Submission deadline for full-papers ("Publish or Perish") has been '''extended to March 7th 23:59''' (Apia, Samoa time) due to numerous requests. Submit your paper to [https://www.easychair.org/login.cgi?a=c01e98d04e4e;iid=20045 AppSec Research 2010 (EasyChair)]. <br />
<br />
Full-paper submissions should be at most 12 pages long and must be in the Springer LNCS style for "Proceedings and Other Multiauthor Volumes". Templates for preparing papers in this style for LaTeX, Word, etc can be downloaded from: http://www.springer.com/computer/lncs?SGWID=0-164-7-72376-0. Full papers must be submitted in a form suitable for anonymous review: '''remove author names and affiliations from the title page, and avoid explicit self-referencing in the text'''. <br />
<br />
Submission for "Demo or Die" and "Present or Repent" closed on February 7th. <br />
<br />
Decision notification: April 7th <br />
<br />
=== Program Committee (for review of full-papers) ===<br />
<br />
• John Wilander, Omegapoint and Linköping University (chair)<br> • Alan Davidson, Stockholm University/Royal Institute of Technology (co-host)<br> • Lieven Desmet, Katholieke Universiteit Leuven<br> • Úlfar Erlingsson, Reykjavík University and Microsoft Research<br> • Martin Johns, University of Passau<br> • Christoph Kern, Google<br> • Engin Kirda, Institute Eurecom<br> • Ulf Lindqvist, SRI International<br> • Benjamin Livshits, Microsoft Research<br> • Sergio Maffeis, Imperial College London<br> • John Mitchell, Stanford University<br> • William Robertson, UC Berkeley<br> • Andrei Sabelfeld, Chalmers UT<br> <br />
<br />
== Call for Training (closed) ==<br />
<br />
(Info kept here for reference)<br> OWASP is currently soliciting training proposals for the OWASP AppSec Research 2010 Conference which will take place at Stockholm University in Sweden, on June 21st through June 24th 2010. There will be training courses on June 21st and 22nd followed by plenary sessions on the 23rd and 24th with three tracks per day. <br />
<br />
We are seeking training proposals on the following topics (in no particular order): <br />
<br />
*Security in Web 2.0, Web Services/XML <br />
*Advanced penetration testing <br />
*Static analysis for security <br />
*Threat modeling of applications <br />
*Secure coding practices <br />
*Security in J2EE/.NET patterns and frameworks <br />
*Application security with ESAPI <br />
*OWASP tools in practice<br />
<br />
We will look favourably on lab-based/hands-on training. <br />
<br />
=== Submission Deadline and Instructions ===<br />
<br />
Submission '''deadline is Sunday February 7th 23:59''' (Apia, Samoa time). To submit your training proposal please fill out the [[Image:OWASP AppSec Research 2010 Call for Training.docx]] and email it to john.wilander@owasp.org with subject "AppSec Research 2010: Training proposal". <br />
<br />
Upon acceptance you'll be requested to fill out the ''Training Instructor Agreement'' where you'll find details on revenue split etc. The agreement will be reworked but the previous one is here: [[Image:Training Instructor Agreement.doc]]. <br />
<br />
=== Upcoming List of Trainers on OWASP Wiki ===<br />
<br />
As part of the [http://www.owasp.org/index.php/Category:OWASP_Education_Project OWASP Education Project], OWASP is starting an official list of trainers on the OWASP web site. This list (mentioning the trainer, course, and contact details) will cover all trainers that performed training at OWASP conferences, together with their aggregated scores on the course feedback forms. Of course, this is opt-in. Please let us know if you are interested in participating in this program (tick the check-box on the application form). <br />
<br />
== AppSec Research Challenge 11: Share Your OWASP AppSec Postcards ==<br />
<br />
Here's the second-to-last chance to win a free ticket to the conference. This time we challenge you to create OWASP AppSec Research postcards (digital ones, of course) from nice places throughout the world, holding a printout of the conference logo as in the picture below.<br />
<br />
[[Image:OWASP_AppSec_Research_2010_Postcard_Challenge.jpg]]<br />
<br />
== How to Win ==<br />
Create and share the most "digital postcards" showing you, the conference logo on paper ([http://www.owasp.org/images/5/52/OWASP_AppSec_Research_2010_Postcard_Challenge.pdf pdf]), and ...<br />
<br />
* Your work office or "computer room" at home: 1 point<br />
* A major city (> 1 million inhabitants) with the city sign "Welcome to ...": 2 points<br />
* On a continent where you don't live: 2 points<br />
* Under water (outside, not in a pool or a bathtub): 2 points<br />
* A capital city with a typical sight, e g The Eiffel Tower in Paris: 3 points<br />
* With someone from our "Who's Who in Security" challenge holding the logo: 3 points<br />
* With an international celebrity holding the logo: 5 points<br />
* 4,000 meters or more above sea level, not flying: 6 points<br />
* With Chuck Norris, Mr. T, or Paris Hilton: 30 points<br />
<br />
You get points for every unique postcard, meaning once under water, once in a specific city, once with a unique celebrity, once per mountain above 4,000 meters etc. If you combine categories you get the sum of the points. Most points by May 20th wins a free conference ticket!<br />
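The combination rule boils down to summing the points of the categories each unique postcard covers; a rough sketch with illustrative category names (not official scoring code):

```python
POINTS = {
    "office": 1, "major_city": 2, "foreign_continent": 2,
    "under_water": 2, "capital_sight": 3, "whos_who": 3,
    "celebrity": 5, "above_4000m": 6, "norris_t_hilton": 30,
}

def postcard_score(postcards):
    """postcards: iterable of frozensets of category names.
    Duplicate postcards count once; combined categories sum."""
    return sum(POINTS[c] for card in set(postcards) for c in card)
```

For example, a single shot at the Eiffel Tower together with a celebrity scores 3 + 5 = 8 points.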
<br />
== How to Compete ==<br />
Share your postcards on http://www.Flickr.com following this example (3 points for Eiffel Tower in Paris):<br />
<br />
* '''Photo''' of you, the conference logo on paper using [http://www.owasp.org/images/5/52/OWASP_AppSec_Research_2010_Postcard_Challenge.pdf this pdf], and the Eiffel Tower in the background<br />
* '''Title''': OWASP Challenge Postcard Paris<br />
* '''Description''': Capital city Paris, typical sight The Eiffel Tower, 3 points<br />
* '''Tag''': #AppSecEu<br />
<br />
== AppSec Research Challenge X: Build an Enterprise Java Rootkit ==<br />
<br />
The tenth challenge is here! <br />
<br />
Jeff Williams, chairman of OWASP, gave a very interesting talk at last year's Black Hat US and OWASP AppSec US -- [http://www.blackhat.com/presentations/bh-usa-09/WILLIAMS/BHUSA09-Williams-EnterpriseJavaRootkits-PAPER.pdf "Enterprise Java Rootkits -- Hardly Anyone Watches the Developers"]. Now it's time for you to write a rootkit yourself, exploring Jeff's techniques and more. <br />
<br />
'''The Project to Fool'''<br> Your assignment is to be the evil developer who implements and hides a backdoor in a Java servlet. We've implemented a very simple login web application and exported the Eclipse project ([http://www.owasp.org/images/1/16/OWASP_AppSec_Research_2010_Challenge_X.zip zip here]). We will use this project to evaluate your submissions. It's a simple servlet/jsp project that we deployed on Tomcat 6.0. It even contains an evil output of user credentials to a temp file (not yet hidden though) to get you started. Screenshot from the app and the project structure: <br />
<br />
<br> [[Image:Appsec research 2010 challenge X eclipse project.jpg]] [[Image:Appsec research 2010 challenge X login screen.jpg]] <br />
<br />
'''Rules'''<br> <br />
<br />
*You must explain what your changes do (we need to evaluate your rootkit!) <br />
*The original features + look and feel must be preserved <br />
*Your additions should preferably look like security features such as IP whitelisting, logging, anti-CSRF, frequency blocking etc. <br />
*You're only allowed to change the servlet (Login.java), and the gif image (appsec_research_challenge_X.gif) <br />
*You do not have to use the jsps <br />
*The original size of Login.java is 1,856 bytes and it mustn't grow to more than 4,000 bytes <br />
*The gif image mustn't grow in size and should look close enough to the original to fool the committee <br />
*Code should "look" readable, i e not minimized too heavily<br />
<br />
'''How To Win'''<br> The organization committee will evaluate who has been able to hide the most evil stuff while complying with the rules. The more malicious functionality and the more clever disguise -- the more "points". All submissions must be posted as links or pasted code in [http://sla.ckers.org/forum/read.php?11,33928 this sla.ckers.org thread]. Send an email to john.wilander@owasp.org when you post code or need attention. Deadline April 20. <br />
<br />
<br> <br />
<br />
== AppSec Research Challenge 9: Crack 'Em Hashes (closed) ==<br />
<br />
February's AppSec Research 2010 challenge is about breaking hashed passwords. It starts off easy with the old LM hash and ends with SHA256 and GOST3411. <br />
<br />
[[Image:Owasp appsec research 2010 hash challenge.jpg]] <br />
<br />
'''How To Win'''<br> The first one to publish each broken password gets points according to the table below but at the same time helps the others, since the password is the salt of the next hash. So you have to decide -- should you publish your cracked password and collect your points before the others, or should you keep it a secret to get a head start cracking the next one? Deadline is March 21st. <br />
<br />
To collect points for a password you must be the first one to publish that broken password on [http://sla.ckers.org/forum/read.php?11,33533 this sla.ckers.org thread]. Please send an email to john.wilander@owasp.org at the same time so we can correct any misunderstandings. For instance, we might run into hash collisions, where someone finds another mixed-alpha password of at most 5 characters that, concatenated with the right salt, produces the same hash. In such a case we will publish the real password and give points to the one who found the collision. <br />
<br />
The one with the most points on March 21st wins a free ticket to the conference! <br />
<br />
'''Points to Earn'''<br> <br />
<br />
*pwd1 (LM) =&gt; 1 point <br />
*pwd2 (MD2) =&gt; 3 points <br />
*pwd3 (MD4) =&gt; 5 points <br />
*pwd4 (MD5) =&gt; 9 points <br />
*pwd5 (RIPEMD160) =&gt; 15 points <br />
*pwd6 (SHA1) =&gt; 25 points <br />
*pwd7 (SHA256) =&gt; 50 points <br />
*pwd8 (GOST3411) =&gt; 100 points<br />
<br />
'''The Hashes'''<br> Each password consists of the characters a-zA-Z (mixed alpha) and is at most 5 characters long. With the salt, that means at most 10 mixed-alpha characters as input to the hash function. All hashes here are in hex format. The Java source code has all the details. The plus operator means string concatenation. <br />
<br />
*LM(pwd1) 0C04DACA901299DBAAD3B435B51404EE <br />
*MD2(pwd2 + pwd1) 16189F5462BF906E9D88CF6F152DE86F <br />
*MD4(pwd3 + pwd2) FA8F46A6D347087D6980C3FA77DD4DE9 <br />
*MD5(pwd4 + pwd3) 425B33D6F60394C897B8413B5C185845 <br />
*RIPEMD160(pwd5 + pwd4) 35F34671D30472D403937820DCABC1C78C837071 <br />
*SHA1(pwd6 + pwd5) AE81A30510B2931921934218636B26A803330EB1 <br />
*SHA256(pwd7 + pwd6) B2FF0269E927C6559804A37590A0688C45DF143F85CEE0E3F239F846B65C9644 <br />
*GOST3411(pwd8 + pwd7) 16CC9F1FF65688E040F5ADA82A41A258FF948769CDA4C4A17D85228A6F358971<br />
<br />
Example: Given that pwd1 is "Win" and pwd2 is "You", the hash 16189F5462BF906E9D88CF6F152DE86F is the result of MD2("YouWin"). Now pwd2 will be the salt when you crack pwd3. <br />
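A brute-force search over the stated keyspace is straightforward for the weaker hashes. A minimal Python sketch of one salted-chain step (MD5 shown; MD2, MD4, and GOST3411 would need an external library, e.g. the Bouncy Castle port or pycryptodome):

```python
import hashlib
import itertools
import string

ALPHABET = string.ascii_letters  # a-zA-Z, per the challenge rules

def crack_md5(target_hex, salt, max_len=5):
    """Find pwd such that MD5(pwd + salt) == target_hex, trying all
    mixed-alpha candidates up to max_len characters.
    Note: the full 5-character space is 52^5 candidates, so expect
    a long run at max_len=5."""
    target = target_hex.lower()
    for length in range(1, max_len + 1):
        for combo in itertools.product(ALPHABET, repeat=length):
            pwd = "".join(combo)
            if hashlib.md5((pwd + salt).encode()).hexdigest() == target:
                return pwd
    return None
```

For the real challenge, the salt argument is the previously cracked password, exactly as in the MD2("YouWin") example above.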
<br />
'''The Source Code'''<br> The source code we've used to produce the hashes is available here [http://www.owasp.org/images/7/79/OwapsAppSecResearch2010HashChallenge.zip zip]. It's Java, and all but the LM hash are computed with [http://www.bouncycastle.org/latest_releases.html Bouncy Castle 1.4.5]. <br />
<br />
<br> <br />
<br />
== AppSec Research Challenge 8: Construct an OWASP Polyglot (closed) ==<br />
<br />
January's AppSec Research Challenge is to construct an OWASP polyglot, more specifically '''an OWASP logo that also can be run as JavaScript''': <br />
<br />
Show image: &lt;img src="owasp_logo.gif"&gt;<br>Run script: &lt;script src="owasp_logo.gif"&gt;&lt;/script&gt; <br />
<br />
[http://en.wikipedia.org/wiki/Polyglot_(computing) Wikipedia] says: "a ''polyglot'' is a computer program or script written in a valid form of multiple programming languages". This is about as cool as it gets&nbsp;:). <br />
<br />
'''Rules''' <br />
<br />
*Make your polyglot out of the regular OWASP logo in the upper left corner of this wiki (circle with the wasp). <br />
*The file size must not grow. <br />
*Pixel colors in the gif must not differ more than 5 in red, green, or blue. Ex: If a pixel originally had rgb 100,100,100 then 104,95,96 is OK. <br />
*No malicious stuff of course <br />
*When your polyglot is run as JavaScript it should execute as many of the following features as possible, starting from the top:<br />
<br />
#alert(all cookies belonging to the current domain); <br />
#alert(the last keystrokes on the keyboard every ten keystrokes); <br />
#alert(the current time in Stockholm, once every minute); <br />
#A quine. The polyglot outputs its own source code on the HTML page.<br />
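The pixel-tolerance rule is easy to verify programmatically. A sketch of such a checker, operating on already-decoded RGB pixel lists (actual GIF decoding would need a library such as Pillow, which is assumed away here):

```python
def within_tolerance(orig_pixels, new_pixels, max_diff=5):
    """orig_pixels, new_pixels: equal-length sequences of (r, g, b)
    tuples. True iff no color channel differs by more than max_diff."""
    if len(orig_pixels) != len(new_pixels):
        return False
    return all(
        abs(c1 - c2) <= max_diff
        for p1, p2 in zip(orig_pixels, new_pixels)
        for c1, c2 in zip(p1, p2)
    )
```

With the example from the rules, a pixel going from (100, 100, 100) to (104, 95, 96) passes, while a change of 6 on any channel fails.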
<br />
'''How to get started''' <br />
<br />
Jasvir Nagra gave a talk on this kind of polyglot and published a gif/JavaScript polyglot on [http://www.thinkfu.com/blog/gifjavascript-polyglots his blog]. A good starting point is his gif file.&nbsp;Jasvir has also written an extensive article on gif/perl polyglots which explains how to get code into the gif file. Check out [http://search.cpan.org/~jnagra/Perl-Visualize-1.02/Visualize.pm#HOW_IT_ALL_WORKS his guide]. <br />
<br />
'''How to win''' <br />
<br />
Submit your entries in [http://sla.ckers.org/forum/read.php?11,33121 this sla.ckers.org thread]. Either the first complete polyglot or the most complete polyglot wins. We will most probably provide you with a gif checker that validates the color differences. Check the thread.&nbsp; <br />
<br />
== AppSec Research Challenge 7: X-Mas Capture the Flag (closed) ==<br />
<br />
[[Image:AppSec Research 2010 Stocking.gif]] '''Merry Christmas everyone!'''[[Image:AppSec Research 2010 Stocking.gif]] <br />
<br />
It's the 21st and a new AppSec Research Challenge is posted. <br />
<br />
Setting up the AppSec Research 2010 X-mas Challenge was a cooperative effort by the winner of AppSec Research Challenge 3, Mario Heiderich, and Martin Holst Swende. It is a multi-step challenge which involves finding a vulnerability in a web application and locating a hidden message. The winner gets free entrance to next year's conference. Start by subscribing to [https://lists.owasp.org/mailman/listinfo/appsec_eu_2010 the conference mailing list]. Then check the simple rules below and get going. <br />
<br />
'''Rules''': <br />
<br />
*Please do not perform any resource-intensive tests, as the machine is pretty low-end and can be DoSed without much effort. <br />
*The computer at the given IP address is the only system involved in this challenge, so please do not perform any tests of neighboring systems. <br />
*Otherwise, you are free to hack away!<br />
<br />
'''Challenge-page''': [http://66.249.7.26 66.249.7.26] <br />
<br />
Discussions, QnA, and reports about how far you have made it are welcome at [http://sla.ckers.org/forum/read.php?11,32779 the official sla.ckers thread]. <br />
<br />
Good luck and happy holidays! (And don't forget the submission deadline for the conference -- February 7) <br />
<br />
<br> <br />
<br />
== AppSec Research Challenge 6: Design the Conference Logo (closed) ==<br />
<br />
'''Note''': This challenge is re-opened. Submit by February 21st. <br />
<br />
November's AppSec Research 2010 Challenge asks you to design the conference logotype. So far we have used this: <br />
<br />
[[Image:Appsec research 2010 logo prototype (small).png]] <br />
<br />
... but would like something less "word processor-like". <br />
<br />
'''How to win''' <br />
<br />
*The logo should be suitable for both large printing and small web banners <br />
*If you make a color logo, please submit a b/w version too <br />
*"OWASP AppSec Research 2010" should in some way be part of the logo&nbsp;:)<br />
<br />
'''Copyright?'''<br> By submitting your logo you agree to share it according to [http://creativecommons.org/licenses/by/3.0/legalcode Creative Commons Attributions] and that we credit you in the conference brochure and on the conference wiki but not in all places where we use the logo (i e we will not credit you on banners, sponsoring program, powerpoint presentations etc). <br />
<br />
'''How to submit'''<br> Email jpg + svg to john.wilander [at] owasp.org before Monday December 14th 23:59 [http://www.worldtimeserver.com/current_time_in_UTC.aspx UTC]. The creator of the best logo wins a free ticket to the AppSec Research 2010 conference! <br />
<br />
== AppSec Research Challenge 5: Graphical Effects (closed) ==<br />
<br />
The October OWASP AppSec Research 2010 challenge is over. The winner of a free entrance ticket to next year's AppSec conference in Stockholm is "sirdarckcat" with FireworksIsNotABrowser_v4 (although we like the slightly oversized v6 better). <br />
<br />
The challenge was about '''writing the coolest graphical effect in a 2010 character script'''. <br />
<br />
=== An Example ===<br />
<br />
As an example, copy the script below and paste it over the URL in the browser's address bar. <br />
<br />
<nowiki>javascript:R=0; x1=.1; y1=.05; x2=.25; y2=.24; x3=1.6; y3=.24; x4=300; y4=200; x5=300; y5=200; DI=document.getElementsByTagName("img"); DIL=DI.length; function A(){for(i=0; i-DIL; i++){DIS=DI[ i ].style; DIS.position='absolute'; DIS.left=(Math.sin(R*x1+i*x2+x3)*x4+x5)+"px"; DIS.top=(Math.cos(R*y1+i*y2+y3)*y4+y5)+"px"}R++}setInterval('A()',5); void(0)</nowiki> <br />
<br />
As a simple teaser we give these png letters for the script to play with. <br />
<br />
[[Image:AppSec Research 2010 O.png]][[Image:AppSec Research 2010 W.png]][[Image:AppSec Research 2010 A.png]][[Image:AppSec Research 2010 S.png]][[Image:AppSec Research 2010 P.png]] <br />
<br />
=== Rules ===<br />
<br />
*The script should work in Firefox 3.5 (yeah, that means HTML5 and CSS3&nbsp;:) <br />
*Any resource, linked document, script, or image defined on the AppSec Research 2010 wiki page may be loaded/accessed/used <br />
*No requests to any other location are allowed <br />
*No obfuscation is allowed <br />
*The script may only use ASCII <br />
*Max length of the script is 2010 characters <br />
*You have to give your effect an id and a version number (further explanation below) <br />
*Any form of malicious code is of course banned&nbsp;;)<br />
<br />
=== How to Compete ===<br />
<br />
There's an [http://sla.ckers.org/forum/read.php?11,31944 official thread on sla.ckers] where you share your code and thoughts (worried someone will steal your code? Check the originality bullet below). You can enter as many effects as you like but '''each effect has to have an id and a version number''', e.g. JohnWobbler_v1.3 for version 1.3 of John's Wobbler effect. Deadline is November 14th, 23:59 [http://www.worldtimeserver.com/current_time_in_UTC.aspx UTC]. <br />
<br />
=== Choosing the Winner ===<br />
<br />
Since this is a creative challenge the OC will choose the winner based on the following: <br />
<br />
*'''Originality''' (tweaking someone's code is cool and encouraged but changing a few magic numbers or inverting a function won't make you the winner) <br />
*'''Coolness''' (yeah, you need to convince a few Scandinavian people + Seba and Kate that your script is the coolest)<br />
<br />
Either the OC will choose a winner ourselves, or we will pick the top effects and let you vote for the winner. <br />
<br />
== AppSec Research Challenge 4: Who's Who in Security? (closed) ==<br />
<br />
September's AppSec Research 2010 Challenge was to identify, by their pictures, a number of people who are in one way or another known in the security business. There were thirteen photos in total, portraying thirteen different individuals. <br />
<br />
'''The winner of a free ticket to the OWASP AppSec Research conference in 2010 was Thomas Vollstädt''' who submitted the correct solution just one day after the challenge was posted. <br />
<br />
=== The Solution ===<br />
<br />
[[Image:Owasp appsec research 2010 challenge 4 solution.png]] <br />
<br />
=== The Names ===<br />
<br />
Dinis Cruz, Gordon "Fyodor" Lyon, David Litchfield, Dave Aitel, Bruce Schneier, Dave Wichers, Gene Spafford, MafiaBoy, MySpace Samy, Tom Brennan, Halvar Flake, Alex Sotirov, Jeff Williams, Jennifer Granick, Kate Hartmann, Mudge, Lance Spitzner, Dan Kaminsky, Brian Chess, Joanna Rutkowska, Crispin Cowan, Michael Howard, Jay Beale, Ross Anderson, Dawn Song, Robert "rsnake" Hansen, and Solar Designer. <br />
<br />
=== The Pictures ===<br />
<br />
If you'd like to see the original pictures without the names, here's the link: [http://www.owasp.org/index.php/File:Owasp_appsec_research_2010_challenge_4.png Owasp_appsec_research_2010_challenge_4.png] <br />
<br />
== AppSec Research Challenge 3: Non-Alphanumeric JavaScript (closed) ==<br />
<br />
The August AppSec Research 2010 Challenge was to create a JavaScript alert("owasp") that pops up the word 'owasp', case-insensitive, without using any alphanumeric characters (0-9a-zA-Z).&nbsp;There was tremendous activity, and we want to thank everyone who participated. The size of the final result was almost a third of the first entry (see chart below). '''Want to check out the winning snippet by .mario? Enter the following in the Firebug console''':&nbsp;<nowiki>ω=[[Ṫ,Ŕ,,É,,Á,Ĺ,Ś,,,Ó,Ḃ]=!''+[!{}]+{}][Ś+Ó+Ŕ+Ṫ],ω()[Á+Ĺ+É+Ŕ+Ṫ](Ó+ω()[Ḃ+Ṫ+Ó+Á]('Á«)'))</nowiki> <br />
<br />
It is based on a few different ideas. First of all, a variable assignment of the form <br />
<br />
<nowiki>[a,b,c,,e]="abcde" // a="a", c="c",e="e"</nowiki> <br />
<br />
In the winning snippet this is performed on the string "truefalse[object Object]": <br />
<br />
<nowiki>[Ṫ,Ŕ,,É,,Á,Ĺ,Ś,,,Ó,Ḃ]=!''+[!{}]+{}</nowiki> // right-hand side is "truefalse[object Object]" <br />
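The same destructuring can be tried in a console with plain ASCII names in place of the accented identifiers (a readable sketch of the winning snippet's first step):

```javascript
// Building the source string without alphanumerics:
//   !''   evaluates to true
//   [!{}] evaluates to [false] (stringifies to "false" under +)
//   {}    stringifies to "[object Object]" under +
const s = !'' + [!{}] + {}; // "truefalse[object Object]"

// Destructure it character by character, skipping the unneeded ones:
const [T, R, , E, , A, L, S, , , O, B] = s;

console.log(S + O + R + T);     // "sort"
console.log(A + L + E + R + T); // "alert"
console.log(B + T + O + A);     // "btoa"
```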
<br />
Also, the following construction obtains the window.sort function, which leaks the window object when called without arguments: <br />
<br />
ω=[]["sort"] //ω is now window.sort <br />
<br />
Therefore, calling ω()["alert"] invokes window.alert. To generate the string "owasp", the string "wasp" can be obtained by calling btoa on the characters <nowiki>"Á«)"</nowiki>. <br />
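The btoa step can be verified directly in a browser console (btoa is also available as a global in recent Node versions):

```javascript
// "Á«)" is the Latin-1 byte sequence 0xC1 0xAB 0x29. Base64 regroups
// these 24 bits into four 6-bit values -- 48, 26, 44, 41 -- which index
// the letters 'w', 'a', 's', 'p' in the base64 alphabet.
const wasp = btoa("Á«)");
console.log(wasp);       // "wasp"
console.log("o" + wasp); // "owasp"
```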
<br />
This was really a great team effort, and I think a lot of us learned some new tricks. The final winner was .mario. Congratulations! <br />
<br />
[[Image:Appsec research 2010 challenge 3 chart.jpg]] <br />
<br />
=== JavaScript Without Alphanumeric Characters? ===<br />
<br />
It is possible to write valid JavaScript completely without alphanumeric characters (0-9a-zA-Z). To produce a number you can, for example, start from an empty string, <nowiki>''</nowiki>, and interpret it as a boolean with a bang: <nowiki>!''</nowiki> -- which yields the boolean value true. true, interpreted as a numeric value, equals one. Thus, <br />
<br />
<nowiki>$ = +!''; // $ === 1</nowiki> <br />
<br />
<nowiki>$++;$++; // $ === 3</nowiki> <br />
<br />
In a similar fashion, characters can be extracted from strings embedded in the language. The boolean value true can be converted to a string by concatenation and then indexed to, for example, produce the letter 'e': <br />
<br />
<nowiki>â = (!''+'')[$] // â === "true"[3] === "e"</nowiki> <br />
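Put together, the two tricks run like this (readable comments; note that $ is a legal JavaScript identifier and not alphanumeric):

```javascript
// Numbers from booleans:
let $ = +!'';            // !'' === true; unary + coerces true to 1
$++; $++;                // $ is now 3

// Letters from built-in strings:
const e = (!'' + '')[$]; // true + '' === "true"; "true"[3] === "e"
console.log($, e);       // 3 e
```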
<br />
=== Previous Similar Contest ===<br />
<br />
These two techniques are behind a [http://sla.ckers.org/forum/read.php?24,28687 previous contest at the forum "sla.ckers.org"], where the goal was to create alert(1) without alphanumeric characters, in as few characters as possible. The code actually being executed was: <br />
<br />
<nowiki>([]["sort"])()["alert"](1)</nowiki> // since <nowiki>([]["sort"])()</nowiki> leaks the window object in FF, ==&gt; <nowiki>window["alert"](1)</nowiki> is called, which is another form of <nowiki>window.alert(1)</nowiki> <br />
<br />
The winning, or at least currently leading, entry is 84 bytes long and looks like this: <br />
<br />
<nowiki>(Å='',[Į=!(ĩ=!Å+Å)+{}][Į[Š=ĩ[++Å]+ĩ[Å-Å],Č=Å-~Å]+Į[Č+Č]+Š])()[Į[Å]+Į[Å+Å]+ĩ[Č]+Š](Å)</nowiki> <br />
<br />
=== The Challenge ===<br />
<br />
August's challenge was to, in a similar fashion, create an alert("owasp"), case-insensitive, not using any alphanumeric characters. The shortest working code snippet submitted by September 18th 23:59:59 [http://www.worldtimeserver.com/current_time_in_UTC.aspx UTC] won a free ticket. By "working" we meant JavaScript that executes in Firefox/Firebug, not depending on any Firebug DOM variables for execution. <br />
<br />
'''Submissions were made as comments to the [http://owaspsweden.blogspot.com/2009/08/appsec-research-2010-challenge-3.html challenge 3 blog post on OWASP Sweden].''' Check it out. <br />
<br />
== AppSec Research Challenge 2: OWASP Crossword Puzzle (closed) ==<br />
<br />
July's crossword challenge is over. Many permutations arrived in our inbox but it was tricky to get it completely right. Congratulations to Johannes Dahse and Johan Nilsson who in the end were allowed to join forces to be able to find the correct solution. They win a 50&nbsp;% conference ticket discount each. <br />
<br />
You find the solution below. <br />
<br />
[[Image:Appsec research 2010 challenge 2 solution.gif]] <br />
<br />
== AppSec Research Challenge 1: Input Validation and Regular Expressions (closed) ==<br />
<br />
'''This challenge is over'''. The winner was Patrik Nordlén. To see the solution(s), please visit the [https://lists.owasp.org/pipermail/appsec_eu_2010/2009-July/000000.html appsec_eu_2010 mailing list archive]. <br />
<br />
''Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.''<br> &nbsp; &nbsp; &nbsp; &nbsp; --Jamie Zawinski, in comp.emacs.xemacs <br />
<br />
The 21st of each month up until the conference in June 2010 we'll have a countdown challenge posted here. The winner each month will get a free entrance ticket worth about €300/$400. Be sure to sign up for [https://lists.owasp.org/mailman/listinfo/appsec_eu_2010 the conference mailing list] to get a monthly reminder. <br />
<br />
=== The Challenge ===<br />
<br />
A community is hosted on a very large domain, yahoogle.com. The users of that community all have profiles, where they are allowed to use basic HTML for customization, as well as JavaScript files hosted on the domain. <br />
<br />
All the code for the profile pages is filtered on the server side, and whenever a piece of code containing "&lt;script..." is encountered, the following regular expression is used to validate that the script loaded is hosted on a subdomain of yahoogle.com: <br />
<br />
.*(&lt;script){1}([^&gt;]+)src=('http:\/\/[a-zA-Z]+.yahoogle.com\/scripts\/[0-9A-Za-z]+\.js').*\/&gt; <br />
<br />
Capture group 3 is then also checked against a whitelist of allowed scripts on that domain. The whitelist consists of "http://secure.yahoogle.com" and "http://scripts.yahoogle.com". <br />
<br />
Your task is to formulate a snippet of HTML that goes correctly through the filter and the whitelist, but loads the script "http://insecure.com/evil.js" instead. Also, rework the regular expression to defend against your "attack". <br />
<br />
'''Email your solution to Martin Holst Swende &lt;martin.holst_swende@owasp.org&gt;'''. The first correct answer wins a free ticket to the conference. The free ticket is personal and the judgement of the organizing committee can not be overruled&nbsp;:). <br />
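Since the challenge is closed, here is one illustrative line of attack (a sketch, not necessarily the winning submission; the payload below is hypothetical): browsers honour the ''first'' src attribute on a tag, while the greedy regular expression is happy to bind its capture group to a ''later'' one.

```javascript
// The filter regex from the challenge, as a JavaScript literal:
const filter =
  /.*(<script){1}([^>]+)src=('http:\/\/[a-zA-Z]+.yahoogle.com\/scripts\/[0-9A-Za-z]+\.js').*\/>/;

// Duplicate src attributes: a browser loads the FIRST one (evil.js),
// but the regex can satisfy capture group 3 with the SECOND, whitelisted one.
const payload =
  "<script src='http://insecure.com/evil.js' " +
  "src='http://secure.yahoogle.com/scripts/ok.js' />";

const m = payload.match(filter);
console.log(m !== null); // true: the filter accepts the payload
console.log(m[3]);       // 'http://secure.yahoogle.com/scripts/ok.js' -- passes the whitelist
```

A reworked expression would, among other things, anchor the pattern, escape the unescaped dot before yahoogle.com, and reject tags with more than one src attribute -- or, better still, parse the HTML rather than pattern-match it.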
<br />
<br> <headertabs /></div>

Michael Boman
https://wiki.owasp.org/index.php?title=OWASP_AppSec_Research_2010_-_Stockholm,_Sweden&diff=82592
OWASP AppSec Research 2010 - Stockholm, Sweden
2010-04-29T07:05:39Z
<p>Michael Boman: Updated presentation entries for Komal Randive, Gabriele Giuseppini and Nick Coblentz</p>
<hr />
<div>__NOTOC__ <br />
<br />
==== Welcome ====<br />
<br />
== Invitation ==<br />
<br />
Ladies and Gentlemen, <br />
<br />
On June 21-24, 2010, let's all meet in beautiful Stockholm, Sweden. The OWASP chapters in [http://www.owasp.org/index.php/Sweden Sweden], [http://www.owasp.org/index.php/Norway Norway], and [http://www.owasp.org/index.php/Denmark Denmark] hereby invite you to OWASP AppSec Research 2010. <br />
<br />
If you have any questions, please email the conference chair: john.wilander at owasp.org <br />
<br />
[[Image:Stockholm old town small.jpg]] <br />
<br />
=== Sponsors ===<br />
<br />
Diamond sponsor:<br> [[Image:AppSec Research 2010 Microsoft diamond sponsor.jpg]] <br />
<br />
Gold sponsors:<br> [[Image:Cybercom logo.png]] [[Image:Portwise logo.png]]<br> [[Image:Fortify logo AppSec Research 2010.png]] [[Image:Omegapoint logo.png]] <br />
<br />
Silver sponsors (3 taken, 5 open):<br> [[Image:Mnemonic logo.png]] [[Image:AppSec Research 2010 sponsor Nixu logo.jpg]] <br><br />
[http://www.hps.se/ http://www.owasp.org/images/6/6f/Hps_logo.png]<br />
<br />
Dinner Party sponsor:<br> [http://www.google.com/EngineeringEMEA http://www.owasp.org/images/thumb/8/86/AppSec_Research_2010_Google_20k_sponsor.jpg/150px-AppSec_Research_2010_Google_20k_sponsor.jpg]<br />
<br />
<br />
Lunch sponsors (1 taken, 1 open):<br> [[Image:IIS logo.png]] <br />
<br />
Coffee break sponsors (1 taken, 3 open):<br> [[Image:MyNethouse logo.png]] <br />
<br />
Media sponsors:<br> [[Image:AppSec Research 2010 Help Net Security sponsor.jpg]] <br />
<br />
For full sponsoring program see the Sponsoring tab above.<br />
<br />
=== "AppSec Research".equals("AppSec Europe") ===<br />
<br />
This conference was formerly known as OWASP AppSec Europe. We have added 'Research' to highlight that we invite both industry and academia. All the regular AppSec Europe visitors and topics are welcome along with contributions from universities and research institutes. <br />
<br />
This will be ''the'' European conference for anyone interested in or working with application security. Co-host is the [http://dsv.su.se/en/ Department of Computer and Systems Science] at Stockholm University, offering a great venue in the fabulous Aula Magna. <br />
<br />
=== Countdown Challenges -- Free Tickets to Win! ===<br />
<br />
There will be a challenge posted on the conference wiki page the 21st every month up until the event. The winner will get free entrance to the conference. What are you waiting for? Go to the Challenges tab and have fun! <br />
<br />
=== Organizing Committee ===<br />
<br />
• John Wilander, chapter leader Sweden (chair)<br> • Mattias Bergling (vice chair)<br> • Alan Davidson, Stockholm University/Royal Institute of Technology (co-host)<br> • Ulf Munkedal, chapter leader Denmark<br> • Kåre Presttun, chapter leader Norway<br> • Stefan Pettersson (sponsoring coordinator)<br> • Carl-Johan Bostorp (schedule and event coordinator)<br> • Martin Holst Swende (coffee/lunch/dinner)<br> • Michael Boman (conference guide/attendee pack)<br> • Predrag Mitrovic, OWASP Sweden Board<br> • Kate Hartmann, OWASP<br> • Sebastien Deleersnyder, OWASP Board <br />
<br />
'''Welcome to Stockholm this year!'''<br> Regards, John Wilander <br />
<br />
==== June 21-22 (Training) ====<br />
<br />
== Training Registration is open ==<br />
<br />
Application security training is given the first two days, '''June 21-22'''. The price is '''€990''' (~$1,350) for a two-day course. Take the chance to learn from the best! <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
=== Course 1: Threat Modeling and Architecture Review (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Pravir Chandra.jpg]] <br />
<br />
Pravir Chandra, Fortify Software <br />
<br />
'''Abstract''': Threat Modeling and Architecture Review are the cornerstones of a preventative approach to Application Security. By combining these topics into a single comprehensive course, attendees get a complete understanding of the threats an application faces and of how the application will handle those potential threats. This enables the risk to be accurately assessed and appropriate changes or mitigating controls to be recommended. <br />
<br />
'''Trainer Bio''': Pravir Chandra is Director of Strategic Services at Fortify where he works with clients to build and optimize software security assurance programs. Pravir is widely recognized in the industry for his expertise in software security and code analysis, and also for his ability to apply technical knowledge strategically from a business perspective. His book, Network Security with OpenSSL, is a popular reference on protecting software applications through cryptography and secure communications. His varied special project experience includes creating and leading the Open Software Assurance Maturity Model (OpenSAMM) project. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
=== Course 2: Introduction to Malware Analysis (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Jason Geffner.jpg]] <br />
<br />
Jason Geffner, Next Generation Security Software (NGS), and Scott Lambert, Microsoft <br />
<br />
'''Abstract''': Security researchers are facing a growing problem in the complexity of malicious executables. While dynamic black-box automation tools exist to discover what malware will do on a given execution, it is often important for an analyst to know the full capabilities of a given malware sample. What port does it listen on? What password does it expect for backdoor access? What files will it write to? What will it do tomorrow that it didn't do today? This class will focus on teaching attendees the steps required to understand the functionality of given malware samples. This is a hands-on course. Attendees will work on real-world malware through a series of lab exercises designed to build their expertise in understanding the analysis process. <br />
<br />
Learning Objectives: <br />
<br />
*An understanding of how to use reverse engineering tools <br />
*An understanding of low-level code and data flow <br />
*PE File format <br />
*x86 Assembly language <br />
*API functions often used by malware <br />
*Anti-analysis tricks and how to defeat them <br />
*Exploits and Shellcode <br />
*A methodology for analyzing malware with and without the use of specialized tools<br />
<br />
'''Trainer Bio''': Jason Geffner joined Next Generation Security Software Ltd. in June of 2007 as a Principal Security Consultant. Jason focuses on performing security reviews of source code and designs, reverse engineering software protection methods and DRM protection methods, deobfuscating and analyzing malware, penetration testing web applications and network infrastructures, and developing automated security analysis tools. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
=== Course 3: Building Secure Ajax and Web 2.0 Applications (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Dave Wichers.jpg]] <br />
<br />
Dave Wichers, Aspect Security <br />
<br />
'''Abstract''': Students gain hands-on testing experience with freely available web application security test tools to find and diagnose flaws and learn how to identify them in their own projects. Because finding flaws is worthless without effective communication, the course also covers how to document and communicate software security flaws effectively. In addition, Aspect’s engineers are leaders in the AppSec Community and will offer the students an amazing perspective. <br />
<br />
From the course outline:<br> CSS Attacks, Browser Add On Attacks, RSS / Data Feed Attacks, Microsoft Active X, Adobe Flash/Flex/AIR, Silverlight, Java FX, Ajax Mashups, Same Origin Policy, JavaScript, Web 2.0 CSRF Attacks, XHR JSON Forgery, Best Practice: Check HTTP Headers, Best Practice: Unique ID For XHR, JSON and XML Based XSS, How to use OWASP AntiSamy, Blended Threats, Dealing with Ajax Toolkits, Best Practice: Fuzzing ... <br />
<br />
'''Trainer Bio''': Dave Wichers is a member of the OWASP Board and a coauthor, along with Jeff Williams, of all previous versions of the OWASP Top Ten. Dave is also the Chief Operating Officer of Aspect Security, a company that specializes in application security services. Mr. Wichers brings over twenty years of experience in the information security field. Prior to cofounding Aspect, he ran the Application Security Services Group at a large data center company, Exodus Communications. His current work involves helping customers, from small e-commerce sites to Fortune 500 corporations and the U.S. Government, secure their applications by providing application security design, architecture, and SDLC support services: including code review, application penetration testing, security policy development, security consulting services, and developer training. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
=== Course 4: Assessing and Exploiting Web Apps with Samurai-WTF (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Justin Searle.jpg]] <br />
<br />
Justin Searle, InGuardians <br />
<br />
'''Abstract''': This course will focus on using open source tools to perform web application assessments. The course will take attendees through the process of application assessment using the open source tools included in the Samurai Web Testing Framework Live CD (Samurai-WTF). Day one will take students through the steps and open source tools used to assess applications for vulnerabilities. Day two will focus on the exploitation of web app vulnerabilities, spending half the day on server-side attacks and the other half on client-side attacks. The latest tools and techniques will be used throughout the course, including several tools developed by the trainers themselves. <br />
<br />
'''Trainer Bio''': Justin Searle, a Senior Security Analyst with InGuardians, specializes in web application, network, and embedded penetration testing. Justin has presented at top security conferences including DEFCON, ToorCon, ShmooCon, and SANS. Justin has an MBA in International Technology and is CISSP and SANS GIAC-certified in incident handling and hacker techniques (GCIH) and intrusion analysis (GCIA). Justin is one of the founders and lead developers of Samurai-WTF. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
=== Course 5: Securing Web Services (two days) ===<br />
<br />
[[Image:AppSec Research 2010 Jason Li.jpg]] <br />
<br />
Jason Li, Aspect Security <br />
<br />
'''Abstract''': Aspect Security offers a one- or two-day course titled Securing Web Services, designed to focus on the most important messages regarding the development of secure web services. The objective of this course is to ensure that developers understand the real risks associated with Service Oriented Architectures, what standards are available to help, and how to use those standards. The course includes a combination of lecture and demonstration designed to provide detailed guidance regarding the implementation of specific security principles and functions. <br />
<br />
'''Trainer Bio''': Jason Li is a Senior Application Security Engineer for Aspect Security where he performs application security assessments and architecture reviews, as well as application security training, to a wide variety of financial and government customers. Jason is an active OWASP leader, contributing to several OWASP projects and serving on the OWASP Global Projects Committee. He holds a Post-Masters certificate in Computer Science and concentration in Information Security from Johns Hopkins University and a Masters degree in Computer Science from Cornell University. <br />
<br />
'''--&gt; [http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Register here]''' <br />
<br />
==== June 23 ====<br />
<br />
{| border="0" align="center" style="width: 80%;"<br />
|-<br />
| align="center" colspan="4" style="background: none repeat scroll 0% 0% rgb(64, 88, 160); color: white;" | '''Conference Day 1 - June 23, 2010''' <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] = Research paper [[Image:OWASP AppSec Research 2010 Demo D.gif]] = Demo [[Image:OWASP AppSec Research 2010 Presentation P.gif]] = Presentation <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | <br> <br />
| style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | Track 1 <br />
| style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | Track 2 <br />
| style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | Track 3<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 08:00-08:50 <br />
| align="left" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(194, 194, 194);" | Registration and Coffee<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 08:50-09:00 <br />
| align="center" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(242, 242, 242);" | Welcome to OWASP AppSec Research 2010 Conference (John Wilander &amp; Dave Wichers)<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 09:00-10:00 <br />
| align="center" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(252, 252, 150);" | [[#Keynote: Cross-Domain Theft and the Future of Browser Security]] <br />
''Chris Evans, Information Security Engineer, and Ian Fette, Product Manager for Chrome Security, Google'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 10:10-10:45 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Research R.gif]] [[#BitFlip: Determine a Data's Signature Coverage from Within the Application]] <br />
''Henrich Christopher Poehls, University of Passau''<br> <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#CsFire: Browser-Enforced Mitigation Against CSRF]] <br />
''Lieven&nbsp;Desmet&nbsp;and&nbsp;Philippe&nbsp;De&nbsp;Ryck,&nbsp;Katholieke Universiteit Leuven''<br> <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Deconstructing ColdFusion]] <br />
''Chris Eng,&nbsp;Veracode'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 10:45-11:10 <br />
| align="left" colspan="3" style="width: 90%; background: none repeat scroll 0% 0% rgb(194, 194, 194);" | Break - Expo - CTF kick-off, '''Coffee break sponsoring position open''' ($2,000)<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 11:10-11:45 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Research R.gif]] [[#Towards Building Secure Web Mashups]] <br />
''M Decat, P De Ryck, L Desmet, F Piessens, W Joosen,&nbsp;Katholieke Universiteit Leuven'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Automated vs. Manual Security: You Can't Filter "The Stupid"]]<br> <br />
''David Byrne and Charles Henderson, Trustwave'' <br />
<br />
<br> <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#How to Render SSL Useless]] <br />
''Ivan Ristic, Feisty Duck<br>'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 11:55-12:30 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Enterprise Security Patterns for RESTful Web Services]] <br />
<br />
''Francois Lascelles,&nbsp;Layer 7 Technologies''<br> <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Web Frameworks and How They Kill Traditional Security Scanning]] <br />
''Christian Hang and Lars Andren,&nbsp;Armorize Technologies'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#The State of SSL in the World]] <br />
''Michael Boman, Omegapoint<br>'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 12:30-13:45 <br />
| align="left" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(194, 194, 194);" | Lunch - Expo - CTF, Lunch sponsor: [[Image:OWASP AppSec Research 2010 IIS logo for program.png]]<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 13:45-14:20 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Securing Web Applications with ESAPI]] <br />
''Ken Sipe,&nbsp;Perficient'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Beyond the Same-Origin Policy]] <br />
''Jasvir Nagra and Mike Samuel, Google<br>'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#SmashFileFuzzer - a New File Fuzzer Tool]] <br />
''Komal Randive, Symantec'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 14:30-15:05 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Security Toolbox for .NET Development and Testing]] <br />
''Johan Lindfors and Dag König, Microsoft'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Cross-Site Location Jacking (XSLJ) (not really)]] <br />
''David Lindsay, Cigital<br>Eduardo Vela Nava,&nbsp;sla.ckers.org''<br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Owning Oracle: Sessions and Credentials]] <br />
''Wendel G. Henrique and Steve Ocepek, Trustwave'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 15:05-15:30 <br />
| align="left" colspan="3" style="width: 80%; background: none repeat scroll 0% 0% rgb(194, 194, 194);" | Break - Expo - CTF, '''Coffee break sponsoring position open''' ($2,000)<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 15:30-16:05 <br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 133, 122);" | [[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Value Objects a la Domain-Driven Security: A Design Mindset to Avoid SQL Injection and Cross-Site Scripting]] <br />
''Dan Bergh Johnsson, Omegapoint'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(188, 165, 122);" | [[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#New Insights into Clickjacking]] <br />
''Marco Balduzzi,&nbsp;Eurecom<br><br>'' <br />
<br />
| align="left" style="width: 30%; background: none repeat scroll 0% 0% rgb(153, 255, 153);" | [[Image:OWASP AppSec Research 2010 Research R.gif]] [[#Session Fixation - the Forgotten Vulnerability?]] <br />
''Michael Schrank and Bastian Braun, University of Passau<br>Martin Johns, SAP Research'' <br />
<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 16:15-17:00 <br />
| align="center" colspan="3" style="width: 90%; background: none repeat scroll 0% 0% rgb(242, 242, 242);" | Panel Discussion: To Be Announced<br />
|-<br />
| style="width: 10%; background: none repeat scroll 0% 0% rgb(123, 138, 189);" | 19:00-23:00 <br />
| align="center" colspan="1" style="background: none repeat scroll 0% 0% rgb(43, 58, 109);" | [[Image:OWASP_AppSec_Research_2010_Stockholm_City_Hall_exterior_small.jpg|Stockholm City Hall, photo by Yanan Li]]<br />
| align="center" colspan="1" style="background: none repeat scroll 0% 0% rgb(43, 58, 109); color: white;" | '''Gala Dinner''' at [http://international.stockholm.se/Tourism-and-history/The-Famous-City-Hall/Pictures-of-the-City-Hall/ <span style="color:rgb(163, 178, 229);">Stockholm City Hall</span>]<br>Sponsored by<br>[[Image:OWASP AppSec Research 2010 Google logo for program.png]] <br />
| align="center" colspan="1" style="background: none repeat scroll 0% 0% rgb(43, 58, 109);" | [[Image:OWASP_AppSec_Research_2010_Stockholm_City_Hall_Golden_Hall_small.jpg|The Golden Hall, photo by Yanan Li]]<br />
|}<br />
<center><br />
[[Image:AppSec Research 2010 Microsoft diamond sponsor.jpg|250px|Microsoft - Diamond Sponsor]] [[Image:AppSec Research 2010 Google 20k sponsor.jpg|150px|Google - Dinner Party and Expo Sponsor]] [[Image:Portwise logo.png|130px|PortWise - Gold and Badge Sponsor]] [[Image:Cybercom logo.png|100px|Cybercom - Gold Sponsor]] [[Image:Fortify logo AppSec Research 2010.png|120px|Fortify - Gold Sponsor]] [[Image:Omegapoint logo.png|110px|Omegapoint - Gold Sponsor]] [[Image:Mnemonic logo.png|100px|Mnemonic - Silver Sponsor]] [[Image:AppSec Research 2010 sponsor Nixu logo.jpg|100px|NIXU - Silver Sponsor]] [[Image:Hps_logo.png|120px|High Performance Systems - Silver Sponsor]] [[Image:IIS logo.png|100px|Stiftelsen för Internetinfrastruktur - Lunch Sponsor]] [[Image:MyNethouse logo.png|100px|MyNethouse - Coffee Break Sponsor]] [[Image:AppSec Research 2010 Help Net Security sponsor.jpg|100px|Help Net Security - Media Sponsor]] <br />
</center> <br />
== Keynote: Cross-Domain Theft and the Future of Browser Security ==<br />
<br />
[[Image:Appsec research 2010 invited talk 1.jpg]] <br />
<br />
'''Chris Evans'''<br> Troublemaker, Information Security Engineer, and Tech Lead at Google Inc.<br> Also the sole author of vsftpd. <br />
<br />
'''Ian Fette'''<br> Product Manager for Chrome Security and Google's Anti-Malware initiative <br />
<br />
'''Abstract'''<br> The web browser, and its associated machinery, is on the front line of attacks. We will first look at design-level problems with the traditional browser in terms of its monolithic architecture and fundamental problems with the same-origin policy. We will then look at the types of solutions that are starting to appear in browsers such as Google Chrome and Internet Explorer, as well as other important browser-based defenses such as Safe Browsing. We will detail what a future browser might look like that has a much more secure design but is still usable on the wide variety of web sites that people use daily. <br />
<br />
== DAY 1, TRACK 1 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] BitFlip: Determine a Data's Signature Coverage from Within the Application ===<br />
<br />
''Henrich Christopher Poehls, University of Passau - ISL'' <br />
<br />
Even when cryptographic primitives are applied, applications often work on data that was not actually protected by them. By abstracting the message flow between the application and the underlying wire, we show that protection is applied to a different data model. Taking problems from real life, such as XML wrapping attacks and digital signatures on XML, we show that establishing the right linkage between the security checked on lower levels and the application above is difficult in practice. We propose an application-controlled check, the BitFlip test. With this simple test an application can check whether its assumed protection of a data value was indeed provided by the digital signature applied to the message that contained the value. <br />
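The core idea can be sketched in a few lines. The sketch below is illustrative only (it uses an HMAC in place of the paper's digital signatures, and the helper names are invented): flip one bit inside the field and see whether verification still succeeds.

```python
import hashlib
import hmac

def mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def field_is_covered(key: bytes, message: bytes, offset: int,
                     tag: bytes, covered_len: int) -> bool:
    """BitFlip test: flip one bit inside the field at `offset`. If the
    tag still verifies, the field was never protected by it."""
    flipped = bytearray(message)
    flipped[offset] ^= 0x01
    still_valid = hmac.compare_digest(mac(key, bytes(flipped)[:covered_len]), tag)
    return not still_valid

key = b"secret"
msg = b"user=alice&role=admin"
tag = mac(key, msg[:10])                          # the tag only covers the first 10 bytes
print(field_is_covered(key, msg, 5, tag, 10))     # True: byte 5 is inside coverage
print(field_is_covered(key, msg, 15, tag, 10))    # False: byte 15 is outside coverage
```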
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] Towards Building Secure Web Mashups ===<br />
<br />
''Maarten Decat, Philippe De Ryck, Lieven Desmet, Frank Piessens, and Wouter Joosen, Katholieke Universiteit Leuven'' <br />
<br />
Web mashups combine components from multiple sources into a single, interactive application. This kind of setup typically requires both interaction between the components to achieve the necessary functionality and separation of the components to achieve secure execution. Unfortunately, the traditional web is not designed to easily fulfill both requirements, which can be seen in the restrictions imposed by traditional development techniques. This paper gives an overview of these traditional techniques and investigates new developments, specifically aimed at combining components in a secure manner. In addition, topics for further improvement are identified to ensure wide adoption of secure mashups. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Enterprise Security Patterns for RESTful Web Services ===<br />
<br />
''Francois Lascelles, Layer 7 Technologies'' <br />
<br />
This presentation discusses security mechanisms for RESTful Web services in cloud and enterprise deployments. Understand the relationship between REST principles and security for RESTful Web services. Learn about current practices involving SSL, HMAC authentication schemes, OAuth, SAML, and perimeter security patterns involving specialized infrastructure. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Securing Web Applications with ESAPI ===<br />
<br />
''Ken Sipe, Perficient'' <br />
<br />
When it comes to cross-cutting software concerns, we expect to have or build a common framework or utility to solve the problem. This concept is represented well in the Java world by the log4j framework, which abstracts the concern of logging: what is logged, where it is logged, and how logging is managed. The one cross-cutting concern which, for most applications, is still handled piecemeal is security. Security concerns include certificate generation, SSL, protection from SQL injection, protection from XSS, and user authorization and authentication. Each of these separate concerns tends to have its own standards and libraries, leaving it as an exercise for the development team to cobble together a solution that covers multiple needs... until now: the Enterprise Security API (ESAPI) library from OWASP. <br />
<br />
This session will look at a number of security concerns and how the ESAPI library provides a unified solution. This includes authorization, authentication of services, encoding, encryption, and validation. The session will also discuss a number of issues which can be solved by standardizing on the open source Enterprise Security API. <br />
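To illustrate the "one library for many concerns" idea, here is a minimal, hypothetical facade in Python (not ESAPI's actual API, which is Java, and all names here are invented): validation and output encoding live behind a single entry point instead of being scattered across utility classes.

```python
import html
import re

class Security:
    """Toy unified security facade in the spirit of ESAPI (names invented)."""
    _USERNAME = re.compile(r"^[A-Za-z0-9_]{3,20}$")

    @staticmethod
    def get_valid_username(value: str) -> str:
        # Validation: reject anything outside the whitelisted pattern.
        if not Security._USERNAME.fullmatch(value):
            raise ValueError(f"invalid username: {value!r}")
        return value

    @staticmethod
    def encode_for_html(value: str) -> str:
        # Output encoding: neutralize markup before it reaches a page.
        return html.escape(value, quote=True)

print(Security.encode_for_html("<script>alert(1)</script>"))
```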
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Security Toolbox for .NET Development and Testing ===<br />
<br />
''Johan Lindfors and Dag König, Microsoft'' <br />
<br />
Being a developer on the Microsoft platform leveraging .NET doesn’t only involve keeping up with the continuous development of the underlying framework and technologies. It also means staying on top of the latest security threats and, naturally, the available mitigations and best practices to protect the customers and users of the applications and solutions being developed. <br />
<br />
In this session we will demonstrate how you as a .NET developer can leverage existing tools and technologies to build safer applications. During the demonstrations you will become more familiar with the existing tools within Visual Studio, but you will also be introduced to additional tools that will help you build a toolbox for secure development and security testing. <br />
<br />
But one must also remember that tools will never replace knowledge, so we will also show how you can regularly get the latest security information from Microsoft, including how to leverage the SDL (Security Development Lifecycle) within your own projects. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Value Objects a la Domain-Driven Security: A Design Mindset to Avoid SQL Injection and Cross-Site Scripting ===<br />
<br />
''Dan Bergh Johnsson, Omegapoint'' <br />
<br />
SQL Injection and Cross-Site Scripting have topped the OWASP Top Ten for the last several years. It must be a top priority for the community to evolve designs and mindsets that help programmers avoid these traps in their day-to-day work, where so much besides security calls for their attention. The ambition of this presentation is to take design and coding practices that are well established in other fields of software development and put them to use to avoid the just-mentioned traps. We also show some small refactorings that can be immediately applied to an existing codebase to make significant improvements to its security. Attendees of the session should be able to go back to work Monday morning and finish an improvement in this style before Monday lunch. <br />
<br />
We take inspiration from Domain-Driven Design (DDD), which is characterized by its focus on what the software intends to represent. In particular, we make heavy use of the Value Object design pattern, where strict typing helps us enforce that incoming data is truthful to the restrictions of the domain. We start out with injection flaws and use the canonical username SQL injection attack (“' OR 1=1 --”) as an example. Realizing that this string was never intended as a valid username, we elaborate the model to reflect this. Furthermore, we make the change explicit in the code by introducing a new type and class, Username. This also gives a natural place to put validation code, which is otherwise often placed in utility classes where it is easily forgotten and seldom called. In fact, we can even design service methods to require a validated Username, thus using strong typing to enforce validation in the calling client system tier. <br />
<br />
Making this re-design with its associated code changes is performed as a demo, and en route we discuss other design options and their relative merits and drawbacks. Again using DDD, we proceed to analyse XSS. In the same way we see that XSS is, in the general case, not an input validation problem. An extended analysis proposes that it can be phrased as an output-encoding problem. Using a similar technique we model the target domain of web content as a new type, HTMLString, and can thereby enforce conversion from ordinary strings to strings with the proper encoding. If you have multiple content channels, each channel gets its own corresponding type. <br />
<br />
All steps needed are shown in code, starting with a vulnerable application and, through controlled refactoring steps, ending up with a version without the vulnerability. In summary, we will take an established quality practice from another field of software development and use it to get security improvements. The main benefits are twofold: first, the method gently guides and reminds programmers to include validation and encoding in an unobtrusive way. Second, the work can be performed in very small steps, the first of which can be finished before lunch on the Monday after the conference. <br />
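As a rough illustration of the pattern described above (in Python rather than the talk's own code, with invented names), a value object makes invalid data unrepresentable and gives validation a natural home:

```python
import html
import re

class Username:
    """Value object: an instance can only exist if validation passed."""
    _PATTERN = re.compile(r"^[A-Za-z0-9_]{3,20}$")

    def __init__(self, raw: str):
        if not self._PATTERN.fullmatch(raw):
            raise ValueError(f"not a valid username: {raw!r}")
        self.value = raw

class HTMLString:
    """Output-side value object: only constructible through encoding."""
    def __init__(self, encoded: str):
        self.value = encoded

    @classmethod
    def from_text(cls, text: str) -> "HTMLString":
        return cls(html.escape(text, quote=True))

def find_user(name: Username) -> str:
    # The signature forces callers to validate before they can even call us.
    return f"SELECT * FROM users WHERE name = '{name.value}'"

# The canonical injection payload never becomes a Username:
try:
    Username("' OR 1=1 --")
except ValueError:
    print("rejected at the domain boundary")
```

Note how the validation lives in exactly one place, and the type system, not programmer discipline, guarantees it has run.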
<br />
== DAY 1, TRACK 2 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] CsFire: Browser-Enforced Mitigation Against CSRF ===<br />
<br />
''Lieven Desmet and Philippe De Ryck, Katholieke Universiteit Leuven'' <br />
<br />
Cross-Site Request Forgery (CSRF) is a web application attack vector that can be leveraged by an attacker to force an unwitting user's browser to perform actions on a third party website, possibly reusing all cached authentication credentials of that user. <br />
<br />
Currently, a whole range of techniques exist to mitigate CSRF, either by protecting the server application or by protecting the end-user. Unfortunately, the server-side protection mechanisms are not yet widely adopted, and the client-side solutions provide only limited protection or cannot deal with complex web 2.0 applications, which use techniques such as AJAX, mashups or single sign-on (SSO). <br />
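For context, the most common server-side protection referred to above is a per-session token that a cross-site attacker cannot read or forge. A minimal sketch (helper names invented):

```python
import hashlib
import hmac
import secrets

SECRET = secrets.token_bytes(32)   # server-side key, never sent to clients

def issue_csrf_token(session_id: str) -> str:
    # Bind the token to the session so it cannot be replayed across users.
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf_token(session_id: str, submitted: str) -> bool:
    # Constant-time comparison of the recomputed token with the form value.
    return hmac.compare_digest(issue_csrf_token(session_id), submitted)

sid = "session-abc123"
token = issue_csrf_token(sid)          # embedded in every state-changing form
print(check_csrf_token(sid, token))    # True only for the genuine token
```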
<br />
In this talk, we will present three interesting results of our research: (1) an extensive, real-world traffic analysis to gain more insight into cross-domain web interactions, (2) requirements for client-side mitigation against CSRF and an analysis of existing browser extensions, and (3) CsFire, our newly developed Firefox extension to mitigate CSRF. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Automated vs. Manual Security: You Can't Filter "The Stupid" ===<br />
<br />
''David Byrne and Charles Henderson, Trustwave'' <br />
<br />
Everyone wants to stretch their security budget, and automated application security tools are an appealing choice for doing so. However, manual security testing isn’t going anywhere until the HAL application scanner comes online. This presentation will use often humorous, real-world examples to illustrate the relative strengths and weaknesses of automated solutions and manual techniques. <br />
<br />
Automated tools certainly have some strengths (namely low incremental cost, detecting simple vulnerabilities, and performing highly repetitive tasks). In addition to preventing some attacks, WAFs also have advantages for some compliance frameworks. However, automated solutions are far from perfect. To begin with, there are entire classes of very important vulnerabilities that are theoretically impossible for automated software to detect (at least until HAL comes online). Examples include complex information leakage, race conditions, logic flaws, design flaws, subjective vulnerabilities such as CSRF, and multistage process attacks. <br />
<br />
Beyond that, there are many vulnerabilities that are too complicated or obscure to practically detect with an automated tool. Automated tools are designed to cover common application designs and platforms. Applications using an unusual layout or components will not be thoroughly protected by automated tools. Realistically, only the most vanilla of web applications written on common, simple platforms will receive solid code coverage from an automated tool. <br />
<br />
On the other hand, manual testing is far more versatile. An experienced penetration tester can identify complicated vulnerabilities in the same way that an attacker does. Specific, real-world examples of vulnerabilities only recognizable by humans will be provided. The diversity of vulnerabilities shown will clearly demonstrate that all applications have the potential for significant vulnerabilities not detectable by automated tools. <br />
<br />
Manual source code reviews present even more benefits by identifying vulnerabilities that require access to source code. Examples include “hidden” or unused application components, SQL injection with no evidence in the response, exotic injection attacks (e.g. mainframe session attacks), vulnerabilities in back-end systems, and intentional backdoors. Many organizations assume that this type of vulnerability is not a large threat, but source code can be obtained by disgruntled developers, by internal attackers when the repository isn’t properly secured, by exploiting platform bugs or path directory traversal attacks, and by external attackers using a Trojan horse or similar technique. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Web Frameworks and How They Kill Traditional Security Scanning ===<br />
<br />
''Christian Hang and Lars Andren, Armorize Technologies'' <br />
<br />
Modern web application frameworks present a challenge to static analysis technologies because they influence application behavior in ways that are not obvious from the source code. This prevents efficient security scanning and can cause up to 80% of potential issues to remain undetected due to incorrect framework handling. After explaining the underlying problems, we demonstrate, in a real-world walkthrough, how code analysis scans actual application code. By extending static analysis with new framework-specific components, even applications using complex frameworks like Struts and Smarty can be inspected automatically, and the code coverage of security analysis can be greatly enhanced. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Beyond the Same-Origin Policy ===<br />
<br />
''Jasvir Nagra and Mike Samuel, Google Inc'' <br />
<br />
The same-origin policy has governed interaction between client-side code and user data since Netscape 2.0, but new development techniques are rendering it obsolete. Traditionally, a website consisted of server-side code written by trusted, in-house developers, and a minimum of client-side code written by the same in-house developers. The same-origin policy worked because it didn't matter whether code ran server-side or client-side; the user was interacting with code produced by the same organization. But today, complex applications are being written almost entirely in client-side code, requiring developers to specialize and share code across organizational boundaries. <br />
<br />
This talk will explain how the same-origin policy is breaking down, give examples of attacks, discuss the properties that any alternative must have, introduce a number of alternative models being examined by the Secure EcmaScript committee and other standards bodies, demonstrate how they do or don't thwart these attacks, and discuss how secure interactive documents could open up new markets for web developers. We assume a basic familiarity with web application protocols (HTTP, HTML, JavaScript, CSS) and common classes of attacks (XSS, XSRF, phishing). <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Cross-Site Location Jacking (XSLJ) (not really) ===<br />
<br />
''David Lindsay, Cigital Inc, and Eduardo Vela Nava, sla.ckers.org'' <br />
<br />
Redirects are commonly used on many websites and are an integral part of many web frameworks. However, subtle and not so subtle issues can lead to security holes and privacy issues. In this presentation, we will discuss several high and low level issues related to redirects and demonstrate how the issues can be exploited. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] New Insights into Clickjacking ===<br />
<br />
''Marco Balduzzi, Eurecom'' <br />
<br />
Over the past year, clickjacking has received extensive media coverage. News portals and security forums have been overloaded with posts claiming clickjacking to be the upcoming security threat. In a clickjacking attack, a malicious page is constructed (or a benign page is hijacked) to trick the user into performing unintended clicks that are advantageous for the attacker, such as propagating a web worm, stealing confidential information or abusing the user's session. In this talk, we formally define the problem and introduce our novel solution for automated detection of clickjacking attacks. We present the details of the system architecture and its implementation, and we evaluate the results we obtained from the analysis of over a million unique Internet pages. We conclude by discussing the clickjacking phenomenon and its future implications. <br />
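While the talk focuses on detection, the widely deployed server-side defense is a framing-control response header. A minimal sketch as WSGI middleware (all names invented):

```python
def deny_framing(app):
    """WSGI middleware that adds X-Frame-Options: DENY, asking browsers
    not to render the response in a frame, so the hidden-iframe overlay
    that clickjacking relies on never appears."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            headers = [h for h in headers if h[0].lower() != "x-frame-options"]
            headers.append(("X-Frame-Options", "DENY"))
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped

def page(environ, start_response):
    # A page with a sensitive, one-click action worth protecting.
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html>sensitive button here</html>"]

protected = deny_framing(page)
```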
<br />
== DAY 1, TRACK 3 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Deconstructing ColdFusion ===<br />
<br />
''Chris Eng, Veracode'' <br />
<br />
This presentation is a technical survey of ColdFusion security, which will be of interest mostly to code auditors and penetration testers. We’ll cover the basics of ColdFusion markup, control flow, functions, and components and demonstrate how to identify common web application vulnerabilities at the source code level. We’ll also delve into ColdFusion J2EE internals, describing some of the unexpected properties we’ve observed while decompiling ColdFusion applications for static analysis. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] How to Render SSL Useless ===<br />
<br />
''Ivan Ristic, Feisty Duck'' <br />
<br />
SSL is the technology that secures the Internet, but it is effective only when deployed properly. While the SSL protocol itself is very robust and easy to use, the same cannot be said for the usability of the complete ecosystem, which includes server configuration, certificates and application implementation details. In fact, SSL deployment is generally plagued with traps at every step of the way. As a result, too many web sites use insecure deployment practices that render SSL completely useless. In this talk I will present a list of top ten (or thereabout) deployment mistakes, based on my work on the SSL Labs assessment platform (https://www.ssllabs.com). <br />
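Several of the classic mistakes (legacy protocol versions, weak defaults) can be avoided at the configuration layer. As a rough illustration using Python's standard `ssl` module (a sketch, not taken from the talk):

```python
import ssl

def hardened_server_context() -> ssl.SSLContext:
    """Build a server-side TLS context that refuses legacy protocols.
    Certificates would be loaded separately with load_cert_chain()."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # no SSLv3 / TLS 1.0 / 1.1
    return ctx

ctx = hardened_server_context()
print(ctx.minimum_version)
```

The point is the principle the deployment-mistake list warns about: make the protocol floor an explicit decision rather than inheriting whatever the library defaults to.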
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] The State of SSL in the World ===<br />
<br />
''Michael Boman, Omegapoint'' <br />
<br />
What is the status of SSL deployments at Fortune 500 companies and the top 10,000 websites (according to Alexa)? While developing a tool needed to perform the test case OWASP-CM-001 (Testing for SSL-TLS), it was noticed that some sites had very good SSL configurations, sometimes unexpectedly, while other sites had very poor configurations, even when you would expect the site to maintain a good security standard. Does the organization behind a site have any bearing on how good its security standard is with regard to HTTPS support and configuration? The talk will highlight the findings as well as the tools and process used to obtain the underlying data, while also trying to answer the following questions: <br />
<br />
*How many of the Fortune 500 and top 10,000 websites offer an HTTPS-enabled browsing experience to their visitors? <br />
*How are the HTTPS servers configured with regard to the SSL protocols offered, key exchange, and key lengths (bit size)? <br />
*Is there any correlation between company size, industry, or popularity and the HTTPS-enabled browsing experience and configuration?<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] SmashFileFuzzer - a New File Fuzzer Tool ===<br />
<br />
''Komal Randive, Symantec'' <br />
<br />
SmashFileFuzzer is a tool designed and developed to address this problem with ease. SmashFileFuzzer understands file formats, and the user can then specify which fields in the file should be fuzzed. SmashFileFuzzer acts on a sample file of the required format and generates multiple fuzzed copies from this sample file. It also supports adding more custom file formats in order to fuzz them, especially .dat formats. In comparison with existing file fuzzers and frameworks, this fuzzer has a simple language for adding new formats, many more modes of fuzzing, and attack-oriented fuzzing. The following are the highlights of this fuzzer: <br />
<br />
*Support to understand the file formats and fuzz specific fields with specified/random data <br />
*Understands the correlation between different fields and manipulates them in accordance with the fuzzed content. <br />
*Can generate valid fuzzed files even based on the partial format understanding. Only the portions of file format which are understood by the user can be used to generate valid fuzzed files. <br />
*Understands custom formats for file types and also for configuration files (e.g. key-value pair or .dat formats) <br />
*Tool is designed to be easily extended for any new file formats <br />
*Fuzz strings are read from a dictionary file. Users can add application-specific input strings to this dictionary for testing. <br />
*It is a Unix shell-based tool which can be easily scripted.<br />
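The field-targeted fuzzing idea can be shown in miniature. The sketch below (an invented helper, not SmashFileFuzzer itself) flips random bits only inside a chosen field of a sample file, leaving the rest of the format intact:

```python
import random

def fuzz_field(data: bytes, start: int, end: int,
               n_copies: int, seed: int = 0) -> list:
    """Generate fuzzed copies of `data`, flipping one random bit per copy,
    only inside the field [start, end) -- the header stays untouched."""
    rng = random.Random(seed)   # seeded for reproducible test runs
    copies = []
    for _ in range(n_copies):
        buf = bytearray(data)
        pos = rng.randrange(start, end)
        buf[pos] ^= 1 << rng.randrange(8)
        copies.append(bytes(buf))
    return copies

sample = b"HDR1|payload-bytes-go-here"
for copy in fuzz_field(sample, 5, len(sample), 3):
    print(copy)
```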
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Owning Oracle: Sessions and Credentials ===<br />
<br />
''Wendel G. Henrique and Steve Ocepek, Trustwave'' <br />
<br />
In a world of free, ever-present encryption libraries, many penetration testers still find a lot of great stuff on the wire. Database traffic is a common favorite, and with good reason: when the data includes PAN, Track, and CVV, it makes you stop and wonder why this stuff isn’t encrypted across the board. However, despite this weakness, we still need someone to issue queries before we see the data. Or maybe not… after all, it’s just plaintext. <br />
<br />
Wendel G. Henrique and Steve Ocepek of Trustwave’s SpiderLabs division offer a closer look at the world’s most popular relational database: Oracle. Through a combination of downgrade attacks and session take-over exploits, this talk introduces a unique approach to database account hijacking. Using a new tool, thicknet, the team will demonstrate how deadly injection and downgrade attacks can be to database security. <br />
<br />
The Oracle TNS/Net8 protocol was studied extensively during preparation for this talk. Very little public knowledge of this protocol exists today, and much of the data gained is, as far as we know, new to Oracle outsiders. <br />
<br />
Also, during the presentation we will be offering attendees: <br />
<br />
*Knowledge about man-in-the-middle and downgrade attacks, especially the area of data injection. <br />
*A better understanding of the network protocol used by Oracle. <br />
*The ability to audit databases against this type of attack vector. <br />
*Ideas for how to prevent this type of attack, and an understanding of the value of encryption and digital signature technologies. <br />
*Understanding of methodologies used to reverse-engineer undocumented protocols.<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] Session Fixation - the Forgotten Vulnerability? ===<br />
<br />
''Michael Schrank and Bastian Braun, University of Passau, and Martin Johns, SAP Research'' <br />
<br />
The term 'Session Fixation vulnerability' subsumes issues in Web applications that under certain circumstances enable an adversary to perform a session hijacking attack by controlling the victim's session identifier value. We explore this vulnerability pattern. First, we give an analysis of the root causes and document existing attack vectors. Then we take steps to assess the current attack surface of Session Fixation. Finally, we present a transparent server-side method for mitigating vulnerabilities. <br />
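The standard server-side mitigation is to renew the session identifier whenever privileges change, so an identifier fixated by an attacker before login is worthless afterwards. A minimal sketch (names invented, not the paper's implementation):

```python
import secrets

SESSIONS = {}   # session id -> session data

def new_session() -> str:
    sid = secrets.token_urlsafe(32)   # unguessable identifier
    SESSIONS[sid] = {}
    return sid

def renew_session(old_sid: str) -> str:
    """Called at login: move the data to a fresh identifier and
    invalidate the old one, killing any attacker-fixated id."""
    data = SESSIONS.pop(old_sid)
    new_sid = secrets.token_urlsafe(32)
    SESSIONS[new_sid] = data
    return new_sid

sid = new_session()
SESSIONS[sid]["cart"] = ["book"]
sid = renew_session(sid)          # the pre-login identifier is now invalid
```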
<br />
==== June 24 ====<br />
<br />
{| style="width:80%" border="0" align="center"<br />
|-<br />
| colspan="4" align="center" style="background:#4058A0; color:white" | '''Conference Day 2 - June 24, 2010''' <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] = Research paper [[Image:OWASP AppSec Research 2010 Demo D.gif]] = Demo [[Image:OWASP AppSec Research 2010 Presentation P.gif]] = Presentation <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | <br />
| style="width:30%; background:#BC857A" | Track 1 <br />
| style="width:30%; background:#BCA57A" | Track 2 <br />
| style="width:30%; background:#99FF99" | Track 3<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 09:00-10:00 <br />
| colspan="3" style="width:80%; background:rgb(252, 252, 150)" align="center" | [[#Keynote: The Security Development Lifecycle - The Creation and Evolution of a Security Development Process]]<br>''Steve Lipner, Senior Director of Security Engineering Strategy, Microsoft Corporation''<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 10:10-10:45 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Building Security In Maturity Model: A Review of Successful Software Security Programs in Europe]] <br />
<br />
''Gabriele Giuseppini, Cigital'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Demo D.gif]] [[#Promon TestSuite: Client-Based Penetration Testing Tool]] <br />
<br />
''Folker den Braber and Tom Lysemose Hansen, Promon'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#A Taint Mode for Python via a Library]] <br />
<br />
''Juan José Conti, Universidad Tecnológica Nacional<br>Alejandro Russo, Chalmers Univ. of Technology'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 10:45-11:10 <br />
| colspan="3" style="width:90%; background:#C2C2C2" align="left" | Break - Expo - CTF, Coffee sponsor: [[Image:OWASP AppSec Research 2010 MyNethouse logo for program.png]]<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 11:10-11:45 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Microsoft's Security Development Lifecycle for Agile Development]] <br />
<br />
''Nick Coblentz, OWASP Kansas City Chapter and AT&T Consulting'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Detecting and Protecting Your Users from 100% of all Malware - How?]] <br />
<br />
''Bradley Anstis and Ellynora Nicoll, M86 Security'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#OPA: Language Support for a Sane, Safe and Secure Web]] <br />
<br />
''David Rajchenbach-Teller and François-Régis Sinot, MLstate'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 11:55-12:30 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Secure Application Development for the Enterprise: Practical, Real-World Tips]] <br />
<br />
''Michael Craigue, Dell'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Responsibility for the Harm and Risk of Software Security Flaws]] <br />
<br />
''Cassio Goldschmidt, Symantec'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#Secure the Clones: Static Enforcement of Policies for Secure Object Copying]] <br />
<br />
''Thomas Jensen and David Pichardie, INRIA Rennes - Bretagne Atlantique'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 12:30-13:45 <br />
| colspan="3" style="width:80%; background:#C2C2C2" align="left" | Lunch - Expo - CTF, '''Lunch break sponsoring position open''' ($4,000)<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 13:45-14:20 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Product Security Management in Agile Product Management]] <br />
<br />
''Antti Vähä-Sipilä, Nokia'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Hacking by Numbers]] <br />
<br />
''Tom Brennan, WhiteHat Security and OWASP Foundation<br>'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#Safe Wrappers and Sane Policies for Self Protecting JavaScript]] <br />
<br />
''Jonas Magazinius, Phu H. Phung, and David Sands, Chalmers Univ. of Technology'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 14:30-15:05 <br />
| style="width:30%; background:#BC857A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#OWASP_Top_10_2010]] <br />
<br />
''Dave Wichers, Aspect Security and OWASP Foundation<br>'' <br />
<br />
| style="width:30%; background:#BCA57A" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Presentation P.gif]] [[#Application Security Scoreboard in the Sky]] <br />
<br />
''Chris Eng, Veracode'' <br />
<br />
| style="width:30%; background:#99FF99" align="left" | <br />
[[Image:OWASP AppSec Research 2010 Research R.gif]] [[#On the Privacy of File Sharing Services]] <br />
<br />
''N Nikiforakis, F Gadaleta, Y Younan, and W Joosen, Katholieke Universiteit Leuven'' <br />
<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 15:05-15:30 <br />
| colspan="3" style="width:80%; background:#C2C2C2" align="left" | Break - Expo - CTF, '''Coffee break sponsoring position open''' ($2,000)<br />
|-<br />
| style="width:10%; background:#7B8ABD" | 15:30-16:00 <br />
| colspan="3" style="width:90%; background:#F2F2F2" align="center" | CTF Prize Ceremony, Announcement of OWASP AppSec EU 2011, Closing Notes<br />
|}<br />
<center><br />
[[Image:AppSec Research 2010 Microsoft diamond sponsor.jpg|250px|Microsoft - Diamond Sponsor]] [[Image:AppSec Research 2010 Google 20k sponsor.jpg|150px|Google - Dinner Party and Expo Sponsor]] [[Image:Portwise logo.png|130px|PortWise - Gold and Badge Sponsor]] [[Image:Cybercom logo.png|100px|Cybercom - Gold Sponsor]] [[Image:Fortify logo AppSec Research 2010.png|120px|Fortify - Gold Sponsor]] [[Image:Omegapoint logo.png|110px|Omegapoint - Gold Sponsor]] [[Image:Mnemonic logo.png|100px|Mnemonic - Silver Sponsor]] [[Image:AppSec Research 2010 sponsor Nixu logo.jpg|100px|NIXU - Silver Sponsor]] [[Image:Hps_logo.png|120px|High Performance Systems - Silver Sponsor]] [[Image:IIS logo.png|100px|Stiftelsen för Internetinfrastruktur - Lunch Sponsor]] [[Image:MyNethouse logo.png|100px|MyNethouse - Coffee Break Sponsor]] [[Image:AppSec Research 2010 Help Net Security sponsor.jpg|100px|Help Net Security - Media Sponsor]] <br />
</center> <br />
== Keynote: The Security Development Lifecycle - The Creation and Evolution of a Security Development Process ==<br />
<br />
[[Image:Appsec research 2010 invited talk 2.jpg]] <br />
<br />
'''Steve Lipner'''<br> Senior Director of Security Engineering Strategy, Trustworthy Computing Security, Microsoft Corporation.<br> Co-author of "The Security Development Lifecycle", Microsoft Press (book cover above). <br />
<br />
'''Abstract'''<br> This keynote will review the evolution of the Security Development Lifecycle (SDL) from its origins in the Microsoft “security pushes” of 2002-3 through its current status and application in 2010. It will emphasize the aspects of change and change management as the SDL and its user community have matured and grown and will conclude with a summary of some recent changes and additions to the SDL. Specific topics to be addressed include: <br />
<br />
*Motivations for introducing both the SDL and its predecessor processes. <br />
*Considerations in selling the process to management and sustaining a mandate over a prolonged period. <br />
*Scaling the SDL to an organization with tens of thousands of engineers. <br />
*Managing change. <br />
*The role of automation in the SDL. <br />
*Adaptation of the SDL to agile development processes. <br />
*Thoughts for organizations that are considering implementing the SDL.<br />
<br />
The presentation will cover technical aspects of the SDL, including a brief review of requirements, tools, and results. <br />
<br />
'''Speaker Bio'''<br> Steven B. Lipner is senior director of Security Engineering Strategy at Microsoft Corp., where he is responsible for programs that provide improved product security for Microsoft customers. Lipner leads Microsoft’s Security Development Lifecycle (SDL) team and is responsible for the definition of Microsoft’s SDL and for programs to make the SDL available to organizations beyond Microsoft. Lipner is also responsible for Microsoft’s corporate strategies related to government security evaluation of Microsoft products. <br />
<br />
Lipner is coauthor with Michael Howard of The Security Development Lifecycle (Microsoft Press, 2006) and is named as inventor on twelve U.S. patents and two pending applications in the field of computer and network security. He has authored numerous professional papers and conference presentations, and served on several National Research Council committees. He served two terms – a total of more than ten years – on the United States Information Security and Privacy Advisory Board and its predecessor. Lipner holds S.B. and S.M. degrees in Civil Engineering from the Massachusetts Institute of Technology and attended the Harvard Business School’s Program for Management Development. <br />
<br />
== DAY 2, TRACK 1 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Building Security In Maturity Model: A Review of Successful Software Security Programs in Europe ===<br />
<br />
''Gabriele Giuseppini, Cigital'' <br />
<br />
Most large organizations have practiced software security through many activities involving people, process and automation, but we are just now reaching the point where enough experience has been accumulated to compare notes and talk about what works at a macro level. In 2008, Gary McGraw, Brian Chess, and Sammy Migues interviewed the executives running nine software security initiatives at companies such as Adobe, The Depository Trust and Clearing Corporation (DTCC), EMC, Google, Microsoft, QUALCOMM, and Wells Fargo. The resulting data, drawn from real programs at different levels of maturity, was used to guide the construction of the Building Security In Maturity Model (BSIMM). <br />
<br />
BSIMM is a framework, a tool, and a measuring stick that can be used by organizations to gauge their software security initiatives and to highlight areas of discussion and intervention. Using BSIMM it is possible to compare initiatives with each other and unveil activities that might have been underdeveloped or that might have been adopted without sufficient foundation to achieve tangible results. <br />
<br />
In the past year BSIMM has expanded to collect data from dozens of additional companies, and enough data has been assembled to compare security initiatives in the United States to initiatives in the European Union. The BSIMM framework and the real-world information gathered through the interviews makes it possible to identify the set of activities that seem to be common to successful programs as well as highlight the differences and common points observed between the two regions. <br />
<br />
I will describe this observation-based maturity model, drawing examples from several real software security programs in the United States and in Europe. I will discuss the different ways that BSIMM can be used to organize, manage, and measure software security initiatives, and I will point out the interesting results that have been obtained from the analysis of the raw data and from the comparison of the data between the US and European regions. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Microsoft's Security Development Lifecycle for Agile Development ===<br />
<br />
''Nick Coblentz, OWASP Kansas City Chapter and AT&amp;T Consulting'' <br />
<br />
Many development and security teams believe Agile development cannot be accomplished securely. During this presentation, Nick Coblentz will discuss the recent guidance from Microsoft that enables development teams to include secure development activities within their Agile processes without compromising features or functionality. Nick will also demonstrate ASP.NET libraries, strategies, and automated tools to reduce the effort required by developers.<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Secure Application Development for the Enterprise: Practical, Real-World Tips ===<br />
<br />
''Michael Craigue, Dell'' <br />
<br />
Dell has a reputation for IT simplification and a lean cost structure. We take the same approach with our application security program. This talk covers money-saving tips in the creation and evolution of Dell's Security Development Lifecycle, including risk assessments, security reviews, threat modeling, source code scans, awareness/training, application security user groups, security consulting staff development, and assurance scans/penetration testing. We’ll discuss how we have adapted our program to our IT, Product Group, and Services organizations. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Product Security Management in Agile Product Management ===<br />
<br />
''Antti Vähä-Sipilä, Nokia'' <br />
<br />
This paper provides a model for product security risk management and security requirements elicitation in an agile product management framework, using the concepts of Scrum and an epics-based agile requirements model. The paper documents some real-life experiences of rolling out such a risk management model. The model addresses security threat analysis and risk acceptance, is agnostic to the actual security engineering practices employed in the Scrum teams, and scales across large and small enterprises. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] OWASP Top 10 2010 ===<br />
<br />
''Dave Wichers, Aspect Security and OWASP Foundation'' <br />
<br />
This presentation will cover the OWASP Top 10 - 2010 (final version). The OWASP Top 10 was originally released in 2003 to raise awareness of the importance of application security. As the field evolves, the Top 10 needs to be updated periodically to keep up with the times. The Top 10 was updated in 2004, and the last update was in 2007, which introduced Cross Site Request Forgery (CSRF) as the big new emerging web application security risk. <br />
<br />
This update is based on more sources of web application vulnerability information than any previous version. It will also present this information in a more concise, compelling, and consumable manner, and include strong references to the many new openly available resources that can help address each issue, particularly OWASP's new Enterprise Security API (ESAPI) and Application Security Verification Standard (ASVS) projects. <br />
<br />
A significant change for this update will be that the OWASP Top 10 will be focused on the Top 10 Risks to Web Applications, not just the most common vulnerabilities. <br />
<br />
== DAY 2, TRACK 2 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Demo word.gif]] Promon TestSuite: Client-Based Penetration Testing Tool ===<br />
<br />
''Folker den Braber and Tom Lysemose Hansen, Promon'' <br />
<br />
Vulnerability analysis has a wide scope containing both social and technical aspects. An important part of technical vulnerability analysis consists of penetration testing. In most cases, penetration testing is focused on either server-side or network-layer vulnerabilities. In this demonstration we will take a closer look at vulnerability analysis on the client side, while demonstrating the use of the Promon TestSuite testing tool. <br />
<br />
Promon TestSuite is designed to use the same vectors as common malware but in a clear and visual way, with varying payloads to illustrate the security issues involved with giving injected code free access to a program's memory. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Detecting and Protecting Your Users from 100% of all Malware - How? ===<br />
<br />
''Bradley Anstis and Ellynora Nicoll, M86 Security'' <br />
<br />
This presentation starts by comparing three common methods of malware detection: traditional signatures, code analysis, and behavioral analysis. We will review their strengths and weaknesses and provide in-depth demonstrations of the technology behind them to show what they are capable of. Next we show how these three can be combined to provide better coverage and performance. Finally we layer in another related technology: Application White-listing. Together, is this the silver bullet for malware? <br />
<br />
This session is all about challenging the accepted practices for malware protection. We want to open the minds of attendees and encourage them to question existing solutions and the incumbent market-leading vendors. We also want you to re-evaluate your own environment to see if improvements can be made. To that end, the objectives of this session are to: <br />
<br />
# Provide a ‘warts and all’ review of three malware detection methods, complete with demonstrations <br />
# Use this information to score them in terms of coverage, time to protect, and scanning performance <br />
# Demonstrate how we can use all three of these technologies, layered together, to provide an even better solution <br />
# Finally, further strengthen this solution by adding in Application White-listing to further minimize any possible malware infection <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Responsibility for the Harm and Risk of Software Security Flaws ===<br />
<br />
''Cassio Goldschmidt, Symantec Corp'' <br />
<br />
Who is responsible for the harm and risk of security flaws? The advent of worldwide networks such as the internet made software security (or the lack of it) a problem of international proportions. There are no mathematical/statistical risk models available today to assess networked systems with interdependent failures. Without such a tool, decision-makers are bound to overinvest in activities that don’t generate the desired return on investment, or underinvest in mitigations, risking dreadful consequences. Experience suggests that no party is solely responsible for the harm and risk of software security flaws, but a model of partial responsibility can only emerge once the duties and motivations of all parties are examined and understood. <br />
<br />
State-of-the-art practices in software development won’t guarantee products free of flaws. The infinite precision of mathematics cannot be implemented in modern computer hardware without truncating numbers and calculations. Many of the most common operating systems, network protocols and programming languages used today were first conceived without the basic principles of security in mind. Compromises are made to maintain compatibility of newer versions of these systems with previous versions. Evolving software inherits all flaws and risks that are present in this layered and interdependent solution. Lastly, there is no formal way to prove software correctness using mathematics, nor any definitive authority to assert the absence of vulnerabilities. The slightest coding error can lead to a fatal flaw. Without a doubt, vulnerabilities in software applications will continue to be part of our daily lives for years to come. <br />
<br />
Decisions made by adopters, such as whether to install a patch, upgrade a system, or run insecure configurations, create externalities that affect the security of other systems. Proper cyber hygiene and education are vital to stop the proliferation of computer worms, viruses and botnets. Furthermore, end users, corporations and large governments directly influence software vendors’ decisions to invest in security by voting with their money every time software is purchased or pirated. <br />
<br />
Security researchers largely influence the overall state of software security depending on the approach taken to disclose findings. While many believe full-disclosure practices helped the software industry advance security in the past, several of the most devastating computer worms were created by borrowing from information detailed in researchers’ full disclosures. Both incentives and penalties have been created for security researchers: a number of stories of vendors suing security researchers are available in the press, and some countries have enacted laws banning the use and development of “hacking tools”. At the same time, companies such as iDefense promoted the creation of a market for security vulnerabilities, providing rewards that are larger than a year’s salary for a software practitioner in countries such as China and India. <br />
<br />
Effective policy and standards can serve as leverage to fix the problem, either by providing incentives or penalties. Attempts such as PCI created a perverse incentive that diverted decision makers’ goals toward compliance instead of security. Stiff mandates and ineffective laws have been observed internationally. Given the fast pace of the industry, laws to combat software vulnerabilities may become obsolete before they are enacted. Alternatively, governments can use their own buying power to encourage adoption of good security standards. One example of this is the Federal Desktop Core Configuration (FDCC). <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Hacking by Numbers ===<br />
<br />
''Tom Brennan, WhiteHat Security and OWASP Foundation'' <br />
<br />
There is a difference between what is possible and what is probable, something we often lose sight of in the world of information security. For example, a vulnerability represents a possible way for an attacker to exploit an asset, but remember not all vulnerabilities are created equal. Obviously we must also keep in mind that just because a vulnerability exists does not necessarily mean it will be exploited, or indicate by whom or to what extent. Clearly, many vulnerabilities are very serious leaving the door open to compromise of sensitive information, financial loss, brand damage, violation of industry regulations, and downtime. Some vulnerabilities are more difficult to exploit than others and therefore attract different attackers. Autonomous worms &amp; viruses may attack one type of issue, while a sentient targeted attacker may prefer another path. Better understanding of these factors enables us to make informed business decisions about website risk management and what is probable. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Presentation word.gif]] Application Security Scoreboard in the Sky ===<br />
<br />
''Chris Eng, Veracode'' <br />
<br />
This presentation will discuss vulnerability metrics gathered from real-world applications. The statistics are derived from continuously updated data collected by Veracode’s cloud-based code analysis service. The anonymized data represents a total of nearly 1,600 applications submitted for analysis by large and small companies, commercial software providers, open source projects, and software outsourcers between February 2007 and January 2010. This is the first vulnerability analytics study of this magnitude that incorporates data from both static analysis and dynamic analysis. <br />
<br />
We will compare the relative security of applications by industry and origin, and we will examine detailed vulnerability distribution data in the context of taxonomies such as the OWASP Top Ten and the CWE/SANS Top 25 Programming Errors. <br />
<br />
== DAY 2, TRACK 3 ==<br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] A Taint Mode for Python via a Library ===<br />
<br />
''Juan José Conti, Universidad Tecnológica Nacional, and Alejandro Russo, Chalmers University of Technology'' <br />
<br />
Vulnerabilities in web applications present threats to on-line systems. SQL injection and cross-site scripting attacks are among the most common threats found nowadays. These attacks are often the result of improper or missing input validation. To help discover such vulnerabilities, taint analyses have been developed for popular web scripting languages like Perl, Ruby, PHP, and Python. Such analyses are often implemented as an execution monitor, where the interpreter is adapted to provide a taint mode. However, modifying an interpreter can be a major task in its own right, and it is very likely that each new interpreter release needs to be adapted again to provide a taint mode. In contrast to previous approaches, we show how to provide a taint analysis for Python via a library written entirely in Python, thus avoiding modifications to the interpreter. The concepts of classes, decorators and dynamic dispatch make our solution lightweight, easy to use, and particularly neat. With minimal or no effort, the library can be adapted to work with different Python interpreters. <br />
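The core idea of library-based taint tracking can be sketched in a few lines of plain Python. This is an illustrative toy, not the authors' actual library: the `Tainted` class and the `taint`, `is_tainted`, and `sql_sink` names are hypothetical.

```python
class Tainted(str):
    """A string subclass that marks untrusted input and propagates the
    mark through concatenation (a real library covers many more operations)."""
    def __add__(self, other):
        return Tainted(str.__add__(self, other))
    def __radd__(self, other):
        return Tainted(str.__add__(str(other), self))

def taint(value):
    """Mark a value as coming from an untrusted source."""
    return Tainted(value)

def is_tainted(value):
    return isinstance(value, Tainted)

def sql_sink(query):
    """A sensitive sink: refuse to execute queries built from tainted data."""
    if is_tainted(query):
        raise ValueError("tainted data reached an SQL sink")
    return query  # in real code, execute the query here

# Taint propagates through ordinary string concatenation:
user_input = taint("alice'; DROP TABLE users; --")
query = "SELECT * FROM users WHERE name = '" + user_input + "'"
assert is_tainted(query)
```

Because `Tainted` is a `str` subclass, tainted values flow through existing code unchanged; only designated sinks need to check the mark, which is what lets the approach work without interpreter changes.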
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] OPA: Language Support for a Sane, Safe and Secure Web ===<br />
<br />
''David Rajchenbach-Teller and François-Régis Sinot, MLstate'' <br />
<br />
Web applications and services have critical needs in terms of safety, security and privacy: they need to remain available constantly and can at any time be the object of attacks by malicious and anonymous distant users attempting to take control, alter data or steal it, or cause unwanted behaviors. Unfortunately, recent history shows numerous cases of popular web applications falling victim to such attacks, despite careful attempts to secure them. <br />
<br />
In this paper, we introduce OPA (One Pot Application), a new platform designed to make web development sane, safe and secure. OPA provides an integrated methodology in which the complete application is written in one simple language with consistent semantics; it enforces safe use of the infrastructure through compile-time static checking and a novel programming paradigm suited to the web, and encourages correct-by-construction development. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] Secure the Clones: Static Enforcement of Policies for Secure Object Copying ===<br />
<br />
''Thomas Jensen and David Pichardie, INRIA Rennes - Bretagne Atlantique'' <br />
<br />
Exchanging mutable data objects with untrusted code is a delicate matter because of the risk of creating a data space that is accessible by both the trusted code and an attacker. Consequently, secure programming guidelines for Java stress the importance of using defensive copying before accepting or handing out references to an internal mutable object. However, the implementation of a copy method (like clone()) is entirely left to the programmer. It may not provide a sufficiently deep copy of an object and is subject to overriding by a malicious subclass. Currently no language-based mechanism supports secure object cloning. <br />
<br />
This paper proposes a type-based annotation system for defining modular cloning policies for class-based object-oriented programs. It provides a static enforcement mechanism that will guarantee that all classes fulfill their copying policy, even in the presence of overriding of copy methods, and establishes the semantic correctness of the overall approach. <br />
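The paper targets Java's clone(), but the defensive-copying discipline its policies enforce can be illustrated with a short Python sketch (the `Account` class is hypothetical; Python's `copy.deepcopy` plays the role of a sufficiently deep clone()):

```python
import copy

class Account:
    """Keeps its mutable transaction history private by handing out
    deep copies instead of references to internal state."""
    def __init__(self):
        self._history = []

    def record(self, entry):
        self._history.append(entry)

    def history(self):
        # Defensive copy: a caller mutating the returned list cannot
        # affect this object's internal state.
        return copy.deepcopy(self._history)

acct = Account()
acct.record("deposit 100")
leaked = acct.history()
leaked.append("withdraw 1000000")  # mutates only the copy
assert acct.history() == ["deposit 100"]
```

The point of the paper's static enforcement is precisely that such "deep enough" copying is easy to get wrong by hand, so the copying policy is checked at compile time rather than trusted to each programmer.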
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] Safe Wrappers and Sane Policies for Self Protecting JavaScript ===<br />
<br />
''Jonas Magazinius, Phu H. Phung, and David Sands, Chalmers Univ. of Technology'' <br />
<br />
Phung et al (ASIACCS’09) describe a method for wrapping built-in methods of JavaScript programs in order to enforce security policies. The method is appealing because it requires neither deep transformation of the code nor browser modification. Unfortunately the implementation outlined suffers from a range of vulnerabilities, and policy construction is restrictive and error prone. In this paper we address these issues to provide a systematic way to avoid the identified vulnerabilities, and make it easier for the policy writer to construct declarative policies – i.e. policies upon which attacker code has no side effects. <br />
<br />
=== [[Image:OWASP AppSec Research 2010 Research word.gif]] On the Privacy of File Sharing Services ===<br />
<br />
''Nick Nikiforakis, Francesco Gadaleta, Yves Younan, and Wouter Joosen, Katholieke Universiteit Leuven'' <br />
<br />
File sharing services are used daily by tens of thousands of people as a way of sharing files. Almost all such services use a security-through-obscurity method of hiding the files of one user from others. For each uploaded file, the user is given a secret URL which supposedly cannot be guessed. The user can then share his uploaded file by sharing this URL with other users of his choice. Unfortunately, a number of file sharing services are incorrectly implemented, allowing an attacker to guess valid URLs of millions of files and thus to enumerate their file database and access all of the uploaded files. In this paper, we study some of these services and record their incorrect implementations. We design automatic enumerators for two such services and a privacy-classifying module which characterises an uploaded file as private or public. Using this technique we gain access to thousands of private files, ranging from private and company documents to personal photographs. We present a taxonomy of the private files found and ways that users and services can protect themselves against such attacks. <br />
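For contrast with the flawed implementations the paper studies, the "secret URL" scheme is only sound when identifiers come from a cryptographically secure source with enough entropy to defeat enumeration. A minimal Python sketch (function names are hypothetical):

```python
import secrets

# A sequential or low-entropy identifier is trivially enumerable:
def weak_file_id(counter):
    return "file%06d" % counter  # an attacker just counts upward

# A CSPRNG token with enough entropy is not guessable in practice:
def strong_file_id():
    return secrets.token_urlsafe(16)  # 128 bits of randomness

assert weak_file_id(7) == "file000007"
assert len({strong_file_id() for _ in range(1000)}) == 1000
```

With 128 random bits per identifier, an attacker probing the URL space has a negligible chance of hitting a valid file, whereas the `weak_file_id` scheme exposes the whole database to a simple counting loop.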
<br />
==== Registration ====<br />
<br />
== Registration is now OPEN ==<br />
<br />
'''[http://guest.cvent.com/i.aspx?4W%2cM3%2c717e8a7c-4453-47ff-addb-721306529534 Click Here To Register]''' <br />
<br />
Note: To save on processing expenses, all fees paid for the OWASP conference are non-refundable. OWASP can accommodate transfers of registrations from one person to another, if such an adjustment becomes necessary. <br />
<br />
== Stay Informed ... and Tell Others ==<br />
<br />
[https://lists.owasp.org/mailman/listinfo/appsec_eu_2010 Subscribe to the conference '''mailing list''']. This is the official information channel and you'll be the first to know about the program, invited speakers, opening of registration for training etc. <br />
<br />
[http://events.linkedin.com/OWASP-AppSec-Research-2010/pub/185990 Add the event to your '''LinkedIn''' profile] to tell all your business contacts that AppSec Research 2010 is the place to be. <br />
<br />
Then get on the '''Twitter''' stream by using the tags '''#OWASP''' and '''#AppSecEU'''. <br />
<br />
== Conference Fees (June 23-24) ==<br />
<br />
*Regular registration: €350 <br />
*OWASP individual member (not just chapter member): €300 <br />
*Full-time students*: €225<br />
<br />
<nowiki>*</nowiki> We need some kind of proof of your full-time student status. Either ask your local OWASP chapter leader to vouch for you by email to Kate.Hartmann@owasp.org, or email Kate a scanned image of your student ID (please compress the file size&nbsp;:). <br />
<br />
== Training Fee (June 21-22) ==<br />
<br />
*Training fee is €990 for two days, see Training tab above<br />
<br />
==== Travel &amp; Hotels ====<br />
<br />
== Travel ==<br />
<br />
Stockholm's foremost international airport is Arlanda (ARN). Clean and convenient express trains will take you between Arlanda and Stockholm Central in 20 minutes. You can also fly to Stockholm Skavsta (NYO) or Stockholm Västerås (VST), from which coaches take you to Stockholm Central in 1 h 20 min. <br />
<br />
== Accommodation ==<br />
<br />
You can choose hotel/hostel freely in Stockholm but we provide three suggestions with pre-booked rooms. Before you book '''check with sites like [http://www.hotels.com hotels.com] since they might have better prices for the very same hotels!''' <br />
<br />
[[Image:Stockholm map with hotels and public transportation.jpg]] <br />
<br />
Subways and buses are convenient and safe and will take you right up to the venue (station/stop "Universitetet") from these three hotels: <br />
<br />
'''Best Western Time Hotel'''<br> Why? Closest to the university, direct bus or subway to the conference<br> [http://www.timehotel.se/index.aspx?languageID=5 Best Western Time Hotel]<br> Single room: 1395 SEK/€145/$195<br> Double room: 1575 SEK/€160/$220<br> Rooms pre-booked until May 6 under code "G#73641 OWASP"<br> <br />
<br />
'''Scandic Continental'''<br> Why? Right at the Central Station, convenient travel to and from airport, direct subway to the conference<br> [http://www.scandichotels.com/en/Hotels/Countries/Sweden/Stockholm/Hotels/Scandic-Continental-Stockholm/ Scandic Continental]<br> Single room: 1590 SEK/€165/$220<br> Double room: 1690 SEK/€175/$235<br> Rooms pre-booked until early May under code "OWASP"<br> <br />
<br />
'''Fridhemsplan's Hostel'''<br> Why? Affordable stay in Stockholm's nicest hostel, direct bus to the conference<br> [http://fridhemsplan.se/?p=Main&c= Fridhemsplan's Hostel]<br> Rooms cost €35-€55 ($50-$80)<br> Booking via John Wilander (john.wilander@owasp.org). First-come-first-served with priority to students or people who have the need&nbsp;;). <br />
<br />
==== Venue ====<br />
<br />
[[Image:AppSec Research 2010 Aula Magna.jpg]] <br />
<br />
==== Sponsoring ====<br />
<center><br />
[[Image:AppSec Research 2010 Microsoft diamond sponsor.jpg|250px|Microsoft - Diamond Sponsor]] [[Image:AppSec Research 2010 Google 20k sponsor.jpg|150px|Google - Dinner Party and Expo Sponsor]] [[Image:Portwise logo.png|130px|PortWise - Gold and Badge Sponsor]] [[Image:Cybercom logo.png|100px|Cybercom - Gold Sponsor]] [[Image:Fortify logo AppSec Research 2010.png|120px|Fortify - Gold Sponsor]] [[Image:Omegapoint logo.png|110px|Omegapoint - Gold Sponsor]] [[Image:Mnemonic logo.png|100px|Mnemonic - Silver Sponsor]] [[Image:AppSec Research 2010 sponsor Nixu logo.jpg|100px|NIXU - Silver Sponsor]] [[Image:hps_logo.png|130px|High Performance Systems - Silver sponsor]] [[Image:IIS logo.png|100px|Stiftelsen för Internetinfrastruktur - Lunch Sponsor]] [[Image:MyNethouse logo.png|100px|MyNethouse - Coffee Break Sponsor]] [[Image:AppSec Research 2010 Help Net Security sponsor.jpg|100px|Help Net Security - Media Sponsor]] <br />
</center> <br />
We are now welcoming sponsors for OWASP AppSec Research 2010. Take the opportunity to support next year's major appsec event in Europe! The full sponsoring program is available as pdfs: <br />
<br />
Sponsoring program in English:&nbsp;[[Image:OWASP Sponsorship AppSec Research 2010 (eng).pdf]] <br />
<br />
Sponsoring program in Swedish:&nbsp;[[Image:OWASP Sponsorship AppSec Research 2010 (swe).pdf]] <br />
<br />
[[Image:Owasp appsec research 2010 diamond gold silver sponsoring.png|left|Part of the sponsoring program]] [[Image:Owasp appsec research 2010 sponsoring 2.png|left|Part of the sponsoring program]] <br />
<br />
==== Challenges ====<br />
<br />
=== Countdown Challenges -- Free Tickets to Win! ===<br />
<br />
There will be a challenge posted on the conference wiki page on the 21st of every month up until the event. The winner will get free entrance to the conference. Be sure to sign up for [https://lists.owasp.org/mailman/listinfo/appsec_eu_2010 the conference mailing list] to get a monthly reminder. <br />
<br />
== AppSec Research Challenge 11: Share Your OWASP AppSec Postcards ==<br />
<br />
Here's the second-to-last chance to win a free ticket to the conference. This time we challenge you to create OWASP AppSec Research Postcards (digital ones, of course) from nice places throughout the world, holding a paper sign like the one in the picture below.<br />
<br />
[[Image:OWASP_AppSec_Research_2010_Postcard_Challenge.jpg]]<br />
<br />
== How to Win ==<br />
Create and share the most "digital postcards" showing you, the conference logo on paper ([http://www.owasp.org/images/5/52/OWASP_AppSec_Research_2010_Postcard_Challenge.pdf pdf]), and ...<br />
<br />
* Your work office or "computer room" at home: 1 point<br />
* A major city (> 1 million inhabitants) with the city sign "Welcome to ...": 2 points<br />
* On a continent where you don't live: 2 points<br />
* Under water (outside, not in a pool or a bathtub): 2 points<br />
* A capital city with a typical sight, e.g. the Eiffel Tower in Paris: 3 points<br />
* With someone from our "Who's Who in Security" challenge holding the logo: 3 points<br />
* With an international celebrity holding the logo: 5 points<br />
* 4,000 meters or more above sea level, not flying: 6 points<br />
* With Chuck Norris, Mr. T, or Paris Hilton: 30 points<br />
<br />
You get points for every unique postcard, meaning once under water, once in a specific city, once with a unique celebrity, once per mountain above 4,000 meters etc. If you combine categories you get the sum of the points. Most points by May 20th wins a free conference ticket!<br />
<br />
== How to Compete ==<br />
Share your postcards on http://www.Flickr.com following this example (3 points for Eiffel Tower in Paris):<br />
<br />
* '''Photo''' of you, the conference logo on paper using [http://www.owasp.org/images/5/52/OWASP_AppSec_Research_2010_Postcard_Challenge.pdf this pdf], and the Eiffel Tower in the background<br />
* '''Title''': OWASP Challenge Postcard Paris<br />
* '''Description''': Capital city Paris, typical sight The Eiffel Tower, 3 points<br />
* '''Tag''': #AppSecEu<br />
<br />
<br />
==== Archive ====<br />
<br />
== Call for Papers and Proposals (closed) ==<br />
<br />
[[Image:AppSec Research 2010 2nd cfp.png]] <br />
<br />
1. '''Publish or Perish'''. Peer-reviewed 12-page papers to be published in formal proceedings by Springer-Verlag ([http://www.springer.com/lncs Lecture Notes in Computer Science, LNCS]). Presentation slides and video takes will be posted on the OWASP wiki after the conference.<br />
<br />
2. '''Demo or Die'''. A demo proposal should consist of a pdf with a 1-page abstract summarizing the matter proposed by the speaker(s) ''and'' 1 page containing demo screenshot(s). Demos will have ordinary speaker slots but the speakers are expected to run a demo during the talk (live coding counts as a demo), not just a slideshow. Presentation slides and video takes will be posted on the OWASP wiki after the conference.<br />
<br />
3. '''Present or Repent'''. A presentation proposal should consist of a 2-page extended abstract representing the essential matter proposed by the speaker(s). Presentation slides and video takes will be posted on the OWASP wiki after the conference. <br />
<br />
If you have any questions regarding submissions etc, please email john.wilander@owasp.org. <br />
<br />
=== Topics of Interest ===<br />
<br />
We encourage the publication and presentation of new tools, new methods, empirical data, novel ideas, and lessons learned in the following areas: <br />
<br />
•&nbsp; &nbsp; Web application security<br> •&nbsp; &nbsp; Security aspects of new/emerging web technologies/paradigms (mashups, web 2.0, offline support, etc)<br> •&nbsp; &nbsp; Security in web services, REST, and service oriented architectures<br> •&nbsp; &nbsp; Security in cloud-based services<br> •&nbsp; &nbsp; Security of frameworks (Struts, Spring, ASP.Net MVC etc)<br> •&nbsp; &nbsp; New security features in platforms or languages<br> •&nbsp; &nbsp; Next-generation browser security<br> •&nbsp; &nbsp; Security for the mobile web<br> •&nbsp; &nbsp; Secure application development (methods, processes etc)<br> •&nbsp; &nbsp; Threat modeling of applications<br> •&nbsp; &nbsp; Vulnerability analysis (code review, pentest, static analysis etc)<br> •&nbsp; &nbsp; Countermeasures for application vulnerabilities<br> •&nbsp; &nbsp; Metrics for application security<br> •&nbsp; &nbsp; Application security awareness and education <br />
<br />
=== Submission Deadline and Instructions ===<br />
<br />
'''Update''': Submission deadline for full-papers ("Publish or Perish") has been '''extended to March 7th 23:59''' (Apia, Samoa time) due to numerous requests. Submit your paper to [https://www.easychair.org/login.cgi?a=c01e98d04e4e;iid=20045 AppSec Research 2010 (EasyChair)]. <br />
<br />
Full-paper submissions should be at most 12 pages long and must be in the Springer LNCS style for "Proceedings and Other Multiauthor Volumes". Templates for preparing papers in this style for LaTeX, Word, etc can be downloaded from: http://www.springer.com/computer/lncs?SGWID=0-164-7-72376-0. Full papers must be submitted in a form suitable for anonymous review: '''remove author names and affiliations from the title page, and avoid explicit self-referencing in the text'''. <br />
<br />
Submission for "Demo or Die" and "Present or Repent" closed on February 7th. <br />
<br />
Decision notification: April 7th <br />
<br />
=== Program Committee (for review of full-papers) ===<br />
<br />
• John Wilander, Omegapoint and Linköping University (chair)<br> • Alan Davidson, Stockholm University/Royal Institute of Technology (co-host)<br> • Lieven Desmet, Katholieke Universiteit Leuven<br> • Úlfar Erlingsson, Reykjavík University and Microsoft Research<br> • Martin Johns, University of Passau<br> • Christoph Kern, Google<br> • Engin Kirda, Institute Eurecom<br> • Ulf Lindqvist, SRI International<br> • Benjamin Livshits, Microsoft Research<br> • Sergio Maffeis, Imperial College London<br> • John Mitchell, Stanford University<br> • William Robertson, UC Berkeley<br> • Andrei Sabelfeld, Chalmers UT<br> <br />
<br />
== Call for Training (closed) ==<br />
<br />
(Info kept here for reference)<br> OWASP is currently soliciting training proposals for the OWASP AppSec Research 2010 Conference which will take place at Stockholm University in Sweden, on June 21st through June 24th 2010. There will be training courses on June 21st and 22nd followed by plenary sessions on the 23rd and 24th with three tracks per day. <br />
<br />
We are seeking training proposals on the following topics (in no particular order): <br />
<br />
*Security in Web 2.0, Web Services/XML <br />
*Advanced penetration testing <br />
*Static analysis for security <br />
*Threat modeling of applications <br />
*Secure coding practices <br />
*Security in J2EE/.NET patterns and frameworks <br />
*Application security with ESAPI <br />
*OWASP tools in practice<br />
<br />
We will look favourably on lab-based/hands-on training. <br />
<br />
=== Submission Deadline and Instructions ===<br />
<br />
Submission '''deadline is Sunday February 7th 23:59''' (Apia, Samoa time). To submit your training proposal please fill out the [[Image:OWASP AppSec Research 2010 Call for Training.docx]] and email it to john.wilander@owasp.org with subject "AppSec Research 2010: Training proposal". <br />
<br />
Upon acceptance you'll be requested to fill out the ''Training Instructor Agreement'' where you'll find details on revenue split etc. The agreement will be reworked but the previous one is here: [[Image:Training Instructor Agreement.doc]]. <br />
<br />
=== Upcoming List of Trainers on OWASP Wiki ===<br />
<br />
As part of the [http://www.owasp.org/index.php/Category:OWASP_Education_Project OWASP Education Project], OWASP is starting an official list of trainers on the OWASP web site. This list (mentioning the trainer, course, and contact details) will cover all trainers who have performed training at OWASP conferences, together with their aggregated scores on the course feedback forms. Of course, this is opt-in. Please let us know if you are interested in participating in this program (tick the check-box on the application form). <br />
<br />
== AppSec Research Challenge X: Build an Enterprise Java Rootkit ==<br />
<br />
The tenth challenge is here! <br />
<br />
Jeff Williams, chairman of OWASP, gave a very interesting talk at last year's Black Hat US and OWASP AppSec US -- [http://www.blackhat.com/presentations/bh-usa-09/WILLIAMS/BHUSA09-Williams-EnterpriseJavaRootkits-PAPER.pdf "Enterprise Java Rootkits -- Hardly Anyone Watches the Developers"]. Now it's time for you to write a rootkit yourself, exploring Jeff's techniques and more. <br />
<br />
'''The Project to Fool'''<br> Your assignment is to be the evil developer who implements and hides a backdoor in a Java servlet. We've implemented a very simple login web application and exported the Eclipse project ([http://www.owasp.org/images/1/16/OWASP_AppSec_Research_2010_Challenge_X.zip zip here]). We will use this project to evaluate your submissions. It's a simple servlet/jsp project that we deployed on Tomcat 6.0. It even contains an evil output of user credentials to a temp file (not yet hidden though) to get you started. Screenshots of the app and the project structure: <br />
<br />
<br> [[Image:Appsec research 2010 challenge X eclipse project.jpg]] [[Image:Appsec research 2010 challenge X login screen.jpg]] <br />
<br />
'''Rules'''<br> <br />
<br />
*You must explain what your changes do (we need to evaluate your rootkit!) <br />
*The original features + look and feel must be preserved <br />
*Your additions should preferably look like security features such as IP whitelisting, logging, anti-CSRF, frequency blocking etc. <br />
*You're only allowed to change the servlet (Login.java), and the gif image (appsec_research_challenge_X.gif) <br />
*You do not have to use the jsps <br />
*The original size of Login.java is 1,856 bytes and it mustn't grow to more than 4,000 bytes <br />
*The gif image mustn't grow in size and should look close enough to the original to fool the committee <br />
*Code should "look" readable, i.e. not be minified too heavily<br />
<br />
'''How To Win'''<br> The organization committee will evaluate who has been able to hide the most evil stuff while complying with the rules. The more malicious functionality and the more clever the disguise -- the more "points". All submissions must be posted as links or pasted code in [http://sla.ckers.org/forum/read.php?11,33928 this sla.ckers.org thread]. Send an email to john.wilander@owasp.org when you post code or need attention. Deadline April 20. <br />
<br />
<br> <br />
<br />
== AppSec Research Challenge 9: Crack 'Em Hashes (closed) ==<br />
<br />
February's AppSec Research 2010 challenge is about breaking hashed passwords. It starts off easy with the old LM hash and ends with SHA256 and GOST3411. <br />
<br />
[[Image:Owasp appsec research 2010 hash challenge.jpg]] <br />
<br />
'''How To Win'''<br> The first one to publish each broken password gets points according to the table below, but at the same time helps the others, since the password is the salt of the next hash. So you have to decide -- should you publish your cracked password and collect your points before the others, or should you keep it a secret to get a head start cracking the next one? Deadline is March 21st. <br />
<br />
To collect points for a password you must be the first one to publish that broken password on [http://sla.ckers.org/forum/read.php?11,33533 this sla.ckers.org thread]. Please send an email to john.wilander@owasp.org at the same time so we can correct any misunderstandings. For instance, we might run into hash collisions, where someone finds another mixed alpha password of max 5 characters that, concatenated with the right salt, produces the same hash. In such a case we will publish the real password and give points to the one who found the collision. <br />
<br />
The one with the most points on March 21st wins a free ticket to the conference! <br />
<br />
'''Points to Earn'''<br> <br />
<br />
*pwd1 (LM) =&gt; 1 point <br />
*pwd2 (MD2) =&gt; 3 points <br />
*pwd3 (MD4) =&gt; 5 points <br />
*pwd4 (MD5) =&gt; 9 points <br />
*pwd5 (RIPEMD160) =&gt; 15 points <br />
*pwd6 (SHA1) =&gt; 25 points <br />
*pwd7 (SHA256) =&gt; 50 points <br />
*pwd8 (GOST3411) =&gt; 100 points<br />
<br />
'''The Hashes'''<br> Each password consists of a-zA-Z (mixed alpha) and is at most 5 characters long. With salt, that means at most 10 mixed alpha characters as input to the hash function. All hashes here are in hex format. The Java source code has all the details. The plus operator means string concatenation. <br />
<br />
*LM(pwd1) 0C04DACA901299DBAAD3B435B51404EE <br />
*MD2(pwd2 + pwd1) 16189F5462BF906E9D88CF6F152DE86F <br />
*MD4(pwd3 + pwd2) FA8F46A6D347087D6980C3FA77DD4DE9 <br />
*MD5(pwd4 + pwd3) 425B33D6F60394C897B8413B5C185845 <br />
*RIPEMD160(pwd5 + pwd4) 35F34671D30472D403937820DCABC1C78C837071 <br />
*SHA1(pwd6 + pwd5) AE81A30510B2931921934218636B26A803330EB1 <br />
*SHA256(pwd7 + pwd6) B2FF0269E927C6559804A37590A0688C45DF143F85CEE0E3F239F846B65C9644 <br />
*GOST3411(pwd8 + pwd7) 16CC9F1FF65688E040F5ADA82A41A258FF948769CDA4C4A17D85228A6F358971<br />
<br />
Example: Given that pwd1 is "Win" and pwd2 is "You", the hash 16189F5462BF906E9D88CF6F152DE86F is the result of MD2("YouWin"). Now pwd2 will be the salt when you crack pwd3. <br />
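The chaining rule can be checked mechanically. The sketch below is an illustration in Python (the challenge's own code is the Java/Bouncy Castle zip linked below); it uses hashlib, which covers MD5, SHA1 and SHA256 on any build, while LM, MD2 and GOST3411 need extra libraries. The passwords are the hypothetical "Win"/"You" from the example, not the real answers:<br />

```python
import hashlib

def check_link(candidate: str, salt: str, algo: str, expected_hex: str) -> bool:
    """True if algo(candidate + salt) equals the published hex digest."""
    h = hashlib.new(algo)  # e.g. "md5", "sha1", "sha256"
    h.update((candidate + salt).encode("ascii"))
    return h.hexdigest().lower() == expected_hex.lower()

# Hypothetical chain link: if pwd1 were "Win" and pwd2 were "You",
# the pwd2 digest would be hash(pwd2 + pwd1) = hash("YouWin").
target = hashlib.sha256(b"YouWin").hexdigest()
assert check_link("You", "Win", "sha256", target)
assert not check_link("Lose", "Win", "sha256", target)
```

A cracker would run check_link over candidate passwords (at most 5 mixed-alpha characters) for each link in turn, reusing the previous link's password as the salt.<br />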
<br />
'''The Source Code'''<br> The source code we've used to produce the hashes is available here [http://www.owasp.org/images/7/79/OwapsAppSecResearch2010HashChallenge.zip zip]. It's Java and all but the LM hash is done with [http://www.bouncycastle.org/latest_releases.html Bouncy Castle 1.4.5]. <br />
<br />
<br> <br />
<br />
== AppSec Research Challenge 8: Construct an OWASP Polyglot (closed) ==<br />
<br />
January's AppSec Research Challenge is to construct an OWASP polyglot, more specifically '''an OWASP logo that also can be run as JavaScript''': <br />
<br />
Show image: &lt;img src="owasp_logo.gif"&gt;<br>Run script: &lt;script src="owasp_logo.gif"&gt;&lt;/script&gt; <br />
<br />
[http://en.wikipedia.org/wiki/Polyglot_(computing) Wikipedia] says: "a ''polyglot'' is a computer program or script written in a valid form of multiple programming languages". This is about as cool as it gets&nbsp;:). <br />
<br />
'''Rules''' <br />
<br />
*Make your polyglot out of the regular OWASP logo in the upper left corner of this wiki (circle with the wasp). <br />
*The file size must not grow. <br />
*Pixel colors in the gif must not differ more than 5 in red, green, or blue. Ex: If a pixel originally had rgb 100,100,100 then 104,95,96 is OK. <br />
*No malicious stuff of course <br />
*When your polyglot is run as JavaScript it should execute as many of the following features as possible, starting from the top:<br />
<br />
#alert(all cookies belonging to the current domain); <br />
#alert(the last keystrokes on the keyboard every ten keystrokes); <br />
#alert(the current time in Stockholm, once every minute); <br />
#A quine. The polyglot outputs its own source code on the HTML page.<br />
<br />
'''How to get started''' <br />
<br />
Jasvir Nagra gave a talk on this kind of polyglot and published a gif/JavaScript polyglot on [http://www.thinkfu.com/blog/gifjavascript-polyglots his blog]. A good starting point is his gif file. Jasvir has also written an extensive article on gif/perl polyglots which explains how to get code into the gif file. Check out [http://search.cpan.org/~jnagra/Perl-Visualize-1.02/Visualize.pm#HOW_IT_ALL_WORKS his guide]. <br />
<br />
'''How to win''' <br />
<br />
Submit your entries in [http://sla.ckers.org/forum/read.php?11,33121 this sla.ckers.org thread]. Either the first complete polyglot or the most complete polyglot wins. We will most probably provide you with a gif checker that validates the color differences. Check the thread.&nbsp; <br />
<br />
== AppSec Research Challenge 7: X-Mas Capture the Flag (closed) ==<br />
<br />
[[Image:AppSec Research 2010 Stocking.gif]] '''Merry Christmas everyone!'''[[Image:AppSec Research 2010 Stocking.gif]] <br />
<br />
It's the 21st and a new AppSec Research Challenge is posted. <br />
<br />
Setting up the AppSec Research 2010 X-mas Challenge was a cooperative effort by the winner of AppSec Research Challenge 3, Mario Heiderich, and Martin Holst Swende. It is a multi-step challenge which involves finding a vulnerability in a web application and locating a hidden message. The winner gets free entrance to next year's conference. Start by subscribing to [https://lists.owasp.org/mailman/listinfo/appsec_eu_2010 the conference mailing list]. Then check the simple rules below and get going. <br />
<br />
'''Rules''': <br />
<br />
*Please do not perform any resource-intensive tests, as the machine is pretty low-end and can be DoSed without much effort. <br />
*The computer at the given IP address is the only system involved in this challenge, so please do not perform any tests of neighboring systems. <br />
*Otherwise, you are free to hack away!<br />
<br />
'''Challenge-page''': [http://66.249.7.26 66.249.7.26] <br />
<br />
Discussions, Q&A, and reports about how far you have made it are welcome at [http://sla.ckers.org/forum/read.php?11,32779 the official sla.ckers thread]. <br />
<br />
Good luck and happy holidays! (And don't forget the submission deadline for the conference -- February 7) <br />
<br />
<br> <br />
<br />
== AppSec Research Challenge 6: Design the Conference Logo (closed) ==<br />
<br />
'''Note''': This challenge is re-opened. Submit by February 21st. <br />
<br />
November's AppSec Research 2010 Challenge asks you to design the conference logotype. So far we have used this: <br />
<br />
[[Image:Appsec research 2010 logo prototype (small).png]] <br />
<br />
... but would like something less "word processor-like". <br />
<br />
'''How to win''' <br />
<br />
*The logo should be suitable for both large printing and small web banners <br />
*If you make a color logo, please submit a b/w version too <br />
*"OWASP AppSec Research 2010" should in some way be part of the logo&nbsp;:)<br />
<br />
'''Copyright?'''<br> By submitting your logo you agree to share it under [http://creativecommons.org/licenses/by/3.0/legalcode Creative Commons Attribution] and that we will credit you in the conference brochure and on the conference wiki, but not in all places where we use the logo (i.e. we will not credit you on banners, the sponsoring program, PowerPoint presentations, etc). <br />
<br />
'''How to submit'''<br> Email jpg + svg to john.wilander [at] owasp.org before Monday December 14th 23:59 [http://www.worldtimeserver.com/current_time_in_UTC.aspx UTC]. The creator of the best logo wins a free ticket to the AppSec Research 2010 conference! <br />
<br />
== AppSec Research Challenge 5: Graphical Effects (closed) ==<br />
<br />
The October OWASP AppSec Research 2010 challenge is over. The winner of a free entrance ticket to next year's AppSec conference in Stockholm is "sirdarckcat" with FireworksIsNotABrowser_v4 (although we like the slightly oversized v6 better). <br />
<br />
The challenge was about '''writing the coolest graphical effect in a 2010 character script'''. <br />
<br />
=== An Example ===<br />
<br />
As an example, copy the script below and paste it into the URL bar, replacing the current URL. <br />
<br />
<nowiki>javascript:R=0; x1=.1; y1=.05; x2=.25; y2=.24; x3=1.6; y3=.24; x4=300; y4=200; x5=300; y5=200; DI=document.getElementsByTagName("img"); DIL=DI.length; function A(){for(i=0; i-DIL; i++){DIS=DI[ i ].style; DIS.position='absolute'; DIS.left=(Math.sin(R*x1+i*x2+x3)*x4+x5)+"px"; DIS.top=(Math.cos(R*y1+i*y2+y3)*y4+y5)+"px"}R++}setInterval('A()',5); void(0)</nowiki> <br />
<br />
As a simple teaser we give these png letters for the script to play with. <br />
<br />
[[Image:AppSec Research 2010 O.png]][[Image:AppSec Research 2010 W.png]][[Image:AppSec Research 2010 A.png]][[Image:AppSec Research 2010 S.png]][[Image:AppSec Research 2010 P.png]] <br />
<br />
=== Rules ===<br />
<br />
*The script should work in Firefox 3.5 (yeah, that means HTML5 and CSS3&nbsp;:) <br />
*Any resource, linked document, script, or image defined on the AppSec Research 2010 wiki page may be loaded/accessed/used <br />
*No requests to any other location are allowed <br />
*No obfuscation is allowed <br />
*The script may only use ASCII <br />
*Max length of the script is 2010 characters <br />
*You have to give your effect an id and a version number (further explanation below) <br />
*Any form of malicious code is of course banned&nbsp;;)<br />
<br />
=== How to Compete ===<br />
<br />
There's an [http://sla.ckers.org/forum/read.php?11,31944 official thread on sla.ckers] where you share your code and thoughts (worried someone will steal your code? Check the originality bullet below). You can enter as many effects as you like but '''each effect has to have an id and a version number''', e.g. JohnWobbler_v1.3 for version 1.3 of John's Wobbler effect. Deadline is November 14th, 23:59 [http://www.worldtimeserver.com/current_time_in_UTC.aspx UTC]. <br />
<br />
=== Choosing the Winner ===<br />
<br />
Since this is a creative challenge the OC will choose the winner based on the following: <br />
<br />
*'''Originality''' (tweaking someone's code is cool and encouraged but changing a few magic numbers or inverting a function won't make you the winner) <br />
*'''Coolness''' (yeah, you need to convince a few Scandinavian people + Seba and Kate that your script is the coolest)<br />
<br />
Either the OC will choose a winner ourselves, or we will pick the top effects and let you vote for the winner. <br />
<br />
== AppSec Research Challenge 4: Who's Who in Security? (closed) ==<br />
<br />
September's AppSec Research 2010 Challenge was to identify, from their pictures, a number of people who are known in one way or another in the security business. There were thirteen photos in total, portraying thirteen different individuals. <br />
<br />
'''The winner of a free ticket to the OWASP AppSec Research conference in 2010 was Thomas Vollstädt''' who submitted the correct solution just one day after the challenge was posted. <br />
<br />
=== The Solution ===<br />
<br />
[[Image:Owasp appsec research 2010 challenge 4 solution.png]] <br />
<br />
=== The Names ===<br />
<br />
Dinis Cruz, Gordon "Fyodor" Lyon, David Litchfield, Dave Aitel, Bruce Schneier, Dave Wichers, Gene Spafford, MafiaBoy, MySpace Samy, Tom Brennan, Halvar Flake, Alex Sotirov, Jeff Williams, Jennifer Granick, Kate Hartmann, Mudge, Lance Spitzner, Dan Kaminsky, Brian Chess, Joanna Rutkowska, Crispin Cowan, Michael Howard, Jay Beale, Ross Anderson, Dawn Song, Robert "rsnake" Hansen, and Solar Designer. <br />
<br />
=== The Pictures ===<br />
<br />
If you'd like to see the original pictures without the names, here's the link: [http://www.owasp.org/index.php/File:Owasp_appsec_research_2010_challenge_4.png] <br />
<br />
== AppSec Research Challenge 3: Non-Alphanumeric JavaScript (closed) ==<br />
<br />
The August AppSec Research 2010 Challenge was to create a JavaScript alert("owasp") that pops up the word 'owasp', case-insensitive, without using any alphanumeric characters (0-9a-zA-Z). There was tremendous activity and we want to thank everyone who participated. The size of the final result was almost a third of the first entry (see chart below). '''Want to check out the winning snippet by .mario? Enter the following in the Firebug console''':&nbsp;<nowiki>ω=[[Ṫ,Ŕ,,É,,Á,Ĺ,Ś,,,Ó,Ḃ]=!''+[!{}]+{}][Ś+Ó+Ŕ+Ṫ],ω()[Á+Ĺ+É+Ŕ+Ṫ](Ó+ω()[Ḃ+Ṫ+Ó+Á]('Á«)'))</nowiki> <br />
<br />
It is based on a few different ideas. First of all, a variable assignment of the form <br />
<br />
<nowiki>[a,b,c,,e]="abcde" // a="a", b="b", c="c", e="e" ("d" is skipped)</nowiki> <br />
<br />
Which is performed on the string "truefalse[object Object]" <br />
<br />
<nowiki>[Ṫ,Ŕ,,É,,Á,Ĺ,Ś,,,Ó,Ḃ]=!''+[!{}]+{}</nowiki> // right-hand side is "truefalse[object Object]" <br />
<br />
Also, the following construction obtains the window.sort-function, which leaks the window-object when called without arguments&nbsp;: <br />
<br />
ω=[]["sort"] //ω is now window.sort <br />
<br />
Therefore, calling ω()["alert"] invokes window.alert. To generate the string "owasp", the string "wasp" can be obtained by calling btoa on the characters <nowiki>"Á«)"</nowiki>. <br />
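That last step can be verified outside the browser: btoa() Base64-encodes the code points of its input as raw bytes, so btoa("Á«)") encodes the bytes 0xC1 0xAB 0x29. A quick Python equivalent:<br />

```python
import base64

# "Á«)" has Latin-1 code points 0xC1, 0xAB, 0x29 -- exactly the bytes
# that btoa("Á«)") Base64-encodes in the browser.
payload = "Á«)".encode("latin-1")  # b'\xc1\xab)'
encoded = base64.b64encode(payload).decode("ascii")
print(encoded)  # → wasp
```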
<br />
This was really a great team effort, and I think a lot of us learned some new tricks. The final winner was .mario. Congratulations! <br />
<br />
[[Image:Appsec research 2010 challenge 3 chart.jpg]] <br />
<br />
=== JavaScript Without Alphanumeric Characters? ===<br />
<br />
It is possible to write valid JavaScript entirely without alphanumeric characters (0-9a-zA-Z). To produce a number you can, for example, take an empty string, <nowiki>''</nowiki>, and interpret it as a boolean with a bang: <nowiki>!''</nowiki> -- which yields the boolean value true. true, interpreted as a numeric value, equals one. Thus, <br />
<br />
<nowiki>$ = +!''; // $ === 1</nowiki> <br />
<br />
<nowiki>$++;$++; // $ === 3</nowiki> <br />
<br />
In a similar fashion, strings can be created from strings embedded in the language. The boolean object true can be converted to string by concatenation, and then accessed by numeric index to, for example, produce the letter 'e'&nbsp;: <br />
<br />
<nowiki>â = (!''+'')[$] // â === "true"[3] === "e"</nowiki> <br />
<br />
=== Previous Similar Contest ===<br />
<br />
These two techniques are behind a [http://sla.ckers.org/forum/read.php?24,28687 previous contest at the forum "sla.ckers.org"], where the goal was to create alert(1) in as few characters as possible without using any alphanumeric ones. The code actually being executed is: <br />
<br />
<nowiki>(ω=[]["sort"])()["alert"](1) // since (ω=[]["sort"])()</nowiki> leaks the window object in FF, ==&gt; <nowiki>window["alert"](1)</nowiki> is called, which is another form of <nowiki>window.alert(1)</nowiki> <br />
<br />
The winner, or at least the current leading entry, is 84 bytes long and looks like this: <br />
<br />
<nowiki>(Å='',[Į=!(ĩ=!Å+Å)+{}][Į[Š=ĩ[++Å]+ĩ[Å-Å],Č=Å-~Å]+Į[Č+Č]+Š])()[Į[Å]+Į[Å+Å]+ĩ[Č]+Š](Å)</nowiki> <br />
<br />
=== The Challenge ===<br />
<br />
August's challenge was to, in a similar fashion, create an alert("owasp"), case-insensitive, not using any alphanumeric characters. The shortest working code snippet submitted by September 18th 23:59:59 [http://www.worldtimeserver.com/current_time_in_UTC.aspx UTC] won a free ticket. By "working" we meant JavaScript that executes in Firefox/Firebug, not depending on any Firebug DOM variables for execution. <br />
<br />
'''Submissions were made as comments to the [http://owaspsweden.blogspot.com/2009/08/appsec-research-2010-challenge-3.html challenge 3 blogpost on Owasp Sweden].''' Check it out. <br />
<br />
== AppSec Research Challenge 2: OWASP Crossword Puzzle (closed) ==<br />
<br />
July's crossword challenge is over. Many permutations arrived in our inbox but it was tricky to get it completely right. Congratulations to Johannes Dahse and Johan Nilsson who in the end were allowed to join forces to be able to find the correct solution. They win a 50&nbsp;% conference ticket discount each. <br />
<br />
You find the solution below. <br />
<br />
[[Image:Appsec research 2010 challenge 2 solution.gif]] <br />
<br />
== AppSec Research Challenge 1: Input Validation and Regular Expressions (closed) ==<br />
<br />
'''This challenge is over'''. The winner was Partik Nordlén. To see the solution(s), please visit the [https://lists.owasp.org/pipermail/appsec_eu_2010/2009-July/000000.html appsec_eu_2010 mailing list archive]. <br />
<br />
''Some people, when confronted with a problem, think “I know, I'll use regular expressions.” Now they have two problems.''<br> &nbsp; &nbsp; &nbsp; &nbsp; --Jamie Zawinski, in comp.emacs.xemacs <br />
<br />
The 21st of each month up until the conference in June 2010 we'll have a countdown challenge posted here. The winner each month will get a free entrance ticket worth about €300/$400. Be sure to sign up for [https://lists.owasp.org/mailman/listinfo/appsec_eu_2010 the conference mailing list] to get a monthly reminder. <br />
<br />
=== The Challenge ===<br />
<br />
A community is hosted on a very large domain, yahoogle.com. The users of that community all have profiles, where they are allowed to use basic HTML for customization, as well as JavaScript files hosted on the domain. <br />
<br />
All the code for the profile pages is filtered on the server side, and whenever a piece of code containing "&lt;script..." is encountered, the following regular expression is used to validate that the script loaded is hosted on a subdomain of yahoogle.com: <br />
<br />
.*(&lt;script){1}([^&gt;]+)src=('http:\/\/[a-zA-Z]+.yahoogle.com\/scripts\/[0-9A-Za-z]+\.js').*\/&gt; <br />
<br />
Capture group 3 is then also checked against a whitelist of allowed scripts on that domain. The whitelist consists of "http://secure.yahoogle.com" and "http://scripts.yahoogle.com". <br />
<br />
Your task is to formulate a snippet of HTML that passes both the filter and the whitelist, but loads the script "http://insecure.com/evil.js" instead. Also, rework the regular expression to defend against your "attack". <br />
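The published solution is in the mailing list archive linked above, but one weakness is visible in the pattern itself: the dot before yahoogle.com is unescaped, so it matches any character. The Python sketch below only demonstrates that looseness (secureXyahoogle.com is a made-up host name, and this alone does not defeat the whitelist check on capture group 3):<br />

```python
import re

# The filter's pattern, transcribed from the challenge text (the \/
# escapes are harmless in Python). Note the unescaped "." before
# "yahoogle.com": it matches ANY character, not just a literal dot.
pattern = (r".*(<script){1}([^>]+)src="
           r"('http:\/\/[a-zA-Z]+.yahoogle.com\/scripts\/[0-9A-Za-z]+\.js').*\/>")

# Made-up illustration: a single host name "secureXyahoogle.com" also
# satisfies the domain part of the pattern.
html = "<script type='text/javascript' src='http://secureXyahoogle.com/scripts/a.js' />"
m = re.search(pattern, html)
print(m.group(3))  # prints 'http://secureXyahoogle.com/scripts/a.js'

# Escaping the dot closes this particular hole:
fixed = pattern.replace("+.yahoogle", r"+\.yahoogle")
assert re.search(fixed, html) is None
```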
<br />
'''Email your solution to Martin Holst Swende &lt;martin.holst_swende@owasp.org&gt;'''. The first correct answer wins a free ticket to the conference. The free ticket is personal and the judgement of the organizing committee cannot be overruled&nbsp;:). <br />
<br />
<br> <headertabs /></div>Michael Bomanhttps://wiki.owasp.org/index.php?title=OWASP_Common_Numbering_Project&diff=76178OWASP Common Numbering Project2010-01-13T19:02:05Z<p>Michael Boman: Changed to a 3 column table</p>
<hr />
<div>== Introduction ==<br />
<br />
Here is the generally agreed-upon new numbering scheme. Additional explanatory text coming soon. Questions/Comments? Email [mailto:mike.boberski@owasp.org Mike]. <br />
<br />
OWASP-06<br />
OWASP-06-DEPRECATED <br />
OWASP-0604<br />
OWASP-0604-DEPRECATED<br />
OWASP-0604-DG<br />
OWASP-0604-DG-01<br />
OWASP-0604-TG<br />
OWASP-0604-TG-DV-005<br />
OWASP-0604-TG-DV-005-DEPRECATED<br />
<br />
0123456789012345678901234567890123456789<br />
1 2 3<br />
<br />
*0-4 OWASP <br />
*6-7 Detailed requirement identifier (major) <br />
*8-9 Detailed requirement identifier (minor) <br />
*11-12 Document code (DG=Development Guide, TG=Testing Guide, CG=Code Review Guide, AR, ED, RM, OR, others reserved) <br />
*14-40 (Optional: DEPRECATED, or # for iterations, or legacy identifiers)<br />
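For illustration, the examples above can be checked against a small Python validator. The regex is one possible reading of the examples, not an official grammar for the scheme:<br />

```python
import re

# Illustrative validator -- my reading of the examples above, not an
# official grammar. Document codes are taken from the list in the text.
ID_RE = re.compile(
    r"^OWASP-\d{2}(\d{2})?"          # major (pos 6-7) + optional minor (8-9)
    r"(-(DG|TG|CG|AR|ED|RM|OR)"      # optional document code (pos 11-12)
    r"(-[A-Z]{2})?(-\d{2,3})?)?"     # optional legacy id / iteration, e.g. -DV-005 or -01
    r"(-DEPRECATED)?$"               # optional deprecation marker
)

examples = [
    "OWASP-06", "OWASP-06-DEPRECATED", "OWASP-0604", "OWASP-0604-DEPRECATED",
    "OWASP-0604-DG", "OWASP-0604-DG-01", "OWASP-0604-TG",
    "OWASP-0604-TG-DV-005", "OWASP-0604-TG-DV-005-DEPRECATED",
]
assert all(ID_RE.match(e) for e in examples)
assert not ID_RE.match("OWASP-6")    # major identifier must be two digits
```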
<br />
<br> <br />
<br />
== Mapping to Legacy Testing Guide IDs ==<br />
<br />
{| class="prettytable"<br />
|-<br />
| <center>'''Ref. Number'''</center> <br />
| <center>'''Test Name'''</center> <br />
| <center>'''New Common Ref.'''</center><br />
|-<br />
| colspan="3" align="center" | '''Information Gathering'''<br />
|-<br />
| OWASP-IG-001 <br />
| Spiders, Robots and Crawlers<br />
| <br />
|-<br />
| OWASP-IG-002 <br />
| Search Engine Discovery/Reconnaissance <br />
| <br />
|-<br />
| OWASP-IG-003 <br />
| Identify application entry points <br />
| <br />
|-<br />
| OWASP-IG-004 <br />
| Testing for Web Application Fingerprint <br />
| <br />
|-<br />
| OWASP-IG-005 <br />
| Application Discovery <br />
| <br />
|-<br />
| OWASP-IG-006 <br />
| Analysis of Error Codes <br />
| <br />
|-<br />
| colspan="3" align="center" | '''Configuration Management Testing'''<br />
|-<br />
| OWASP-CM-001 <br />
| SSL/TLS Testing (SSL Version, Algorithms, Key length, Digital Cert. Validity) <br />
| <br />
|-<br />
| OWASP-CM-002 <br />
| DB Listener Testing <br />
| <br />
|-<br />
| OWASP-CM-003 <br />
| Infrastructure Configuration Management Testing <br />
| <br />
|-<br />
| OWASP-CM-004 <br />
| Application Configuration Management Testing <br />
| <br />
|-<br />
| OWASP-CM-005 <br />
| Testing for File Extensions Handling <br />
| <br />
|-<br />
| OWASP-CM-006 <br />
| Old, backup and unreferenced files <br />
| <br />
|-<br />
| OWASP-CM-007 <br />
| Infrastructure and Application Admin Interfaces <br />
| <br />
|-<br />
| OWASP-CM-008 <br />
| Testing for HTTP Methods and XST <br />
| <br />
|-<br />
| colspan="3" align="center" | '''Authentication Testing''' <br />
|-<br />
| OWASP-AT-001 <br />
| Credentials transport over an encrypted channel <br />
| <br />
|-<br />
| OWASP-AT-002 <br />
| Testing for user enumeration <br />
| <br />
|-<br />
| OWASP-AT-003 <br />
| Testing for Guessable (Dictionary) User Account <br />
| <br />
|-<br />
| OWASP-AT-004 <br />
| Brute Force Testing <br />
| <br />
|-<br />
| OWASP-AT-005 <br />
| Testing for bypassing authentication schema <br />
| <br />
|-<br />
| OWASP-AT-006 <br />
| Testing for vulnerable remember password and pwd reset <br />
| <br />
|-<br />
| OWASP-AT-007 <br />
| Testing for Logout and Browser Cache Management <br />
| <br />
|-<br />
| OWASP-AT-008 <br />
| Testing for CAPTCHA <br />
| <br />
|-<br />
| OWASP-AT-009 <br />
| Testing Multiple Factors Authentication <br />
| <br />
|-<br />
| OWASP-AT-010 <br />
| Testing for Race Conditions <br />
| <br />
|-<br />
| colspan="3" align="center" | '''Session Management''' <br />
|-<br />
| OWASP-SM-001 <br />
| Testing for Session Management Schema <br />
| <br />
|-<br />
| OWASP-SM-002 <br />
| Testing for Cookies attributes <br />
| <br />
|-<br />
| OWASP-SM-003 <br />
| Testing for Session Fixation <br />
| <br />
|-<br />
| OWASP-SM-004 <br />
| Testing for Exposed Session Variables <br />
| <br />
|-<br />
| OWASP-SM-005 <br />
| Testing for CSRF <br />
| <br />
|-<br />
| colspan="3" align="center" | '''Authorization Testing'''<br />
|- <br />
| OWASP-AZ-001 <br />
| Testing for Path Traversal <br />
| <br />
|-<br />
| OWASP-AZ-002 <br />
| Testing for bypassing authorization schema <br />
| <br />
|-<br />
| OWASP-AZ-003 <br />
| Testing for Privilege Escalation <br />
| <br />
|-<br />
| colspan="3" align="center" | '''Business logic testing'''<br />
|- <br />
| OWASP-BL-001 <br />
| Testing for business logic <br />
| <br />
|-<br />
| colspan="3" align="center" | '''Data Validation Testing'''<br />
|- <br />
| OWASP-DV-001 <br />
| Testing for Reflected Cross Site Scripting <br />
| <br />
|-<br />
| OWASP-DV-002 <br />
| Testing for Stored Cross Site Scripting <br />
| <br />
|-<br />
| OWASP-DV-003 <br />
| Testing for DOM based Cross Site Scripting <br />
| <br />
|-<br />
| OWASP-DV-004 <br />
| Testing for Cross Site Flashing <br />
| <br />
|-<br />
| OWASP-DV-005 <br />
| SQL Injection <br />
| <br />
|-<br />
| OWASP-DV-006 <br />
| LDAP Injection <br />
| <br />
|-<br />
| OWASP-DV-007 <br />
| ORM Injection <br />
| <br />
|-<br />
| OWASP-DV-008 <br />
| XML Injection <br />
| <br />
|-<br />
| OWASP-DV-009 <br />
| SSI Injection <br />
| <br />
|-<br />
| OWASP-DV-010 <br />
| XPath Injection <br />
| <br />
|-<br />
| OWASP-DV-011 <br />
| IMAP/SMTP Injection <br />
| <br />
|-<br />
| OWASP-DV-012 <br />
| Code Injection <br />
| <br />
|-<br />
| OWASP-DV-013 <br />
| OS Commanding <br />
| <br />
|-<br />
| OWASP-DV-014 <br />
| Buffer overflow <br />
| <br />
|-<br />
| OWASP-DV-015 <br />
| Incubated vulnerability Testing <br />
| <br />
|-<br />
| OWASP-DV-016 <br />
| Testing for HTTP Splitting/Smuggling <br />
| <br />
|-<br />
| colspan="3" align="center" | '''Denial of Service Testing'''<br />
|- <br />
| OWASP-DS-001 <br />
| Testing for SQL Wildcard Attacks <br />
| <br />
|-<br />
| OWASP-DS-002 <br />
| Locking Customer Accounts <br />
| <br />
|-<br />
| OWASP-DS-003 <br />
| Testing for DoS Buffer Overflows <br />
| <br />
|-<br />
| OWASP-DS-004 <br />
| User Specified Object Allocation <br />
| <br />
|-<br />
| OWASP-DS-005 <br />
| User Input as a Loop Counter <br />
| <br />
|-<br />
| OWASP-DS-006 <br />
| Writing User Provided Data to Disk <br />
| <br />
|-<br />
| OWASP-DS-007 <br />
| Failure to Release Resources <br />
| <br />
|-<br />
| OWASP-DS-008 <br />
| Storing too Much Data in Session <br />
| <br />
|-<br />
| colspan="3" align="center" | '''Web Services Testing'''<br />
|- <br />
| OWASP-WS-001 <br />
| WS Information Gathering <br />
| <br />
|-<br />
| OWASP-WS-002 <br />
| Testing WSDL <br />
| <br />
|-<br />
| OWASP-WS-003 <br />
| XML Structural Testing <br />
| <br />
|-<br />
| OWASP-WS-004 <br />
| XML content-level Testing <br />
| <br />
|-<br />
| OWASP-WS-005 <br />
| HTTP GET parameters/REST Testing <br />
| <br />
|-<br />
| OWASP-WS-006 <br />
| Naughty SOAP attachments <br />
| <br />
|-<br />
| OWASP-WS-007 <br />
| Replay Testing <br />
| <br />
|-<br />
| colspan="3" align="center" | '''AJAX Testing'''<br />
|- <br />
| OWASP-AJ-001 <br />
| AJAX Vulnerabilities <br />
| <br />
|-<br />
| OWASP-AJ-002 <br />
| AJAX Testing <br />
| <br />
|}<br />
<br />
== References ==<br />
<br />
*Adding the (release) year into the numbering scheme can be problematic, because the document has a life cycle that spans multiple years. <br />
*It is better to use a versioning scheme that is human readable in the reference number as well (e.g. V02, RevA, ...).<br />
<br />
----<br />
<br />
*don't try to encode any information into the ID that is likely to change or be subject to debate. In the olden days of CVE, we used to have "CAN-1999-0067" which would change into "CVE-1999-0067" once the item was considered stable and sufficiently verified. That made the ID hard to use. Right now, OWASP-DV-001 encodes the term "data validation" in the DV acronym, but what happens if in a couple of years, some new and better term occurs, or the focus changes from validation to something else? (As an example, it's only recently that the "data validation" term itself has become popular.)<br />
<br />
*carefully consider the range of values that your ID space supports, and if possible, allow it to expand. CVE has a "CVE-10K" problem because we never expected that we would ever come close to tracking 10,000 vulnerabilities a year. Red Hat had to change their advisory numbering scheme a couple of years ago, etc.<br />
<br />
*don't change the fundamental meaning of the ID once you've assigned it. This causes confusion, and more importantly, it immediately invalidates almost everyone's mappings to that ID - including people who you don't even know are using that ID.<br />
<br />
*closely monitor the mappings that get made. Typos and misunderstandings are rarely caught. People may make assumptions about what "the item" really is, based only on a quick scan of a short name or title. Since you're dealing with diverse sources, there are likely to be many-to-many relationships in dealing with mappings.<br />
<br />
*determine some kind of procedure for handling duplicates. They're gonna happen.<br />
<br />
*the more you distribute the process of creating and assigning IDs between multiple people, the more inconsistencies and duplicates you will wind up with. This may be unavoidable, since the job is usually bigger than one person.<br />
<br />
*determine some kind of procedure for deprecating IDs, i.e., "retiring" them and discouraging their use by others. This will probably happen for reasons other than duplicates. There should be some final record, somewhere, of what happened to the deprecated item - i.e., it shouldn't just disappear off the face of the earth.<br />
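The last few points above (stable meaning, duplicate handling, deprecation records) can be sketched as a small append-only registry. This is only an illustration of the advice, not OWASP tooling; the class and method names are invented for the example:

```python
# Minimal sketch of an ID registry following the guidance above:
# IDs are append-only, their meaning never changes, and deprecation
# leaves a final record instead of deleting the entry.
class IdRegistry:
    def __init__(self):
        # id -> {"title": str, "deprecated_reason": str or None}
        self._entries = {}

    def assign(self, identifier, title):
        # An ID is assigned once; reassigning would change its meaning
        # and silently invalidate everyone else's mappings to it.
        if identifier in self._entries:
            raise ValueError("IDs are never reassigned: %s" % identifier)
        self._entries[identifier] = {"title": title, "deprecated_reason": None}

    def deprecate(self, identifier, reason):
        # "Retire" the ID but keep a record of what happened to it,
        # e.g. "duplicate of <surviving id>".
        self._entries[identifier]["deprecated_reason"] = reason

    def lookup(self, identifier):
        return self._entries[identifier]
```

Deprecating a duplicate, for instance, would record which ID survives rather than removing the entry outright.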
<br />
----<br />
<br />
Much of the discussion surrounding the establishment of "Common OWASP Numbering" can be found on the various [https://lists.owasp.org/mailman/listinfo OWASP mailing lists]. (For your convenience here is a direct link to the [https://lists.owasp.org/pipermail/owasp-testing/ OWASP Testing Guide Mailing List Archive].) <br />
<br />
[[Category:OWASP_Application_Security_Verification_Standard_Project]] [[Category:How_To]]</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=OWASP_Common_Numbering_Project&diff=76172OWASP Common Numbering Project2010-01-13T17:52:47Z<p>Michael Boman: Fixed table formating</p>
<hr />
<div>== Introduction ==<br />
<br />
Here is the generally agreed-upon new numbering scheme. Additional explanatory text coming soon. Questions/Comments? Email [mailto:mike.boberski@owasp.org Mike]. <br />
<br />
OWASP-06<br />
OWASP-06-DEPRECATED <br />
OWASP-0604<br />
OWASP-0604-DEPRECATED<br />
OWASP-0604-DG<br />
OWASP-0604-DG-01<br />
OWASP-0604-TG<br />
OWASP-0604-TG-DV-005<br />
OWASP-0604-TG-DV-005-DEPRECATED<br />
<br />
0123456789012345678901234567890123456789 (character positions 0&ndash;39)<br />
<br />
*0-4 OWASP <br />
*6-7 Detailed requirement identifier (major) <br />
*8-9 Detailed requirement identifier (minor) <br />
*11-12 Document code (DG=Development Guide, TG=Testing Guide, CG=Code Review Guide, AR, ED, RM, OR, others reserved) <br />
*14-40 (Optional: DEPRECATED, or # for iterations, or legacy identifiers)<br />
<br />
<br> <br />
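As a sketch of how the position breakdown above could be checked mechanically (the grouping of the optional suffixes is one reading of the examples, not an official grammar — in particular the legacy-identifier shape is assumed from OWASP-0604-TG-DV-005):

```python
import re

# Rough parser for the proposed common numbering scheme, based on the
# position breakdown above. The document-code list mirrors the text
# (DG, TG, CG, AR, ED, RM, OR); the suffix shapes are assumptions.
_CORE = re.compile(
    r"^OWASP"
    r"-(?P<major>\d{2})(?P<minor>\d{2})?"        # e.g. 06 or 0604
    r"(?:-(?P<doc>DG|TG|CG|AR|ED|RM|OR))?"       # document code, if any
    r"(?:-(?P<legacy>[A-Z]{2}-\d{3}|\d{2}))?$"   # legacy id (DV-005) or iteration (01)
)

def parse_owasp_id(identifier):
    """Split an identifier into its parts; return None if it does not fit."""
    deprecated = identifier.endswith("-DEPRECATED")
    if deprecated:
        identifier = identifier[: -len("-DEPRECATED")]
    m = _CORE.match(identifier)
    if m is None:
        return None
    parts = m.groupdict()
    parts["deprecated"] = deprecated
    return parts
```

For example, OWASP-0604-TG-DV-005-DEPRECATED would split into major 06, minor 04, document code TG, legacy identifier DV-005, with the deprecation flag set.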
<br />
== Mapping to Legacy Testing Guide IDs ==<br />
<br />
{| class="prettytable"<br />
|-<br />
| <center>'''Category'''</center> <br />
| <center>'''Ref. Number'''</center> <br />
| <center>'''Test Name'''</center> <br />
| <center>'''New Common Ref.'''</center><br />
|-<br />
| colspan="4" align="center" | '''Information Gathering'''<br />
|-<br />
| OWASP-IG-001 <br />
| Spiders, Robots and Crawlers <br />
| <br />
|<br />
|-<br />
| OWASP-IG-002 <br />
| Search Engine Discovery/Reconnaissance <br />
| <br />
|<br />
|-<br />
| OWASP-IG-003 <br />
| Identify application entry points <br />
| <br />
|<br />
|-<br />
| OWASP-IG-004 <br />
| Testing for Web Application Fingerprint <br />
| <br />
|<br />
|-<br />
| OWASP-IG-005 <br />
| Application Discovery <br />
| <br />
|<br />
|-<br />
| OWASP-IG-006 <br />
| Analysis of Error Codes <br />
| <br />
|<br />
|-<br />
| colspan="4" align="center" | '''Configuration Management Testing'''<br />
|-<br />
| OWASP-CM-001 <br />
| SSL/TLS Testing (SSL Version, Algorithms, Key length, Digital Cert. Validity) <br />
| <br />
|<br />
|-<br />
| OWASP-CM-002 <br />
| DB Listener Testing <br />
| <br />
|<br />
|-<br />
| OWASP-CM-003 <br />
| Infrastructure Configuration Management Testing <br />
| <br />
|<br />
|-<br />
| OWASP-CM-004 <br />
| Application Configuration Management Testing <br />
| <br />
|<br />
|-<br />
| OWASP-CM-005 <br />
| Testing for File Extensions Handling <br />
| <br />
|<br />
|-<br />
| OWASP-CM-006 <br />
| Old, backup and unreferenced files <br />
| <br />
|<br />
|-<br />
| OWASP-CM-007 <br />
| Infrastructure and Application Admin Interfaces <br />
| <br />
|<br />
|-<br />
| OWASP-CM-008 <br />
| Testing for HTTP Methods and XST <br />
| <br />
|<br />
|-<br />
| colspan="4" align="center" | '''Authentication Testing''' <br />
|-<br />
| OWASP-AT-001 <br />
| Credentials transport over an encrypted channel <br />
| <br />
|<br />
|-<br />
| OWASP-AT-002 <br />
| Testing for user enumeration <br />
| <br />
|<br />
|-<br />
| OWASP-AT-003 <br />
| Testing for Guessable (Dictionary) User Account <br />
| <br />
|<br />
|-<br />
| OWASP-AT-004 <br />
| Brute Force Testing <br />
| <br />
|<br />
|-<br />
| OWASP-AT-005 <br />
| Testing for bypassing authentication schema <br />
| <br />
|<br />
|-<br />
| OWASP-AT-006 <br />
| Testing for vulnerable remember password and pwd reset <br />
| <br />
|<br />
|-<br />
| OWASP-AT-007 <br />
| Testing for Logout and Browser Cache Management <br />
| <br />
|<br />
|-<br />
| OWASP-AT-008 <br />
| Testing for CAPTCHA <br />
| <br />
|<br />
|-<br />
| OWASP-AT-009 <br />
| Testing Multiple Factors Authentication <br />
| <br />
|<br />
|-<br />
| OWASP-AT-010 <br />
| Testing for Race Conditions <br />
| <br />
|<br />
|-<br />
| colspan="4" align="center" | '''Session Management''' <br />
|-<br />
| OWASP-SM-001 <br />
| Testing for Session Management Schema <br />
| <br />
|<br />
|-<br />
| OWASP-SM-002 <br />
| Testing for Cookies attributes <br />
| <br />
|<br />
|-<br />
| OWASP-SM-003 <br />
| Testing for Session Fixation <br />
| <br />
|<br />
|-<br />
| OWASP-SM-004 <br />
| Testing for Exposed Session Variables <br />
| <br />
|<br />
|-<br />
| OWASP-SM-005 <br />
| Testing for CSRF <br />
| <br />
|<br />
|-<br />
| colspan="4" align="center" | '''Authorization Testing'''<br />
|- <br />
| OWASP-AZ-001 <br />
| Testing for Path Traversal <br />
| <br />
|<br />
|-<br />
| OWASP-AZ-002 <br />
| Testing for bypassing authorization schema <br />
| <br />
|<br />
|-<br />
| OWASP-AZ-003 <br />
| Testing for Privilege Escalation <br />
| <br />
|<br />
|-<br />
| colspan="4" align="center" | '''Business logic testing'''<br />
|- <br />
| OWASP-BL-001 <br />
| Testing for business logic <br />
| <br />
|<br />
|-<br />
| colspan="4" align="center" | '''Data Validation Testing'''<br />
|- <br />
| OWASP-DV-001 <br />
| Testing for Reflected Cross Site Scripting <br />
| <br />
|<br />
|-<br />
| OWASP-DV-002 <br />
| Testing for Stored Cross Site Scripting <br />
| <br />
|<br />
|-<br />
| OWASP-DV-003 <br />
| Testing for DOM based Cross Site Scripting <br />
| <br />
|<br />
|-<br />
| OWASP-DV-004 <br />
| Testing for Cross Site Flashing <br />
| <br />
|<br />
|-<br />
| OWASP-DV-005 <br />
| SQL Injection <br />
| <br />
|<br />
|-<br />
| OWASP-DV-006 <br />
| LDAP Injection <br />
| <br />
|<br />
|-<br />
| OWASP-DV-007 <br />
| ORM Injection <br />
| <br />
|<br />
|-<br />
| OWASP-DV-008 <br />
| XML Injection <br />
| <br />
|<br />
|-<br />
| OWASP-DV-009 <br />
| SSI Injection <br />
| <br />
|<br />
|-<br />
| OWASP-DV-010 <br />
| XPath Injection <br />
| <br />
|<br />
|-<br />
| OWASP-DV-011 <br />
| IMAP/SMTP Injection <br />
| <br />
|<br />
|-<br />
| OWASP-DV-012 <br />
| Code Injection <br />
| <br />
|<br />
|-<br />
| OWASP-DV-013 <br />
| OS Commanding <br />
| <br />
|<br />
|-<br />
| OWASP-DV-014 <br />
| Buffer overflow <br />
| <br />
|<br />
|-<br />
| OWASP-DV-015 <br />
| Incubated vulnerability Testing <br />
| <br />
|<br />
|-<br />
| OWASP-DV-016 <br />
| Testing for HTTP Splitting/Smuggling <br />
| <br />
|<br />
|-<br />
| colspan="4" align="center" | '''Denial of Service Testing'''<br />
|- <br />
| OWASP-DS-001 <br />
| Testing for SQL Wildcard Attacks <br />
| <br />
|<br />
|-<br />
| OWASP-DS-002 <br />
| Locking Customer Accounts <br />
| <br />
|<br />
|-<br />
| OWASP-DS-003 <br />
| Testing for DoS Buffer Overflows <br />
| <br />
|<br />
|-<br />
| OWASP-DS-004 <br />
| User Specified Object Allocation <br />
| <br />
|<br />
|-<br />
| OWASP-DS-005 <br />
| User Input as a Loop Counter <br />
| <br />
|<br />
|-<br />
| OWASP-DS-006 <br />
| Writing User Provided Data to Disk <br />
| <br />
|<br />
|-<br />
| OWASP-DS-007 <br />
| Failure to Release Resources <br />
| <br />
|<br />
|-<br />
| OWASP-DS-008 <br />
| Storing too Much Data in Session <br />
| <br />
|<br />
|-<br />
| colspan="4" align="center" | '''Web Services Testing'''<br />
|- <br />
| OWASP-WS-001 <br />
| WS Information Gathering <br />
| <br />
|<br />
|-<br />
| OWASP-WS-002 <br />
| Testing WSDL <br />
| <br />
|<br />
|-<br />
| OWASP-WS-003 <br />
| XML Structural Testing <br />
| <br />
|<br />
|-<br />
| OWASP-WS-004 <br />
| XML content-level Testing <br />
| <br />
|<br />
|-<br />
| OWASP-WS-005 <br />
| HTTP GET parameters/REST Testing <br />
| <br />
|<br />
|-<br />
| OWASP-WS-006 <br />
| Naughty SOAP attachments <br />
| <br />
|<br />
|-<br />
| OWASP-WS-007 <br />
| Replay Testing <br />
| <br />
|<br />
|-<br />
| colspan="4" align="center" | '''AJAX Testing'''<br />
|- <br />
| OWASP-AJ-001 <br />
| AJAX Vulnerabilities <br />
| <br />
|<br />
|-<br />
| OWASP-AJ-002 <br />
| AJAX Testing <br />
| <br />
|<br />
|}<br />
<br />
== References ==<br />
<br />
*Adding the (release) year into the numbering scheme can be problematic, because the document has a life cycle that spans multiple years. <br />
*It is better to use a versioning scheme that is human readable in the reference number as well (e.g. V02, RevA, ...).<br />
<br />
----<br />
<br />
*don't try to encode any information into the ID that is likely to change or be subject to debate. In the olden days of CVE, we used to have "CAN-1999-0067" which would change into "CVE-1999-0067" once the item was considered stable and sufficiently verified. That made the ID hard to use. Right now, OWASP-DV-001 encodes the term "data validation" in the DV acronym, but what happens if in a couple of years, some new and better term occurs, or the focus changes from validation to something else? (As an example, it's only recently that the "data validation" term itself has become popular.)<br />
<br />
*carefully consider the range of values that your ID space supports, and if possible, allow it to expand. CVE has a "CVE-10K" problem because we never expected that we would ever come close to tracking 10,000 vulnerabilities a year. Red Hat had to change their advisory numbering scheme a couple of years ago, etc.<br />
<br />
*don't change the fundamental meaning of the ID once you've assigned it. This causes confusion, and more importantly, it immediately invalidates almost everyone's mappings to that ID - including people who you don't even know are using that ID.<br />
<br />
*closely monitor the mappings that get made. Typos and misunderstandings are rarely caught. People may make assumptions about what "the item" really is, based only on a quick scan of a short name or title. Since you're dealing with diverse sources, there are likely to be many-to-many relationships in dealing with mappings.<br />
<br />
*determine some kind of procedure for handling duplicates. They're gonna happen.<br />
<br />
*the more you distribute the process of creating and assigning IDs between multiple people, the more inconsistencies and duplicates you will wind up with. This may be unavoidable, since the job is usually bigger than one person.<br />
<br />
*determine some kind of procedure for deprecating IDs, i.e., "retiring" them and discouraging their use by others. This will probably happen for reasons other than duplicates. There should be some final record, somewhere, of what happened to the deprecated item - i.e., it shouldn't just disappear off the face of the earth.<br />
<br />
----<br />
<br />
Much of the discussion surrounding the establishment of "Common OWASP Numbering" can be found on the various [https://lists.owasp.org/mailman/listinfo OWASP mailing lists]. (For your convenience here is a direct link to the [https://lists.owasp.org/pipermail/owasp-testing/ OWASP Testing Guide Mailing List Archive].) <br />
<br />
[[Category:OWASP_Application_Security_Verification_Standard_Project]] [[Category:How_To]]</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=User_talk:KateHartmann&diff=74636User talk:KateHartmann2009-12-03T19:20:41Z<p>Michael Boman: Created page with 'I am getting errors when I am trying to generate thumbnails. The error message is: Error creating thumbnail: Invalid thumbnail parameters Examples: File:UseAndMisuseCase.png F…'</p>
<hr />
<div>I am getting errors when I am trying to generate thumbnails. The error message is:<br />
<br />
Error creating thumbnail: Invalid thumbnail parameters<br />
<br />
Examples:<br />
<br />
File:UseAndMisuseCase.png<br />
File:Session_riding.png<br />
<br />
Could you get someone to look into it?<br />
<br />
--[[User:Michael Boman|Michael Boman]] 19:20, 3 December 2009 (UTC)</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=File:Session_riding.png&diff=68777File:Session riding.png2009-09-13T18:09:47Z<p>Michael Boman: Re-made high-resolution File:Session riding.GIF illustration using Microsoft Visio</p>
<hr />
<div>Re-made high-resolution File:Session riding.GIF illustration using Microsoft Visio</div>Michael Bomanhttps://wiki.owasp.org/index.php?title=File:UseAndMisuseCase.png&diff=68776File:UseAndMisuseCase.png2009-09-13T17:52:04Z<p>Michael Boman: A re-drawing of File:UseAndMisuseCase.jpg in high resolution</p>
<hr />
<div>A re-drawing of File:UseAndMisuseCase.jpg in high resolution</div>Michael Boman