Mobile Top 10 2016 - Top 10
Revision as of 11:39, 5 March 2016
Release Candidate
The list below represents a release candidate of the OWASP Mobile Top Ten 2016. Have a look at the list and please provide feedback. The release candidate will have a 30 day feedback window for everyone to provide feedback before things are finalized.
How Did the List Get Made?
- We wanted to know what the community wanted in the next Mobile Top Ten list and what they thought about the last one, so we published a survey and shared the results with everyone.
- We issued a Call for Data and aggressively pursued many different vendors and consultants for raw data.
- We had a huge response from vendors and consultants, and we collected lots of data about the past year's vulnerabilities from a number of different vendors and consultants. That raw data can be found here.
- Over the following months, we analyzed the data. Many different contributors did their own analysis and compared results. Here is a sample of the color commentary on the data.
- Ultimately, we agreed on and published key findings from the data.
- Next, we began building a consensus on what we wanted in the next revision of the Mobile Top Ten.
Tell Us What You Think
After looking over the list below, fill out this survey. Results are being collected until April 15, 2016. The survey results will be published and shared with the group to finalize the Mobile Top Ten 2016.
M1 - Improper Platform Usage
This category covers misuse of a platform feature or failure to use platform security controls. It might include Android intents, platform permissions, misuse of TouchID, the Keychain, or some other security control that is part of the mobile operating system. There are several ways that mobile apps can experience this risk.
M2 - Insecure Data Storage
This new category is a combination of M2 + M4 from Mobile Top Ten 2014. This covers insecure data storage and unintended data leakage.
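As a sketch of the storage half of this category, the example below uses plain Java NIO as a stand-in for a mobile platform's app-private storage API (the class and method names here are hypothetical, not part of any OWASP guidance): a secret is written so that only the owning user can read it, rather than to world-readable storage.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class PrivateStorage {
    // Writes sensitive data to a file readable and writable only by the
    // owning user -- the desktop analogue of an app-private data directory.
    public static Path writePrivate(Path dir, String name, byte[] data) throws IOException {
        Path file = dir.resolve(name);
        Files.write(file, data);
        Set<PosixFilePermission> ownerOnly = PosixFilePermissions.fromString("rw-------");
        Files.setPosixFilePermissions(file, ownerOnly);
        return file;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("demo");
        Path f = writePrivate(dir, "token.bin", "secret".getBytes());
        // Confirm that "others" cannot read the file.
        System.out.println(Files.getPosixFilePermissions(f).contains(PosixFilePermission.OTHERS_READ));
        // prints false
    }
}
```

On an actual mobile platform the same intent is expressed through the OS facilities (e.g., an app's internal storage directory) rather than manual permission bits.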
M3 - Insecure Communication
This covers poor handshaking, incorrect SSL versions, weak negotiation, cleartext communication of sensitive assets, etc.
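To make the handshake and negotiation points concrete, here is a minimal sketch (the class and method are hypothetical) of pinning a JVM client to a modern TLS version with hostname verification enabled, instead of letting negotiation fall back to weak protocols or skipping endpoint identification:

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class TlsConfig {
    // Builds SSL parameters that permit only TLS 1.2, guarding against
    // weak negotiation down to older SSL/TLS versions.
    public static SSLParameters modernParameters() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, null, null); // null = use the platform's default trust store
        SSLParameters params = ctx.getDefaultSSLParameters();
        params.setProtocols(new String[] {"TLSv1.2"});
        // Verify the server's hostname against its certificate, rather
        // than accepting a valid certificate for any name.
        params.setEndpointIdentificationAlgorithm("HTTPS");
        return params;
    }
}
```

The anti-patterns this category describes are the inverse: custom TrustManagers that accept every certificate, or sockets that silently negotiate down to broken protocol versions.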
M4 - Insecure Authentication
This category captures failures to authenticate the end user, as well as bad session management.
M5 - Insufficient Cryptography
The code applies cryptography to a sensitive information asset. However, the cryptography is insufficient in some way. Note that anything and everything related to TLS or SSL goes in M3. Also, if the app fails to use cryptography at all when it should, that probably belongs in M2. This category is for issues where cryptography was attempted, but it wasn't done correctly.
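As one sketch of "cryptography done correctly" for this category (the class and helper names are hypothetical; this is an illustration, not an OWASP-prescribed construction), the example below uses AES-GCM with a fresh random IV per message and an authentication tag, avoiding common insufficient choices such as ECB mode or a static IV:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class GcmExample {
    private static final int TAG_BITS = 128; // GCM authentication tag length
    private static final int IV_BYTES = 12;  // recommended GCM IV size

    // Encrypts with AES/GCM, prefixing the random IV to the ciphertext
    // so the receiver can decrypt without a shared static IV.
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[IV_BYTES];
        new SecureRandom().nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
        byte[] ct = c.doFinal(plaintext);
        byte[] out = new byte[IV_BYTES + ct.length];
        System.arraycopy(iv, 0, out, 0, IV_BYTES);
        System.arraycopy(ct, 0, out, IV_BYTES, ct.length);
        return out;
    }

    // Splits the IV back off the blob and verifies the tag while decrypting.
    public static byte[] decrypt(SecretKey key, byte[] blob) throws Exception {
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, blob, 0, IV_BYTES));
        return c.doFinal(blob, IV_BYTES, blob.length - IV_BYTES);
    }
}
```

Key management (where the `SecretKey` lives) is its own problem; storing it insecurely would push the issue back into M2.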
M6 - Insecure Authorization
This is a category to capture any failures in authorization (e.g., authorization decisions made on the client side, forced browsing, etc.). It is distinct from authentication issues (e.g., device enrolment, user identification, etc.). If the app does not authenticate users at all in a situation where it should (e.g., granting anonymous access to some resource or service when authenticated and authorized access is required), then that is an authentication failure, not an authorization failure.
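A minimal sketch of keeping the authorization decision on the server side (the class, role store, and names are hypothetical; a real backend would draw on its session and persistence layers): the decision is keyed on the server's own record of the authenticated user, never on a role claim the mobile client sends along with the request.

```java
import java.util.Map;
import java.util.Set;

public class AuthorizationService {
    // Server-held mapping from authenticated user id to granted roles.
    private final Map<String, Set<String>> rolesByUser;

    public AuthorizationService(Map<String, Set<String>> rolesByUser) {
        this.rolesByUser = rolesByUser;
    }

    // The decision uses server-side state only; any role or "isAdmin"
    // flag supplied by the client is ignored by design.
    public boolean canAccess(String authenticatedUserId, String requiredRole) {
        return rolesByUser.getOrDefault(authenticatedUserId, Set.of()).contains(requiredRole);
    }
}
```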
M7 - Client Code Quality
This was formerly "Security Decisions Via Untrusted Inputs," one of our lesser-used categories. This would be the catch-all for code-level implementation problems in the mobile client, as distinct from server-side coding mistakes. This would capture things like buffer overflows, format string vulnerabilities, and various other code-level mistakes where the solution is to rewrite some code that's running on the mobile device.
M8 - Code Tampering
This category covers binary patching, local resource modification, method hooking, method swizzling, and dynamic memory modification. Once the application is delivered to the mobile device, the code and data resources are resident there. An attacker can either directly modify the code, change the contents of memory dynamically, change or replace the system APIs that the application uses, or modify the application's data and resources. This can provide the attacker a direct method of subverting the intended use of the software for personal or monetary gain.
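One common detection (not prevention) response to this category is a runtime integrity check. The sketch below (hypothetical class and method names; the expected digest is assumed to be recorded at build time and delivered out of band) compares a resource's SHA-256 digest against that expected value, so local modification shows up as a mismatch:

```java
import java.security.MessageDigest;

public class IntegrityCheck {
    // Compares a resource's SHA-256 digest against a value recorded at
    // build time; a mismatch signals the resource was modified on-device.
    public static boolean matchesExpected(byte[] resource, byte[] expectedSha256) throws Exception {
        byte[] actual = MessageDigest.getInstance("SHA-256").digest(resource);
        // Constant-time comparison to avoid leaking digest bytes via timing.
        return MessageDigest.isEqual(actual, expectedSha256);
    }
}
```

A determined attacker can of course patch the check itself, which is why such checks are usually layered with the anti-reversing measures discussed under M9.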
M9 - Reverse Engineering
This category includes analysis of the final core binary to determine its source code, libraries, algorithms, and other assets. Software such as IDA Pro, Hopper, otool, and other binary inspection tools give the attacker insight into the inner workings of the application. This may be used to exploit other nascent vulnerabilities in the application, as well as revealing information about back end servers, cryptographic constants and ciphers, and intellectual property.
M10 - Extraneous Functionality
Often, developers include hidden backdoor functionality or other internal development security controls that are not intended to be released into a production environment. For example, a developer may accidentally include a password as a comment in a hybrid app. Another example is disabling two-factor authentication during testing.