
Mobile2015Commentary

Comments on Submitted Data

General

(Jason) Looking at some of the data sets, it becomes clear that there is a doctrine some consultancies use for mobile testing. Some did not contribute M1 data because they consider mobile security to be client-only; the same applied to M10. In addition, a couple of data sets used CWE IDs. These were harder to parse because generic CWEs do not specify whether the vulnerability is client-side or server-side (and in many of these cases it could be either). As Paco stated, code-quality and source-level findings are hard to categorise as well.

BugCrowd

I see “storing passwords in the clear” as a very common finding in their data. It gets classified as M5 (poor authentication), M2 (insecure data storage), M4 (data leakage), and sometimes M6 (broken crypto).

I see “storing session tokens insecurely” as a common finding. It is getting classified as M9 (session handling) and M2 (insecure data storage). I wonder openly whether passwords and session tokens are really that different.

We see a lot of caching of non-password, non-session data. Some of it is done explicitly by the app, and some is done by the mobile OS through snapshotting, backups, etc. Sometimes it is classified as data leakage (M4) and sometimes as insecure storage (M2). The interesting part is that some of it is caused by the OS and some by the app. Do we want to make that distinction in the T10?
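For the OS-caused part, there are app-level mitigations. Below is a minimal, hypothetical Android sketch (the activity name is illustrative; FLAG_SECURE is a standard SDK flag) that opts a sensitive screen out of the screenshots and task-switcher snapshots the OS would otherwise write to storage:

  import android.app.Activity;
  import android.os.Bundle;
  import android.view.WindowManager;

  // Illustrative activity name; FLAG_SECURE is part of the standard Android SDK.
  public class SensitiveActivity extends Activity {
      @Override
      protected void onCreate(Bundle savedInstanceState) {
          super.onCreate(savedInstanceState);
          // Mark the window as secure: its contents are excluded from screenshots
          // and from the snapshot the OS takes for the task switcher.
          getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
                  WindowManager.LayoutParams.FLAG_SECURE);
          // setContentView(...) would follow as usual.
      }
  }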

MetaIntelli

They have only 18 distinct things they report on, though they have 111,000 data points. Two of the 18 are double-counted: they appear to be categorised in both M3 and another category.

Other Questions

Communications issues come up a lot, and TLS and crypto are tightly coupled. “Communications issues” includes certificate pinning, weak TLS ciphers, improper certificate validation, HTTP and other plaintext protocols, and more. There’s a lot of overlap with “broken crypto”, such as using Base64 instead of encryption, hard-coded keys/passwords, weak hash algorithms, and so on. How do we tease apart “crypto” issues from “communications” issues from “insecure storage” issues?

I can imagine a heuristic like this (sketched in code after the list):

  • Did you use crypto where you were supposed to, but the crypto primitive you chose wasn’t appropriate for the task? That’s broken crypto.
  • Did you omit crypto entirely when you should have used it? That’s insecure comms or insecure storage.
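A minimal Java sketch of the two sides of that heuristic (method names are illustrative; DES/ECB and Base64 are only stand-ins for “wrong primitive” and “no crypto at all”):

  import javax.crypto.Cipher;
  import javax.crypto.spec.SecretKeySpec;
  import java.nio.charset.StandardCharsets;
  import java.util.Base64;

  public class CryptoHeuristic {

      // Broken crypto (M6): crypto was attempted, but the primitive is wrong:
      // DES in ECB mode with a hard-coded key.
      static byte[] brokenCrypto(String secret) throws Exception {
          SecretKeySpec key = new SecretKeySpec("12345678".getBytes(StandardCharsets.UTF_8), "DES");
          Cipher cipher = Cipher.getInstance("DES/ECB/PKCS5Padding"); // weak algorithm and mode
          cipher.init(Cipher.ENCRYPT_MODE, key);
          return cipher.doFinal(secret.getBytes(StandardCharsets.UTF_8));
      }

      // Missing crypto (M2/M3): no crypto at all. Base64 is an encoding,
      // so the secret is effectively stored or sent in plaintext.
      static String missingCrypto(String secret) {
          return Base64.getEncoder().encodeToString(secret.getBytes(StandardCharsets.UTF_8));
      }
  }

Under the heuristic, the first lands in broken crypto (M6) and the second in insecure storage or insecure comms (M2/M3), depending on whether the data is at rest or in transit.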

Some findings are deeply mobile (e.g., intent hijacking, keychain issues, jailbreak/root detection, etc.). They’re really tied to their respective platforms. Is that a problem for us? Does it matter?

Conclusions Drawn From Data

These are conclusions proposed based on the 2014 data.

At Least One New Category Is Needed

(Paco) The "Other" category is not the least popular category. It is more popular, by an order of magnitude, than several others. This tells me that if we had a better category that captured "other" findings, it would benefit the users of the top 10.

The Bottom 5 Categories Account for 25% or Less

The five least popular items are (where "1" is the least popular and "5" is the 5th least popular, i.e., the 6th most popular):

  1. M8: Security Decisions Via Untrusted Inputs
  2. M7: Client Side Injection
  3. M9: Improper Session Handling
  4. M6: Broken Cryptography
  5. M1: Weak Server Side Controls

Combined with the fact that the 3rd or 4th most popular category is "Other", this suggests that 2 or 3 of these are, in fact, not in the "top ten". They may be, for example, ranked 11th and 12th, or even further down.

The Existing Buckets are Hard To Use

A few contributors tried to categorise their findings into the existing MT10, and when they did, they showed symptoms of difficulty. The examples in the table below show how MetaIntelli flagged individual findings in two different categories, and how BugCrowd flagged the same kind of finding in three different categories. This suggests that the existing MT10 is not clear enough about where these issues belong.

Description | Contributor | Categories
The app is not verifying hostname, certificate matching and validity when making SSL connections. | MetaIntelli | M3 and M9
Contains URLs with invalid SSL certificates and/or broken chain of trust | MetaIntelli | M3 and M5
Authentication cookies stored in cleartext in a SQLite database | BugCrowd | M9 - Improper Session Handling
Blackberry app stores credentials in plaintext | BugCrowd | M2 - Insecure Data Storage
Credentials and sensitive information not secured on a Windows Phone app | BugCrowd | M5 - Poor Authorization and Authentication

Some Topics that Show Up But Are Hard To Place

There are a few things that show up in the contributed data that do not have a good category to go into.

Code Level Findings

If someone is doing bad C coding (e.g., strcpy() and similar), there is no good bucket for that. Likewise, misusing the platform APIs (e.g., Android, iOS, etc.) is not well covered. It's hard to place violations of platform best practices (e.g., with intents and broadcasts and so on).
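For the intents-and-broadcasts case, here is a minimal Android sketch (the action string and extra key are hypothetical) of the kind of finding that has no obvious bucket today, alongside the usual fix:

  import android.content.Context;
  import android.content.Intent;

  public class BroadcastExamples {

      // Risky: an implicit broadcast carrying sensitive extras can be received by
      // any app that registers for the action (a classic intent-sniffing finding).
      static void leakyBroadcast(Context ctx, String sessionToken) {
          Intent intent = new Intent("com.example.SESSION_UPDATED"); // hypothetical action
          intent.putExtra("token", sessionToken);
          ctx.sendBroadcast(intent);
      }

      // Safer: restrict delivery to the app's own package so other apps never see it.
      static void scopedBroadcast(Context ctx, String sessionToken) {
          Intent intent = new Intent("com.example.SESSION_UPDATED");
          intent.setPackage(ctx.getPackageName());
          intent.putExtra("token", sessionToken);
          ctx.sendBroadcast(intent);
      }
  }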

Most Android developers use printStackTrace() in their code, which is a bad practice; in some cases, the Android APK is even released in DEBUG mode.
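As a minimal sketch of what that finding looks like, with the more common alternative (the tag and method names are illustrative):

  import android.util.Log;

  public class ErrorHandling {
      private static final String TAG = "PaymentFlow"; // illustrative tag

      void process() {
          try {
              riskyOperation();
          } catch (Exception e) {
              // Flagged pattern: printStackTrace() dumps straight to stderr/logcat
              // with no tag, level, or control over what ships in release builds.
              e.printStackTrace();

              // More common practice: use the platform logger, and strip or gate
              // such calls in release builds (e.g., via ProGuard/R8 rules).
              Log.e(TAG, "operation failed", e);
          }
      }

      void riskyOperation() throws Exception { /* ... */ }
  }

The "released in DEBUG mode" finding typically corresponds to android:debuggable="true" being left set in the manifest of a published APK.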