Talk:Limitations of Fully Automated Scanning
Jeff Williams 8/19/2007
There was nothing about the post that was ever an indictment of tools. I have been using and writing automated application security tools for over a decade. And I have the experience of examining hundreds and hundreds of large-scale commercial web applications and web services in that time, using automated and manual techniques, and finding thousands of vulnerabilities.
If you believe the list is wrong, then let's discuss it. But I think we can leave the accusations of agenda and child abuse out. BTW, there are dozens of articles on how to use tools at OWASP. The person who took my email message and posted it to OWASP chose a name (Web Application Scanners) that wasn't very descriptive. But that's the beauty of a wiki. Now it's fixed.
Jeff Williams 19:07, 19 August 2007 (EDT)
Erik Peterson 8/17/2007
That said, I question the motivation of this post: is it really vendor neutral? It seems to have an agenda. Seriously, the only entry on "Web Application Scanning" is a rant about automated scanners? We know that automated tools won't solve world hunger, but the way you describe things makes me think it would be in your best interest if tools vendors (and even Managed Service Vendors) all fell down a dark hole and disappeared.

There is a place for services and tools; let's drop the argument that services and people are the only option and that tools have no place, yada yada...

It's people, process, and tools. There is a place for all three, and every piece of that triad is equally important.
Please won't someone think of the children?
Full disclosure: I work for a tools vendor.
Silvexis 17:57, 17 August 2007 (EDT)
The Original Email Thread on this Subject
(Here's the entire original thread on this topic)
I did completely misread that, as did a few others I talked to.

I'm noticing people having a response similar to one I had in the early days -- which is throwing out the baby with the bathwater, and I assumed that's what you were doing. Sorry Jeff.

(In fact -- I was pretty much *done* with scanners for black-box until I saw what WHS was doing, which changed my way of thinking here...)

You listed things scanners *cannot* do (or so I interpreted it), and some of those I feel are things that scanners *aren't being used for correctly* versus things that scanners *cannot do*. Like inference.

I see a lot of folks using an automated tool completely automated, and when they get frustrated with the results, they look for something different in *kind* for a solution, rather than realize that the degree of approach may be the thing that's off. The pendulum then swings back and forth between solutions.

Sounds like we are in agreement then, Jeff. That being historically the case with you, shame on me for not reading your post more understandingly. :)

cheers

--
Arian Evans
software security stuff


On 8/18/07, Jeff Williams <[email protected]> wrote:

> Oh for goodness sake. Did you read my post as an attack on scanning? I have always been a huge advocate of automated approaches to application security. I wrote the static analysis engine for Java and .NET that we use at Aspect. I think scanners are a key *part* of a well balanced breakfast.

> As much as I like them, I recognize that scanners *are* limited (as are pentesting, code review, static analysis, and architecture review for that matter). Each of these techniques has *huge* blind spots. That's why we use each technique for what it's good at when we verify the security of applications.

> I think we agree here that the automated tools are a first pass, and that you have to invest significant manual work. Not just to investigate the automated findings, but also to look into the blind spots with other techniques.

> I also think we're in agreement on my main point -- building an application security initiative that focuses only on improving automated scanner scores will create a lot of work and a lot of false confidence.

> I even think we agree that the blind spot from a purely automated scan (the way many are deploying default->click->scan) is huge. It's not just the items in my list but also things like CSRF, stored XSS, attribute-based XSS, etc.

> This shouldn't be taken as an indictment of the scanners -- it's just that they're not a whole program by themselves. Rather, scanners are best used to support a comprehensive application security initiative.

> --Jeff

> From: [email protected] [mailto:[email protected]] On Behalf Of Arian J. Evans
> Sent: Friday, August 17, 2007 4:31 PM
> To: [email protected]
> Cc: Andy Steingruebl; [email protected]; [email protected]
> Subject: Re: [Webappsec] Web Application Scanning tool capabilities

> Okay -- let's not throw the baby out with the bathwater just because some vendors have ridiculous Plan 9 marketing messages. Scanners can both *deduce* some of the items on your list, and *infer* quite a few others, faster and more reliably (over time) than source code analysis.

> /Automated Scanners/ for web applications, as they exist today, have wildly speculative value. I think everyone on the list knows how critical I've been of them. Heck -- my own website has at least 6-8 XSS vulns that I've told the vendors about for like THREE or FOUR years now, and they still cannot find them. :( </lame>

> So here I must completely disagree with you: the problem you bring up is not because of scanning or automation. Most engineering fields have a notion of run-time load and stress testing, and run-time fault injection, and this industry is /no exception/ in terms of need.

> You absolutely can "scan" complex business logic that requires human context, and you can also get a lot of value out of that type of testing if you do it correctly. In some cases I believe you can get *more* value from this approach than from human + source (it's a tradeoff: source wins in some areas, and not others, depending on class of issue, and depending also on the nature of the application).

> ---

> I think what you meant at heart is this:

> Today's commercial "web app scanners" are not capable of fulfilling the wild and ridiculous claims they make, especially in the manner in which 95% of the user base runs them (default config --> Click --> Scan).

> Comparing scanners to a human analysis process, though, is simply wrong; the role of a scanner is to augment a human with these qualities:

> + Scalability
> + Repeatability (reliability in repeatability; eyeballs tire quickly)
> + Heavy lifting (of, say, black-box fuzzing a huge space of possible fault injections)
> + Speed

> ...which is unfortunate because the vendors of these widgets have positioned themselves for the most part as shrink-wrapped solutions requiring minimal to no human interaction, and THAT is what is wrong.

> *DISCLAIMER* I work for Jeremiah Grossman at Whitehat Security, which builds and offers a "scanner" as part of a managed software service that also includes humans....

> So I completely have a bias -- however -- due to the amount of real-world production software I have tested over the years and continue to see on a daily basis -- it should be a fairly well-informed bias. (And open to correction.)

> There are some things that scanning & automation does very, very well, and combining that with humans in a realistic manner is essential and can provide a HUGE value to an organization... (of course depending on what their *problems* are).

> Whether it's backwards or forwards or sideways -- people tend to start with automation because it's usually the cheapest and easiest first step they can take to try and get their hands around the problem.

> :: Bit Fiddling Attacks ::

> At a bit-fiddling level the scanners are improving rapidly, and yet even at something as well-known as XSS I have probably at least a half-dozen automated ways of finding particular types of XSS that the commercial scan widgets cannot (yet) automate. That should be embarrassing.

> I expect that as people's knowledge and expectations go up, the scanners will mature. People will go "hey, why don't you find this XSS?" where just two years ago very few people understood how poor the scanners were at finding even what they should be *best* at finding. (!)

> ---

> :: Design Weaknesses ::

> At a holistic design level there are some things that black-box can verify without specification that would be almost impossible to discover in source without reference to explicit design specifications -- but you 100% need a human to drive analysis of these contextual issues.

> Scanners are *never* going to get some of these things, and they are some of the most important things.

> And you are not going to get it all from source either. Especially if you have 50 web assets and push code to them three times a week. And have one human resource to keep an eye on things.

> We find pretty huge authentication and authorization issues *regularly* that have been missed in source review.

> Sometimes the issues are very subtle, and it is unrealistic and in some cases impossible to vet all code for both bit-level mistakes and also things like general auth omissions....

> Automation can detect some fuzziness and bubble that up to a human eyeball to parse and take action on, but automation only works well this way if you are honest and realistic about what you can do with it, and that includes having humans.

> Let's also NOT discount inference. I can infer quite a few of the bullets in your list with a high degree of accuracy. We're building some of that as we speak.

> The scanner vendors dare not touch some of this though, I suspect, because they'd either have to:

> + Explain to all their users why they get so many "false positives" (e.g., things that require a human to go manually investigate)

> or

> + Reality-ground their marketing message

> Neither one of those is going to happen, I suspect.

> But that doesn't mean you cannot address a lot of the important things, including some on your list, via automation.

> What it means is that people are being unrealistic about *how* they can leverage automation, and that not many folks are doing it right.

> 0.02

> --
> Arian Evans
> software security stuff

> On 8/16/07, Jeff Williams <[email protected]> wrote:

> I've heard some wild misunderstandings of what is possible with automated tools over the past few days, and so I think this question is really quite important.

> You just can't scan anything that's not exposed by the web interface - and even for things that are exposed, you can't scan anything that requires any knowledge of how the application is supposed to behave. Some things that scanners just can't find are:

> - business layer access control issues
> - internal identity management issues
> - lack of a structured security error handling approach
> - improper caching and pooling
> - failure to log critical events
> - logging sensitive information
> - fail open security mechanisms
> - many unusual forms of injection
> - improper temporary storage of sensitive information
> - encryption algorithm choice and implementation problems
> - hardcoded credentials, keys, and other secrets
> - backdoors, timebombs, easter eggs, and other malicious code
> - all concurrency issues (race conditions, TOCTOU, deadlock, etc.)
> - failure to use SSL to connect to backend systems
> - lack of proper authentication with backend systems
> - lack of access control over connections to backend systems
> - lack of proper validation and encoding of data sent to and received from backend systems
> - lack of proper error handling surrounding backend connections
> - lack of centralization in security mechanisms
> - other insane design and implementation patterns for security mechanisms
> - code quality issues that lead to security issues (duplication, dead code, modularity, complexity, style)
> - improper separation of test and production code
> - lots more...

> Most of the above list applies to static analysis tools as well, but there are a few differences.

> I'm pretty disheartened hearing that people are buying automated tools and building an application security program around them. It's completely backwards to me. First you should figure out what security controls are the most important to your organization, and then figure out how to verify those cost-effectively.

> Now I'm completely in favor of tools that help you verify those things that are really important to your organization. But it just doesn't make sense to limit your security efforts to finding the (relatively small percentage of) vulnerabilities that accidentally happen to be easy to find automatically.

> Automated security tools can be very seductive and distracting. They produce an endless stream of work to run and handle findings, and you get to watch the vulnerability numbers decreasing. But it reminds me of the old joke about the guy looking for his keys in a different place from where he lost them because the light is better there.

> --Jeff

> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of Andy Steingruebl
> Sent: Thursday, August 16, 2007 4:57 PM
> To: [email protected]
> Cc: [email protected]
> Subject: Re: [Webappsec] Web Application Scanning tool capabilities

> Jeremiah had a nice posting a bit back on how well scanners perform against different areas....

> http://jeremiahgrossman.blogspot.com/2007/05/web-application-scan-o-meter.html

> On 8/16/07, jack <[email protected]> wrote:
> > Is there a list of functional areas that are known to be undetectable by application scanners?
> >
> > Thanks for any input. I'm just getting started in this area, so all feedback is welcome.
> >
> > Jack Dangler
> > Secure Services Engineer II
> > Terremark Worldwide
> > _______________________________________________
> > Webappsec mailing list
> > [email protected]
> > https://lists.owasp.org/mailman/listinfo/webappsec

> --
> Andy Steingruebl
> [email protected]
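
To make two of the items on Jeff's list above concrete -- "business layer access control issues" and "fail open security mechanisms" -- here is a minimal, hypothetical Java sketch. The class and names are invented for illustration and do not come from the thread. A black-box scan exercising this code sees only normal-looking responses; spotting either flaw requires knowing the intended business rule, which is the point both Jeff and Arian make about needing humans in the loop.

<pre>
import java.util.Map;

public class AccountService {

    private final Map<String, String> ownerByAccountId;   // accountId -> owning userId
    private final Map<String, String> balanceByAccountId; // accountId -> balance

    public AccountService(Map<String, String> ownerByAccountId,
                          Map<String, String> balanceByAccountId) {
        this.ownerByAccountId = ownerByAccountId;
        this.balanceByAccountId = balanceByAccountId;
    }

    // Flaw 1: business-layer access control issue. Any authenticated caller can read
    // any account, because the "does this account belong to the caller?" rule was
    // never coded. A scanner only sees valid-looking responses; recognizing that a
    // response is *wrong* requires knowing the intended ownership rule.
    public String getBalance(String callerUserId, String accountId) {
        // MISSING: if (!callerUserId.equals(ownerByAccountId.get(accountId))) deny
        return balanceByAccountId.get(accountId);
    }

    // Flaw 2: fail-open security mechanism. If the ownership lookup throws
    // (for example, a backing store behind this map is unavailable), the request is
    // quietly allowed instead of denied. From the outside this is indistinguishable
    // from a correct allow decision.
    public boolean isAllowed(String callerUserId, String accountId) {
        try {
            return callerUserId.equals(ownerByAccountId.get(accountId));
        } catch (RuntimeException e) {
            return true; // fails open; a safe default would be to return false
        }
    }
}
</pre>

A code reviewer or a static analysis rule can flag the fail-open catch block, while the missing ownership check usually needs a human who knows the specification -- which is why the thread lands on combining tools with people rather than choosing between them.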